Building Your First IBM LLM Agent
Key Points
- The IBM Bee Agent Framework provides a TypeScript‑based, plug‑and‑play environment for building ReAct‑style LLM‑powered agents, with support for multiple LLM adapters, tools, memory, and logging.
- You can stream responses from any supported model (e.g., Llama 3.1 70B Instruct via watsonx.ai) by configuring API keys, importing the appropriate LLM class, and calling the `llm.stream` method with a simple prompt.
- After setting up a Node project (using Yarn, tsx, and the IBM Generative AI Node SDK), the author demonstrates how to run a basic streaming script (`flow.ts`) to verify connectivity and output generation.
- To evolve the script into a functional agent, the tutorial introduces the `BeeAgent` class, adds token‑based memory, and shows how function‑calling tools (DuckDuckGo search, OpenMeteo weather) enable the agent to perform actions beyond static text generation.
- The overall workflow is broken into three steps: generate a streaming answer from the model, extend it into an agent with memory and function calling, and add a Python code interpreter to create a more interactive, real‑world‑ready LLM agent.
**Source:** [https://www.youtube.com/watch?v=C-pZXA6Te_o](https://www.youtube.com/watch?v=C-pZXA6Te_o) **Duration:** 00:04:19

Sections
- [00:00:00](https://www.youtube.com/watch?v=C-pZXA6Te_o&t=0s) **Building an LLM Agent with IBM Framework** - The speaker walks through creating a TypeScript‑based, streaming LLM agent using IBM's Bee Agent Framework: setting up environment variables, selecting adapters, and configuring a Llama 3.1 70B model for tool use, memory, and logging.

Full Transcript
This is how to build your first LLM-powered agent using the IBM framework. Our research labs have been cooking up a ReAct agent framework. It allows you to use a bunch of tools, work with different LLMs, and use memory and logging: pretty much everything needed to have a great agent. But it gets better: there's a bunch of features that are going to help you if you're trying to do this for real. I'll come back to those in a sec; I'm going to show you everything you need to know about it in a few minutes. You're probably thinking: why wouldn't I just use LangChain? Can I use open-source LLMs, or is it limited to IBM stuff? It's written in TypeScript, and I haven't touched the language since my startup crashed and burned, so this is going to be fun. I'm going to break it down into three simple steps, and it begins with generating an answer using streaming.
My code just wants to belong, so I'm going to create a new file called flow.ts to hold it. There are a number of LLM adapters in the Bee framework, including Groq, LangChain, Ollama, OpenAI, and watsonx.ai; I'm going to use the latter. I've already got my API key and project ID, and I'll make them available to the process using the dotenv config method. While I'm at it, I'll bring in the WatsonXChatLLM class so I can connect to an LLM on watsonx.ai. Now, rather than using any old model, I can specify the Llama 3.1 70B Instruct preset via the fromPreset method. Here I can also set parameters like the decoding method and the max tokens. My goal right now is just to generate output using a prompt, and to do this I'm going to use the llm.stream method. Given I'm coding in TypeScript, I need to import the BaseMessage and the Role primitive to form a prompt. I can then throw together an asynchronous function and call the LLM's stream method; to that, I can pass through my prompt: who is Nicholas Renotte?
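Sketched out, that first version of flow.ts might look something like this. It's a rough sketch rather than the video's exact code: the import paths, the preset id, and the environment variable names are assumptions based on the framework's watsonx examples.

```typescript
import "dotenv/config"; // loads WATSONX_API_KEY / WATSONX_PROJECT_ID from .env
import { WatsonXChatLLM } from "bee-agent-framework/adapters/watsonx/chat";
import { BaseMessage, Role } from "bee-agent-framework/llms/primitives/message";

// Connect to watsonx.ai via a named preset rather than configuring the
// model by hand; parameters like the decoding method and max tokens can
// also be overridden here.
const llm = WatsonXChatLLM.fromPreset("meta-llama/llama-3-1-70b-instruct", {
  apiKey: process.env.WATSONX_API_KEY,
  projectId: process.env.WATSONX_PROJECT_ID,
});

async function main() {
  // Stream the answer token by token instead of waiting for the full reply
  for await (const chunk of llm.stream([
    BaseMessage.of({ role: Role.USER, text: "Who is Nicholas Renotte?" }),
  ])) {
    process.stdout.write(chunk.getTextContent());
  }
}

main().catch(console.error);
```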
Just like that time I ate a Carolina Reaper before a 16-hour flight, I've made a catastrophic error: I haven't initialized my project yet or installed any dependencies. Let's fix that. I'll initialize the Node project by running yarn init, then install tsx, the bee-agent-framework package, and the IBM Generative AI Node SDK. I'm also going to make one quick tweak to the package.json file and add a script called flow, which runs the flow.ts file using tsx.
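Those setup commands look roughly like this; the package names are assumptions based on the libraries mentioned, so check npm for the exact names:

```shell
# Initialize a new Node project
yarn init -y

# tsx runs TypeScript directly; the framework and IBM SDK power the agent
yarn add tsx dotenv bee-agent-framework @ibm-generative-ai/node-sdk
```

With a `"flow": "tsx flow.ts"` entry under `scripts` in package.json, `yarn run flow` then executes the file.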
If I run yarn run flow, I get an okay result from the LLM, but it doesn't really know me, and it doesn't have access to the net to find out. So I've got the streaming working, but let's be real: you're watching this for agents, and so far I haven't quite delivered. That's all about to change in part two: building an agent with function calling. Let's change it up.
I can bring in the BeeAgent class from the framework and begin creating a new instance of the agent. The first parameter I'll pass is my existing LLM. Sometimes you'd rather not know who and what you texted after a big night out, but for our LLM, adding memory is going to provide a lot of context, so I'll import the TokenMemory class and add it to the agent. Now, tools: I can bring in the DuckDuckGo search tool to access the net, and the OpenMeteo tool for all things weather. Again, I'll add these as a new parameter to the agent.
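Wired together, the agent construction might look like this. The class names and import paths are assumptions based on the framework's published examples, and the LLM is the same watsonx preset configured earlier:

```typescript
import { BeeAgent } from "bee-agent-framework/agents/bee/agent";
import { TokenMemory } from "bee-agent-framework/memory/tokenMemory";
import { DuckDuckGoSearchTool } from "bee-agent-framework/tools/search/duckDuckGoSearch";
import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo";
import { WatsonXChatLLM } from "bee-agent-framework/adapters/watsonx/chat";

// Same model as before: the Llama 3.1 70B Instruct preset on watsonx.ai
const llm = WatsonXChatLLM.fromPreset("meta-llama/llama-3-1-70b-instruct", {
  apiKey: process.env.WATSONX_API_KEY,
  projectId: process.env.WATSONX_PROJECT_ID,
});

const agent = new BeeAgent({
  llm, // the existing LLM is the first parameter the agent needs
  memory: new TokenMemory({ llm }), // history trimmed to fit the token budget
  tools: [
    new DuckDuckGoSearchTool(), // web search
    new OpenMeteoTool(), // weather lookups
  ],
});
```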
Now, to bring it all together, we'll get rid of the function we wrote for the baseline generation and use the agent instead. There are two methods I need to run the agent: run and observe. To the run method, I'll pass the prompt and execution parameters like the number of retries. Then I'll use the emitter; this allows me to see what's happening at each stage of the agentic workflow. Each time there's a completed action, I'll be able to see the status by observing the update event. In this case, I'll console-log the update key and the update value, which will show things like the output from the function calls and the final response. And last but not least, I'll console-log the final result text.
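The run-and-observe pattern described above might be sketched like this, continuing with the `agent` built earlier. The event shape and the execution option names are assumptions based on the framework's examples:

```typescript
// Run the agent with a prompt plus execution limits, then observe
// emitter events to watch each step of the agentic loop.
const response = await agent
  .run(
    { prompt: "When was IBM founded?" },
    { execution: { totalMaxRetries: 3 } },
  )
  .observe((emitter) => {
    emitter.on("update", ({ update }) => {
      // e.g. the agent's thoughts, tool outputs, and the final answer
      console.log(`${update.key}: ${update.value}`);
    });
  });

// Last but not least, the final result text
console.log(response.result.text);
```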
For good measure, I can ask when IBM was founded, and after searching the net using DuckDuckGo, we get the correct response. Likewise, if we want the agent to use the weather tool, I can ask what the weather is like in New York and, at the time, get a valid response by leveraging OpenMeteo. These tools are slick: we can search the net and call an API. But what if I wanted to write some code or execute some logic using a code interpreter? This brings me to part three:
adding a Python code interpreter. First up, I need to bring in the PythonTool and the LocalPythonStorage class. The PythonTool will be used to execute code, and the storage component allows for code to be read and output locally. Now, to configure the PythonTool, I'm setting up the code interpreter URL and the storage locations; this tells my agent where to run Python code and where to store any files it might create or read.
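That configuration might look like the following sketch; the option names and the interpreter URL are assumptions drawn from the framework's Python-tool example:

```typescript
import { PythonTool } from "bee-agent-framework/tools/python/python";
import { LocalPythonStorage } from "bee-agent-framework/tools/python/storage";

// The tool needs to know where the code interpreter service is running
// and where to read/write any files the generated code touches.
const pythonTool = new PythonTool({
  codeInterpreter: {
    url: process.env.CODE_INTERPRETER_URL ?? "http://127.0.0.1:50051",
  },
  storage: new LocalPythonStorage({
    interpreterWorkingDir: "./tmp/code_interpreter", // paths the interpreter sees
    localWorkingDir: "./tmp/local", // where the files live on this machine
  }),
});
```

Adding `pythonTool` to the agent's `tools` array then gives it code execution alongside search and weather.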
Given we're running our agent in TypeScript, we need somewhere to execute Python code. The bee-agent-framework comes with a standalone code interpreter which can be run via Docker. I haven't done this yet, so let's go do that. The Dockerfile is available via the bee-agent-framework GitHub repo: first clone it, cd into it, and install any remaining dependencies.
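In a shell, those steps look roughly like this. The repo URL is the framework's GitHub home, and the infra script name follows the video's description, so check the repo's package.json for the exact target:

```shell
# Grab the framework repo, which ships the code interpreter's Docker setup
git clone https://github.com/i-am-bee/bee-agent-framework.git
cd bee-agent-framework

# Install the repo's own dependencies
yarn install

# Spin up the standalone code interpreter container
# (script name as described in the video; may differ between versions)
yarn run infra:start --profile=code_interpreter
```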
I can spin up the container using the repo's yarn infra start script for the code interpreter. Then, if I change the prompt to something like "is 3 a prime number?", I can run it using yarn run flow, and finally we get the correct result. Hey guys, editing Nick here. I hope you enjoyed the video. Let me know what you thought in the comments, and the code will be down below.