
Building a LangGraph SQL Chat Agent

Key Points

  • The tutorial demonstrates how to create an AI agent that can query databases by leveraging LLMs’ built‑in SQL knowledge, using LangGraph’s ReAct framework, watsonx.ai models, and an in‑memory SQLite instance.
  • A Next.js front‑end is set up with the latest `create-next-app` CLI, opting for TypeScript and Tailwind CSS to simplify styling and component development.
  • The default Next.js page is replaced with a custom `Home` component that includes a header, an input field for user queries, a submit button, and placeholder message bubbles to display LLM responses.
  • After coding the UI, the app is launched with `npm run dev`, showing a working interface where a sample query (“tell me a joke about SQL”) produces a mock reply, ready for integration with the LLM logic.
  • The next development step (mentioned briefly) involves wiring the input and button to functions that invoke the LLM, enabling real‑time translation of natural‑language questions into SQL queries against the SQLite database.


**Source:** [https://www.youtube.com/watch?v=U2TP0pTsSlw](https://www.youtube.com/watch?v=U2TP0pTsSlw)
**Duration:** 00:26:17

## Sections

- [00:00:00](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=0s) **SQL‑Powered AI Agent with LangGraph** - A tutorial walks through creating a ReAct‑style AI agent that translates natural language to SQL, using LangGraph, Next.js, watsonx.ai, and an in‑memory SQLite database.
- [00:03:11](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=191s) **Setting Up LangGraph with WatsonX** - The speaker explains terminating the terminal process, installing LangGraph and LangChain community, and creating a server‑side actions.ts file that imports LangChain functions to build a ReAct agent, connect to watsonx.ai models, and handle message serialization via JSON.
- [00:06:20](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=380s) **Configuring Watsonx Credentials and React Input** - The speaker demonstrates retrieving the API key and project ID from the watsonx.ai dashboard, saving them, and then building a controlled input component in a React page by creating state variables with `useState` and linking them to the input’s value and `onChange` handler.
- [00:09:30](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=570s) **Setting Up Text2SQL Agent** - The speaker walks through creating a system prompt for a Text2SQL LLM agent, serializing and sending user messages via a sendMessage function, storing the AI’s response, and linking a UI button to dispatch and clear the input.
- [00:12:34](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=754s) **Implementing Message Rendering Logic** - The speaker modifies the page.tsx component to replace placeholder content with actual conversation messages, checks the message history array, and uses instanceof checks to differentiate HumanMessage from AIMessage for correct UI rendering.
- [00:15:38](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=938s) **Setting Up SQLite and Schema Constants** - The speaker details installing SQLite3, creating a constants.ts file with SQL create queries for customer and order tables (including foreign key relationships), and preparing this schema for integration with a language model.
- [00:18:43](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=1123s) **Seeding Mock Data with React** - The speaker explains how to insert mock customer and order data via a seed function, invoke it on app mount using React's useEffect, and create a generic execute function for running SQL queries.
- [00:21:49](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=1309s) **Configuring LLM Tool Integration** - The speaker explains how to set up a GetFromDB tool, supply the database schema and SQL input format, and modify the system prompt so the language model knows when and how to generate quoted SQLite queries using the tool.
- [00:24:59](https://www.youtube.com/watch?v=U2TP0pTsSlw&t=1499s) **Building a Text-to-SQL Agent** - The speaker demonstrates creating a LangGraph ReAct text‑to‑SQL agent that generates and runs a join query to find the top‑ordering customer, discusses guardrails, and points to the GitHub repository.

## Full Transcript
0:00 What if you could build an AI agent that's able to talk to your database? 0:03 Most large language models have been trained on code, including SQL. 0:07 And in this video, we're going to build an agent that's able to use that SQL knowledge to connect to your databases. 0:13 We'll be using LangGraph to build a ReAct agent, Next.js for a frontend application and models running on watsonx.ai. 0:20 We also will be running an in-memory database using SQLite. 0:23 So, let's dive into VS Code. 0:25 In VS Code, I'm going to use the Next.js CLI to set up my boilerplate project. 0:30 For this, I'll be running `create-next-app@latest`, together with the name of my project. 0:37 You need to answer a few questions, such as will you be using TypeScript, which I will, and a few other default settings. 0:43 We also will be using Tailwind, so we don't have to write any CSS. 0:47 If you build your own project, you might want to make slightly different choices. 0:52 After the CLI is finished, we can find some boilerplate code in our Text2SQL agent directory. 0:59 Make sure to move into this directory before you try to start the application. 1:04 In here, we can find a couple of files that we're going to change. 1:07 In page.tsx, you can find boilerplate code for the default Next.js app. 1:11 We're going to get rid of this and add our own code instead. 1:17 In here, I'll be creating a component called Home, which has a div element, 1:20 with some settings in Tailwind to make sure we render the page nicely. 1:24 We also need to make sure that we run this as a client-side component. 1:28 In Next.js, you can run components either client-side, or you can run code server-side in the background. 1:35 Inside this div, we'll be adding a header, 1:38 we'll be adding some placeholder messages, and finally, we'll be adding an 1:41 input box that we'll use to type a message to the large language model. 1:47 Let me paste in the code for the header, which includes a title for this application.
1:52 Then we're going to paste in the input bar and button that we need in order to submit a question. 2:00 We'll be hooking up the input element and the button element to some functions later on. 2:05 And then finally, in the middle, we're going to add some placeholder messages, 2:09 which, of course, we'll be populating with data coming from the large language model later on. 2:16 After putting in the code for the messages, I'm going to format my code and start the Next.js application. 2:23 I can do this by running npm run dev, and this should open up a new page in my browser. 2:35 In the browser, you can see we have a header, we have an input bar and also a button to submit a question. 2:42 You can see there's two placeholder messages. 2:44 The message from us is tell me a joke about SQL, 2:47 and the placeholder reply would be a couple of relational databases 2:50 walked into a NoSQL bar, and they left because they couldn't find a table. 2:55 If you've been building SQL or NoSQL databases, 2:57 you probably understand that NoSQL databases don't have any relational tables. 3:02 So let's head back to VS Code and start implementing the logic to have 3:06 the LLM tell us a joke instead of coming up with a placeholder message myself. 3:14 We're going to kill the process running in our terminal because we need to install some libraries. 3:19 We're going to install LangGraph and we're also going to be installing 3:22 LangChain community, as LangGraph is based on LangChain. 3:27 With these libraries, we'll be able to create a ReAct agent, and also we'll be able to connect to models available on watsonx.ai. 3:34 After the installation is finished, we can create a new file, which we're going to call actions.ts, 3:39 and this is where we'll implement the logic to create the ReAct agent, and also to connect to models in watsonx.ai. 3:47 We will be calling this file actions.ts, 3:50 and at the top of this file, we need to make sure that we set it as a server-side file.
3:58 And in here we can add a new function, which we are going to call message. 4:03 This message function will take an input, which is a message history, and then at the end it should return the latest message. 4:11 First, we need to import some functions from LangChain. 4:16 The first one is a chat function in order to connect to models in watsonx.ai. 4:20 The second one is a function to create a ReAct agent. 4:23 We have two functions, which we need in order to build a tool. 4:27 And then finally, we need to import two functions, 4:30 which are used to connect the messages coming from our frontend component 4:35 to the messages that we need in our backend server. 4:38 In Next.js server actions, you won't be able to pass down very rich objects, such as the LangChain message history. 4:44 So instead, we need to serialize these to JSON, and then we need to deserialize these. 4:49 So, there are a couple of helper functions in here in order to do the serialization, and later the deserialization. 4:55 The stored message type will actually be the result type of the messages in here. 5:05 So, let me add the logic here. 5:06 First, I'm going to deserialize the messages coming in. 5:10 And as mentioned, these have been serialized in order to pass them from our frontend component to our backend component. 5:17 Then we're going to set up the agent, and for this we'll be using the ChatWatsonx interface. 5:23 You can see the model we're using is Mistral Large 5:25 but, of course, you can use any of the other models available to your watsonx.ai project. 5:31 We won't be passing any tools in yet, because we first need to set up our database. 5:35 And then finally, we're going to return the response at the end of this function. 5:42 So let me save this so the file gets cleaned up. 5:45 I'm now also going to set up my environment variables for watsonx.ai. 5:49 I need my project ID, I need an endpoint and I also need to pass in my API key.
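As an illustration of the serialization step described above: Next.js server actions can only pass JSON-serializable data, so the LangChain message history has to be converted before it crosses the client/server boundary. LangChain ships helpers for this (`mapChatMessagesToStoredMessages` and `mapStoredMessagesToChatMessages` in `@langchain/core/messages`); the dependency-free sketch below only mimics the round trip with a simplified message shape, so the types and function names here are illustrative, not the tutorial's actual code.

```typescript
// Simplified stand-in for LangChain's message classes: each message
// carries a role plus text content.
type Role = "system" | "human" | "ai";

interface StoredMessage {
  role: Role;
  content: string;
}

// Serialize message objects to a JSON string so they can cross the
// client/server boundary of a Next.js server action.
function serializeMessages(messages: StoredMessage[]): string {
  return JSON.stringify(messages);
}

// Deserialize on the server side before handing the history to the agent.
function deserializeMessages(payload: string): StoredMessage[] {
  return JSON.parse(payload) as StoredMessage[];
}

const history: StoredMessage[] = [
  { role: "system", content: "You are a Text2SQL agent." },
  { role: "human", content: "tell me a joke about SQL" },
];

const roundTripped = deserializeMessages(serializeMessages(history));
```

In the real actions.ts the deserialized history is what gets handed to the ReAct agent, and the agent's reply is serialized the same way on the way back.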
5:54 For this, I'm going to create a new file, which I'll be calling .env inside my project directory. 6:00 And in here, I need to set the following variables. 6:07 I need to set my API key, my endpoint, which includes your region, 6:10 for me it's US South, the identity server, which is static, the project ID, 6:15 which is coming from your watsonx.ai project, and finally a static API version. 6:20 To get your API key and your project ID, you need to visit the watsonx.ai dashboard, 6:25 and you can find the developer access page where these credentials are available to you. 6:30 Make sure to save this file and then close it. 6:35 The next step is to connect whatever we set up in actions.ts to something we can use in page.tsx. 6:43 For this, I'm going to create a few state variables. 6:45 First, I need to create a state variable to make our input box a controlled component. 6:51 For this, I'll be creating a variable called inputMessage, which holds the value of the input. 6:58 We also need to create an update function, which I'll be calling setInputMessage. 7:04 In order to use local state, we need to use the useState hook from React, not to be confused with ReAct agents. 7:11 React is a JavaScript frontend framework, and I'm going to set the initial value to an empty string. 7:19 And also make sure to import useState from React because otherwise you won't be able to use it. 7:24 I'm also going to set up a messages local state variable later on. 7:28 But first let me connect inputMessage to our input box. 7:34 I'm going to scroll down to my input element, and in here I'm going to set value to be equal to inputMessage. 7:41 Then I also need to hook up the onChange event, 7:44 so whenever you start typing in this input box, it should update the input message with the value in the element. 7:53 So, I'm going to set this to call the setInputMessage function, 7:59 and the value will be e.target.value. 8:03 Let me save this.
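The transcript lists which credentials go in the .env file (around 6:07) but does not spell out the variable names, and the names depend on the LangChain/watsonx SDK version in use. The sketch below is therefore illustrative only; check your SDK's documentation for the keys it actually reads.

```
# Illustrative variable names only -- consult your watsonx.ai SDK docs for the exact keys.
WATSONX_AI_APIKEY=<your-api-key>                      # from the developer access page
WATSONX_AI_ENDPOINT=https://us-south.ml.cloud.ibm.com # region-specific service URL
WATSONX_AI_IDENTITY_SERVER=iam.cloud.ibm.com          # static identity server
WATSONX_AI_PROJECT_ID=<your-project-id>               # from your watsonx.ai project
WATSONX_AI_VERSION=2024-05-31                         # static API version (example date)
```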
8:05 We can now use this state variable in order to send a message to the large language model. 8:11 And for this, we need to first set up a local state message history. 8:15 For this, I'm going to create another state variable, and this time I'll be calling it messages, 8:20 which, again, has an update function, which I'm calling setMessages. 8:26 I'm using a useState hook again, and this time the useState hook is going to get a slightly different value. 8:32 It's going to be an array instead of an empty string, and I also want to set some type definitions in here. 8:38 But first, I need to import some things from LangChain in order to set the state. 8:46 The messages state variable will be of type BaseMessage, and this is LangChain's type definition of a message. 8:53 It's important to use types here in order to not confuse LangChain whenever we send a message to it later on, 8:59 as we are using LangChain and LangGraph to build our ReAct agent. 9:05 So, the type of messages will be BaseMessage, and then, of course, we can also set a SystemMessage. 9:10 You can see I imported some functions and types in order to do this. 9:14 So, I have class HumanMessage, class SystemMessage, and class AIMessage. 9:19 And these are the three types of messages that we have. 9:21 We have a system prompt, which is the SystemMessage, 9:24 we have a HumanMessage coming from us or the user, and then we have the AIMessage, which is the LLM response. 9:33 In here, I'm going to set up a new system prompt by using the SystemMessage class. 9:40 And in here, I'm going to tell the large language model what it is and what it is supposed to do for us. 9:47 So we'll be building a Text2SQL agent. 9:49 It should generate SQL based on natural language and use it with any given tools. 9:53 So, later on, we'll be creating a tool to connect to a database, 9:56 and the large language model should be aware it can use this tool to execute the SQL.
10:04 After setting this, I can then set up my function to actually send the message to the LangGraph agent. 10:12 I'll be calling this message function sendMessage. 10:15 It takes the message history, which includes all the previous messages in state, and then a new message. 10:22 So, this will be your input message. 10:23 Whatever you type in the input box will be its value. 10:27 And then before I'm sending it to the message function, which I have in actions.ts, 10:31 I'm going to make sure that I serialize it in order to transfer it to our actions.ts function. 10:38 Now, finally, if there's a response from the large language model, 10:41 I'm going to make sure I store it in local state as an AIMessage because it's coming from the large language model. 10:51 I need to scroll down to my button component and hook it up to send the message whenever I press it. 10:57 So in here, I can create an onClick handler, and what this will do is send the message. 11:08 And I can just save this. 11:11 I probably also want to make sure that I get rid of my input message whenever I send a message here. 11:18 So I can also set this value to be an empty string whenever I press send. 11:24 Something else I want to do here is set a loading state. 11:27 So, I can create a very simple loading state, which will make sure 11:30 I don't get confused whether or not I sent something to the large language model. 11:35 So, I'll be creating a third state variable, which is called isLoading, and then a function to update the isLoading state. 11:43 By default, I will say this will be false because we don't want the loading state whenever we render the application first. 11:51 So, this will be equal to false. 11:53 Whenever I send a message, I want to make sure that setIsLoading is true, so I can show a loading indicator. 12:00 And then finally, whenever we get our messages, I want to make sure that the loading state is set to false again. 12:09 And save this.
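To make the control flow of sendMessage easier to follow, here is a framework-free sketch. It simulates the three pieces of state (messages, inputMessage, isLoading) with plain variables instead of useState, and stubs out the server action, so the names and shapes here are illustrative rather than the speaker's actual page.tsx.

```typescript
// Simplified message shape standing in for LangChain's message classes.
type Message = { role: "human" | "ai"; content: string };

// Plain variables stand in for React state (useState) in this sketch.
let messages: Message[] = [];
let inputMessage = "tell me a joke about SQL";
let isLoading = false;

// Stand-in for the server action in actions.ts -- the real one would
// call the LangGraph agent and return the latest AI message.
async function callAgent(history: Message[]): Promise<Message> {
  const last = history[history.length - 1];
  return { role: "ai", content: `echo: ${last.content}` };
}

async function sendMessage(): Promise<void> {
  // Append the new human message to the existing history.
  const outgoing: Message = { role: "human", content: inputMessage };
  const history = [...messages, outgoing];
  messages = history;
  inputMessage = ""; // clear the input box
  isLoading = true;  // show the loading label on the button

  // Send the history to the backend and store the reply as an AI message.
  const reply = await callAgent(history);
  messages = [...messages, reply];
  isLoading = false; // re-enable the input once the reply arrives
}
```

In the real component each assignment would be a state setter call (setMessages, setInputMessage, setIsLoading), and the history would be serialized before crossing into actions.ts.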
12:11 I can probably hook up the isLoading state somewhere else, so whenever 12:14 I start pressing the button, I want to make sure that the button says we are loading. 12:19 So, in here, I can check for the isLoading state. 12:22 Whenever it is loading, I can change the label to be loading, so our button will show loading instead of send. 12:28 And when it's not loading, it can just say send. 12:31 I can also disable the input box. 12:34 That way you won't be able to send multiple messages when the LLM is already looking at your previous message. 12:44 So, this should be all that's needed to set up my frontend application. 12:48 I can actually run this and see what it looks like in the browser. 13:00 You can see we still have the same application. 13:01 We still have the two placeholder messages. 13:04 This time we have an input box that should be hooked up to something, 13:07 but we won't be able to render these messages yet until we make 13:10 another small change. Let's go back to page.tsx 13:14 and make sure that our placeholder messages are replaced with the actual messages created by the agent. 13:24 In page.tsx, I need to find the place where I set the messages. 13:28 And in here, I'm going to check for the existence of a message history. 13:35 For this, I'm going to check if the messages state is not an empty array. 13:39 And if it's not, I'm going to be using it to render the actual messages. 13:48 We have two types of messages. 13:49 We have messages coming from us or the user, and then we have a message coming from the large language model. 13:54 So, in here, I need to distinguish between two types of messages. 13:57 The first one is the HumanMessage, and the second one 14:01 is the message coming from the large language model or, in this case, actually the agent. 14:09 As we are using LangChain, we can use instanceof, together with 14:14 our class for HumanMessage or AIMessage, to distinguish between these two types.
14:19 So, we can add some if-else statements: if a message 14:23 is an instance of HumanMessage, we can render the first type of message. 14:30 If it's an instance of AIMessage, we can render the second one. 14:35 And this way, we will make sure that we don't render the wrong message. 14:39 We won't be rendering any of the SystemMessages because these are just meant for the LLM or the agent. 14:54 Of course, we need to make sure that whatever data is in here 14:57 is being rendered dynamically from the state 14:59 and isn't our placeholder message. For this, we can use the message.content field. 15:11 And we need to make sure that this is rendered as a string, as in theory, content could be an object. 15:16 But for this case, I know for sure it will be a string, so I can use the as string type assertion. 15:21 And I can do the same for the AIMessage. 15:24 As we are using React, we need to make sure whatever we return from a map function has a key. 15:30 And for this, I'll be setting the key to be the message type plus the index count. 15:36 And I can do the same for the AIMessage. 15:39 Make sure to format this and then save it. 15:43 If you would visit the application in the browser, you can see an empty screen 15:46 because we don't have any messages in history that should be rendered there. 15:53 With this set up, I can now do the final part of this video, and I can create my database. 15:58 For this, I'll be using SQLite3. 16:01 So, I need to kill the process running on my terminal and then I can install SQLite. 16:09 The library is available on npm, so I can just run npm install sqlite3. 16:14 Of course, you could also run a database in the cloud. 16:16 But for this video, I decided it's easier to run something directly in my application. 16:23 After installing the library, I need to create a couple of files. 16:26 First, I will be creating a constants.ts file.
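The instanceof-based branching described around 14:19 can be sketched without React or LangChain. The classes below are simplified stand-ins for LangChain's message classes; only the branching logic is the point, and the names are illustrative.

```typescript
// Simplified stand-ins for LangChain's HumanMessage / AIMessage /
// SystemMessage classes, just to show the instanceof branching.
class HumanMessage {
  constructor(public content: string) {}
}
class AIMessage {
  constructor(public content: string) {}
}
class SystemMessage {
  constructor(public content: string) {}
}

type AnyMessage = HumanMessage | AIMessage | SystemMessage;

// Decide which bubble style a message should get. System prompts are
// skipped entirely, mirroring the rendering logic described above.
function bubbleFor(message: AnyMessage): "user" | "assistant" | null {
  if (message instanceof HumanMessage) return "user";
  if (message instanceof AIMessage) return "assistant";
  return null; // SystemMessage: meant for the LLM, never rendered
}
```

In the real page.tsx this branch lives inside the map over the messages state, with each branch returning the appropriate JSX bubble and a key built from the message type plus index.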
16:33 In this file, I'll be creating the CREATE SQL queries, which I will be needing later on 16:38 because we're going to use these CREATE SQL queries not only to seed the database, 16:43 but also to give the large language model additional information on the database schema. 16:49 So, in here, I'll be first adding information about the customer table. 16:53 So, we'll be having a customer table in our database, which has ID, email and name. 16:57 And then we'll be having an order table, which has more information about orders these customers might have placed. 17:05 And you can see there's a foreign key relation between customer and order, 17:09 meaning that we would be able to see which customer has placed what order. 17:14 Let me save this and then create another file, which I'm going to call database.ts. 17:20 In the database.ts file, I'm going to create the functions to seed the database 17:25 and also to execute a SQL query against this database. 17:30 Moving on to database.ts. 17:32 In here, I need to make sure that I set it up to be a server-side function. 17:36 And then I can start to import the constants we just created, which will be the customer table and order table. 17:43 And then I can start to create the database function. 17:46 First, I want to connect to the database. 17:49 And for this, I need to set up a new connection to SQLite. 17:53 Of course, I need to import SQLite as well. 18:00 And then I can start to create a new database. 18:04 As mentioned before, I'll be running this in memory because it's the easiest to do for this video. 18:10 Then I need to create a seed function. 18:12 In the seed function, I'm going to run both CREATE queries. 18:16 So, first, I'm going to create a table called customer. 18:18 Then I'll be creating a table called order. 18:20 And then, of course, I need to seed these with actual data as well. 18:25 So, I'll be adding two functions. 18:27 The first one will be db.run. 18:30 This will be used to seed the customer table.
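A possible shape for constants.ts, based on what the video describes: a customer table with id, email and name, and an order table with a foreign key back to customer. The exact column names beyond those are illustrative. Note that order is a reserved word in SQLite, which is one reason the double-quoting guideline added to the system prompt later matters.

```typescript
// Sketch of constants.ts. These CREATE statements are used both to seed
// the in-memory database and as schema documentation for the LLM.
// Column names beyond id/email/name are illustrative.
const CUSTOMER_TABLE = `
CREATE TABLE "customer" (
  "id" INTEGER PRIMARY KEY,
  "email" TEXT NOT NULL,
  "name" TEXT NOT NULL
);`;

const ORDER_TABLE = `
CREATE TABLE "order" (
  "id" INTEGER PRIMARY KEY,
  "customer_id" INTEGER NOT NULL,
  "total" REAL NOT NULL,
  FOREIGN KEY ("customer_id") REFERENCES "customer"("id")
);`;
```

In the real constants.ts these would be exported so both database.ts (for seeding) and actions.ts (for the tool description) can import them.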
18:33 And then I'll be running another db.run. 18:36 And this one will be used to seed the orders table. 18:39 So, in here, I can place the SQL query to populate these tables with data. 18:44 For customers, I'm going to input some mock data. 18:49 And I'll be adding 10 users, which are mock data, of course. 18:53 And then I'll be adding more orders as well. 18:56 And these orders are linked to customer IDs. 18:58 So, later on, the agent would be able to match customers to orders. 19:04 Paste this in as well. 19:06 You don't have to worry about the formatting because these are template literals. 19:13 I now have the seed function. 19:14 I need to make sure I run this function whenever my application starts so I'm able to retrieve data from the database. 19:20 There are multiple places where you could do this. 19:22 You can probably set up a nice function that runs whenever you start your application. 19:26 I'm going to be a bit lazy. 19:28 And inside my page.tsx, I'm going to import the seed function. 19:36 And whenever my application mounts, I'm going to run the seed function. 19:40 For this, I'll be using the useEffect lifecycle hook from React. 19:46 And I'm going to make sure it only runs when the application first mounts by setting an empty dependency array. 19:52 And in here, I need to run the seed function. 19:57 As mentioned, there are better ways to do this. 19:59 But for the purposes of showing you how to build a Text2SQL agent, this is one of the easiest ways to get it done. 20:06 In my database.ts, I can also create a function to execute a SQL query against the database. 20:13 For this, I will be creating a function which I'm going to call execute. 20:23 This execute function will take a SQL query as input. 20:28 And this will be a string. 20:30 And then it should return data coming from the database. 20:33 For this, I'm going to set up a promise because I need to make sure I wait for the data returned by the database.
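The promise-wrapping pattern for execute can be sketched as follows. The sqlite3 library's API is callback-based, so the function wraps a db.all-style call in a Promise. Here a tiny fake db object stands in for the real sqlite3 Database so the sketch runs without the dependency, and resolving (rather than rejecting) with the error message mirrors the idea of feeding errors back to the LLM so it can retry.

```typescript
type Row = Record<string, unknown>;

// Fake stand-in for sqlite3's Database#all(sql, callback).
// The real database.ts would use the actual sqlite3 connection here.
const db = {
  all(sql: string, cb: (err: Error | null, rows: Row[]) => void): void {
    if (!sql.trim().toLowerCase().startsWith("select")) {
      cb(new Error("only SELECT is supported in this sketch"), []);
      return;
    }
    cb(null, [{ count: 10 }]); // canned result for illustration
  },
};

// Wrap the callback API in a Promise. On error, resolve with the error
// message instead of rejecting, so the agent can see what went wrong
// and try a corrected query.
function execute(sql: string): Promise<Row[] | string> {
  return new Promise((resolve) => {
    db.all(sql, (err, rows) => {
      if (err) resolve(`Database error: ${err.message}`);
      else resolve(rows);
    });
  });
}
```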
20:41 You can see I'm also returning errors. 20:43 So, in case the database responds with an error, I'm going to return this error to the large language model 20:48 so it might be able to try again and do better this time. 20:52 Let me clean up this code and then save it. 20:55 We've now set up our database. 20:56 So, we have a seed function, and we have a function to execute. 21:00 And I can use both of these to set up a tool, which I'm going to do in actions.ts. 21:07 In actions.ts, I'm going to create a new tool. 21:11 I already imported the tool function from LangChain. 21:15 So, the only next thing for me to do is to actually define this tool. 21:19 And to define this tool, I'm going to create a constant, which I call GetFromDB. 21:27 And this will be using the tool function from LangChain. 21:30 First, it needs to set the callback. 21:33 So, this callback will be executed whenever the tool is being called. 21:38 And for this, we'll be using the execute function we created in database.ts. 21:43 So, I'll be adding this to my imports. 21:47 And then I also need to set the tool definition. 21:49 So, this will be used by the LLM to decide whether or not it should call your tool. 21:54 For this, I'm going to paste in a little bit of placeholder code. 22:00 As you can see, we have defined a tool, which we call GetFromDB. 22:03 The tool description is get data from a database, and then we paste in the database schema. 22:08 So, this is important because we want the large language model to be aware of what data is available in the database. 22:14 Make sure we import these from our constants file as well. 22:20 And then finally, you can see the input schema includes SQL. 22:23 And there we say it's a SQL query to get data from a database. 22:26 Put quotes around the field and table names. 22:28 It's important to make sure the large language model understands how it should form a SQL query. 22:36 Finally, we need to hook up this GetFromDB function to our actual agent.
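The GetFromDB tool has three ingredients: a callback, a description that embeds the schema, and an input schema for the SQL string. The real code uses LangChain's tool() helper (typically with a zod input schema); the plain-TS descriptor below is only a dependency-free sketch of those same three pieces, with hypothetical names and a stubbed execute function.

```typescript
// Illustrative schema text -- in the real code this comes from constants.ts.
const DB_SCHEMA = `CREATE TABLE "customer" (...); CREATE TABLE "order" (...);`;

// Minimal shape of a tool: name, description (what the LLM reads when
// deciding whether to call it), and the callback to run.
interface ToolDescriptor<I, O> {
  name: string;
  description: string;
  call: (input: I) => Promise<O>;
}

// Stand-in for the execute function from database.ts.
async function execute(sql: string): Promise<unknown[]> {
  return [{ ok: true, sql }];
}

const getFromDB: ToolDescriptor<{ sql: string }, unknown[]> = {
  name: "GetFromDB",
  description:
    `Get data from a database with this schema:\n${DB_SCHEMA}\n` +
    `Input: a SQLite query. Put double quotes around field and table names.`,
  call: ({ sql }) => execute(sql),
};
```

In the real actions.ts the descriptor built with tool() is then passed to the agent via its tools field, which is what lets the ReAct loop decide when to run SQL.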
22:40 And we can do this in the tools field. 22:43 I'm going to save this. 22:44 And before I rerun my application, I want to make sure that I update my system prompt 22:48 so the large language model is aware of how it should use this tool. 22:53 In page.tsx, we have a very simple system prompt, which I'll be 22:58 making a bit more complex by adding more guidelines for the large language model. 23:05 So, instead of saying you should create a SQL query based on natural language, 23:09 we're going to tell it you should create a SQLite query based on natural language. 23:14 You should use the GetFromDB tool whenever possible. 23:16 And you should enclose field names and table names in double quotes, even if they contain no special characters. 23:23 So, this will help the large language model to create a better SQL query. 23:28 Make sure to save this and run the application by running npm run dev. 23:33 And we should now be able to go back to the browser and see the application in action. 23:41 Make sure to refresh the page and let's see if the large language model and the 23:45 agent can come up with a better joke about SQL than I did in the beginning. 23:48 So, tell me a joke about SQL. 23:53 For this, it doesn't need to create a SQL query. 23:56 Instead, it should be able to just use its own training data. 24:01 The joke is actually: why did the database go to therapy? 24:04 It had too many join issues. 24:06 So, if you've built SQL databases before, you can understand why this could be funny. 24:10 So, let's ask a more complex question, which actually requires the large language model to use our tool. 24:16 So, we can ask it how many customers do I have? 24:20 And for this, it should query the customer table. 24:28 You can see it generated a SQL query right here that we actually wanted it to execute. 24:32 So, give me the number of customers. 24:38 And send, and hopefully, it'll come back with the actual number instead of just the SQL query I should use.
24:44 You can see we should have 10 customers in our database. 24:48 If we go back to VS Code, we should be able to see the SQL query generated. 24:55 In the terminal, you can see I'm actually printing the response of the database. 24:59 And I'm also printing the SQL query the LLM generated. 25:03 Let's ask a more complex one where we ask it to join both tables. 25:08 We make sure to refresh so we have a clean message history, and we don't explode the context window. 25:15 So, which customer placed the most orders? 25:22 This might take a bit longer as now it needs to join multiple tables in order to get the answer. 25:27 And you can see it replied Lucas Bill placed the most orders. 25:30 If we go back to VS Code, we might be able to see what SQL query it used in order to generate this answer. 25:39 In VS Code, you can see it did a join, so it's actually joining the customer table 25:43 with the order table, and finally the result will be Lucas Bill. 25:47 If you're building a text to SQL agent, you might want to think about guardrails, because you 25:52 don't want to give the LLM unlimited control of your database. 25:59 And that's how easy it is to build a text to SQL agent. 26:01 We used LangGraph to build a ReAct agent with models available on watsonx.ai. 26:05 And finally we used the SQLite in-memory database to return our data. 26:10 If you want to know more about building this application yourself, 26:13 make sure to find the link to the GitHub repository in the video description.