Learning Library


Building a Generative AI Pet Naming App

Key Points

  • David Levy demonstrates building a full‑stack AI‑powered app with a React TypeScript UI, a TypeScript Express server, and a Python FastAPI backend to generate pet‑name suggestions.
  • The app collects pet descriptions, sends them to a generative LLM, and returns a creative name with an explanation (e.g., “Lady Gobbledygawk”).
  • He explains prompt engineering in watsonx.ai Prompt Lab, using clear instructions and few‑shot examples to shape LLM output, and shows how to adjust model parameters and view generated code.
  • The tutorial walks through cloning the repository, creating a Python virtual environment, and installing the FastAPI dependencies to prepare the backend for integration.


# Building a Generative AI Pet Naming App

**Source:** [https://www.youtube.com/watch?v=2hB3XzfpGtI](https://www.youtube.com/watch?v=2hB3XzfpGtI)
**Duration:** 01:05:01

## Sections

- [00:00:00](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=0s) **Building a Generative AI Pet Naming App** - IBM Technology Engineer David Levy demonstrates how to create a React TypeScript UI, a TypeScript Express server, and a Python FastAPI backend that leverage watsonx.ai prompt engineering to generate pet name suggestions with explanatory reasons.
- [00:03:15](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=195s) **Setting Up FastAPI with Watsonx** - The speaker walks through activating a virtual environment, installing dependencies, configuring a .env file with Watsonx API credentials, launching the FastAPI via Uvicorn, and verifying the health and summary endpoints on Swagger UI.
- [00:06:19](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=379s) **FastAPI Structure with Prompt Lab Integration** - The speaker explains how they organize a FastAPI app by mirroring watsonx.ai Prompt Lab configurations—models, parameters, and few‑shot examples—into data directories for rapid, iterative development.
- [00:09:32](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=572s) **Integrating Watsonx AI with LangChain** - The speaker explains how the `generate_text_response` utility builds a prompt, optionally includes few‑shot examples, retrieves a model via `ModelRequest.get_model` using the watsonx.ai SDK, wraps it in a `watsonxLLM` for LangChain compatibility, and assembles the full chain with an output parser.
- [00:12:39](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=759s) **Defining Typed JSON Responses with FastAPI** - The speaker demonstrates importing a Pydantic class, setting it as the `response_model` for a FastAPI route, and using it to enforce a specific JSON output shape (e.g., a `generated_text` string) for easier team coordination.
- [00:15:47](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=947s) **Integrating Pydantic JSON Response** - The speaker walks through importing the GeneratePetNameResponse schema, modifying the endpoint to use LangChain’s PydanticOutputParser with format instructions, and switching from a text parser to a JSON response parser for the generate_pet_name route.
- [00:19:01](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1141s) **Transferring Prompt Lab to Code** - The speaker explains how to extract a prompt’s instructions, few‑shot examples, and model parameters (via Curl) from Watsonx.ai Prompt Lab and recreate them as new JSON, example, and template files, swapping the IBM Granite model for Mixtral.
- [00:22:14](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1334s) **Integrating Format Instructions in API** - The speaker demonstrates adding a format_instructions field, embedding it with prompt examples and a PydanticOutputParser, returning a dictionary containing generated_text, and validating the response against Swagger documentation.
- [00:25:26](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1526s) **Setting Up Env Vars for Integration** - The speaker explains running a setup command to create example environment files that link the React UI, Express server, and FastAPI, enabling each component to communicate through defined endpoints.
- [00:28:31](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1711s) **Creating a New Pet Namer Route** - The speaker walks through adding a petNamerRoutes.ts file, importing Axios, loading the FastAPI API_URL from process.env, and defining an async POST handler (generate_pet_name) that forwards requests to the FastAPI with basic try/catch error handling.
- [00:31:49](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=1909s) **Routing UI Data to FastAPI** - The speaker details how to capture dynamic UI input in an Express route, forward it as a POST request to a FastAPI endpoint using an environment‑based URL, and return the generated text (name and description) from the response.
- [00:35:09](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2109s) **Validating POST Endpoint via Postman** - The speaker explains sending UI data with Axios to an Express route that returns generated text, then uses Postman to test the health and pet‑naming endpoints before integrating them with a FastAPI backend.
- [00:38:14](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2294s) **Using Carbon React for UI** - The speaker demonstrates how to employ the Carbon design system in a React project—adding headings, combo boxes, checkboxes, and other components—while navigating the file structure and explaining the steps for a less‑experienced frontend developer.
- [00:41:19](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2479s) **Simple ID/Text List Filtering** - The speaker explains how to populate a Carbon UI component with an array of items containing IDs and text, enabling built‑in filtering and state management, typically sourced from an API.
- [00:44:29](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2669s) **Toggle Gender Checkbox and Tag Input** - The speaker explains a Boolean‑based gender toggle that disables during API loading and a custom Carbon‑styled input that creates descriptor tags when entered.
- [00:47:33](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=2853s) **Enter-Key Input Handling and Tile Rendering** - The speaker explains adding an onKeyDown Enter listener to submit text, dynamically enabling/disabling a button based on input state, storing entries in a descriptor array, and rendering each entry as a Carbon Tile.
- [00:50:36](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3036s) **Implementing Carbon Button Set** - The speaker notes a personal liking for box‑shadow, then walks through adding a Carbon React button set with primary “Submit” and secondary “Clear” buttons, configuring their kinds and spacing, and wiring simple state‑reset functions for clearing and submitting.
- [00:53:41](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3221s) **Implementing Loading State with Accordion** - The speaker explains adding a loading state using accordion skeletons, constructing a comma‑separated descriptor string, and wiring the API call to display results.
- [00:57:00](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3420s) **Extracting API Response Data** - The speaker explains how to use TypeScript and Axios in a React UI to access and name the returned fields—generated text, name, description, and the original request—after a successful call to the Express server.
- [01:00:08](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3608s) **React to FastAPI Request Cycle** - The speaker explains how a React submit handler toggles loading, routes a request through an Express server to FastAPI, receives and displays the generated response and original payload, and outlines error handling for cases where the LLM does not return proper JSON.
- [01:03:21](https://www.youtube.com/watch?v=2hB3XzfpGtI&t=3801s) **Debugging Express Server Errors** - The speaker forces a 500 response to test error handling, fixes the Express server, and urges viewers to expand the full‑stack generative AI demo with new routes, prompts, and UI.

## Full Transcript
0:00 Hi, my name is David Levy. 0:02 I’m a Technology Engineer with IBM, and today we’re going to build an application on AI Applied. 0:10 Today I’m going to walk you through the process of creating a React TypeScript UI; 0:14 ...setting up a TypeScript Express Server for handling UI logic; 0:17 ...and integrating it with a Python FastAPI backend, 0:20 ...that leverages Generative AI capabilities by creating a pet naming suggestion application. 0:25 Let’s get started. 0:26 So when we come here, we can see right away that this is our application. 0:31 It’s asking us to describe our future pet, and it’s going to give us a suggestion for its name and also a reason why it did it. 0:37 So I think I’m going to get a dog. 0:40 It’s going to be a girl. 0:40 And she’s going to be sweet; cute; cowardly; sleeps under my bed; and keeps me up at night by talking to me. 0:54 Very weird animal. 0:55 So let’s see what kind of name we get back, 1:01 ...Lady Gobbledygawk, and then it gives us a reason why it named it such. 1:06 Now in order to get an LLM to respond in such a way, we’re going to have to work with a few different things. 1:13 One is prompt engineering, and the way we’re going to do that is we’re going to go to the watsonx.ai Prompt Lab. 1:20 So let’s start there. 1:23 First I’m going to show you a Summary Generation Prompt example. 1:28 So right now I’m asking it to summarize the history of bicycles. 1:33 And when I click generate, it gives me exactly what I want. 1:35 And the way I’ve accomplished this is first by giving a clear and concise instruction, 1:41 ...and then also using something called few-shot examples. 1:45 So we give a long string that we want to be summarized, and then we give the example of what we’re expecting back. 1:52 So we do that a couple of times so when we ask it to do live generation, it provides the answer to us exactly the way we want it.
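The few-shot pattern described above can be sketched as a small prompt builder. This is a minimal illustration of the idea, not the Prompt Lab's actual formatting; the `Input:`/`Output:` labels are assumptions:

```python
def build_few_shot_prompt(instruction, examples, new_input):
    """Sketch of the few-shot pattern from the Prompt Lab: a clear,
    concise instruction, a handful of input/output pairs, then the
    live input left open for the model to complete."""
    shots = "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {new_input}\nOutput:"
```

Because the prompt ends right after the final `Output:`, the model's most likely continuation is an answer in the same shape as the examples.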
2:00 One of the things I want to show you within the watsonx.ai Prompt Lab is that we have options for model parameters, 2:08 ...and also we have a view of the code, which is going to come in handy when we want to transpose this to our FastAPI. 2:16 So let’s clone our repo and we’ll start working on the FastAPI. 2:21 So when working with Python, it is a good idea to set up a virtual environment. 2:25 The way we could do that is we could – I already have one set up and we’re going to just activate it, 2:33 ...and then we’re going to install the requirements file that’s located inside of the repo. 2:44 Perfect. 2:46 So now that we have the GitHub repo cloned on our machine, let’s first install the Python dependencies. 2:54 When dealing with Python, it is good practice to install the dependencies of any kind of project into its own virtual environment. 3:04 And what this is going to do is it's going to create its own segmented environment with no packages installed at all, 3:09 ...which we can then use to install all the unique packages that we have in this particular repo. 3:15 So now that we’ve created the virtual env, Applied AI, let’s activate it. 3:26 And we can see right on the screen that we’re now within that virtual environment. 3:30 And the next thing we have to do is install the requirements.txt, which is just a frozen dependency requirement file. 3:42 So we’re just going to go ahead and install it into this particular virtual environment. 3:46 Now that we’ve installed the dependencies for our FastAPI, we’re going to copy the .env.example and create a new .env. 3:55 In the .env we’re going to want to grab the API key, the Project ID, and the URL that we can find in the watsonx.ai Prompt Lab. 4:05 So if we go to the code and we open it up, we can grab the Project ID right from the Curl. 4:18 And for the API key, we go back to our Identity & Access Management from our cloud. 4:25 We get there by the Manage dropdown, go to API keys, and create a new API key.
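The .env step above can be made robust with a small startup check. The variable names below are illustrative (check the repo's .env.example for the exact keys); the point is to fail fast if a credential is missing rather than on the first model call:

```python
import os

# Variable names are assumptions; the repo's .env.example defines the real keys.
REQUIRED_VARS = ("WATSONX_API_KEY", "WATSONX_PROJECT_ID", "WATSONX_URL")

def load_watsonx_config(env=os.environ):
    """Collect the watsonx.ai credentials from the environment, raising a
    clear error at startup if any are missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```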
4:39 Never share your API key. 4:41 Don’t ever push it up to GitHub. You’ll get in trouble. 4:46 And now that we have this, we can start up the application. 4:49 So from our API directory in the code repo, we can run the command uvicorn server:app with --reload. 4:57 This is very good for development. 5:00 But we can go to our Swagger UI and you could see that we have a couple of prebuilt routes in the repo we’ve provided for you. 5:10 We have a health route just to make sure that the FastAPI is up, and you can see that it’s up. 5:15 And then we also have that generate_summary endpoint. 5:19 And we have that there just to recreate exactly what I showed you from the Prompt Lab where you have instructions, 5:26 ...you have a few-shot examples, you have the data, you send the data and you get a new summarization. 5:33 I wanted to show how to actually apply that in the FastAPI. 5:37 And with FastAPI you have these routes that you could test against, and you can see exactly what we’re looking for. 5:45 So we’re looking for a template model, we’re looking for a prompt template name, and then we’re looking for additional kwargs. 5:50 And I’m going to show you exactly how to use it from this Swagger UI. 5:57 So we’re going to send over the exact thing that we had, the history of the bicycle. 6:03 We’re going to send it over as data to our FastAPI and we’re going to ask it to summarize it. 6:10 And now it’s hitting our FastAPI and it provides us a nice summarization. 6:17 How do we do that? 6:19 There’s a couple of components within the FastAPI that I want to show you. 6:24 So if you look at the way the application is structured, we have a data directory. 6:29 In the data directory, we have a couple of additional directories, 6:33 ...one called examples, one called models, and one called prompt templates. 6:37 Each of these I have used the Prompt Lab to dictate what I put in there.
6:44 So I’m basically just transposing what I’ve done in the watsonx.ai Prompt Lab into the application, 6:50 ...and that makes this process so much easier. 6:53 So if we look at the model, we can see all the parameters; 6:57 ...the ID for the model that we’re using, the parameters for decoding, and min new tokens, et cetera. 7:02 And if we look at our Prompt Lab and open up the view code, we can see all the same information. 7:10 So basically what I did was I copied what was successful for me in the Prompt Lab, 7:15 ...transposed it into my application, into that model JSON, and use that as the model parameters for my endpoint call. 7:24 Similarly, we have all of these examples. 7:27 We’re using few-shot to implement better responses from the LLM. 7:34 So the way we can do that is, similarly name it generate summary as a text file. 7:40 And we have these examples, identical to the ones we have here. 7:47 And the reason I’m doing this is because we’re going to be building this iteratively. 7:51 Like we want to be able to go directly from watsonx.ai, 7:54 ...into our code base and then integrate it with the UI and just be able to work really, really quickly. 7:59 It’s just a very nice pattern to use to get really great results when dealing with something like an LLM. 8:05 Lastly, we have the prompt template. 8:08 Now a prompt template is something that we can use to give instructions, use the examples that we’ve placed, 8:15 ...and take the data that we’re sending via the API and send all of that to the LLM to return back the responses we’re looking for. 8:23 So if we look at that, we have the instruction, summarize the following text; we have the examples; we have the input, 8:29 ...which is that data that we’re sending directly from that endpoint that we showed in the Swagger docs. 8:36 And that’s exactly how we implement it. 8:38 Now what we’re using it as is in the server.py file. 8:46 Here you could see the generate_summary endpoint.
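The data-directory layout described above (models, examples, prompt templates keyed by the same name) can be sketched as a single loader. The file layout is assumed from the walkthrough, not copied from the repo:

```python
import json
from pathlib import Path

def load_prompt_assets(name, data_dir="data"):
    """Resolve a name like "generate_summary" into the three Prompt Lab
    artefacts mirrored on disk: model parameters (models/<name>.json),
    optional few-shot examples (examples/<name>.txt), and the prompt
    template (prompt_templates/<name>.txt)."""
    base = Path(data_dir)
    model = json.loads((base / "models" / f"{name}.json").read_text())
    # Few-shot examples are optional: not every watsonx.ai call needs them.
    examples_path = base / "examples" / f"{name}.txt"
    examples = examples_path.read_text() if examples_path.exists() else ""
    template = (base / "prompt_templates" / f"{name}.txt").read_text()
    return model, examples, template
```

Keeping all three artefacts under one shared name is what lets a single request parameter (e.g. `generate_summary`) pull the matching model, examples, and template together.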
8:51 We’re looking for three parameters; the template model, the prompt template name, and the prompt template kwargs. 9:00 Now the template model is defaulting to generate_summary, 9:03 ...and that is because in the data directory those are the names of the files that we’re using. 9:09 If you look at the functionality in the repo when you get it, you’ll see that we’re looking at these directories. 9:14 We’re grabbing the names of those directories and using those, 9:17 ...as the key to a dictionary that we’re going to return back to us when we hit it with the correct parameter. 9:24 So if we look for something that says generate_summary, it’s going to grab the examples generate_summary, 9:28 ...it’s going to grab the model generate_summary, and it’s going to grab the prompt template. 9:32 And from here we have a utility function called generate_text_response. 9:38 And from here there’s a lot of functionality that’s going on exactly what I said. 9:42 It’s going to say, look at that template model that we’re sending in a parameter and grab that model information from that JSON. 9:48 Same with the prompt_template_request. And same with the examples. 9:52 A little bit different with the examples, though, 9:53 ...because you’re not going to want to use examples for every API call or every watsonx.ai call that you make. 10:01 So we have the option just to leave that as blank. 10:03 So if you don’t have any few-shot examples, no problem. 10:07 And after this, we’re going to use this method called get_model that’s in a class called ModelRequest. 10:14 Now this is where we’re going to be working directly with the watsonx.ai SDK. 10:21 So you could look at exactly what we’re doing with the documentation on watsonx.ai, 10:25 ...but very basically you have the model_id, which is the model name, your credentials, and your parameters and your project id.
10:33 Now a little bit different here is that we’re going to add this into a LangChain invocation, 10:38 ...so we have to wrap that model inference within a watsonxLLM wrapper, which makes it runnable within the LangChain. 10:48 So if we go back to the server, we can see that we have this chain, which is starting with the prompt_template, 10:54 ...grabbing that model, and then ending it with the output_parser. 10:57 So the output_parser is something that’s provided to us by LangChain, and it’s going to provide a string from the response. 11:02 And once that is done, we call it and we return it as generated_txt. 11:15 Once that is completed, the function returns back something called generated_txt, 11:20 ...which is just a string, which you could see in this response body. 11:25 So the next thing we’re going to do is now create a new endpoint called pet namer. 11:31 Now the reason this is going to be a little bit different is that we’re going to want to coerce the model to return back JSON. 11:37 And if we know LLMs, they’re not the easiest to ensure that we always get the data structure we’re expecting. 11:43 So what we’re going to pull in is something called a PydanticOutputParser. 11:48 Now we have this here, it’s part of the LangChain package, 11:52 ...but Pydantic is used throughout this application and I could even show you something really quickly. 11:56 So if we go to the generate_summary endpoint, 12:03 ...and we want to add a new class to ensure that everything that comes back as a response from generate_summary is in this JSON format, 12:10 ...we can add a new class using Pydantic. 12:13 So let’s try that out really quickly. 12:15 I’ll add a generate_summary_response.py. 12:23 And we’ll open up one that already exists and we’re going to rename it from json_response to generate_summary_response. 12:37 We’re going to only be using the base model. 12:39 And what we want to return back is an object that has a generated_text field that returns back a string.
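The chain shape described above (prompt template, then model, then output parser) can be sketched with plain callables. This is a stand-in to show the flow only; the repo composes real LangChain Runnables with a WatsonxLLM wrapper around the watsonx.ai SDK:

```python
def build_chain(prompt_template, llm, output_parser):
    """Stand-in for the LangChain pipeline (prompt | model | parser),
    using plain Python callables instead of Runnables."""
    def invoke(inputs: dict) -> str:
        prompt = prompt_template.format(**inputs)  # fill the template
        raw = llm(prompt)                          # call the wrapped model
        return output_parser(raw)                  # e.g. reduce to a string
    return invoke
```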
12:51 Now what we could do here, let’s import it into our init. 12:56 So from generate_summary_response we’re going to import the generate_summary_response. 13:01 And if we go back to server, let’s import it as one of our schemas. 13:08 And if we go back to the generate_summary, 13:11 ...we can say that every response we want from this route, we want it to be in that exact shape, that exact JSON shape. 13:21 And now this is good for obviously a mixed team, 13:25 ...because if we know exactly what’s coming in and exactly what’s coming out, it makes coordinating our code much, much easier. 13:32 So what we’re going to do is we’re going to take that class and we’re going to add it as a response_model to our endpoint. 13:45 And when we go back to our FastAPI docs, 13:49 ...we can see that now the expected output is going to be this JSON object with a generated_text field and guaranteeing a string output. 13:59 The way we can utilize this pydantic class pattern to coerce the model to return back JSON objects, 14:07 ...is by using the same exact pattern, and instead of doing just the straight string output parser, 14:13 ...we’re going to use a pydantic output parser to say, this is what we want the response to look like. 14:18 So let’s create that right now. 14:20 We’re just going to copy and paste our generate_summary route and rename it to generate_pet_name. 14:31 And if we look at the way that we have the GenerateSummaryResponse, we could take something that I’ve already written out, 14:38 ...which is called the JsonResponseTemplate, and we’re going to add another class called GeneratePetNameResponse. 14:52 And we’re going to say generated_text equals this JSONResponse class. 15:01 So the output of this generate_pet_name is going to be almost identical to this object, 15:10 ...which is going to give us a name and a description, but it’s going to be a field within an object that we’re returning. 15:15 So if we want to see that in action, we can see exactly what we’re going to get back.
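The nested response shape described above can be sketched as follows. Dataclasses stand in for the Pydantic models here so the sketch is self-contained; in the repo these are Pydantic `BaseModel` subclasses, which is what lets FastAPI validate and document the `response_model` automatically:

```python
from dataclasses import asdict, dataclass

@dataclass
class JsonResponseTemplate:
    """Inner object: the pet name and the reason for it."""
    name: str
    description: str

@dataclass
class GeneratePetNameResponse:
    """Outer object: the inner result sits under a generated_text field."""
    generated_text: JsonResponseTemplate

def to_payload(resp: GeneratePetNameResponse) -> dict:
    # FastAPI serialises the response_model for you; asdict() plays
    # that role in this sketch.
    return asdict(resp)
```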
15:20 Hold on, let me just rename this to generate_pet_name. 15:30 We’re going to reclassify this route as a generate pet name based on some descriptors. 15:39 And now we could look at our Swagger documentation, 15:42 ...refresh it, we’re going to have a brand new route in here called generate_pet_name. 15:47 And if you look at the shape of the data that we’re supposed to get back – hold on, we actually have to import it first. 16:04 So first we just have to grab the GeneratePetNameResponse from the schema, 16:10 ...and import it into our server.py to be used by our generate_pet_name endpoint. 16:21 So let’s go to our GeneratePetNameResponse and add it. 16:30 Everything looks good. 16:34 And now we should see exactly what we’re looking for. 16:38 We’re telling anyone who’s using this route that we’re going to receive something that looks exactly like this. 16:48 It’s going to be a JSON object. 16:50 And we’re going to have a name and a description, and the name is going to be a string and the description’s going to be a string. 16:57 So the last thing we have to do is let’s use that utility function that we used for the summary, 17:08 ...and we’re just going to make one minor change. 17:10 So let’s change this from generated_text_response to generated_json_response. 17:17 And instead of using the StrOutputParser, we’re going to use the PydanticOutputParser, 17:26 ...and we’re going to force it to use pydantic_object equals the JSONResponseTemplate. 17:39 And from there we’re going to grab – and I’ll show you exactly in the documentation where that is. 17:43 So if we look at exactly the Pydantic Parser documentation from LangChain, 17:55 ...we’re going to grab something called the format_instructions. 17:58 And the way we get that is that when we define the PydanticOutputParser and we give the pydantic_object the class we’re using, 18:05 ...and this time it was a JSON object, 18:08 ...we can just grab the get_format_instructions from that parser, and that’s what we’re going to use.
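What `get_format_instructions` returns can be approximated with a small stand-in. The exact wording LangChain's `PydanticOutputParser` emits differs (it includes a JSON Schema of the Pydantic class); only the idea, a prompt fragment that pins down the output shape, is shown here:

```python
import json

def get_format_instructions(fields: dict) -> str:
    """Rough stand-in for PydanticOutputParser.get_format_instructions():
    turn a field -> type mapping into a prompt fragment that coerces the
    LLM to answer in that JSON shape."""
    example = {field: f"<{typ}>" for field, typ in fields.items()}
    return (
        "Your response should follow this format, as a single JSON object "
        "and nothing else:\n" + json.dumps(example, indent=2)
    )
```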
18:14 So let’s call it format_instructions equals parser.get_format_instructions. 18:32 Perfect. 18:33 And this is going to be a string. 18:38 So the next thing we have to do is now we just have to add the examples, the model, and the prompt template. 18:44 So now that we’ve used the PydanticOutputParser, 18:48 ...to try to convince or coerce the LLM to return back a JSON object to us based on that class we just created, 18:56 ...we have to update the way that we’re making prompt_template and the way we’re using the examples. 19:01 And of course, we have to grab the model parameters from the prompt we’ve worked on in order to get the correct responses. 19:08 So let’s go back to the watsonx.ai Prompt Lab. 19:12 And this is the prompt that I’m going to transpose from the Prompt Lab into our code base. 19:18 So you could see that we have the instructions, we have the few-shot examples, 19:22 ...we have the parameters that we have set, we’re using sampling, 19:27 ...and we also have this Curl is where we’re going to take all that information. 19:33 Let’s go to our code base. 19:35 We’re going to create a new model file, so let’s name it pet_namer.json; we’re going to name a new examples file, 19:45 ...pet_namer.txt; and we’re going to make a new prompt_template, you guessed it, pet_namer.txt. 19:55 So in the examples file, we’re going to grab the few-shot examples we have from here, 20:04 ...and we’re going to take the model information from this Curl – 20:11 ...so meaning like the parameters, what model we’re using, etcetera – and we’re going to create a new model file. 20:19 And if you look at the way – we’re just going to copy and paste from the generate_summary model parameters, 20:25 ...and just create a new one and call it pet_namer.json. 20:29 But instead of using the IBM Granite model, 20:32 ...and instead of using these parameters, we’re going to grab exactly what we’re using from the watsonx.ai platform. 20:42 So we’re using mixtral this time.
20:46 And that’s something that’s really very helpful about the watsonx.ai platform. 20:49 You could just use whatever model suits your best need for whatever you’re doing. 20:53 I find that very helpful in engagements. 20:56 And we’re going to replace all of these parameters with the ones that we’ve been using for that prompt that worked really well. 21:05 We are going to make one change from the GenerateSummaryPromptTemplate. 21:10 And this time what we’re going to do is we’re going to utilize that PydanticOutputParser. 21:15 And what that really is, the format instructions, if you look at it, 21:18 ...it’s just a really well-crafted prompt to coerce the LLM to return back JSON. 21:25 And I find that very, very, very helpful. 21:28 So if you look at the way that we’ve structured this prompt as opposed to the generate summary prompt, 21:32 ...we’re telling it now, your response should follow this format. 21:35 And this format is the format instructions we have extracted from the PydanticOutputParser, 21:41 ...and so we’re going to add that to the prompt template, 21:44 ...so when it’s grabbed, it’s going to grab the examples that we use, it’s going to grab the data that we’re sending it. 21:49 But before all of that, it’s going to grab the format instructions from the PydanticOutputParser, which is super, super great. 21:56 Makes it a lot easier. 21:58 And to be totally honest, when I was building out this application, 22:01 ...I was trying to coerce it myself with my own prompts and a coworker of mine, Drew, 22:06 ...showed me exactly how to use the PydanticOutputParser, and totally changed – 22:10 ...you know what? Honestly, it changed my life, if I’m being totally honest. 22:14 Now we’re going to add another field, and we’re going to call it format_instructions. 22:19 And we’re going to use the format_instructions that I have here. It’s just going to be a string. 22:31 So now when we return this, you can see that we’re adding to our kwargs.
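The interpolation order described above (format instructions first, then examples, then the data sent via the API) can be sketched like this. The template text is illustrative; the repo's pet_namer.txt prompt template will differ in wording but fills in the same three pieces:

```python
# Illustrative template text; not the repo's actual pet_namer.txt.
PET_NAMER_TEMPLATE = """Generate a pet name and a reason for it based on the descriptors.

{format_instructions}

{examples}

Input: {data}
Output:"""

def render_prompt(format_instructions: str, examples: str, data: str) -> str:
    """Interpolate the format instructions, few-shot examples, and the
    data sent via the API call into the prompt template."""
    return PET_NAMER_TEMPLATE.format(
        format_instructions=format_instructions,
        examples=examples,
        data=data,
    )
```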
22:36 We have the examples, which is going to be within the prompt template. 22:42 The examples are going to be interpolated into that prompt, 22:45 ...same with the format_instructions, and the data is what we’re going to be sending via the API call. 22:50 So let’s just make sure everything looks good. 22:53 We have the examples, we have the new formatting structures from the PydanticOutputParser, 22:58 ...we have the chain, and we have the generated text. 23:01 The only thing that we’re going to do differently is we’re going to return back generated – 23:08 ...we’re going to create a dictionary called – with a field called generated_text, and we’re going to pass in the generated_text.dict. 23:21 So now this is going to be a dictionary. 23:23 And let’s just ensure that what we’re returning back from the generated prompt – okay, perfect. 23:31 When we look at the generate_pet_name, we’re returning back this generated_pet_response. 23:37 Remember, if we look at our Swagger documentation, we’re expecting it to come back as generated text with that dictionary. 23:46 So if all goes well, we’ll be able to test this out. 23:52 Let’s restart our Swagger docs. Let’s try it out. 23:59 And let’s provide it that data. 24:02 And what we’re expecting is something like this example. 24:10 So now we’re going to test out the endpoint that we just created, the generate_pet_name, 24:14 ...and it’s going to expect a data field with some description of an animal. 24:19 And we’re going to expect a response that is wrapped in a JSON with a name and a description. 24:25 So let’s see if it works. 24:36 Great. We have Captain Sparklesbeak and it gives us an explanation. 24:42 So now we have a working FastAPI endpoint. 24:45 We saw how we can coerce the LLM to return as JSON. 24:49 We figured out how to create a route in the FastAPI. 24:53 And the next thing we’re going to do is integrate it into our React UI and our Express TypeScript backend.
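The final wrapping step, parsing the model's reply and returning it under a `generated_text` field, can be sketched as below. The tolerance for extra prose around the JSON is an assumption on my part (LLMs sometimes wrap JSON in chatter); a `ValueError` here is the kind of failure the error handling mentioned later would catch:

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Parse the model's reply into the endpoint's response shape,
    {"generated_text": {...}}. The first {...} span is extracted before
    parsing in case the model added text around the JSON."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end < start:
        raise ValueError("no JSON object found in model output")
    return {"generated_text": json.loads(raw[start:end + 1])}
```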
25:02 So now that we’ve finished our FastAPI endpoint, we have a generate_pet_name endpoint, 25:08 ...that we could send the description of our potential pet, let’s integrate it into our frontend. 25:14 And so we’re going to be integrating it first into our Express Server, 25:18 ...that’s going to direct any calls from the React UI to our FastAPI and then return the data from it. 25:26 So let’s get right into it. 25:28 So now that we have checked out into Step 02-express-server, 25:34 ...what we want to do is we want to go into the UI directory and we’re going to install the route dependencies. 25:41 Let’s run the setup command. 25:43 So this setup command is going to do a couple of things. 25:47 It’s going to create envs from examples that we have in both the server and the client. 25:53 And these envs are going to be – when you’re working locally they are just going to be what they are in the examples. 25:59 But what these envs are going to do, it’s going to say, okay, 26:02 ...the React UI is going to be able to talk to the Express Server, and we’re giving it that endpoint. 26:09 And for the Express Server, we have an env that’s going to say, okay, this is the endpoint for the FastAPI. 26:16 Because if you think about the flow of information that you saw on the application, 26:20 ...we’re going to have the React UI, we’re going to fill out a form, and we’re going to hit a submit. 26:24 And that submit is going to send the data from the React UI to our Express Server; 26:30 ...the Express Server is going to send that to our FastAPI; and the FastAPI is going to communicate with the watsonx.ai LLM; 26:38 ...return back the response to the Express Server, which is going to return it back to the React UI. 26:43 That’s the flow of information. 26:46 So in order for that to work, we need these envs to tell the UI and the server and the FastAPI where to look and who to talk to.
26:54 So now that we have the dependencies installed for the React UI and the Express Server, 26:58 ...we’re going to start running them in dev-mode. 27:02 So we’re going to go into the UI directory and we’re going to run this command, 27:05 ...npm run dev server, and that’ll start up the Express Server. 27:09 And we’re going to run npm run dev client, and that’s going to start up our React UI. 27:18 And if you look at what the React UI was, 27:22 ...what we’re going to be starting off here is going to be pretty much totally blank, we’re going to build it up, and show you how to do it. 27:28 But first, let’s get that Express Server working. 27:38 So let’s go into the server directory, open it up and let’s take a look at what we actually have in the server. 27:47 We have a boilerplate code in the index, which is just a basic Express Server. 27:53 We also added socket.io. 27:56 So having web sockets between the client and the Express Server is really very helpful and useful. 28:03 We could watch databases, do anything like that. It’s nice to have so we’ve added it. 28:10 And then we have this middleware that’s going to be using our routes. 28:13 So right now we only have a config route and a DB route. 28:16 Neither of them are going to be in use, but they’re there if you need to use it. 28:20 And we see the endpoints that we’re going to – we’re able to hit, at the API and API.db. 28:28 So now we can see that the UI is blank. 28:31 We’re going to build this all out. 28:33 First thing we’re going to do is create the new pet namer route. 28:37 So in your routes directory, let’s create a new route. 28:43 We’re going to name it petNamerRoutes.ts. 28:46 And let’s grab the configure route, which is perfectly fine as a point to start. 28:56 We’re going to import Axios because we’re going to be making a call to the FastAPI and I personally like the Axios package. 29:07 We’re also going to import dotenv because we’re going to be using that env to communicate with the FastAPI.
29:16And we’re just going to run dotenv.config(). 29:22And if you look at your env in your server, you’re going to have an API_URL, 29:27...which is the exact endpoint that we used to see our Swagger docs in the FastAPI, 29:33...so we just have to grab that and bring it into our petNamerRoute. 29:36So let’s just call it API_URL. 29:42We’re going to grab it from process.env.API_URL, or it’s going to be an empty string. 29:49That’ll just make sure that it’s a string. 29:53So the next thing we’re going to do is let’s create a POST. 29:58We could copy this config. 30:01We’ll get rid of this one. 30:04We’re going to turn this into a POST. 30:06We’re going to say generate_pet_name. 30:11And because it’s going to hit that FastAPI, we want it to be async. 30:15Within that function, we’re going to have the request object and the response object. 30:23And what we’re going to do is we’re going to add a try/catch; just add some boilerplate here for now. 30:30We’ll log the error. 30:32And if we hit the error, we’ll send back an internal server error, so that’s status code 500, and send back the error. 30:43One of the really useful things about setting up those Pydantic classes inside the FastAPI, 30:51...really articulating exactly what kind of parameters it’s expecting or what it’s going to be returning, is that, 30:58...let’s say we have two people working on the same project. 31:00We have an AI engineer working on the FastAPI and we have someone like me working on the UI. 31:06I can just go directly to the FastAPI, I see exactly what we’re expecting, 31:10...and exactly what we’re expecting to get back, and write the route to fit that. 31:15So we know that we’re going to receive something like this, so let’s just copy it and bring it here for reference. 31:26We also know that this is what it’s going to expect. 31:34So this is going to be the output, and this is going to be the input.
31:49So if we’re going to be sending this back, let’s just copy it and bring it directly into our route. 32:04We’ll call it body. 32:10The data is obviously going to be dynamic, so we’ll just add a data field. 32:13But before we can even get there, 32:16...we know that we’re going to be sending the data from the UI to the Express Server, so let’s just get that data first, 32:22...and we’ll be getting it from the request body. 32:25So we got data from the body, and this is the body that we’re going to be sending back. 32:36And so now let’s make the call to the FastAPI. 32:45So we’re going to be making a POST, and it’s expecting this body, and we’re going to be hitting this endpoint. 32:55So in order to hit that, first, we’re going to use the API_URL that we have in our env, the localhost:8000, 33:06...and then we’re going to make the endpoint the generate_pet_name. 33:10And the back ticks just mean that it’s basically like an f-string in Python. 33:15We’re going to use the body as the request body. 33:20And then we’re going to – let’s see what we get back. 33:26Well, we know it’s going to look like that. 33:28We’re going to have generated text. We’re going to have a name and a description. 33:30So let’s say res.status(200). So this is a successful call. 33:39We’re going to send back generated_text, and we’re going to say that’s result.data.generated_text, because we could see right here. 33:56We’re getting this object back. 33:57It’s going to be generated_text, and that object’s going to have name and description, which is what we’re going to use in the UI. 34:06Now if we go and look at what the end result’s going to be, we have this part called data sent to the API, 34:14...which is really useful just to see exactly what we’re sending back. 34:18And if you look at the way this looks, this is going to be pretty much the same thing we’re sending here.
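The backtick construction described here can be sketched as a tiny helper; the base URL and path are just the values mentioned in the video:

```typescript
// Template literals interpolate like Python f-strings:
// `${apiUrl}/${path}` glues the env value and the endpoint together.
function buildEndpoint(apiUrl: string, path: string): string {
  return `${apiUrl}/${path}`;
}

// e.g. buildEndpoint("http://localhost:8000", "generate_pet_name")
```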
34:24So we can just grab the body that we’re sending, and we could call it request or data_sent_to_API, and we’re going to set that to the body. 34:49Let’s just wrap these in parens. Perfect. 34:55So what this means is, okay, we’ll make an async call to the FastAPI; we’re going to send it; 35:00...we’re going to hit the generate_pet_name endpoint; 35:03...we’re going to send this body with the template model pet namer, prompt template name pet namer. 35:09In the kwargs, you’re going to have the data being the data we’re getting back from the UI. 35:14So in the try/catch, we had this async call. 35:16We say, okay, we’re using the Axios package to make the POST request. 35:24We’re sending the body. 35:25And if it is successful, we’re going to send back a res.status(200), 35:29...and we’re going to send back an object with two fields; the generated text and the data sent to the API. 35:36Now a really useful tool, when we’re working with APIs like this, is something called Postman. 35:45First, let’s just make sure we have the health route up. 35:48Just to show you how Postman works, we have this health, and we could just say, okay, Pet Route Up. 36:00And before we do that, we should import it and bring it into our routes. 36:09We’ll name this petNamer. 36:18We have to actually add it to the route. 36:20So now let’s see if we can actually hit it from Postman. 36:24We got it up; Pet Route Up. 36:27So the next thing we’re going to do is just recreate what hitting the endpoint will look like from the UI. 36:33So we know that the data field, when we send it from the React UI, 36:38...is always going to be something like a male dog who is goofy and sweet and cowardly or whatever, 36:43...and we want to make sure that we can send that data to the Express Server and for it to hit the FastAPI. 36:49So let’s make sure that works. 36:52So we have a Postman route. 36:54Say a male dog who’s clumsy, drooly, and snores loudly. 36:57Let’s just make sure we could hit it.
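The success reply assembled here — the LLM output plus an echo of the body we forwarded — can be sketched as a pure helper. Field names follow the video’s description; the exact repo code may differ:

```typescript
interface GeneratedText {
  name: string;
  description: string;
}

// Combine the FastAPI's generated_text with the body we forwarded,
// mirroring the two-field object the Express route sends back.
function buildExpressReply<T>(generated: GeneratedText, body: T) {
  return { generated_text: generated, data_sent_to_API: body };
}
```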
37:00I’m going to make sure we’re hitting it here. 37:01Yep. You can see everything’s – we’re hitting the FastAPI and we get a Baron Snorbs. 37:10I don’t know why it came up with that. I like the name. I would name my dog Baron Snorbs any day. 37:15But now we know for a fact that we’re able to hit the Express Server from the UI. 37:19We’re using the Postman as a tool to test that. 37:23The next thing we’re going to do is now build out the UI to look like what we had in the beginning. 37:28We have the Express Server hitting the FastAPI. We’re able to test it with Postman. 37:34Now let’s integrate it with our React UI. 37:38So if we open up our localhost 3000, we have everything running. 37:44We have our FastAPI running in dev, we have our Express Server running in dev, and we also have our client running in dev. 37:52If we look at what it looks like now, it’s blank, but the end state we want to get to is something like this. 37:59So if a designer handed me this image and said, hey, can you please build this for us using React. 38:06We need all this functionality and we need it to look like this. 38:09The most daunting part for me would be, oh God, I have to work with SCSS. 38:14I have to figure out the placement. I have to figure out how to create inputs that look like this or dropdowns that do this. 38:23And for me that’s difficult because I’m not really a great frontend developer. 38:29But what’s really helpful is using something like Carbon, for me at least. 38:34So we know from the image we’re looking at, we have something like a heading, 38:40...and we have something like a combo box, something that has a dropdown and that you could type in it and it filters it. 38:46We have the checkbox form. We have inputs. We have tags. 38:51The way I would do this, I would go to the Carbon React page and I’ll just look up stuff. 38:56So I know the first thing we’re going to use is a heading. 38:59So when we open up a heading, very simple. 
39:02The code is super-duper simple. You just have heading. 39:07Let’s add it to our React UI. 39:10So now from your directory, go to UI, and then open up, and then CD into client. 39:15Now in this client directory we have a source directory, we have components, and we have a pet form. 39:24I have left in all the imports and a lot of the actual functionality for like state management and stuff like that, 39:30...because what I’m trying to show you is how I utilize the Carbon design system to build out something like this. 39:37And then the functionality, I implore you to look at, it’s documented and you could figure out exactly what I’m doing. 39:42But let’s start by looking at what we have. 39:46We have two columns, and this is all from Carbon. 39:50We have two columns; one for Pet Form and one for Results. 39:56And what to do. We want to get to this end state. 39:58So let’s start with the heading. 40:00So we already have it imported. We know it’s going to be here. 40:03So we could just add heading, and we could say – what did we name it – describe your future pet. 40:17Boom. Headings are simple. 40:19Like this stuff is okay, we could use an H1 or H2 or whatever, but having something that’s – 40:25...using a design system that already looks good and you don’t have to worry about it, you don’t have to worry about the font, 40:30...you don’t have to worry about the sizing, you can just use it is so, so helpful and it just expedites all of the work on the frontend. 40:36The next thing we have to bring in is this combo box. 40:41So let’s look at what the combo box available in Carbon is. 40:47And we can see it. They have it already here. 40:50Example of what it looks like. You have all the documentation. 40:53So let’s look at this. 40:55So we have one that filters. 40:57We know that’s what we’re going to look for. 40:59So we could look at this exact code and we could just grab something like that and bring it into our application. 
41:09So for this one, we’ll just bring this. 41:16Now we don’t have items yet, so let’s just see what it would look like. 41:19We’ll have an ID with a one and a text with first item. 41:26And then for the next one, we will have an ID with two and a text with the second item. 41:34And so really it’s already built in. 41:35You have this items prop, and you just fill it, and it’s looking for an array. 41:44And if you look at it, it actually has an explanation of what it’s doing. 41:47It says they’re trying to stay as generic as possible and we could have total control over it. 41:52But for us, we want something pretty simple. We want an ID and we want a text. 41:56So if we look at our application, once we save, we already have it. 42:01It’s already built in. We have the first item and the second item. 42:04It’s already filterable. 42:06Like it’s just from the get go, you could select it. 42:11We’re actually going to add the filtering in a second. 42:13And that’s how easy it is to add complex components that look good into a UI using Carbon. 42:18I’m a big, big fan. 42:21So now let’s just fill in all the functionality that we’re expecting. 42:24And we have an onChange, and it’s pretty simple. 42:27It’s just looking at the selected item and it’s setting it to our state. 42:33We have the ID. 42:34The items that we’re using usually – the way I would use this in an engagement is that we would 42:40...often have an API call to a database and store it as the state for the items. 42:47So you could have like – especially if you have a ton of rows from a database and you want to quickly filter it down, you could do that. 42:52What we did in this case is we just have a giant list of different animals that you could potentially have as a pet, like an alpaca. 43:04So now that we have all of this, you could see what it’s going to be. 43:08We have all the different animals that are available to be a pet, like a tarantula.
43:14And we can move on to the next item that we need, which is a form group with a couple of checkboxes. 43:23So similarly, if we go to Carbon, we could look up checkbox. 43:32We could show – we have skeleton, we have just a regular checkbox, the way it looks. 43:37So let’s grab the checkbox. We’ll put it in a form group. 43:40And the form group is also – this is something that Carbon provides for us. 43:46Something interesting about a form group, I think it needs a legend text, so let’s add some legend text, and we’ll say pet gender. 44:01And if we look at our application, we have our two checkboxes in the pet gender. 44:09And if we look at what it’s supposed to look like, obviously we have to make it male and female, 44:12...so let me add the functionality and readjust the label text. 44:22So now that we’ve added all the functionality that’s included in the component, I’ll just go over simply what it’s doing. 44:29So the checked prop, the way we’re setting it up, is just accepting a Boolean. 44:35We’re seeing if the gender-of-animal state equals male; if that equals true, make it checked. 44:40And the onChange similarly is just saying, okay, if it is already male, set it to nothing. 44:46But if it isn’t, set it to male; a really, really simple state change. 44:50We have it disabled while loading. 44:52So this is important because we don’t want – 44:55...because let’s say an API call to the FastAPI takes like six seconds, which is pretty long for an API call. 45:01You don’t want them making edits to the form while it’s happening. 45:05So we disable all functionality within the form, 45:08...and you’ll see this over and over again as we keep on adding functionality to this component. 45:14So now we have our pet gender. 45:18We could choose the type of animal. 45:21The next thing. Now this is an interesting component.
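The checkbox state change described here reduces to a one-line toggle; gender is assumed to be a plain string in component state:

```typescript
// Clicking a gender checkbox that is already selected clears it;
// clicking a different one selects it.
function toggleGender(current: string, clicked: string): string {
  return current === clicked ? "" : clicked;
}
```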
45:24If you remember from the beginning of the video, how this works is this is an input field, 45:28...where when you start typing, the button on the right is activated. 45:32When you hit enter or you hit the button, it adds it to a tile at the bottom, adding each individual descriptor in a little tag. 45:41Now all of this is pretty custom, but we’re using the styling and the tags, and the functionality is coming from Carbon. 45:49So we’re going to need an input and we’re going to need a button. 45:53So now I’m just going to paste in the functionality for this and explain exactly what we’re doing. 46:00So if you look at our application, we have the descriptor; add cute, sweet, whatever your pet descriptor will be. 46:10And you could see as we’re typing, the button goes from disabled to not-disabled. 46:16When you hit it, it clears the input field. 46:20Now what you’re not seeing, obviously, is that we are adding all of this to a descriptors list, 46:27...which we’re going to use to formulate that API call to the FastAPI. 46:32And so just for clarity, I could go and I will add a useEffect just to show you what is happening as we hit enter. 46:44And we’re going to go look at the descriptor state. 46:46And if you look at the descriptors, it’s just an array of strings. 46:52If there’s no descriptors, or no descriptors.length, return. 47:04But if not, console log them. 47:12So we could watch what’s happening in here, in the console, as we’re adding it. 47:19So let’s say we add cute, hit enter. 47:21We could see that we have an array with one item, cute, and then sweet and nice. 47:30Now we could imagine how we’re going to utilize this when we send that API call. 47:33So let’s continue with the functionality. 47:38So in the text input we added an onKeyDown. 47:42And this is just like naturally what you do. And this is something I found. 47:45Like whenever I’m in an input field, I expect something to happen when I hit enter.
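The guard inside that debugging useEffect amounts to: bail out when the descriptors array is missing or empty, otherwise log. As a pure function (a sketch of the logic, not the repo’s exact code):

```typescript
// Only log when the descriptors state actually has entries.
function shouldLogDescriptors(descriptors: string[] | undefined): boolean {
  if (!descriptors || !descriptors.length) return false;
  return true;
}
```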
47:49And so we just added the onKeyDown, really simple functionality, looking for a keyboard event. 47:54If the key that is hit is enter, you run the function handleAddDescriptor, which adds the input text into that descriptor array. 48:06And then obviously disabled while loading. Same with the button. 48:09The disabled part is actually interesting. 48:13It’s just saying, okay, if the input value in state is nothing, like the input length is zero, 48:20...just keep it disabled, or disable while loading. 48:23So that’s the way we’re able to dynamically disable and enable a button here. 48:31So we could say cute. The second the input length is no longer zero, we have a little button; click it. 48:37Back to zero, because it clears the input and adds it to the descriptor array. 48:42So what’s the next functionality we need to add? 48:44So now let’s map over that descriptor array and place it in this tile. 48:51I like the tile. It’s something that I got from Carbon. 48:55It looks like a div with just extra capabilities. 48:59I just happen to like to have the ability to have more capabilities, 49:03...such as you could drop it down, you could add functionality to it, you can make it selectable. 49:09So in our case, what we’re going to do is we’re going to add a tile. 49:18And in that tile we’re going to say, okay, look at the descriptors, map over them. 49:22Look at what it is – descriptor and index – and return back something called a tag, which I really like. 49:34Again, it looks good, it has functionality. We can add an icon to it, which is what I want it to do. 49:40If you look at the final result, we have that little tag with the descriptor and a little x. 49:45Now that filled-in x obviously gives you the impression that you could click on it and delete it. 49:51And that’s what we want it to do, because if you add something that you don’t mean to, you want to be able to delete it. 49:54It’s just a nice functionality to have.
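That dynamic disabling can be captured in one expression; inputValue and loading stand in for the component’s state values:

```typescript
// The add button is disabled while the input is empty
// or while an API call is in flight.
function isAddDisabled(inputValue: string, loading: boolean): boolean {
  return inputValue.length === 0 || loading;
}
```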
49:56So if we look at how to build out a tile with a – you say just a tag with the class name of whatever and the content inside of it. 50:07Basically all it’s doing, it takes a string, it looks at the descriptors, and we’re going to update whatever that list is, 50:14as long as it doesn’t equal the one that you just clicked. 50:17Really kind of simple. You’re just filtering it down. 50:20So now that we’ve added that, let’s look at how our application handles the additional descriptors. 50:27So cute, fast, and sweet. 50:30And on click is going to – oh, I added box shadow. 50:35Now I like box shadow. 50:36I don’t know why the modern UI design does not like box shadow, but I happen to like box shadow. 50:41I don’t know why. I think it makes it look 3D. I think it’s very cool. 50:45So we could click on the cute, click on fast, click on sweet, and you could delete it. 50:49And that’s the functionality we’re looking for. 50:51The last thing we’re looking to do is have a button set. 50:55So let’s add a button set to our component. 51:01So in the stack, we could just add button set, and we could have button clear, and we could have button submit. 51:13All of this is coming from Carbon, 51:16...so it’s already automatically going to be sized correctly and the button set is going to have a little gap. 51:21We have two different kinds of buttons, but if you look at the image, we have two different colored buttons. 51:26And within the Carbon React documentation, just go to button and you could see something called kind. 51:40So you could choose what kind of button you'd want. 51:42So we want the submit to be primary and we want the clear to be secondary. 51:47So let’s just update this; kind secondary and kind primary. 52:04And there we go. We have the two buttons. 52:05The functionality for the submit will be done relatively quickly. 52:18So now that we have all the functionality there, we have the two functions – so the handle clear function’s pretty simple. 
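The descriptor deletion walked through in this section — update the list to everything that doesn’t equal the clicked tag — is a single filter; this is a sketch of the logic, assuming descriptors are plain strings:

```typescript
// Filter out the clicked descriptor; everything else survives.
function removeDescriptor(descriptors: string[], clicked: string): string[] {
  return descriptors.filter((d) => d !== clicked);
}
```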
52:24We’re looking at all the state that we have in the component. We’re just setting it to its original empty form. 52:30The disabled clear button and disable while loading; you don’t want to have – 52:34...if there’s nothing – we have some functionality in useEffects that is just looking at whether any of the state is filled – if nothing is filled, 52:40...you shouldn’t have the ability to clear, and you shouldn’t have the ability to clear while you’re making the API call. 52:45Similarly with submit. 52:47Instead of disabling when nothing is filled out, we have to disable the button when not everything is filled out. 52:54So you can’t make a request if you don’t have an animal, you don’t have a descriptor, you don’t have a gender. 52:59We need all of those in order to make that submit. 53:02So now that we have everything in the pet form, at least visually completed, let’s get everything in the result form also completed. 53:12We’ll handle the functionality in just a second. 53:14Obviously we need to add a heading for the result, which we’ll just say result. 53:21What else do we need? And then we need these accordions, which I like. 53:24I like the functionality of accordions. 53:27They’re dynamic. I think they look good. 53:30You could fill it in. You have the title, and then you have the content within it. 53:33So it’s something that I thought would look kind of nice when we’re displaying our results. 53:39Also they have this skeleton state. 53:41And I also like the way that looks, so I just put it in that way. 53:46It doesn’t really have any necessary – you could render the response any way that looks good to you, 53:52...but in my opinion, I liked the accordion. 53:54So now that I’ve just added all the functionality, we’re looking at a loading state. 53:59Obviously when you’re making that API call, you would set the loading state to true.
54:06And what that does is it alerts the application and the component to switch between loading state and not-loading state. 54:14And our loading state happens to be just these accordion skeletons that I just showed you here. 54:18So let’s complete that. 54:22And now we have whatever we’re looking for. 54:26We just have to wait to add the rendered results. 54:29And that’s going to be the response from the API. 54:32And in order to get that functionality working, let’s go to our handleSubmit, and we could see exactly what we need to do. 54:40So the first thing we’re going to do is we’re going to look at that list of descriptors. 54:44And so we’re just going to grab the strings inside that list and just create a comma-separated string from it. 54:52Pretty simple. 54:53We’re going to say descriptor list, and we’re going to say descriptors.join with a comma separator. 55:01And now we’re going to have just a string that has all the descriptors as a comma-separated string. 55:12So really what you’re looking for, if we look at the FastAPI example, is just this input. 55:17We’re looking to recreate that in our API call. 55:23So let’s use this as reference. 55:29So now we have to interpolate all the data we collected from the form into a single string. 55:34So that should be pretty easy, right? 55:36So const stringToSendToAPI equals, and we’ll use back ticks. 55:43We have the gendered animal state, so that would be either male or female. 55:49And then we’ll say the typeOfAnimal.text. 55:54So that could be a male dog who is, and then we just have the descriptor list. Great. 56:07Now all we have to do from here is make that API call. 56:11So let’s set the loading to true. 56:14And all these state variables – the state has all been prepopulated in the application, so all you’re doing is running the hooks. 56:21So let’s set loading to true and let’s make the API call.
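The interpolation step can be sketched like this; the leading article and exact wording are assumptions based on the examples shown in the video (e.g. “a male dog who is clumsy, drooly”):

```typescript
// Join the descriptors into a comma-separated string, then
// interpolate the form state into the single string sent to the API.
function buildDataString(gender: string, animalText: string, descriptors: string[]): string {
  const descriptorList = descriptors.join(", ");
  return `a ${gender} ${animalText} who is ${descriptorList}`;
}
```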
56:27Similar to the way we did it in the Express API, 56:32...but we know we’re going to, if we look here, we’re going to /api/petNamer/generate_pet_name. 56:42And we are going to send data: stringToSendToAPI, making it a POST. 57:00So now that we have crafted the API call that we’re making to our Express Server, 57:07...we now have to, on success, extract the relevant data from the response. 57:12So let’s just go look at what we’re expecting as a response on success. 57:16We’re going to get a generated text and a request sent to LLM. 57:24If you remember from what we were showing, what we want the end state to be, 57:30...the request sent to LLM is going to be this API call, like this object. 57:37And then we’re also expecting a name and description in that generated text field. 57:41So let’s look at our code in the React UI and anticipate what we’re going to get. 57:47Something I really like about TypeScript is that it knows that the next result from Axios is going to be .data. 57:56I just find it very, very helpful. 57:58So it infers that this is an Axios response and it gives us at least the first property on the data response that we could possibly use. 58:07The next one, though – and it might make sense just to copy this so I could just reference it. 58:12Let’s copy it. 58:17So we know that we’re expecting a result.data.generated_text. 58:25So let’s call that object. 58:29And then we have the data sent to the API, which equals result.data.request sent to LLM. 58:43If there’s anything I could do better, it’s variable naming. I’m really terrible at it. 58:49So now we have the object that we want to render for the name and the description. 58:54And the next thing we have is that data sent to the API, 58:57...which we want to render just as information of what was sent to the FastAPI. 59:03So we have both of these. 59:06And now we want to set the response name and description to state.
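Extracting those two fields from the Express reply might look like this; the field names (generated_text, request_sent_to_llm) are inferred from the narration, so treat them as assumptions rather than the repo’s exact schema:

```typescript
interface PetNamerReply {
  generated_text: { name: string; description: string };
  request_sent_to_llm: unknown;
}

// Pull out the object to render and the request body to display.
function parseReply(data: PetNamerReply) {
  return {
    object: data.generated_text,
    dataSentToApi: data.request_sent_to_llm,
  };
}
```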
59:11So if we look at the accordions that we have placed in our response, let’s see what we’re expecting. 59:24So renderedResult.name for the name, 59:27...renderedResult.description for the description, and then we’re going to stringify the renderedRequest. 59:35So all of these are state – all these have been included in state. 59:40So let’s start with object. 59:42We’ll set renderedResult to object. 59:47And then for the renderedRequest, we’ll set renderedRequest to dataSentToApi. 59:54And then we’ll set the accordion to open. 59:59Perfect. We’ll set that to true. 1:00:04And then we’ll set loading to false finally. 1:00:08So once it goes through all of this – once it tries to do all this, it sets the loading to true. 1:00:15It makes the API call through the Express Server to the FastAPI. 1:00:21It returns back the generated text from that FastAPI to the Express and to the React handleSubmit function. 1:00:30We set the renderedResult to the object that we’ve received. 1:00:34And then we also set the body that we sent to the FastAPI, just so we could see what it looks like. 1:00:40We set accordion open so it’s going to open up that accordion automatically. 1:00:44We’re going to log an error and we’re going to set loading to false. 1:00:49So let’s actually try the application now. 1:00:52We’re going to choose a rabbit; female; cute; sweet; fast and cowardly; and hides under my bed; and eats my books. 1:01:09Let’s submit it and see what kind of response we get back. 1:01:17Luna Bun-Bun. 1:01:19And it gives you a description and it shows us exactly what we sent. 1:01:22We sent this body to the endpoint. 1:01:27We’re sending the data; a female rabbit who is cute, sweet, cowardly, and hides under my bed. 1:01:32But what happens if we have an error?
1:01:34Now I’ve discussed, when we were making the FastAPI, that when you’re trying to coerce – 1:01:38...like we’re coercing the LLM to send back a JSON format, and we’re doing that through that PydanticOutputParser. 1:01:47And it’s just a really well-crafted prompt, 1:01:50...but occasionally the LLM is just not going to send you back something that is actually JSON, and you’ll get an error. 1:01:56So we want to handle that, and there’s a really simple way to do it. It’s not particularly elegant, but it is simple. 1:02:04So let’s create a new state and we’ll just call it error. 1:02:12Set it to false to start with. 1:02:16Let’s go to handleSubmit. 1:02:24So we have this catch for the error, so we could console log it, but we could also set error to true. 1:02:31And then finally, set loading to false. 1:02:36The way I’m thinking of this working is that when we retry it, 1:02:42...we’ll set the error to false so we don’t continue to render what we’re going to render. 1:02:47So let’s set error to false. 1:02:48So let’s go to that button set and let’s add a conditional rendering on error. 1:02:57And let’s add a new button. 1:03:02It’ll be the exact same thing as submit, but we’ll change the kind. 1:03:11We’ll make it danger and we’ll change this to retry. 1:03:21Let’s just test this functionality. 1:03:22Instead of sending back a good response, let’s send a return res.status(500), 1:03:40...and let’s attempt it and just make sure that it functions the way we expect it to function, because occasionally it will fail. 1:03:46Alpaca. 1:03:49So hopefully we get back a failure and it populates a – perfect. 1:03:55And what we want to do is, when we click it again, 1:03:58...it’s going to take it off – it’s going to stop rendering it and it’s going to do another submit. 1:04:02That’s perfect. 1:04:03So let’s fix our Express Server. 1:04:09Make sure everything is running. 1:04:13Let’s try to rerun it.
1:04:15And now this time with the Express Server actually returning back an accurate response. 1:04:20Now Snuggles the Gentle. 1:04:21The name embodies the alpaca’s undeniable cuteness and its sweet nature. 1:04:25That is a very nice name for an alpaca. 1:04:28So that was it. 1:04:29Now that you’ve seen how to build a full stack Generative AI application, why don’t you take what we did and make it better? 1:04:37Use what we’ve shown you today to create a new route, use new prompts and new examples, different model parameters, 1:04:45...and come up with something cool, something with like interesting functionality. 1:04:49Integrate it with a new UI to make it look awesome and tell us what you’re building in the comments. 1:04:55Honestly, thank you for watching. 1:04:57And if you like this video, be sure to like and subscribe.