# Turning Models into Production: Overcoming Deployment Hurdles

**Source:** [https://www.youtube.com/watch?v=OejCJL2EC3k](https://www.youtube.com/watch?v=OejCJL2EC3k)
**Duration:** 00:06:49

## Sections

- [00:00:00](https://www.youtube.com/watch?v=OejCJL2EC3k&t=0s) **Untitled Section**
- [00:04:48](https://www.youtube.com/watch?v=OejCJL2EC3k&t=288s) **Automated MLOps CI/CD Pipeline** - The speaker explains how CI/CD can separate training (GPU‑heavy) and deployment (container‑based) environments, add monitoring, and automatically retrigger model training when performance thresholds are breached, eliminating manual effort.

## Full Transcript
[0:00] Have you ever been training a model, only to find that it never reaches production? 68 to 80% of models that are trained and developed never actually make it to production. I'd like to introduce something that will make training a whole lot easier for you and your team, make deployments much easier, and involve much less stress. To illustrate that, I have a story to give some context. My team was on crunch time with a project, trying to get a model out, and we finally got the department to approve a GPU server so we could use some of the larger language models like BERT and RoBERTa. We got on there, started doing our work, and went as fast as we could. We put our notebooks on the server and started getting some good results, some pretty good accuracy. We continued developing until one day we tried to SSH into the server and couldn't. It turned out our department had only paid for one month on the GPU server. So what happened to all the notebooks, all the data, all the features we had prepared on the server? Completely gone. Maybe a moment of silence for all the notebooks I lost. But that pretty much reflects what a lot of manual training processes look like right now.

[1:19] First of all, you usually start out with EDA, which is exploratory data analysis: can we get the data we need to make this model a success? You might be pulling it from SQL databases or getting an export from different teams, but somehow you gather all this data. Once you have the data, you know that the data is not ready.
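As a concrete sketch of that EDA step, here is a minimal, hypothetical example of pulling rows out of a SQL database and taking a first look at row counts, missing values, and label balance. The table, columns, and data are invented for illustration; a real project would connect to the team's actual database.

```python
import sqlite3

# Hypothetical stand-in for the team's database: an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, text TEXT, label TEXT)")
conn.executemany(
    "INSERT INTO tickets VALUES (?, ?, ?)",
    [
        (1, "cannot log in", "auth"),
        (2, "invoice is wrong", "billing"),
        (3, None, "auth"),  # a gap that data prep will have to handle
    ],
)

# The EDA step: pull the raw rows and take a first look.
rows = conn.execute("SELECT id, text, label FROM tickets").fetchall()
n_missing = sum(1 for _, text, _ in rows if text is None)
label_counts = {}
for _, _, label in rows:
    label_counts[label] = label_counts.get(label, 0) + 1

print(len(rows))       # total rows gathered
print(n_missing)       # rows with missing text
print(label_counts)    # class balance
```

In practice the same questions (how much data, how dirty, how balanced) are what decide whether the model has a chance of being a success.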
[1:40] The data has to be prepped, so you spend some time on data prep, working through the gaps and seeing if anything needs to be cleaned. From there you might move to feature engineering, which might still happen within the same process as your data prep: you're creating and transforming some of these columns and turning them into new features that will help your model.

[2:05] Once you have the features, you're ready to train, and training is usually the next step. Training is its own task, because you have to look into different models and see which one is going to give you the best accuracy and which one is most applicable to your problem. Is it NLP? Is it a regression? That type of thing. Then the training starts. Once you get some good models, you also have to do hyperparameter optimization, depending on the model.

[2:33] From there, you're ready for deployment. Deployment can be its own can of worms, because it's either exposing some sort of API or it has to integrate with a front end or back end. And if you're on a small team like my own, you might be the people writing both the front end and the training code. So you're doing all of that, and then finally you're ready for monitoring, looking at how the model is performing. It's up on the deployment server, however you decided to build your endpoint, and you just have to see whether the accuracy is good enough for the business. But what will happen, you know, is entropy: eventually your model is not going to be as accurate as it needs to be, and this whole process starts again, or your team is tasked with a new model.
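The prep, feature-engineering, training, and hyperparameter-tuning steps described above can be sketched as a single scikit-learn pipeline. This is a minimal sketch on synthetic data; the preprocessing choices and the parameter grid are illustrative, not a recommendation.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic dataset standing in for the prepared data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan  # simulate the gaps that need cleaning

pipeline = Pipeline([
    ("prep", SimpleImputer(strategy="median")),  # data prep: fill the gaps
    ("features", StandardScaler()),              # feature engineering: rescale columns
    ("model", LogisticRegression()),             # training
])

# Hyperparameter optimization over the regularization strength.
search = GridSearchCV(pipeline, {"model__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X, y)
print(search.best_params_)
print(round(search.best_score_, 2))
```

The point of bundling the steps into one pipeline object is that the exact same prep and feature transforms get re-applied at deployment time, which matters once retraining is automated.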
[3:21] But all of that is manual, and it really adds a lot of headache, only for 68 to 80% of those models to never make it, with just 20 to 32% of them actually reaching production. That's a lot of work, and I'm here to show you a different and better way.

[3:35] MLOps, as you can tell from the name, implements DevOps principles, DevOps tools, and DevOps practices in the machine learning workflow. The beginning of DevOps, and really of any development project, is that you start out with the dev and EDA work. All of that, at the end of the day, is code, right? You're writing a notebook, some sort of Python script, or an R script, or Julia, or something like that. All of that is code, and you can put all of that code in a source code repository. What that does is open us up for the automation that comes next.

[4:13] We can actually go in two directions from our dev and EDA. First, the deployment, whether that's an API you're writing, or a front end, or something like that, can have CI and CD tools applied to the commits you're pushing to your repository. On the other side, your training can also benefit from CI and CD. Those terms mean continuous integration and continuous deployment, which just means that every time you make a commit on your repositories, you can automatically build, automatically release your deployment, or automatically push a model to start being trained.

[5:01] Usually, if you have the resources, you're going to want to separate your training infrastructure from your deployment infrastructure, and that's because they're doing different tasks.
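That separation can be pictured as a hand-off of a serialized model artifact: the training side produces it, the deployment side only loads it. A minimal sketch, with an invented dataset and a hypothetical endpoint wrapper; in a real setup the artifact would travel through object storage or a model registry rather than an in-process variable.

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# --- training environment (e.g. the GPU box, kicked off by CI/CD) ---
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)
artifact = pickle.dumps(model)  # in practice: written to storage, not kept in memory

# --- deployment environment (e.g. a small container behind a load balancer) ---
served_model = pickle.loads(artifact)

def predict_endpoint(value: float) -> int:
    """Hypothetical API handler wrapping the loaded model."""
    return int(served_model.predict([[value]])[0])

print(predict_endpoint(0.1), predict_endpoint(2.9))
```

Because the deployment side never imports the training code, the two environments can be sized independently: heavy, short-lived machines for training, small long-running containers for serving.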
[5:09] For training, you're usually going to want a GPU, some sort of highly parallel computation. For deployment, you might be fine with spinning up Docker containers, little containers that might sit behind a load balancer just to handle demand. From there, both of these can benefit from monitoring.

[5:33] In DevOps, there are naturally monitoring tools just to make sure your deployments are still live and your rollouts are happening, and you can also see how A/B tests are doing, things like that. We can apply the same ideas to your model: how is your model accuracy? Maybe you set a trigger; let's say you reach 80% accuracy, which is too low. What you can do then is automatically trigger a new training process. It takes the code, begins new training on new data, and using CI/CD and automation, you can get a new model onto the production server without too much hassle.

[6:15] So just imagine how much stress is gone from doing this manually to moving to this automated MLOps type of pipeline. I hope this helps, and I hope you'll be able to see better accuracy and much more speed whenever you're training your models. Thank you. Thanks so much. If you liked this video and want to see more like it, please like and subscribe. My department said that if we reach 10,000 likes, they're going to pay for another month on the GPU server. If you have any questions, please drop them in the comments below.
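The threshold-triggered retraining the speaker describes can be sketched in a few lines. All the names here are hypothetical; `trigger_training_pipeline` stands in for whatever API your CI/CD system exposes for starting a new training run.

```python
# The 80% figure from the talk, used as the health threshold.
ACCURACY_THRESHOLD = 0.80

triggered_runs = []

def trigger_training_pipeline(reason: str) -> None:
    # Hypothetical: in a real setup this would call the CI/CD system
    # to start a pipeline that retrains the model on fresh data.
    triggered_runs.append(reason)

def check_model_health(live_accuracy: float) -> bool:
    """Return True if the model is healthy; otherwise trigger retraining."""
    if live_accuracy < ACCURACY_THRESHOLD:
        trigger_training_pipeline(
            f"accuracy {live_accuracy:.2f} below {ACCURACY_THRESHOLD:.2f}"
        )
        return False
    return True

check_model_health(0.93)  # healthy, nothing happens
check_model_health(0.78)  # drift has set in: a retraining run is triggered
print(triggered_runs)
```

A monitoring job would run a check like this on a schedule against live evaluation data, closing the loop so that no one has to notice the accuracy drop by hand.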