

Cloud‑Native Migration and DevOps Pipeline

Key Points

  • The proposed cloud‑native app is divided into three logical layers—UI, a Back‑End‑For‑Front‑End (BFF) that serves UI‑friendly APIs, and a backend that may incorporate AI services and a database.
  • To migrate to a cloud‑native approach, each layer should be containerized and managed independently, allowing you to apply DevOps discipline through dedicated CI/CD pipelines.
  • A typical pipeline starts by cloning the code from a Git‑based repository, then builds the component using the appropriate toolchain (e.g., npm/Webpack for React, Maven/Gradle for Spring Boot, etc.).
  • After packaging, the pipeline runs unit tests and checks code coverage to validate the changes before proceeding to deployment stages such as image creation, registry push, and orchestration in the cloud environment.
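The stage sequence described in these key points can be sketched as a minimal, sequential pipeline runner. This is an illustrative toy, not any real CI tool's API; the stage names and the always-passing lambdas are placeholders for real build steps.

```python
class StageFailed(Exception):
    """Raised when a stage reports failure, halting the pipeline."""

def run_pipeline(stages):
    """Run (name, fn) stages in order; stop at the first failure."""
    completed = []
    for name, fn in stages:
        if not fn():
            raise StageFailed(name)  # a failing test or scan stops the build
        completed.append(name)
    return completed

# The stages from the video, as placeholders that always succeed:
stages = [
    ("clone", lambda: True),        # fetch source from Git
    ("build", lambda: True),        # npm / Maven / Gradle build
    ("unit-test", lambda: True),    # unit tests + code coverage
    ("scan", lambda: True),         # code / vulnerability scan
    ("build-image", lambda: True),  # package into a container image
    ("deploy", lambda: True),       # roll out to the dev environment
]

print(run_pipeline(stages))
```

Swapping any lambda for a real command runner gives the same fail-fast behavior the transcript describes: one failing gate prevents everything after it.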

**Source:** [https://www.youtube.com/watch?v=FzERTm_j2wE](https://www.youtube.com/watch?v=FzERTm_j2wE)
**Duration:** 00:11:12

Sections

  • [00:00:00](https://www.youtube.com/watch?v=FzERTm_j2wE&t=0s) **Migrating a UI‑BFF‑Backend to Cloud‑Native** - The speaker describes a UI, BFF, and backend architecture (including AI and database services) and asks how to shift it to a cloud‑native approach, focusing on containerization, CI/CD pipelines, and key DevOps considerations.
  • [00:03:05](https://www.youtube.com/watch?v=FzERTm_j2wE&t=185s) **Unit Testing and Security Scanning in CI** - The speaker outlines how sequential unit tests, code‑coverage validation, test‑driven development, and vulnerability scanning serve as gatekeepers in a continuous integration pipeline, halting the build whenever a test or security check fails.
  • [00:06:07](https://www.youtube.com/watch?v=FzERTm_j2wE&t=367s) **Image Scanning and Kubernetes Deployment** - The speaker outlines a CI/CD process where code and container images undergo vulnerability scanning with alerts for remediation, followed by deploying the vetted image to an OpenShift/Kubernetes platform using Helm or operators within the developer workflow.
  • [00:09:15](https://www.youtube.com/watch?v=FzERTm_j2wE&t=555s) **GitOps Automation for Container Deployments** - The speaker explains how updating a Git repository with build metadata allows Argo CD to automatically retrieve images and deployment references, orchestrating repeatable, human‑free deployments to test environments.

Full Transcript
[0:00] I want to start with laying out an example cloud-native application that I've architected, and I know how to build it out. So, let's start with the front end. We'll call this the UI portion here. Below that we've got the BFF ("Back-end For Front-end"). So, this is serving the APIs for that UI to serve up information. So, the UI accesses the BFF and that, in turn, is going to access the microservice or the back-end layer. So, in here let's say "back end". Now, obviously for higher-value services, let's say that back end goes out to something like AI capabilities, and in addition, maybe a database.

[0:49] So, Matt, as the expert, I'm going to hand this off to you. This is the application architecture that I want. How do I start migrating this over to a cloud-native approach, and what are the DevOps considerations that I need to take into account?

[1:02] OK. So, you've already laid out some of the separation of concerns. You've got a component that is focused on delivering a user experience, which, again, can be containerized and packaged. You've then maybe got a back-end for front-end which is serving UI-friendly APIs and abstracting and orchestrating across a number of back-ends. So, you've got your three logical points. So, moving forward, what you typically do is take this component and start to break it into a pipeline that will enable you to offer some discipline around how you build, deploy, and test. So, what we typically do here is we're going to use DevOps and we're going to create a pipeline, and this pipeline is going to consist of a number of stages that will take us through the lifecycle of building and packaging this component.
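The UI, BFF, backend layering discussed above can be sketched in miniature. The service names and payloads below are invented for illustration; the point is that the BFF aggregates several backend calls into one UI-friendly response.

```python
# Two hypothetical backend microservices (stand-ins for real services,
# which might also call AI capabilities or a database):
def backend_profile(user_id):
    return {"name": "Ada"}

def backend_orders(user_id):
    return [{"id": 1, "total": 9.99}]

def bff_dashboard(user_id):
    """BFF: orchestrate backend calls into one payload shaped for the UI."""
    return {
        "profile": backend_profile(user_id),
        "orders": backend_orders(user_id),
    }

# The UI makes a single call to the BFF instead of several backend calls:
print(bff_dashboard(42))
```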
[1:53] So, typically the first step is to clone the code from your source code management, which is typically Git or some kind of Git-based technology: GitHub, GitLab. And then the next step is to build the app. So, "Build App". In this portion, when you're actually building out the application, you have considerations for a Node.js app, you have things like npm; for Java, you have to figure out the build process for that.

[2:19] So, the pipeline is kind of configured to build each one of these components based on the programming language?

[2:25] Right. So, typically you have one pipeline per component and, as you correctly stated, if you're building a UI and it's got React in it, you're going to use webpack to build the UI TypeScript code and package that into a form that will then be ready to run. So, there are steps, and, again, with a Spring Boot app, you'll package it using Maven or Gradle, and we know that with Node.js you'd use npm and various other steps. So, this part of the pipeline is about packaging the source code in the way that it's needed to then be run.

[3:00] But then, typically, at this point the next step is to run a set of tests. So, you run a set of unit tests against the code, you validate code coverage, and this enables you to determine whether any code changes that have been made in the pipeline are valid. And again, these steps are sequentially moving along, but if any one of these fails it will stop the build, you'll be informed as a developer, and then you'll go back and fix the code or fix the test.

[3:29] So, just to clarify, at this level we're going to do unit tests, so tests within kind of the app context, not really considering connections between the different components. Yeah.
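A hedged sketch of the two ideas above: picking a build command per component language, and gating on code coverage. The command strings and the 80% threshold are illustrative assumptions, not prescriptions from the video.

```python
# One pipeline per component; the build stage picks the toolchain
# that matches the component's language (as discussed: npm for
# Node.js/React, Maven or Gradle for Spring Boot).
BUILD_COMMANDS = {
    "nodejs": "npm ci && npm run build",   # webpack runs via the build script
    "java":   "mvn package",               # or "gradle build"
}

def coverage_gate(covered_lines, total_lines, threshold=0.80):
    """Fail the stage when line coverage drops below the threshold."""
    return (covered_lines / total_lines) >= threshold
```

A pipeline would run `BUILD_COMMANDS[component_language]`, then run the unit tests and call `coverage_gate` with the reported numbers; a `False` result stops the build, exactly like a failing test.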
[3:41] Today we're not going to cover the integration story or performance testing, but typically when you're building a pipeline you need to test the code that you've written using various techniques. Typically, you can use test-driven development, which is a concept we use in the Garage. So, you write the test first and then create the code to validate that. You can use other frameworks; most of the major programming models have good test frameworks around them, whether it's Java, Node, or other languages.

[4:11] So, next step: again, one of the key things to try and drive for is to get to a point of continuous delivery. This is a continuous integration pipeline, but if you fail the test then that's going to prevent this package of code moving into a test environment. So, another common technique we use is code scanning, or vulnerability scanning, or security scanning. So, what we do here is we're looking for vulnerabilities, we're looking for test coverage, we're looking for quality gates. So, if your code isn't of good enough quality, from a code analysis perspective, we could actually stop the build and say we're not going to move this microservice further along the build process.

[4:53] Right. So, if we were building out this, let's say the BFF application was a container-based application running in IKS (IBM Cloud Kubernetes Service), we have some capabilities to allow you to test for that scanning, right? It's the Vulnerability Advisor. So, would that exist in this phase then? So, you tested the code, then you...

[5:13] Yeah. Again, I'm lumping in one or two different stages here; you can do a vulnerability scan, you can do a code scan, it's kind of a common technique to make sure. The good thing about vulnerability scanning is you're validating that there are no security holes in the Docker image, or the container image, as you build it.
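The "quality gate" idea above can be sketched as a simple severity threshold over scanner findings. The severity scale and the default cutoff are assumptions for illustration; real scanners (such as Vulnerability Advisor) have their own reporting formats and policies.

```python
# Rank severities so findings can be compared against a cutoff.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def security_gate(findings, fail_at="high"):
    """Return False (block the build) if any finding is at or above fail_at.

    `findings` is a list of dicts like {"severity": "low", ...},
    a made-up shape standing in for a real scanner's report.
    """
    limit = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < limit for f in findings)
```

As with a failing unit test, a `False` result here halts the pipeline and sends the developer back to fix the flagged code or dependency.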
[5:32] Got it. OK. So, now that we've got up to the scanning phase, what's our next phase? Where are we going?

[5:39] The next step is to take the application that we built and tested and scanned, and now we're going to build it into an image. So, we call it "build image". What this is doing is using the tools to package up the code that we built and put it inside a container. And once we've built the image, we then store that image out in an image registry with a tagged version that goes with it.

[6:07] Right. So, I guess I got ahead of that right there. So, that's where we would actually do that vulnerability scanning: once we've tested the code itself, done some scanning at that level, once we build the image then, something like Vulnerability Advisor...

[6:21] Right. So, you could have that as another stage, but, again, if the vulnerability result is poor then you could prevent this moving forward, and that will inform the developers to either upgrade the level of base images they're using or fix a number of the packages that they've included in it.

[6:38] So, basically, every step of the way, if anything fails you're notified of that and you can go back and fix it.

[6:47] Right. And at the next stage, now you have an image, and the next thing is to deploy it. So, what we're looking to do is to take that image and deploy it inside an OpenShift managed platform, so it will move the container from the image registry and deploy it. And there are a number of different techniques for deployment that are used. Some developers are using Helm, but the more modern approach is to use operators, so there's a lifecycle around that component when it gets deployed.

[7:13] So, and then this deploy... let's say I have a Kubernetes environment, so you would deploy an application, let's say the BFF application, into that Kubernetes environment, right? Yep.
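The "store the image with a tagged version" step above comes down to composing a registry reference the later stages can pin to. The registry host, repository name, and tag scheme below (build number plus short commit SHA) are one common convention, assumed here for illustration.

```python
def image_reference(registry, name, build_number, git_sha):
    """Compose the tagged image reference that gets pushed to the registry.

    Example scheme: <registry>/<name>:<build>-<short-sha>, so every build
    produces a unique, traceable tag rather than reusing "latest".
    """
    return f"{registry}/{name}:{build_number}-{git_sha[:7]}"

# Hypothetical values for the BFF component:
print(image_reference("us.icr.io/team", "bff", 42, "9f8e7d6c5b"))
```

Deploying by this exact reference (via Helm values or an operator's spec) is what lets the later GitOps step say precisely which build is running where.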
[7:27] OK, and I'm guessing at this phase this is still part of the developer flow. Would this be the development environment that you're pushing into, or the test environment?

[7:35] So, typically a continuous integration flow builds and packages the code up for the development environment. In a few seconds we'll talk a bit more about how we move that package of code from the container registry out into a test environment.

[7:53] Got it, so right here, like that. Yep.

[7:56] So, the final step is to validate the health. So, what you're really asking here is, "Is the container running?" Is it sending back operational information such that you can determine that it's healthy enough to validate that not only have the tests run, but it actually started, it's communicating with its dependent services, and it's going to operate in the way that you'd expect it to?

[8:22] Of course, yeah. So, this is where you connect it up to the different components and make sure they're all working together seamlessly. This is where you would probably find issues with integration, or how the teams are connecting up with each other, API contracts, and those kinds of things; those issues will start to bubble up in this space.

[8:42] Yes, and again, the health input is important because you can hook that into operational tools like Sysdig and LogDNA and other monitoring that will give you a better feel for the current state of your applications as they run.

[8:56] So, this has got us as far through the development cycle. The next step, and again, this is starting to be common in the industry, is to use a technique called GitOps, where you would now say: I've got my application, I built it, I packaged it, I've tested it, I've validated it.
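The "validate the health" stage above usually means calling a health endpoint and checking the response. The endpoint shape below (HTTP 200 plus a `{"status": "UP"}` body, similar to common health-check conventions) is an assumption for illustration.

```python
def is_healthy(status_code, body):
    """Interpret a health-endpoint response: running AND reporting UP.

    `body` is the parsed JSON the container's health endpoint returned;
    the {"status": "UP"} shape is an assumed convention, not universal.
    """
    return status_code == 200 and body.get("status") == "UP"

# A pipeline's final stage would poll the deployed container:
print(is_healthy(200, {"status": "UP", "dependencies": {"db": "UP"}}))
```

The same signal the pipeline checks here is what monitoring tools consume continuously once the application is running.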
[9:15] What I'm now going to do is update a Git repo with the build number, the tagged version, and the reference point to the image registry. And then GitOps can trigger off a deployment of that image out into a test environment with all the other components that go with it. And there are a number of GitOps tools out in the market, and one of the ones we use in the Garage is Argo CD, which allows you to monitor a webhook of a Git repo; then it will pull the image, it will pull the deployment reference, and then package it and deploy it ready for use in testing.

[9:52] So, basically, the same discipline that developers have been applying forever with SCMs to manage different versions of their code, now operations teams are taking advantage of that same approach to basically operationalize the deployment of these actual images, containers, and applications.

[10:10] Absolutely, and it comes back to a point we made earlier: this is about discipline and repeatability. There are no humans in this process as you go through it, and the fewer humans touching these steps the better. Again, one of the things we often do with clients is we'll work with them and we'll discover that there's some human process in the middle, and that really slows down your ability to execute. So, it's about automation, discipline, and repeatability, and if you can get to this point and prove that this code is good enough to run in production, you can then start to move towards that golden milestone of continuous delivery.

[10:48] Right. So, once you've automated all of this, that's when you can truly say you have CI/CD. That's when you can finally get to that level.

[10:57] OK, so, honestly, Matt, this was a great overview of all the concepts we've discussed already.
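The GitOps handoff above, committing build metadata so a tool like Argo CD can sync it, can be sketched as a pure manifest update. The manifest keys below are invented for illustration; a real setup would edit Kubernetes manifests or Helm values in the Git repo, and Argo CD would reconcile the cluster to match.

```python
def update_gitops_manifest(manifest, image_ref, build_number):
    """Produce the updated deployment manifest the CI stage commits to Git.

    The original manifest is left untouched; committing the returned
    version is what triggers the GitOps tool to deploy the new image.
    """
    updated = dict(manifest)
    updated["image"] = image_ref          # tagged registry reference
    updated["buildNumber"] = build_number # traceability back to the build
    return updated

current = {"image": "us.icr.io/team/bff:41-aaa1111", "buildNumber": 41}
print(update_gitops_manifest(current, "us.icr.io/team/bff:42-9f8e7d6", 42))
```

From here no human touches the deployment: the commit is the deployment action, and the history of the repo is the history of what ran where.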
[11:02] If you've enjoyed this video or have any comments, be sure to drop a like or a comment below. Be sure to subscribe, and stay tuned for more videos like this in the future.