Robust Cloud API Architecture with IaC
Source: https://www.youtube.com/watch?v=sKfep-UmZeM
Duration: 00:10:30
Key Points
- Source control (typically Git) serves as the central artifact repository and infrastructure‑as‑code hub, storing server config files, API definition files, and pipeline scripts for the entire system.
- Defining all environment specifications (development, test, production) and pipeline tasks in the repository enables versioned, repeatable builds and easy reconstruction of any failed component.
- A developer pushes an updated API definition (e.g., “api 4”) to the repo, triggering a webhook that starts an automated pipeline to promote the API through the defined environments.
- The pipeline’s first task checks the Kubernetes cluster for the required test environment and creates it (with specified CPU, memory, and storage) if it does not already exist, ensuring consistent progression toward production.
Sections
- Source Control Foundations for Cloud APIs (00:00:00) - Whitney Lee explains how to use a Git‑based artifact repository to store configuration files, API definitions, and pipeline scripts, treating the infrastructure as code to enable visibility and collaboration when building a cloud‑based API solution.
- Automated API Promotion Pipeline (00:03:46) - Describes a webhook‑triggered CI/CD pipeline that creates a test Kubernetes environment, deploys the new API, runs tests, and prepares it for production promotion.
Full Transcript
What is an efficient yet robust way to architect an API solution in the cloud? My name is Whitney Lee, and I'm on the cloud team here at IBM.

Let's talk about the foundation of our system. For me, that's source control. Source control can also be called an artifact repository, and the most popular one is Git.
So what do we want to store in our repository? We want to store all the artifacts related to what will eventually be our final system. A good example would be server configuration files: we'd want to store one for our development environment, one for our test environment, and one for our production environment.

That's a great start. What else do we want to store in our artifact repository? Well, if we're building an API solution, we're going to want to store all of the artifacts related to our APIs, so our API definition files. We have a definition file for API 1, for API 2, and for API 76... just kidding, API 3.

I also know that in my final system I want to have pipeline builds, so now is a good time to define the tasks in the pipeline file: we'll build task 1 and task 2. And then we're also going to want our pipeline run file to be defined here in our source control. A hypothetical layout for this repository is sketched below.
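To make that concrete, here is a minimal sketch of how such a repository might be laid out and sanity-checked. The directory and file names are assumptions for illustration; the video does not prescribe a layout.

```python
# Hypothetical layout of the artifact repository; names are illustrative.
from pathlib import Path

EXPECTED_ARTIFACTS = [
    "environments/dev-server-config.yaml",   # development server configuration
    "environments/test-server-config.yaml",  # test server configuration
    "environments/prod-server-config.yaml",  # production server configuration
    "apis/api-1.yaml",                       # API definition files
    "apis/api-2.yaml",
    "apis/api-3.yaml",
    "pipeline/tasks.yaml",                   # task 1, task 2, ...
    "pipeline/pipeline-run.yaml",            # the pipeline run file
]

def missing_artifacts(root: str) -> list[str]:
    """Return every expected artifact missing from a checkout of the repo."""
    return [p for p in EXPECTED_ARTIFACTS if not (Path(root) / p).exists()]

if __name__ == "__main__":
    missing = missing_artifacts(".")
    print("repo complete" if not missing else f"missing: {missing}")
```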
So source control is great. This approach is also called infrastructure as code, because we're defining the infrastructure of our system before that system is even built. It provides visibility into what's happening and great collaboration within the team. In addition, if any of the pieces of the system fail, they can be rebuilt very easily from our definition files.
So let's build out our cluster, a Kubernetes cluster. Like any cluster, ours is going to have physical resources: memory and CPU for our nodes, and physical disk space for storage. Those are the physical resources behind our cluster.

Then let's think about where we want to start. Let's start with our development environment. We'll build the development environment in our cluster, and it's going to be built according to the specifications we've already defined in our repository, along the lines of the sketch below.
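As a rough illustration, a server configuration file for the development environment might capture something like the following. The field names and values here are assumptions, not a standard format.

```python
# A sketch of what one server configuration file might specify, expressed
# as a Python dict. Field names and values are illustrative assumptions.
DEV_ENVIRONMENT = {
    "name": "dev",
    "resources": {
        "cpu": "2",         # CPU for the environment's workloads
        "memory": "4Gi",    # memory for the nodes
        "storage": "20Gi",  # physical disk space for persistent storage
    },
}
```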
So let's say we have a developer, and our developer is working on a new API. That new API, very cleverly, is going to be called API 4, and the developer is using our dev environment to work it out. Let's say the developer feels like API 4 is ready to go. What we want is a pipeline build that's going to take API 4 and promote it all the way up to production. That pipeline build is going to need to get triggered, and in this case we'll trigger it by having our developer, when the API is ready, push that definition file into our source control. That push is going to trigger a webhook, which is going to trigger our pipeline build, roughly like the sketch below.
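Here is a minimal sketch of that webhook receiver, using only the Python standard library. In practice the Git host (for example, a GitHub push event) would call a pipeline engine such as Tekton or Jenkins directly; the payload shape and the start_pipeline stand-in are assumptions.

```python
# Minimal webhook receiver sketch; the payload shape is an assumption.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def start_pipeline(api_name: str) -> None:
    # Stand-in for kicking off the real pipeline build.
    print(f"starting pipeline build for {api_name}")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # Assume the push payload names the API definition that changed.
        start_pipeline(event.get("api", "unknown"))
        self.send_response(202)  # accepted: the pipeline runs asynchronously
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```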
Let's say the job of task one of our pipeline build is to check our Kubernetes cluster, look for an environment called test, and if there's not one there, build one according to the specs defined in our test server configuration file. So task one is going to trigger a build of a test environment; one way to express that is sketched below.
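This sketch uses the official kubernetes Python client (pip install kubernetes) and models the test environment as a namespace plus a ResourceQuota; the quota values stand in for whatever the test server configuration file actually specifies.

```python
# Task 1 sketch: ensure a "test" environment exists in the cluster.
from kubernetes import client, config

def ensure_test_environment() -> None:
    config.load_kube_config()  # use load_incluster_config() inside the cluster
    core = client.CoreV1Api()
    existing = {ns.metadata.name for ns in core.list_namespace().items}
    if "test" in existing:
        return  # environment already there; nothing to build
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name="test"))
    )
    core.create_namespaced_resource_quota(
        "test",
        client.V1ResourceQuota(
            metadata=client.V1ObjectMeta(name="test-quota"),
            spec=client.V1ResourceQuotaSpec(
                hard={
                    "requests.cpu": "4",        # illustrative values; a real
                    "requests.memory": "8Gi",   # task would read them from the
                    "requests.storage": "40Gi", # test server config file
                }
            ),
        ),
    )

if __name__ == "__main__":
    ensure_test_environment()
```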
Then let's say task two takes whatever API triggered the webhook that triggered the pipeline build, puts that API into the test environment, and runs a suite of tests on it. In that way, this one pipeline build that was defined here has promoted our API from the dev environment to the test environment; a sketch of that step follows.
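A sketch of task two might look like this, shelling out to kubectl and pytest. The manifest path, deployment name, and tests/ directory are assumptions about how the repo is organized.

```python
# Task 2 sketch: deploy the triggering API into test and run the test suite.
import subprocess

def deploy_and_test(api_manifest: str, deployment: str) -> bool:
    # Apply the API definition file into the test environment.
    subprocess.run(["kubectl", "apply", "-f", api_manifest, "-n", "test"],
                   check=True)
    # Wait until the new API's pods are actually up.
    subprocess.run(["kubectl", "rollout", "status",
                    f"deployment/{deployment}", "-n", "test"], check=True)
    # Run the suite of tests against the deployed API.
    result = subprocess.run(["python", "-m", "pytest", "tests/"])
    return result.returncode == 0  # promote further only if the suite passes

if __name__ == "__main__":
    ok = deploy_and_test("apis/api-4.yaml", "api-4")
    print("ready for canary" if ok else "stopping: tests failed")
```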
Great, now our API is ready to go to production. Or not! We can do more to make sure it is ready before it goes all the way to our production site. Let's say we have our production environment built in our cluster, and that production environment already has our APIs 1, 2, and 3 in it. What we can do is also build a canary environment. We'll define our canary environment in our repo, and we want our canary environment to be an exact replica of our production environment, so it also has APIs 1, 2, and 3 already running. A quick way to check that replica property is sketched below.
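As a small sketch of the "exact replica" idea: before sending any traffic, a pipeline step could confirm the canary spec in the repo matches production everywhere except the environment name. The file names and the PyYAML dependency are assumptions.

```python
# Check that the canary environment spec mirrors production (name aside).
import yaml  # pip install pyyaml

def is_replica(prod_file: str, canary_file: str) -> bool:
    with open(prod_file) as f:
        prod = yaml.safe_load(f)
    with open(canary_file) as f:
        canary = yaml.safe_load(f)
    prod.pop("name", None)    # the environment name is allowed to differ
    canary.pop("name", None)
    return prod == canary

if __name__ == "__main__":
    print(is_replica("environments/prod-server-config.yaml",
                     "environments/canary-server-config.yaml"))
```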
Now, what we can do here is consider our end user. Our end user wants to make a call into our cluster. That call is going to go through a gateway to get into the cluster, and then that gateway is going to send the traffic to a load balancer. The load balancer is going to divide the traffic between prod and canary. For a canary environment, it might route, say, one percent of traffic to the canary environment; it could be any number you choose, but it's going to be a small percentage. The rest of the traffic will go into production. In this way, once API 4 is in the canary environment, it experiences some real-world web traffic to make sure it is ready before it's promoted all the way into production. A toy model of that routing decision is sketched below.
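Here is a toy model of the load balancer's weighted routing: roughly one percent of requests go to the canary and the rest to production. A real gateway or service mesh would handle this, but the arithmetic is the same.

```python
# Toy weighted routing: ~1% of traffic to canary, the rest to prod.
import random

CANARY_WEIGHT = 0.01  # the small, configurable percentage from the video

def route() -> str:
    return "canary" if random.random() < CANARY_WEIGHT else "prod"

if __name__ == "__main__":
    sample = [route() for _ in range(100_000)]
    print(f"canary share: {sample.count('canary') / len(sample):.2%}")
```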
Something else that I think would be important to build into our API solution is logging and metrics collection. We can collect logs and metrics from all of the environments that we've built; we can, and we will, and we should. With those logs and metrics, we can go one step further and use tools like Prometheus or Grafana to put those metrics in a UI, in a human-readable form, with graphs, manipulated in a way that best serves the company's interests. A minimal example of exposing such metrics follows.
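As a minimal example of the metrics side, the official prometheus_client library (pip install prometheus-client) can expose a counter for Prometheus to scrape, which Grafana can then chart. The metric name and labels are illustrative assumptions.

```python
# Expose an illustrative request counter on /metrics for Prometheus to scrape.
import time
from prometheus_client import Counter, start_http_server

API_REQUESTS = Counter("api_requests_total", "API requests served",
                       ["api", "environment"])

if __name__ == "__main__":
    start_http_server(8000)  # metrics at http://localhost:8000/metrics
    while True:  # simulate traffic so the counter moves
        API_REQUESTS.labels(api="api-4", environment="canary").inc()
        time.sleep(1)
```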
So who are the players interested in this system? A business analyst, for example, would be especially interested in the graphs and the insights that the logging and metrics provide. An operations manager would be interested in the logging and metrics too, but also in the infrastructure as code and what's going on technically. And then we might have an architect, who's keeping an eye on the source control and on the system as a whole.
What's worth mentioning here is that this is all built on a Kubernetes cluster, but the tools, like the pipeline builds, the load balancer, and the logging and metrics collection and display, are all going to be third-party tools that are installed and maintained separately from the Kubernetes cluster itself. The other choice is to use a robust platform like OpenShift that's built on top of Kubernetes. With OpenShift, pipeline tools, load balancers, logging and metrics, and role-based access control are all built into the platform and maintained with the platform, in addition to a host of other benefits.
In conclusion, this solution is beneficial because it uses infrastructure as code, which provides collaboration, visibility, and a source of truth if any piece of the system should go down. We also have rapid API promotion from dev all the way up through production, and you have control over that: you don't have to use pipelines for every step of the way, but you definitely can. Not only is it rapid, but it is very low risk, because the API is tested in a test environment and tested again as a canary deployment. And then finally, we have the logging and metrics collection and display.
Thank you. If you have questions, please drop us a line below. If you want to see more videos like this in the future, please like and subscribe. And don't forget, you can grow your skills and earn a badge with IBM Cloud Labs, which are free, browser-based, interactive Kubernetes labs.