Five Steps to Trusted AI
Key Points
- The speaker likens building trustworthy AI to a home renovation, emphasizing that both require a careful, step‑by‑step process before the final product can be relied upon.
- Three major risks of generative AI are highlighted: legal exposure from evolving regulations, damage to brand reputation from mishandled outputs, and operational hazards such as leaking PII or trade secrets.
- To create trusted AI, the first principle is “know your scope” – clearly define what the model is allowed to do and set guardrails that route out‑of‑scope requests (e.g., pricing queries) to human agents.
- The second principle is “know your foundation” – understand the underlying data, infrastructure, and constraints of the system, just as a renovator must be familiar with a house’s pipes and wiring before beginning work.
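The guardrail routing described in the first principle can be sketched in code. This is a minimal keyword-based illustration, not any specific product's API: `route_request`, `OUT_OF_SCOPE_TOPICS`, and the keyword list are all hypothetical names, and a production system would typically use an intent classifier rather than keyword matching.

```python
# Sketch of a scope guardrail: requests touching out-of-scope topics
# (e.g., pricing) are routed to a human agent instead of the model.
# All names here are hypothetical, for illustration only.

OUT_OF_SCOPE_TOPICS = {"pricing", "price", "discount", "quote"}

def route_request(user_message: str) -> str:
    """Return which handler should answer: 'model' or 'human_agent'."""
    words = set(user_message.lower().split())
    if words & OUT_OF_SCOPE_TOPICS:
        return "human_agent"   # outside the guardrails: hand off
    return "model"             # in scope: let the chatbot answer

print(route_request("What discount can I get on the premium plan?"))  # human_agent
print(route_request("How do I reset my password?"))                   # model
```

In a real deployment the handoff branch would transfer the chat session or open a ticket; the string return value here just makes the routing decision explicit.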
**Source:** [https://www.youtube.com/watch?v=mfxgfU5Abdk](https://www.youtube.com/watch?v=mfxgfU5Abdk) **Duration:** 00:09:47

## Sections
- [00:00:00](https://www.youtube.com/watch?v=mfxgfU5Abdk&t=0s) **Trusted AI Like Home Renovation** - The speaker compares establishing trustworthy AI models to a kitchen remodel, highlighting legal, reputational, and operational risks before outlining five steps to secure both generative and traditional AI.

Full Transcript
AI is everywhere, but how can we trust the models that are there for us to consume? It's similar to a home renovation. I'm actually renovating my kitchen, and I'm so excited to get brand-new countertops, all-white cabinets, stainless steel appliances, and brand-new lighting. I might want these things tomorrow, but I need to follow a process to make sure that kitchen is delivered to me safely and I can trust it for years to come. It's a little bit like generative AI models: we might want them tomorrow, but we need to take the steps to make sure they're trusted and secure. In this video, I'll cover five ways to build trusted AI, both generative and traditional models. But first, let's talk about what could go wrong. Let's talk about three risks.

Just like in my home renovation, there are many risks that can occur: everything from making sure the people doing the work have legal protections in case something happens on the job, to redoing a floor because the right process or steps weren't completed and more money has to be spent to fix it.
So let's cover three risks for generative AI models. The first risk is legal. There is a growing list of legal implications for using AI improperly or not following all the steps organizations need to take to use a model. There is a growing number of regulations, like the EU AI Act, the New York hiring bias law, and the executive order from the White House on generative AI, and that number will only grow over time.

Next, we have reputation risks. Everything from your brand to your reputation matters. There's an instance of a large organization whose deployed generative AI chatbot gave away a very high-value item for only a fraction of its original cost.
Finally, we have operational risks. These risks can result in immense fines or loss of productivity for a company. This could be everything from unintentionally regurgitating PII to exposing trade secrets.

Now that we understand the risks at stake, let's talk about how to build trust in our AI models. Here are five simple tips.
First: know your scope. Just like in my home renovation, I want to define specifics about what I'm going to be working on and what contractors can and can't do; I just want them to focus on the kitchen. In the same way, I'm going to define my AI model's scope by setting guardrails around it. I'm going to say exactly what the model can do and, even more importantly, what it can't. A good example of this is chatbots. If I create an AI chatbot for an organization, I might not want the chatbot to answer any questions related to pricing. Pricing is outside of the guardrails in this case, so I'm going to send all of those questions outside of the generative AI model and straight to an agent. Second, we have
the foundation: know your foundation. I know all the details about my house before I get started on the project, right? I know the types of pipes I might have, and as much about my house as I can, so I don't run into risks I didn't see coming. I want the same thing for my model. I want to understand the data used to build the model, what it was recommended to be used or not used for, what type of model it is, open or closed, and the model architecture. One way to do that is through model cards. Model cards show all the details about a model, such as a large language model, for you to use: everything from the data that was used to build the model, to the model's architecture, to how the model was trained, to how it can or can't be used. This gives you the foundation you need to get started and to know the model you select for
use.

Third: knowing and setting your lifecycle governance. In my home, with the help of a contractor, I'm going to document the entire home improvement process so that I know the different steps and safeguards it takes to move from one stage to the next, as well as who's doing the work. The same goes for my model: I want to set up and document a specific process so that I know all the steps being used to build the model, the versions of the model being used, who's making model changes, and which version is going to production. That should include everything from any training data being used, to the different types of prompts I'm using to build my model, to the test data, right? That's verifying that I want to move to the next
step.

Next, we have our fourth step, which is monitoring risk. Throughout the home renovation process, I'll need to monitor the risks every step of the way. I need to check in and make sure that the home is still stable and standing, and monitor that nothing is going to go wrong with my structure throughout the process; that might include several tests and tracking of that information. The same thing goes for my model: I'm going to want to track steps over time and metrics specific to bias and hallucination, to make sure the model is operating the way I'd like, both in production and throughout the testing process. This will ensure that when an issue does arise, I'm able to quickly react and take action on it. Even better, if you can find a way to automate the process so it can be done seamlessly, you can move on to other tasks and be alerted if there are any issues at all.
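The automated monitoring described in this fourth step can be sketched in a few lines. This is a minimal illustration only: the metric names (`hallucination_rate`, `bias_score`), the thresholds, and `check_metrics` are hypothetical, and a real system would compute these metrics from evaluation data and wire alerts into an on-call channel.

```python
# Sketch of automated model monitoring: record metric snapshots over time
# and return alerts when a threshold is crossed. Metric names and
# thresholds are hypothetical, for illustration only.

THRESHOLDS = {"hallucination_rate": 0.05, "bias_score": 0.10}

history: list[dict] = []  # metric snapshots tracked over time

def check_metrics(snapshot: dict) -> list[str]:
    """Store a snapshot and return alerts for any breached thresholds."""
    history.append(snapshot)
    return [
        f"ALERT: {name}={value:.2f} exceeds limit {THRESHOLDS[name]:.2f}"
        for name, value in snapshot.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

print(check_metrics({"hallucination_rate": 0.02, "bias_score": 0.04}))  # []
print(check_metrics({"hallucination_rate": 0.08, "bias_score": 0.04}))  # one hallucination alert
```

Keeping the full `history` is what lets you track metrics over time rather than only reacting to the latest snapshot.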
Finally, we cannot leave out compliance. It's important to know, in my home renovation process, whether I'm up to code, right? What code regulations are going to impact my renovation? I need to track them over time and link them to specific steps of the process. The same thing goes for my generative AI model: I'm going to link different parts of the model, or different steps, to potential legal regulations as well as use cases, so that if a law or a requirement changes, I can very quickly track it back to the part of the model, or, if I have a number of models, to the models impacted by that rule. It's extremely important that I can quickly react and adjust so that I'm not penalized severely for that error with
my model.

Nothing is as important to a relationship, yet as fragile, as trust. AI can truly transform your customer experience, but keep these five tips in mind to make sure the models you're building are models your customers can trust. And remember: if you get into trouble, there are people you can trust who can help you along the way. Thanks for watching! Before you leave, please remember to like and subscribe.