AI Governance and Security Essentials
Key Points
- AI offers huge benefits but also poses risks of incorrect outputs and reputational damage, making strong governance and security essential.
- A 2025 IBM report shows 63 % of organizations lack an AI governance policy, leaving a critical gap in risk mitigation.
- Governance responsibilities (typically led by the Chief Risk Officer) focus on ensuring AI is responsible, explainable, reliable, and fully documented with traceable source attribution.
- Security responsibilities (usually led by the Chief Information Security Officer) target technical vulnerabilities, protection against attacks, and control of “shadow AI” instances that could cause data leaks.
- Most governance failures stem from self‑inflicted issues such as using poorly sourced or improperly trained models, underscoring the need for rigorous policy and oversight.
Sections
- Reducing AI Risk with Governance & Security - The segment highlights AI’s promise and associated dangers, stresses the lack of governance policies in most firms, and outlines how robust AI governance and security frameworks—led by chief risk and information security officers—can mitigate reputational and business threats.
- Governance vs Security in AI Risks - The speaker contrasts governance concerns—misalignments, policy breaches, bias, and model drift—with security threats from insiders or external attackers, outlining the potential damages such as hate speech, unfairness, and intellectual‑property theft.
- AI Risk Governance and Controls - The speaker outlines essential governance measures—clear policies, accountability structures, and security practices of prevention, detection, and response—to manage AI risks and ensure proper model training and data sourcing.
- Integrated AI Governance and Security Framework - The speaker outlines the need for penetration testing, automated prompt‑injection testing, posture management, and a layered governance framework that surrounds AI with protective rings to address security risks.
- AI Security Posture Management - The speaker outlines AI security posture management, covering misconfiguration safeguards, model penetration testing, and the deployment of an AI firewall/gateway to enforce policies, detect prompt injection, and block data exfiltration.
Full Transcript
# AI Governance and Security Essentials **Source:** [https://www.youtube.com/watch?v=4QXtObc61Lw](https://www.youtube.com/watch?v=4QXtObc61Lw) **Duration:** 00:14:57 ## Summary - AI offers huge benefits but also poses risks of incorrect outputs and reputational damage, making strong governance and security essential. - A 2025 IBM report shows 63 % of organizations lack an AI governance policy, leaving a critical gap in risk mitigation. - Governance responsibilities (typically led by the Chief Risk Officer) focus on ensuring AI is responsible, explainable, reliable, and fully documented with traceable source attribution. - Security responsibilities (usually led by the Chief Information Security Officer) target technical vulnerabilities, protection against attacks, and control of “shadow AI” instances that could cause data leaks. - Most governance failures stem from self‑inflicted issues such as using poorly sourced or improperly trained models, underscoring the need for rigorous policy and oversight. ## Sections - [00:00:00](https://www.youtube.com/watch?v=4QXtObc61Lw&t=0s) **Reducing AI Risk with Governance & Security** - The segment highlights AI’s promise and associated dangers, stresses the lack of governance policies in most firms, and outlines how robust AI governance and security frameworks—led by chief risk and information security officers—can mitigate reputational and business threats. - [00:03:05](https://www.youtube.com/watch?v=4QXtObc61Lw&t=185s) **Governance vs Security in AI Risks** - The speaker contrasts governance concerns—misalignments, policy breaches, bias, and model drift—with security threats from insiders or external attackers, outlining the potential damages such as hate speech, unfairness, and intellectual‑property theft. - [00:06:10](https://www.youtube.com/watch?v=4QXtObc61Lw&t=370s) **AI Risk Governance and Controls** - The speaker outlines essential governance measures—clear policies, accountability structures, and security practices of prevention, detection, and response—to manage AI risks and ensure proper model training and data sourcing. - [00:09:22](https://www.youtube.com/watch?v=4QXtObc61Lw&t=562s) **Integrated AI Governance and Security Framework** - The speaker outlines the need for penetration testing, automated prompt‑injection testing, posture management, and a layered governance framework that surrounds AI with protective rings to address security risks. - [00:12:24](https://www.youtube.com/watch?v=4QXtObc61Lw&t=744s) **AI Security Posture Management** - The speaker outlines AI security posture management, covering misconfiguration safeguards, model penetration testing, and the deployment of an AI firewall/gateway to enforce policies, detect prompt injection, and block data exfiltration. ## Full Transcript
AI is already doing some great
things and the best is yet to come.
But with this greatness comes risk.
Risk that the system will do the wrong thing,
give incorrect answers and expose the organization
to reputational and business damage.
How can you reduce AI risk? Well,
with a strong governance and security capability. Unfortunately,
according to the 2025
IBM Cost of a Data Breach Report,
63% of organizations
had no governance policy in place.
These two areas, governance and security,
have some overlap, but mostly complement each other in important ways.
In this video, we'll take a look at what these are
and how you can leverage them to reduce AI risk.
Okay, let's take a look at the problem space.
Get that out of the way.
First of all we're going to look at compliance issues
So, in theory you should have a governance policy for AI.
And you should have a security policy for AI. Now,
who are the primary stakeholders that are really involved with this? Well,
when it comes to the governance part,
it's probably going to be the chief risk officer.
That's not the only person that's going to care,
but they may be the primary person that cares.
Versus from a security standpoint,
it's more likely to be the chief information security officer.
Again, both of these roles could care about this,
but that's who might be the lead in the primary in these.
Next thing we're gonna take a look
at in terms of what the AI does.
What are our particular concerns. From a governance standpoint,
we want to make sure that the thing is responsible.
That it's not doing things that put us in a bad light,
or put our users in a bad light or in a bad situation.
We wanna make sure that our AI is explainable.
That it doesn't just make up stuff.
Ah, that it the stuff it tells is, in fact, reliable.
And a big part of that involves documentation and source attribution.
We wanna make sure that we can trace all of this stuff back
and, therefore, make it more trustworthy.
Now, from a security standpoint,
we're looking at vulnerabilities
that might exist within the AI system itself.
Where someone's trying to attack the system.
We also are gonna be concerned about things like shadow
AI, that someone created some
AI instance without approval, without authorization.
Yet this thing is out there running,
and we might wanna make sure that it's locked down
because it might be a source of data leaks. Now,
in terms of the cause
of some of the issues that we might be trying to
to guard against with a governance policy and a security policy,
I would say in general, and this is a big generalization,
because there's definitely going to be exceptions to this.
But think of it this way, that the cause in
that we're really guarding against in a governance case,
is really self-inflicted wounds.
This is where we used a bad model.
We pulled it from a bad source.
The model wasn't trained properly.
The ingredients that went into the cake, if
as it were, were not the right ones.
They weren't pure.
Ahm. And the result of this is probably unintentional.
So, we we did all of this.
We didn't mean to make a big mess, but we did.
So we're trying to guard against that.
We're looking at things like misalignments
and policy violations and ethical lapses.
We wanna make sure those don't occur.
And that's the realm of governance.
Now, on the security side, what are we gonna gonna be caring about? Well,
in this case, the damage
and the cause of this is more done externally or by others.
So, self-inflicted versus others inflicted.
And in this case, it could be internal,
bad insiders who are doing something that they really shouldn't be doing.
Or it could be an external person
who's attacking the system, either one of those.
But in these cases, it's more intentional.
Someone is trying to break the system.
That's what the security policy is really concerned with.
Now let's take a look at the damage
that can occur in these cases. So,
if first of all, from a governance standpoint,
we're looking at things like PAP, hate, abuse and profanity.
We wanna make sure that our system doesn't say
really outrageous things that insult our users.
We wanna make sure that it is fair. It's unbiased.
It doesn't bias toward or against any particular information or or population.
We want to make sure that the model doesn't drift.
It started off true, but now it's getting a little more untrue as it starts
learning more and more things.
So we want to make sure that it's still solid.
We're looking at issues like intellectual property.
Is someone able to steal our intellectual property through this?
And also important is
is our model trained on intellectual property
that we didn't actually have rights to use?
In other words, it might be copyrighted material.
And then, when our model learned on that, it starts using that.
And now we're subject to a lawsuit.
That would be a bad thing.
Hallucinations.
We wanna make sure that it's not just making up answers,
that they need to be grounded in truth
and the reputation of our organization.
We wanna make sure that the AI, which is representing us,
is doing it in a way that we would approve of. Now,
on the security side, what are we looking at here? Well,
you've heard me if you've seen videos,
I talk about this thing called the CIA triad,
where we're looking at confidentiality, integrity and availability.
Those are the three things that we're about in every security case.
Those are the things that we're trying to make sure we get right.
So confidentiality, we wanna make sure that, for instance, the system doesn't exfiltrate.
It doesn't send sensitive information outside of our system
so that other people can access it that aren't approved to do that.
We wanna make sure that the system,
from an integrity standpoint, cannot be manipulated.
That someone can't figure out how to make the system
do other things that we don't intend it to do, that it gives bad answers.
The data has been poisoned. Things like that.
And then finally, we wanna make sure that it's available,
that someone can't do a denial of service attack
against the system and, therefore, make it
not available for the people who need it.
All of these things ultimately
then feed into this idea of risk.
Different kinds of risk and different kinds of policies related.
But it's all about AI risk. So,
now we've taken a look at those risks.
Let's take a look at what we need to do about them.
Well, one of the things is we're gonna need controls in place.
Controls are the things that let us control what's happening with the system. So,
some of the things we'll look at on the governance side
is we wanna have a set of rules that we have spelled out.
We wanna make sure that we're following them,
that we've put those into policies that are well understood.
And we're finding a lot of organizations
don't do that and have not had that.
It's pretty hard to know if you're succeeding
if you've never even defined where the finish line exists.
We need accountability structures.
Who's responsible for this and who's responsible for which parts of it?
On the security side, we're mo looking at different things.
We're trying to do prevention,
detection and response.
That is, I wanna be able to make sure
that I to the extent possible.
The system is not vulnerable to begin with,
and then be able to find out when it is,
when it's under attack and then what we're supposed to do about it.
So prevention detection and response.
Now, with our models, what specifically do
we need to do from a governance standpoint.
Well, we wanna make sure that they're trained properly.
We need to know what the sources are
so that that information is what we intend it to be.
If you use bad sources, you get bad data
and bad responses out of your AI.
We need to know the lineage of the model.
Most organizations are not going to create their own models. But if they do,
they need to be able to know where did the ingredients come from?
If I go to an open-source
model repository, where did I get it?
Do I have the latest
and greatest and authentic version, or do I have some illicit
copy that somebody's made, some bogus version of it.
And how many ah who's touched it along the way?
That's the lineage. We wanna be able to see all of that.
We need an acceptable use policy.
What are the things that we're okay with our AI doing
and what are we not okay with it doing?
And how do we want employees to understand their use of that?
And as I mentioned before, IP risk. If
we're gonna be creating models,
or even if we're gonna take a model and train it with our information,
we need to make sure that it's, in fact,
our information that we have rights to.
So, those are all things that we would look
at in terms of model
and governance of those. On the security side, well,
again, we're thinking about an attacker,
something other than us that is coming in.
And what is the number one attack type that we're concerned with,
especially with generative AI?
It's prompt injections.
These are things where people are basically socially engineering our
AI, giving it instructions to override its original instructions,
and then having it do something that we didn't intend it to do.
So I need to have protections against that.
I need to have protections against unauthorized access.
These are gonna be bigger issues
as we move toward agentic AI, as well.
I wanna make sure that that agent that has autonomy
isn't gonna just go off and do something really crazy,
because we're giving it a lot of power to do certain things.
So, unauthorized access. We don't wanna allow that.
We need to do penetration testing of these models. We bring them in.
We need to find out if they're vulnerable
to these types of attacks or not.
And many, many more. Prompt injections.
There's more of those than you can dream up.
So, we need tools that are gonna to be able to do
automated prompt injection testing.
We're also looking at a thing we call posture management
to make sure that the system hasn't been misconfigured in a way
that allows exposure of information that is sensitive to us.
So, those are some of the things that we're looking at from a governance perspective
and from a security perspective. Okay.
So, now we've taken a look at the risks
and some of the controls and things like that that we need in place.
Now, let's take a look at a solution
framework that implements all of that.
So, instead of thinking of governance
and security for AI
as separate kind of interlinked
and overlacking, overlapping rings,
in fact, we could come up with a more integrated solution where we have
layers of protection against the different types of threats that we're trying to deal with.
So, for instance, at the center is our
AI that we're trying to protect. Then, a ring around that of protection.
That's our governance layer.
And in this case, I'm gonna do discovery
and management of AI use cases.
Those are the things that if I don't define what those are,
I won't know if I've achieved the outcomes I intend or not. So,
I wanna be able da to define what those are
right up front. I need to be able to do model management.
So, I've got a whole bunch of models.
How do I make sure that they're doing what I intend?
How do I know where they came from?
All of those kinds of things. I need to do risk management.
I need to be able to figure out what the risks are.
Quantify them to the extent possible,
and at least expose what those are.
So we can map those and try to address those as we recognize them.
I need to be able to monitor
and check the performance of this system.
It won't do any good if it's taking a month
in order to get an answer back. So,
that's also a part of these concerns.
I'm also looking at compliance.
I may have be in an industry
where I need to be able to do certain things
that will get me in trouble.
I need to avoid other things
I need to do in order to perform due diligence.
So I need to make sure that the compliance of the AI system
is in line with what the expectations are. And
ultimately lifecycle management,
because this thing isn't just a set it and forget it.
These things have lifecycles.
They begin, they move through certain levels of maturity
and in certain parts of it need to go away
and other parts of it come on.
So I need to have this more holistic view
of what the system is. Now, around that layer,
we add the security protections that are necessary.
So, one of the things I wanna do here, I was talking about discovering
AI use cases.
How about we discover the AI models
that are out there in our environment, especially the shadow
AI that may be out there.
And once I've discovered it, I need to do this thing we call
AI security posture management.
That is a way to guard against misconfigurations,
to lock down and make sure that the security policy
for a particular system is being followed,
that if it's not supposed to have public facing data,
the public can't get to it, that there's strong access
controls, encryption and things like that that might need to be in place.
And we want to make sure that all of these instances of AI
that we just found are, in fact, complying with our security policy.
I need to test these models. Do pen testing
and other types of model scanning
to make sure that the models themselves have not been infected,
because if they've been infected, they might leak information out
or they might give us wrong information.
Another thing we have to look at
is maybe install something that I'll call an AI firewall, or an AI gateway
that implements guardrails,
that looks for exfiltration cases.
So this is something that you set up between the user and the AI.
You put it there and that it sees all the prompts that are coming in.
And it looks to see if they conform to policy or not.
If it looks like we're experiencing prompt injection, well,
then we can block it right there at the firewall.
If it looks like our system now has been tricked into leaking information,
we can look at the information on the way back out
and maybe redact it or block it entirely.
So we can test for these things in the model.
But then we also implement the protections in real time.
I'm also gonna look at a threat monitor.
I need to be able to understand
what things are happening to my system.
If somebody just did a bunch of stuff through this firewall
and now they're trying to violate different policies,
maybe someone should be aware of that.
And then, ultimately, I wanna be able to see all of this stuff.
I need some sort of dashboard that visualizes that for me.
That shows me, in priority,
what are the critical vulnerabilities that I have in my system?
Who's trying to hack me?
Am I in compliance from a security standpoint
against some of the things like the National Institute
of Standards Risk Management Framework, other things like that?
So, this now, if you put all of these things together, you have
what is really a much stronger solution than you would have
if this was only by itself.
So the way to think about this then
is if we have AI
and we add to it
governance plus security,
then if we do it right,
we lower risk
and that's ultimately what we're trying to do.