Learning Library

← Back to Library

AI Governance and Security Essentials

Key Points

  • AI offers huge benefits but also poses risks of incorrect outputs and reputational damage, making strong governance and security essential.
  • A 2025 IBM report shows 63 % of organizations lack an AI governance policy, leaving a critical gap in risk mitigation.
  • Governance responsibilities (typically led by the Chief Risk Officer) focus on ensuring AI is responsible, explainable, reliable, and fully documented with traceable source attribution.
  • Security responsibilities (usually led by the Chief Information Security Officer) target technical vulnerabilities, protection against attacks, and control of “shadow AI” instances that could cause data leaks.
  • Most governance failures stem from self‑inflicted issues such as using poorly sourced or improperly trained models, underscoring the need for rigorous policy and oversight.

Full Transcript

# AI Governance and Security Essentials **Source:** [https://www.youtube.com/watch?v=4QXtObc61Lw](https://www.youtube.com/watch?v=4QXtObc61Lw) **Duration:** 00:14:57 ## Summary - AI offers huge benefits but also poses risks of incorrect outputs and reputational damage, making strong governance and security essential. - A 2025 IBM report shows 63 % of organizations lack an AI governance policy, leaving a critical gap in risk mitigation. - Governance responsibilities (typically led by the Chief Risk Officer) focus on ensuring AI is responsible, explainable, reliable, and fully documented with traceable source attribution. - Security responsibilities (usually led by the Chief Information Security Officer) target technical vulnerabilities, protection against attacks, and control of “shadow AI” instances that could cause data leaks. - Most governance failures stem from self‑inflicted issues such as using poorly sourced or improperly trained models, underscoring the need for rigorous policy and oversight. ## Sections - [00:00:00](https://www.youtube.com/watch?v=4QXtObc61Lw&t=0s) **Reducing AI Risk with Governance & Security** - The segment highlights AI’s promise and associated dangers, stresses the lack of governance policies in most firms, and outlines how robust AI governance and security frameworks—led by chief risk and information security officers—can mitigate reputational and business threats. - [00:03:05](https://www.youtube.com/watch?v=4QXtObc61Lw&t=185s) **Governance vs Security in AI Risks** - The speaker contrasts governance concerns—misalignments, policy breaches, bias, and model drift—with security threats from insiders or external attackers, outlining the potential damages such as hate speech, unfairness, and intellectual‑property theft. - [00:06:10](https://www.youtube.com/watch?v=4QXtObc61Lw&t=370s) **AI Risk Governance and Controls** - The speaker outlines essential governance measures—clear policies, accountability structures, and security practices of prevention, detection, and response—to manage AI risks and ensure proper model training and data sourcing. - [00:09:22](https://www.youtube.com/watch?v=4QXtObc61Lw&t=562s) **Integrated AI Governance and Security Framework** - The speaker outlines the need for penetration testing, automated prompt‑injection testing, posture management, and a layered governance framework that surrounds AI with protective rings to address security risks. - [00:12:24](https://www.youtube.com/watch?v=4QXtObc61Lw&t=744s) **AI Security Posture Management** - The speaker outlines AI security posture management, covering misconfiguration safeguards, model penetration testing, and the deployment of an AI firewall/gateway to enforce policies, detect prompt injection, and block data exfiltration. ## Full Transcript
0:00AI is already doing some great 0:02things and the best is yet to come. 0:04But with this greatness comes risk. 0:08Risk that the system will do the wrong thing, 0:11give incorrect answers and expose the organization 0:14to reputational and business damage. 0:16How can you reduce AI risk? Well, 0:19with a strong governance and security capability. Unfortunately, 0:22according to the 2025 0:25IBM Cost of a Data Breach Report, 0:2763% of organizations 0:31had no governance policy in place. 0:33These two areas, governance and security, 0:36have some overlap, but mostly complement each other in important ways. 0:40In this video, we'll take a look at what these are 0:43and how you can leverage them to reduce AI risk. 0:47Okay, let's take a look at the problem space. 0:50Get that out of the way. 0:52First of all we're going to look at compliance issues 0:54So, in theory you should have a governance policy for AI. 0:58And you should have a security policy for AI. Now, 1:00who are the primary stakeholders that are really involved with this? Well, 1:04when it comes to the governance part, 1:06it's probably going to be the chief risk officer. 1:09That's not the only person that's going to care, 1:11but they may be the primary person that cares. 1:14Versus from a security standpoint, 1:16it's more likely to be the chief information security officer. 1:19Again, both of these roles could care about this, 1:23but that's who might be the lead in the primary in these. 1:26Next thing we're gonna take a look 1:27at in terms of what the AI does. 1:30What are our particular concerns. From a governance standpoint, 1:33we want to make sure that the thing is responsible. 1:36That it's not doing things that put us in a bad light, 1:40or put our users in a bad light or in a bad situation. 1:44We wanna make sure that our AI is explainable. 1:46That it doesn't just make up stuff. 1:48Ah, that it the stuff it tells is, in fact, reliable. 1:52And a big part of that involves documentation and source attribution. 1:56We wanna make sure that we can trace all of this stuff back 1:59and, therefore, make it more trustworthy. 2:02Now, from a security standpoint, 2:04we're looking at vulnerabilities 2:06that might exist within the AI system itself. 2:10Where someone's trying to attack the system. 2:12We also are gonna be concerned about things like shadow 2:15AI, that someone created some 2:17AI instance without approval, without authorization. 2:21Yet this thing is out there running, 2:22and we might wanna make sure that it's locked down 2:24because it might be a source of data leaks. Now, 2:27in terms of the cause 2:29of some of the issues that we might be trying to 2:32to guard against with a governance policy and a security policy, 2:36I would say in general, and this is a big generalization, 2:39because there's definitely going to be exceptions to this. 2:41But think of it this way, that the cause in 2:44that we're really guarding against in a governance case, 2:47is really self-inflicted wounds. 2:49This is where we used a bad model. 2:51We pulled it from a bad source. 2:53The model wasn't trained properly. 2:55The ingredients that went into the cake, if 2:57as it were, were not the right ones. 2:59They weren't pure. 3:01Ahm. And the result of this is probably unintentional. 3:05So, we we did all of this. 3:06We didn't mean to make a big mess, but we did. 3:09So we're trying to guard against that. 3:11We're looking at things like misalignments 3:13and policy violations and ethical lapses. 3:16We wanna make sure those don't occur. 3:18And that's the realm of governance. 3:20Now, on the security side, what are we gonna gonna be caring about? Well, 3:24in this case, the damage 3:26and the cause of this is more done externally or by others. 3:30So, self-inflicted versus others inflicted. 3:34And in this case, it could be internal, 3:37bad insiders who are doing something that they really shouldn't be doing. 3:41Or it could be an external person 3:43who's attacking the system, either one of those. 3:46But in these cases, it's more intentional. 3:49Someone is trying to break the system. 3:51That's what the security policy is really concerned with. 3:54Now let's take a look at the damage 3:56that can occur in these cases. So, 3:58if first of all, from a governance standpoint, 4:03we're looking at things like PAP, hate, abuse and profanity. 4:07We wanna make sure that our system doesn't say 4:09really outrageous things that insult our users. 4:12We wanna make sure that it is fair. It's unbiased. 4:16It doesn't bias toward or against any particular information or or population. 4:21We want to make sure that the model doesn't drift. 4:24It started off true, but now it's getting a little more untrue as it starts 4:28learning more and more things. 4:29So we want to make sure that it's still solid. 4:31We're looking at issues like intellectual property. 4:34Is someone able to steal our intellectual property through this? 4:38And also important is 4:40is our model trained on intellectual property 4:43that we didn't actually have rights to use? 4:45In other words, it might be copyrighted material. 4:47And then, when our model learned on that, it starts using that. 4:51And now we're subject to a lawsuit. 4:53That would be a bad thing. 4:54Hallucinations. 4:55We wanna make sure that it's not just making up answers, 4:58that they need to be grounded in truth 5:00and the reputation of our organization. 5:03We wanna make sure that the AI, which is representing us, 5:07is doing it in a way that we would approve of. Now, 5:10on the security side, what are we looking at here? Well, 5:13you've heard me if you've seen videos, 5:14I talk about this thing called the CIA triad, 5:17where we're looking at confidentiality, integrity and availability. 5:21Those are the three things that we're about in every security case. 5:24Those are the things that we're trying to make sure we get right. 5:27So confidentiality, we wanna make sure that, for instance, the system doesn't exfiltrate. 5:32It doesn't send sensitive information outside of our system 5:35so that other people can access it that aren't approved to do that. 5:39We wanna make sure that the system, 5:40from an integrity standpoint, cannot be manipulated. 5:44That someone can't figure out how to make the system 5:47do other things that we don't intend it to do, that it gives bad answers. 5:51The data has been poisoned. Things like that. 5:53And then finally, we wanna make sure that it's available, 5:57that someone can't do a denial of service attack 5:59against the system and, therefore, make it 6:01not available for the people who need it. 6:04All of these things ultimately 6:06then feed into this idea of risk. 6:10Different kinds of risk and different kinds of policies related. 6:14But it's all about AI risk. So, 6:16now we've taken a look at those risks. 6:18Let's take a look at what we need to do about them. 6:21Well, one of the things is we're gonna need controls in place. 6:25Controls are the things that let us control what's happening with the system. So, 6:30some of the things we'll look at on the governance side 6:33is we wanna have a set of rules that we have spelled out. 6:36We wanna make sure that we're following them, 6:38that we've put those into policies that are well understood. 6:41And we're finding a lot of organizations 6:43don't do that and have not had that. 6:46It's pretty hard to know if you're succeeding 6:47if you've never even defined where the finish line exists. 6:50We need accountability structures. 6:52Who's responsible for this and who's responsible for which parts of it? 6:57On the security side, we're mo looking at different things. 7:00We're trying to do prevention, 7:02detection and response. 7:04That is, I wanna be able to make sure 7:06that I to the extent possible. 7:09The system is not vulnerable to begin with, 7:11and then be able to find out when it is, 7:14when it's under attack and then what we're supposed to do about it. 7:17So prevention detection and response. 7:19Now, with our models, what specifically do 7:22we need to do from a governance standpoint. 7:25Well, we wanna make sure that they're trained properly. 7:28We need to know what the sources are 7:30so that that information is what we intend it to be. 7:34If you use bad sources, you get bad data 7:37and bad responses out of your AI. 7:40We need to know the lineage of the model. 7:43Most organizations are not going to create their own models. But if they do, 7:47they need to be able to know where did the ingredients come from? 7:51If I go to an open-source 7:53model repository, where did I get it? 7:55Do I have the latest 7:57and greatest and authentic version, or do I have some illicit 8:01copy that somebody's made, some bogus version of it. 8:04And how many ah who's touched it along the way? 8:07That's the lineage. We wanna be able to see all of that. 8:10We need an acceptable use policy. 8:12What are the things that we're okay with our AI doing 8:15and what are we not okay with it doing? 8:18And how do we want employees to understand their use of that? 8:21And as I mentioned before, IP risk. If 8:24we're gonna be creating models, 8:25or even if we're gonna take a model and train it with our information, 8:28we need to make sure that it's, in fact, 8:30our information that we have rights to. 8:32So, those are all things that we would look 8:35at in terms of model 8:37and governance of those. On the security side, well, 8:40again, we're thinking about an attacker, 8:43something other than us that is coming in. 8:45And what is the number one attack type that we're concerned with, 8:48especially with generative AI? 8:49It's prompt injections. 8:51These are things where people are basically socially engineering our 8:54AI, giving it instructions to override its original instructions, 8:59and then having it do something that we didn't intend it to do. 9:02So I need to have protections against that. 9:04I need to have protections against unauthorized access. 9:07These are gonna be bigger issues 9:10as we move toward agentic AI, as well. 9:12I wanna make sure that that agent that has autonomy 9:15isn't gonna just go off and do something really crazy, 9:18because we're giving it a lot of power to do certain things. 9:22So, unauthorized access. We don't wanna allow that. 9:25We need to do penetration testing of these models. We bring them in. 9:28We need to find out if they're vulnerable 9:31to these types of attacks or not. 9:33And many, many more. Prompt injections. 9:35There's more of those than you can dream up. 9:38So, we need tools that are gonna to be able to do 9:40automated prompt injection testing. 9:42We're also looking at a thing we call posture management 9:45to make sure that the system hasn't been misconfigured in a way 9:48that allows exposure of information that is sensitive to us. 9:53So, those are some of the things that we're looking at from a governance perspective 9:57and from a security perspective. Okay. 9:59So, now we've taken a look at the risks 10:02and some of the controls and things like that that we need in place. 10:05Now, let's take a look at a solution 10:07framework that implements all of that. 10:09So, instead of thinking of governance 10:11and security for AI 10:13as separate kind of interlinked 10:15and overlacking, overlapping rings, 10:18in fact, we could come up with a more integrated solution where we have 10:22layers of protection against the different types of threats that we're trying to deal with. 10:26So, for instance, at the center is our 10:28AI that we're trying to protect. Then, a ring around that of protection. 10:33That's our governance layer. 10:35And in this case, I'm gonna do discovery 10:38and management of AI use cases. 10:41Those are the things that if I don't define what those are, 10:44I won't know if I've achieved the outcomes I intend or not. So, 10:47I wanna be able da to define what those are 10:51right up front. I need to be able to do model management. 10:54So, I've got a whole bunch of models. 10:56How do I make sure that they're doing what I intend? 10:59How do I know where they came from? 11:00All of those kinds of things. I need to do risk management. 11:04I need to be able to figure out what the risks are. 11:07Quantify them to the extent possible, 11:10and at least expose what those are. 11:12So we can map those and try to address those as we recognize them. 11:16I need to be able to monitor 11:18and check the performance of this system. 11:22It won't do any good if it's taking a month 11:24in order to get an answer back. So, 11:26that's also a part of these concerns. 11:29I'm also looking at compliance. 11:31I may have be in an industry 11:34where I need to be able to do certain things 11:37that will get me in trouble. 11:38I need to avoid other things 11:40I need to do in order to perform due diligence. 11:43So I need to make sure that the compliance of the AI system 11:46is in line with what the expectations are. And 11:48ultimately lifecycle management, 11:51because this thing isn't just a set it and forget it. 11:54These things have lifecycles. 11:56They begin, they move through certain levels of maturity 11:59and in certain parts of it need to go away 12:01and other parts of it come on. 12:02So I need to have this more holistic view 12:04of what the system is. Now, around that layer, 12:08we add the security protections that are necessary. 12:12So, one of the things I wanna do here, I was talking about discovering 12:15AI use cases. 12:17How about we discover the AI models 12:19that are out there in our environment, especially the shadow 12:22AI that may be out there. 12:24And once I've discovered it, I need to do this thing we call 12:28AI security posture management. 12:30That is a way to guard against misconfigurations, 12:33to lock down and make sure that the security policy 12:36for a particular system is being followed, 12:38that if it's not supposed to have public facing data, 12:41the public can't get to it, that there's strong access 12:44controls, encryption and things like that that might need to be in place. 12:47And we want to make sure that all of these instances of AI 12:50that we just found are, in fact, complying with our security policy. 12:54I need to test these models. Do pen testing 12:57and other types of model scanning 13:00to make sure that the models themselves have not been infected, 13:03because if they've been infected, they might leak information out 13:07or they might give us wrong information. 13:09Another thing we have to look at 13:11is maybe install something that I'll call an AI firewall, or an AI gateway 13:17that implements guardrails, 13:19that looks for exfiltration cases. 13:22So this is something that you set up between the user and the AI. 13:26You put it there and that it sees all the prompts that are coming in. 13:30And it looks to see if they conform to policy or not. 13:33If it looks like we're experiencing prompt injection, well, 13:36then we can block it right there at the firewall. 13:38If it looks like our system now has been tricked into leaking information, 13:43we can look at the information on the way back out 13:45and maybe redact it or block it entirely. 13:48So we can test for these things in the model. 13:50But then we also implement the protections in real time. 13:54I'm also gonna look at a threat monitor. 13:56I need to be able to understand 13:58what things are happening to my system. 14:01If somebody just did a bunch of stuff through this firewall 14:03and now they're trying to violate different policies, 14:06maybe someone should be aware of that. 14:08And then, ultimately, I wanna be able to see all of this stuff. 14:11I need some sort of dashboard that visualizes that for me. 14:15That shows me, in priority, 14:18what are the critical vulnerabilities that I have in my system? 14:21Who's trying to hack me? 14:22Am I in compliance from a security standpoint 14:25against some of the things like the National Institute 14:28of Standards Risk Management Framework, other things like that? 14:31So, this now, if you put all of these things together, you have 14:35what is really a much stronger solution than you would have 14:39if this was only by itself. 14:41So the way to think about this then 14:43is if we have AI 14:45and we add to it 14:47governance plus security, 14:50then if we do it right, 14:53we lower risk 14:54and that's ultimately what we're trying to do.