
AI Governance: Guardrails for Responsible Deployment

Key Points

  • The AI industry is expanding explosively, with daily breakthroughs in use cases, yet many deployed systems are underperforming, causing misdirected decisions, hallucinated responses, and biased outcomes.
  • Premature or careless AI deployments expose companies to significant reputational and financial risks, highlighting why robust AI governance has become a critical priority.
  • AI governance is defined as a framework of rules, standards, and processes that act as “guardrails” to ensure AI is developed and used responsibly and ethically, balancing risk mitigation with the technology’s benefits.
  • An AI system consists of a model that processes various forms of human‑generated data (structured, semi‑structured, or unstructured) to generate outputs that mimic or augment human decision‑making, making data quality and model transparency central concerns for governance.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=Q020C-Jw0o8](https://www.youtube.com/watch?v=Q020C-Jw0o8)
**Duration:** 00:09:06

## Sections

- [00:00:00](https://www.youtube.com/watch?v=Q020C-Jw0o8&t=0s) **Risks and Need for AI Governance** - The speaker highlights the rapid growth of AI alongside failures like hallucinations, bias, and poor decision-making, arguing that robust AI governance is essential to mitigate reputational and financial risks.
- [00:03:06](https://www.youtube.com/watch?v=Q020C-Jw0o8&t=186s) **Human-Generated Data Fuels AI Bias** - The speaker explains that AI models learn from human-produced data, whether structured or unstructured, but inherit the hidden cognitive biases present in that data, which the model can amplify in its outcomes.
- [00:06:14](https://www.youtube.com/watch?v=Q020C-Jw0o8&t=374s) **Transparency, Drift, and AI Regulation Risks** - Highlights three AI risks: lack of transparency and trust, model degradation without monitoring, and non-compliance penalties under frameworks and regulations such as the NIST AI Risk Management Framework and the EU AI Act.

## Full Transcript
0:00 If you have been following the news in artificial intelligence, you probably already know that this field is growing at an exponential pace. Each day we are hearing about new use cases in AI, use cases and applications that we hadn't even dreamt of in the past few years. But in the same news, you are probably also hearing about AI systems that have been deployed to production and are not yielding the expected outcomes. We are hearing about chatbots that have been misdirecting customers and employees into making the wrong decisions. We have also heard about chatbots that are hallucinating responses to customers, and about models that have been generating biased outcomes.

0:40 There is no denying that there is huge potential in artificial intelligence, but premature deployment and adoption of AI systems could put companies at huge risk of reputational and financial loss. And this is exactly the reason why artificial intelligence governance has become one of the most relevant and important topics today. So let's dive in and understand a bit more about AI governance.

1:06 Let's start off by defining what it means. AI governance refers to a set of rules, standards, and processes that have been put in place to ensure the responsible and ethical development and deployment of artificial intelligence systems. Think of it as a set of guardrails that ensure the ethical use of these systems, so that we minimize the risk in the systems while maximizing the potential benefit.

1:54 Now, we all know what the benefits of artificial intelligence are. It leads to reduced costs, improved efficiency, and the automation of repetitive and manual tasks.
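The "guardrails" metaphor can be sketched in code as a thin policy layer that checks every model output before it is released. Everything here is illustrative, a minimal sketch rather than any real governance library: `GuardedModel`, the individual checks, and the toy uppercase "model" are all invented names.

```python
import re

def check_no_email(text: str) -> bool:
    """Reject outputs that appear to contain an email address."""
    return re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text) is None

def check_length(text: str) -> bool:
    """Reject runaway outputs."""
    return len(text) <= 1000

class GuardedModel:
    """Wraps any callable model (prompt -> text) with policy checks."""

    def __init__(self, model, checks):
        self.model = model      # hypothetical model: any prompt -> text callable
        self.checks = checks    # governance checks applied to every output

    def __call__(self, prompt: str) -> str:
        output = self.model(prompt)
        for check in self.checks:
            if not check(output):
                # Withhold rather than release a non-compliant output.
                return "[withheld: output failed a governance check]"
        return output

# Toy "model" that just echoes the prompt in upper case:
guarded = GuardedModel(lambda p: p.upper(), [check_no_email, check_length])
print(guarded("hello"))                   # passes all checks
print(guarded("write to me at a@b.com"))  # blocked by the email check
```

The point of the design is that the checks live outside the model: they constrain what reaches the user regardless of how the model itself behaves.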
2:13 While these benefits make artificial intelligence a highly important and sought-after technology today, there is still risk in artificial intelligence, and that risk makes AI governance an even more important topic to discuss. In order to understand what these risks are, let's, in a very broad sense, understand what constitutes an AI system.

2:37 An AI system is a system that is designed to take in inputs and produce outputs that in some way mimic, augment, or aid human decision making. At the heart of the system is what we call the AI model. The goal of this model is to look at the input and generate an output that a human would normally produce.

3:07 So how does an AI model do that? In order for the model to do exactly that, we need to supply it with data. Because we want it to mimic, augment, or aid human decision making, we need to supply it with data that is human generated. And this data could be in any format: structured data with columns and values, semi-structured data such as XML files, or unstructured data such as PDF documents, text documents, audio files, video files, and so on.

3:47 Now, this AI model, which you can think of as highly engineered code that uses complex mathematical algorithms, looks at the data, derives patterns from it, and learns how to mimic the human behavior that is expected of it.

4:04 Since we are talking about data, and this data is human generated, note that we humans unfortunately are not devoid of bias. We have several cognitive biases within us. An example of one such bias: some of us tend to put undue importance on certain factors while making decisions, while completely ignoring another set of factors. That can cause biases in the data. And no, the biases are not blatantly visible.
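Latent bias of the kind described here is usually surfaced with audit metrics rather than by inspection. As a minimal sketch, with entirely made-up toy data, the snippet below computes the demographic parity difference: the gap in favourable-outcome rates between two groups. A large gap is a signal worth investigating, not proof of bias on its own.

```python
def positive_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def parity_difference(group_a, group_b):
    """Absolute gap in favourable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy loan decisions (1 = approved) for two applicant groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3/8 approved

print(f"parity gap: {parity_difference(group_a, group_b):.3f}")
```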
4:31 You cannot look at the data and say, "Yes, I see biases"; these biases are latent and hidden. And this highly engineered mathematical code has a tendency to pick up on them, and in the worst case it reflects those biases in its outcomes. So that makes your AI systems susceptible to bias.

4:55 Secondly, we are talking about data here. Data is being sent as input, and data is being used for training the model, and if there is no proper oversight, this data could contain private and sensitive information. When there is no proper oversight, that information can seep into the model and into the model's output, leading to privacy infringement. Or, in the case of unstructured data, it could also contain copyrighted information, which can likewise be reflected in your model's output. So: privacy or copyright infringement.

5:43 Some of the models that we use are black-box models. The reason we use black-box models, as opposed to glass-box models, is that black-box models tend to provide a higher level of accuracy in their outcomes. But when we use black-box models, the people who are creating these models and systems have little insight into the inner workings of the algorithms. When you ask them why their model is making a certain decision, they won't be able to give you an explanation. In that case, your system is not transparent, and that puts you at risk of a lack of transparency, or trust. How would you trust a system that is not able to explain why and how it made a certain decision? That is the third risk in artificial intelligence systems.

6:36 Now, these AI models are not something that you create once and that continue to generate high-quality outcomes forever.
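Keeping outcomes high-quality over time is a monitoring problem: compare the data arriving in production against the data the model was trained on, using a drift statistic. One common choice is the Population Stability Index (PSI). The implementation below is a self-contained sketch, and the usual thresholds (roughly 0.1 "watch", 0.25 "act") are rules of thumb, not a standard.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples,
    using equal-width bins derived from the `expected` sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = int((x - lo) / width)
            counts[min(max(i, 0), bins - 1)] += 1  # clip out-of-range values
        # small floor avoids log(0) when a bin is empty
        return [max(c / len(xs), 1e-4) for c in counts]
    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 10 for i in range(100)]    # what the model was trained on
live_stable = list(train)               # incoming data unchanged
live_shifted = [x + 5 for x in train]   # incoming distribution has moved
print(psi(train, live_stable))          # 0.0: no drift detected
print(psi(train, live_shifted))         # well above 0.25: investigate
```

In practice a statistic like this would run on a schedule against each monitored feature, with alerts wired to the thresholds.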
6:45 These models can deteriorate, and the deterioration can happen because the incoming data could be very different from the data the models were trained on. Because this deterioration happens, there is a need to continuously monitor these models. That is another factor that can put your models at risk: if you are not continuously monitoring them, your model may not be producing consistently high-quality outcomes.

7:23 Because of risks like these, there are organizations across the world that are coming up with regulations and guidelines on how to manage these systems. For guidelines, think of something like the NIST AI Risk Management Framework. And as for regulations, we have all heard about the EU AI Act. These regulations are much more serious, because they are not just guidance for AI system deployment; they can actually penalize companies for non-compliance. When your company, or your AI systems, are not complying with the requirements stipulated by the regulation, you will be at risk of reputational loss and financial loss, and several ethical dilemmas as well.

8:17 So, due to factors such as these: bias, privacy or copyright infringement, lack of trust or transparency, lack of continuous monitoring, and the presence of regulations, it is extremely important that we govern our artificial intelligence systems. I think we can all agree that the promise of AI is undeniable, but the risks it poses are very much real. And a properly governed AI system is extremely important for organizations to fully realize the potential of artificial intelligence.

8:49 I hope you found this video helpful. Please let us know what you think about it in the comments. Thank you.
8:56 If you liked this video and want to see more like it, please like and subscribe! To learn more, please reach out to your IBM sales team or IBM Business Partner.