Who Owns Responsible AI?

Key Points

  • Embedding human values in AI is a socio‑technical challenge that requires a holistic approach across people, processes, and tools, not just a purely technical fix.
  • Surveys at AI summits reveal that most organizations lack clear accountability for responsible AI outcomes, with responses often being “no one,” “we don’t use AI,” or “everyone,” which effectively means nobody is truly responsible.
  • Those tasked with AI accountability now face a broadened remit: aligning values, maintaining model inventories, tracking evolving regulations, and handling ethical considerations that go beyond mere legality.
  • Effective AI governance depends on building AI literacy, especially by teaching stakeholders how to operationalize principles such as fairness, explainability, and transparency into concrete functional and non‑functional requirements.
  • Applied, hands‑on training for both AI model governors and the teams that build or procure models is the preferred method to ensure that AI systems reflect an organization’s values and are managed responsibly.

Full Transcript

# Who Owns Responsible AI?

**Source:** [https://www.youtube.com/watch?v=yh-3WU1FKrk](https://www.youtube.com/watch?v=yh-3WU1FKrk)
**Duration:** 00:05:53

## Sections

- [00:00:00](https://www.youtube.com/watch?v=yh-3WU1FKrk&t=0s) **Accountability Gap in AI Governance** - The speaker argues that aligning AI with human values is a socio‑technical challenge requiring people, processes, and tools, yet most organizations lack clear accountability, often answering “no one,” “we don’t use AI,” or “everyone,” highlighting the need for defined responsibility.
- [00:03:08](https://www.youtube.com/watch?v=yh-3WU1FKrk&t=188s) **Applied Training for Responsible AI** - The speaker outlines a comprehensive applied training program for teams developing or acquiring AI models, covering use‑case selection, business alignment, risk mitigation, interpretable fact sheets, audit interpretation, and the necessity of dedicated responsible‑AI leadership.

## Full Transcript
**0:00** The work of having human values be reflected in AI is not strictly a technical challenge with a technical solution, but one that is indeed socio-technical, and as with any socio-technical challenge it has to be approached holistically, meaning you need to be thinking about people, process, tools. People, meaning: what is the right organizational culture that is required to curate AI responsibly? Which are the right AI governance processes, and the right tools and AI engineering frameworks?

**0:30** When I take the time to ask large audiences at AI summits who in their organization is accountable for responsible outcomes from artificial intelligence, the top three answers that I get are pretty bad. The first answer I typically get is “no one,” which is overtly terrible. The second common response that I get is “we don’t use AI.” Although you might not be keeping track of it in a formal inventory program, absolutely you have employees that are using artificial intelligence in some way, shape, or form. And then the last common response that I get is “everyone,” and I would opine that if everyone is being held accountable for responsible outcomes from AI, is anyone actually being held accountable?

**1:21** The job of those who are being held accountable for responsible outcomes from artificial intelligence is expanding. It’s a big job, right? Not only do these people have to actually achieve value alignment within their organizations, they also have to keep track of AI model inventory. They have to keep track of regulations, right? And there is a growing number of regulations around the world, but there’s also a recognition that you can have AI models be lawful but awful, which means their purview, their responsibility, actually has to push into ethics. And as soon as you push into ethics, you have to be a pretty darn good teacher.
**2:02** You have to be teaching not only those who are building AI models on your behalf and governing AI models on your behalf, but also those who are going to be procuring AI models on your behalf. You want them to be able to do this work in a way that reflects your organization’s values.

**2:21** First, I want to talk about what AI literacy looks like for those who are going to be governing AI models on your behalf. The best way, my favorite way, of approaching this kind of training is applied training. So the way that we work with those who are going to be governing AI models is first of all to dive into teaching people how you operationalize principles like fairness, like explainability, like transparency: thinking through how you make sure you can detail the functional requirements for what you expect to see in AI models, but also the nonfunctional requirements of what you expect to see in the systems around the use of those AI models.

**3:08** Then the second group that you would offer this applied training to are those who will be building and buying models on your behalf. And this applied training includes things like making sure that you’re choosing AI model use cases to actually work on — which ones are really important to your organization — and then diving into each of those use cases: starting with how you make sure that the investment in that AI model is actually aligned to your business strategy; teaching the teams working on those use cases how to assess the risk of that particular use case and its unintended effects, and how to approach mitigating those kinds of risks holistically. Then we give an introduction to fact sheets in particular: not just how to build a fact sheet, but how to build one that’s actually interpretable, that empowers people.
**4:05** We give an introduction to audits, and then we teach those teams how to actually interpret the results from an audit so that they know what to do when they see those audit results. Those doing this kind of applied training that I’m describing truly, truly benefit from actually working with a diverse and multidisciplinary team.

**4:27** Now more than ever, having a leader or team that ensures the responsible use of and responsible outcomes from artificial intelligence is absolutely crucial. Without a dedicated leader with a funded mandate to do this work, AI governance can absolutely fall through the cracks, leaving organizations vulnerable to the risks associated with the technology. A successful responsible AI leader has a seat at the table and ensures that there are seats at the table for others, including the CISO, ensuring AI ethics is woven into the very fabric of the organization: not just tacked on at the end, but incorporated across the entire AI life cycle. They make accountability policies transparent and work across the organization to see them implemented.

**5:15** Finally, championing AI literacy in a holistic way is absolutely essential: ensuring that everyone within the organization understands how to build and buy AI models that actually reflect the organization’s values. By investing in a responsible AI leader with that funded mandate to do the work, organizations unlock the full potential of artificial intelligence. They drive innovation, and they create a culture of responsible and transparent AI use, ultimately leading to better decision making, improved customer experiences, and sustained business success.