
Five Pillars of Trustworthy AI

Key Points

  • The speaker’s three biggest night‑time worries are climate change, the hidden impact of AI on personal decisions (loans, jobs, college admissions), and the mistaken belief that AI is inherently unbiased or ethically perfect.
  • Over 80 % of AI proof‑of‑concept projects stall during testing, mainly because decision‑makers don’t trust the model’s outcomes.
  • Trust in an AI system can be built on five pillars: fairness to all groups, explainability of data and methods, robustness against manipulation, transparency about usage and metadata, and protection of data privacy.
  • IBM proposes three guiding AI principles: AI should augment—not replace—human intelligence; data and the insights derived from it belong to their creator; and AI systems must be designed and used responsibly.
  • To successfully deploy AI, organizations must proactively address these trust pillars and adhere to principled guidelines, ensuring that AI decisions are fair, understandable, secure, transparent, and privacy‑respecting.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=aGwYtUzMQUk](https://www.youtube.com/watch?v=aGwYtUzMQUk)
**Duration:** 00:06:09

Sections

  • [00:00:00](https://www.youtube.com/watch?v=aGwYtUzMQUk&t=0s) **Three Nighttime Worries: Climate, AI, Trust** - The speaker highlights three sleepless concerns - climate change, AI-driven decisions that affect personal outcomes, and the mistaken belief that AI is inherently unbiased - explaining how mistrust stalls most AI proof-of-concepts and introducing a five-pillar trust framework.
  • [00:03:09](https://www.youtube.com/watch?v=aGwYtUzMQUk&t=189s) **AI Trust: Principles & Challenges** - The speaker outlines five trust pillars, IBM's three AI principles - augmenting human intelligence, creator-owned data, and transparent explainability - and stresses that building trustworthy AI is a socio-technical, holistic effort centered on people, culture, and team diversity.
[0:00] I want to start off by talking to you about three things that keep me up at night. Three things: the first, and it may be very common for you too, is climate change. Climate change absolutely keeps me up at night.

[0:10] The second thing that keeps me up at night is that people may have no idea that an artificial intelligence is making a decision that directly impacts their lives - what percentage interest rate you get on your loan, whether you get that job that you applied for, whether your kid gets into that college that they really want to go to. Today AI is making decisions that directly impact you.

[0:46] The third thing that keeps me up at night is: even when people know that an AI is making a decision about them, they may assume that because it's not a fallible human with bias, somehow the AI is going to make a decision that's morally or ethically squeaky clean - and that could not be further from the truth.

[1:11] So, if you think about organizations: over 80% of the time, proof of concepts associated with artificial intelligence actually get stalled in testing, and more often than not it is because people do not trust the results from that AI model.

[1:32] So, we're going to talk a lot about trust, and when thinking about trust (I'm going to switch colors here) there are actually five pillars. When you're thinking about what it takes to earn trust in an artificial intelligence that's being made by your organization, or being procured by your organization: five pillars.

[1:53] The first thing to be thinking about is fairness. How can you ensure that the AI model is fair towards everybody, in particular historically underrepresented groups?
[2:06] OK, the second is explainability: is your AI model explainable, such that you'd be able to tell somebody - an end user - what data sets were used to curate that model, what methods, what expertise, and what data lineage and provenance are associated with how that model was trained?

[2:30] The third is robustness. Can you assure end users that nobody can hack such an AI model, such that a person could willfully disadvantage other people, or make the results of that model benefit one particular person over another?

[2:53] The fourth is transparency. Are you telling people, right off the bat, that the AI model is indeed being used to make that decision, and are you giving people access to a fact sheet or metadata so that they can learn more about that model?

[3:10] And the fifth one is: are you assuring people's data privacy?

[3:14] So, those are the five pillars. OK, now IBM has come up with three principles when thinking about AI in an organization. The first is that the purpose of artificial intelligence is really meant to be to augment human intelligence, not to replace it. The second is that data, and the insights from those data, belong to their creator alone. And the third is that AI systems - and I would opine the entire AI life cycle - really should be transparent and explainable. So, those are the three principles.

[3:58] Now, the next thing I want you to remember as you're thinking about this space of earning trust in artificial intelligence is that this is not a technological challenge. It can't be solved by just throwing tools and tech over some kind of fence. This is a socio-technological challenge. "Social" meaning people, people, people. Because it's a socio-technological challenge, it must be addressed holistically. "Holistically" meaning there are three major things that you should think about.
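The fairness pillar described above is often checked in practice with simple group-rate comparisons. As a minimal sketch (not from the talk - the "four-fifths rule" disparate-impact heuristic and all names here are illustrative assumptions):

```python
def disparate_impact_ratio(outcomes, groups, privileged):
    """Ratio of the unprivileged group's favorable-outcome rate to the
    privileged group's. Values below ~0.8 are a common red flag
    (the 'four-fifths rule' used in employment-fairness analysis)."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Hypothetical loan decisions: 1 = approved, 0 = denied
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(outcomes, groups, privileged="A")
print(f"{ratio:.2f}")  # well below 0.8 - worth investigating
```

Real fairness auditing goes far beyond a single ratio, but even a check this small makes the pillar concrete: it forces you to say which groups, which outcomes, and which threshold you consider fair.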
[4:29] I mentioned people: people, and the culture of your organization. Think about the diversity of your teams - your data science team. Who is curating the data to train that model? How many women are on that team? How many minorities are on that team? Think about diversity. I don't know if you've ever heard of the "wisdom of crowds" - that's actually a proven mathematical theory: the more diverse your group of people, the less chance for error, and that is absolutely true in the realm of artificial intelligence.

[5:05] The second thing is process, or governance. What is your organization going to promise both your employees and the market with respect to the standards you're going to stand by for your AI model, in terms of things like fairness, explainability, accountability, etc.?

[5:30] And the third area is tooling. What are the tools, AI engineering methods, and frameworks that you can use in order to ensure these things - ensure those five pillars? We're going to do a deep dive into that as well, but in the next show that I'm going to be running with you, we're actually going to be talking about this one: about people and culture. So, stay tuned.

[5:58] If you like this video and series, please comment below, stay tuned for more videos that are part of this series, and to get updates please like and subscribe.