# AI Trust: Five Essential Pillars

**Source:** [https://www.youtube.com/watch?v=_522RWxFS88](https://www.youtube.com/watch?v=_522RWxFS88)
**Duration:** 00:09:03

## Summary

- The AI trust framework currently centers on five evolving pillars—fairness, robustness, privacy, explainability, and transparency—though the field continues to change rapidly.
- Fairness requires identifying and mitigating bias in both training data and model outcomes to avoid systematic advantages or disadvantages for any group, which can be defined by various sensitive attributes.
- Robustness focuses on maintaining reliable model performance under exceptional conditions and over time, monitoring data and accuracy drift especially when external factors like a pandemic shift user behavior.
- Privacy ensures that data and model insights remain under the control of their owners throughout the entire lifecycle, complying with data protection regulations from building to monitoring.
- Explainability and transparency together demand that stakeholders can understand why a model makes specific decisions and have full visibility into its development—who built it, what data and algorithms were used, and how it was validated.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=_522RWxFS88&t=0s) **Key Pillars of AI Trust** - The speaker outlines the five evolving trust pillars for AI—fairness, robustness, privacy, explainability, and transparency—defining each and highlighting the challenges in ensuring unbiased data, stable performance, and clear, accountable models.
- [00:04:06](https://www.youtube.com/watch?v=_522RWxFS88&t=246s) **Scaling Trustworthy AI Across Organizations** - The speakers discuss how companies newly adopting large-scale AI can overcome trust challenges by implementing a governance framework and beginning with assessments or pilot use-cases to productionalize trustworthy AI throughout the enterprise.

## Full Transcript
So when we're talking about trust for AI we hear about these five pillars, right: fairness,
robustness, privacy, explainability, and transparency. So, what is all of this?
You're right, Aishwarya. At this point in time we usually talk about
five different pillars, but keep in mind that this is a fast-evolving space; the field is
changing rapidly. Right now we usually talk about fairness, robustness,
privacy, explainability, and transparency. Let's maybe talk about each of them quickly. Fairness
is probably obvious: it is to make sure that the models are not behaving in a biased way. The
challenges may actually start way before a model is built, with understanding
whether the data itself is biased, and if it is, how do you deal with that? When you build a model, how do you
make sure that the model is not systematically giving an advantage or a disadvantage to a
certain group. And the definition of the group varies by industry and by use case;
it could be based on sensitive attributes like age, gender, and ethnicity, but need not be
limited to those. You want to make sure that the system is not consistently favoring one
over the other in an unfair way. Robustness, you want to make sure that your models behave well
in exceptional conditions. How do you make sure that the model performance is good over time?
What is happening with the effect of data drift? For example, in the context of the
pandemic, we know that customer behavior has changed, customer patterns have changed,
customer touch points have changed. Is your model still behaving as expected? And if it is not,
can you at least have an understanding of how the model behavior is changing,
how data is drifting, how accuracy is drifting, and so on. Privacy: can you make sure that
the data, the model that is built off of that data, and the insights from that model
all remain under the control of their owner?
And how do you do this not just in terms of consuming the output of the model,
but across the life cycle? How do you make sure that data protection rules are in place
through the model building, testing, validation, and monitoring stages? Explainability is probably
pretty obvious. How can you explain the behavior of a model? Why was someone approved for a loan,
and why was someone rejected? When somebody applied for a job and was selected, but someone
with very similar qualifications applied and was rejected, can you explain that behavior
to the end user or to a decision maker? Transparency: you want to be able to inspect
everything about a model. Can you understand all the facts surrounding the model? Who
built it, what data is being used, what algorithms and packages are being used,
who approved it, who validated it? All of these aspects of the model, facts about the model,
should be easily available. Just like when you buy a food product there is a label on it
with the nutritional facts, when it was manufactured,
where it was manufactured, all of that. In the same way, for a model, you should be able to
get the facts of that model very quickly. So these I would say are sort of the fundamental
pillars of Trustworthy AI. The challenge is making sure these can be done in a systematic
way regardless of what tools are used to build the models and where the models are deployed.
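The bias check described under the fairness pillar, making sure a system is not consistently favoring one group over another, can be sketched in code. The four-fifths threshold, the field names, and the toy applicant data below are illustrative assumptions, not from the talk:

```python
# Minimal sketch of a fairness guardrail: compare positive-outcome rates
# across groups defined by a sensitive attribute. Threshold, field names,
# and data are hypothetical.

def selection_rates(records, group_key, outcome_key):
    """Return the fraction of positive outcomes per group."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[outcome_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records, group_key, outcome_key, threshold=0.8):
    """Flag whether each group's selection rate is at least `threshold`
    times the best group's rate (the common 'four-fifths' heuristic)."""
    rates = selection_rates(records, group_key, outcome_key)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

applicants = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
print(disparate_impact(applicants, "group", "approved"))  # {'A': True, 'B': False}
```

A check like this would run as a guardrail at data exploration and model validation time, before any single group's outcome rate drifts too far from the rest.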
So John, in the recent past we have seen that AI systems were new to a lot of organizations;
many have only recently adopted such large-scale AI applications or systems in
their workflows. And that's where we started seeing these side effects of AI, right, and
that's where we pinpointed that, hey, these were some of the aspects we need to target
to make sure that AI doesn't have an ill effect on the community, or an ill effect
more broadly. So when organizations are facing such challenges,
when they are seeing such roadblocks with respect to building trust for AI,
what is the recommended methodology for making sure that building
such trusted AI systems is easily done throughout different business units of the organization, and
doesn't just stay confined to one particular department
or team? How can we make it a big thing, and how can an entire organization productionalize this
streamlined work? So Aishwarya, you know, you're talking about expanding this across a company,
sort of setting up this governance framework and that was one of the patterns we talked about. Many
companies may not start there, but they may start with one of the other patterns we talked about,
which is let's start with assessment or building out a new use case, a new application that follows
Trustworthy AI principles. But yes, some companies may want to look at a top-down approach
and set up the governance framework, taking into account that there are multiple streams of
data science and AI activities going on concurrently. But in all of these,
you know, regardless of which approach you take, I think three elements need to come together.
And I would say these three elements are technology,
people, and process. Technology is probably obvious, we need to have guardrails across
each of the stages of the life cycle. When you're working with data, how do you
check for bias in the data, and how do you correct it? That's a guardrail at data exploration
time. When you're building the model you need a guardrail in place for model building:
for checking the robustness of the model, for providing an explanation at development time.
You need a guardrail which will allow you to go through validation into deployment,
and you need an outermost guardrail, think of it as a run-time guardrail,
which can continuously monitor your model and look at how it is behaving against thresholds,
whether the thresholds are being breached, and so on. So technology provides these guardrails
for all five of the pillars that we talked about. But technology in itself is not
sufficient; that's why I was mentioning people and process. People, because you need a set of skills
to come together. It is not just data science skills. The MLOps paradigm requires you to have
the operational skills come together with data science skills. You might have risk and compliance
expertise coming into the picture. You might have business analysts and business stakeholders coming
into the picture, and so on. So the right level of expertise, personas who are collaborating to
achieve this common goal is important. And then finally, process. That term, process,
you know, people may not always like it, but the reality is you need a set of best practices
for each stage of the life cycle. Whether it is scoping and building, or it is validation
or deployment or monitoring over time, you need a set of best practices. So technology, people,
best practices coming together make it possible to roll out Trustworthy AI at scale and operationalize
it. Great, thank you so much John. That was very insightful for me, to understand the AI
systems we build from a data science perspective
through to how they can be productionized and run successfully in large organizations. It is
very important that organizations are responsible to the people who are using these systems, right.
So it was really insightful that we got to learn so many different things from you. In the meantime,
there are a lot of other resources available for us to dig deeper and
learn about fairness, robustness, transparency, privacy, and explainability. So everyone who's
watching this you can find the right resources in the description below, and soon we'll be
posting a series of videos diving deeper into each of these pillars. Thank you so much.
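The run-time monitoring guardrail discussed above, watching model behavior against thresholds and flagging breaches, could be sketched as follows. The class name, window size, and threshold are illustrative assumptions, not something prescribed in the talk:

```python
# Minimal sketch of a monitoring guardrail: track live accuracy over a
# sliding window and flag when it breaches a threshold. Window size and
# threshold values are hypothetical.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window=100, threshold=0.9):
        self.window = deque(maxlen=window)  # keeps only the last `window` outcomes
        self.threshold = threshold

    def record(self, prediction, actual):
        """Record one scored prediction once ground truth is known."""
        self.window.append(1 if prediction == actual else 0)

    def accuracy(self):
        """Accuracy over the current window, or None if nothing recorded."""
        return sum(self.window) / len(self.window) if self.window else None

    def breached(self):
        """True when windowed accuracy has fallen below the threshold."""
        acc = self.accuracy()
        return acc is not None and acc < self.threshold

monitor = AccuracyMonitor(window=5, threshold=0.8)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 1), (0, 1)]:
    monitor.record(pred, actual)
print(monitor.accuracy(), monitor.breached())  # 0.6 True
```

In practice the same windowed-threshold pattern extends to the other pillars, for example tracking a drift statistic or a fairness metric instead of raw accuracy, and alerting whenever the guardrail's threshold is crossed.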