# Cultivating Inclusive, Consent-Driven AI Ethics

## Key Points
- Ethics, derived from the Greek “ethos,” shapes culture and underpins a consent‑based approach to AI, which IBM formalizes in its ethical principles.
- Feeding AI with data obtained through explicit consent yields far better outcomes than using data collected without permission.
- Building AI teams that are diverse in gender, race, ethnicity, age, neurodiversity, worldview, and skills reduces error rates and prevents elitist, exclusionary biases.
- Gaining trust in AI is a sociotechnical challenge that requires systemic‑empathy frameworks and design‑thinking practices to create intentional, human‑augmenting systems.
- To move AI projects beyond proof‑of‑concept, IBM uses a four‑stage design‑thinking workflow—intent, data sourcing, evaluation (with “tech ethics by design” and layered effect analysis), and rollout—ensuring alignment with business strategy and model trustworthiness.
## Sections
- [00:00:00](https://www.youtube.com/watch?v=DK7w9-DGRO0&t=0s) **Building Consent‑Driven, Inclusive AI** - The speaker outlines how an ethical culture rooted in consent, diversity, and systemic empathy is essential for trustworthy AI, referencing IBM’s principles and sociotechnical design practices.
- [00:03:08](https://www.youtube.com/watch?v=DK7w9-DGRO0&t=188s) **Framework for Ethical AI Evaluation** - The speaker outlines a strategic AI investment process that assesses data sources and model impacts using layered effect analysis, dichotomy mapping, and ethical hacking to embed tech‑ethics‑by‑design principles.
- [00:06:16](https://www.youtube.com/watch?v=DK7w9-DGRO0&t=376s) **Ethical AI Through Design Thinking** - The speaker emphasizes establishing ethical standards, leveraging diverse teams, and applying empathy‑driven design thinking to democratize AI before any programming begins.
**Source:** [https://www.youtube.com/watch?v=DK7w9-DGRO0](https://www.youtube.com/watch?v=DK7w9-DGRO0)
**Duration:** 00:07:03
## Full Transcript
Ethics is based on the Greek word Ethos.
Culture is an expression of the Ethos
or the atmosphere that is established through ethics,
the unwritten rules of a group of people.
Our culture is able to establish a consent-based understanding of Artificial Intelligence.
The outcomes of AI are so much better
if we feed it with data given with consent as opposed to just taking the data without consent.
We at IBM have written these rules as part of our ethical principles.
Now, apart from consent,
a culture that nurtures responsible AI truly values diversity and inclusivity.
AI ethics, remember, is a team sport.
Exclusion breeds elitism
that, in turn, breeds the very toxic notion that one human being is better than another human being.
We actually know via this mathematical model
that the wider the variance, the more stable the mean.
Or put another way, the more diverse a group of people actually trying to tackle a really complicated problem,
the less chance for error.
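The claim above can be illustrated with a small simulation. This is a hedged sketch, not the speaker's model: the `group_error` helper, the bias values, and the scenario are all illustrative assumptions. The idea it demonstrates is that when individual biases point in different directions, they offset in the group average, while a homogeneous group's shared blind spot does not cancel.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0

def group_error(biases, noise=10.0, trials=2000):
    """Mean absolute error of the group-average estimate.

    Each member's estimate = true value + personal bias + random noise;
    averaging cancels the noise, and diverse (offsetting) biases
    cancel too, while a shared bias survives the average.
    """
    total = 0.0
    for _ in range(trials):
        estimates = [TRUE_VALUE + b + random.gauss(0, noise) for b in biases]
        avg = sum(estimates) / len(estimates)
        total += abs(avg - TRUE_VALUE)
    return total / trials

# Homogeneous group: everyone shares the same blind spot (bias +8).
homogeneous = [8.0] * 5
# Diverse group: biases are just as large individually,
# but they point in different directions and mostly offset.
diverse = [8.0, -6.0, 5.0, -7.0, 1.0]

print(group_error(homogeneous))  # stays near the shared bias
print(group_error(diverse))      # much closer to zero
```

Under these assumptions the diverse group's average lands notably closer to the truth, which is the sense in which more diversity means less chance for error.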
So it's really important to consider things like gender, race, ethnicity,
age, neurodiversity, worldview, skill set, as you're starting your team.
So now that you've got this amazingly diverse team, what next?
Remember, earning trust in artificial intelligence is not a technological challenge,
it's a sociotechnical challenge.
So thus it's really important to adopt frameworks for systemic empathy.
We have a map, we have exercises, we have design thinking as a means to do this,
and we use this approach to generate artificial intelligence
that is intentional in its efforts to augment a human being.
We have an amazing design thinking practice.
Our design practice is based on what is somebody thinking, seeing, hearing and doing.
The very expression of culture.
Now, about 80% of efforts in artificial intelligence actually get stuck in proof of concept,
and this happens for a wide variety of reasons.
Some of the top ones are that oftentimes
the investments in the AI aren't tied directly to business strategy,
or people simply don't trust the results of the model.
So we actually use design thinking to walk both C-suite as well as technologists through four different stages.
We start with intent.
What is the intent behind the investment in this AI model and how is it tied directly to strategy?
We identify the sources of data that they have access to and how it is being collected.
We evaluate the data sources and the effects of the proposed AI model,
and we plan how to roll out the effort.
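The four stages above can be sketched as a simple review checklist. The stage names come from the talk; the class, field names, and `ready_to_build` helper are illustrative assumptions, not IBM's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class AIProjectReview:
    """Walks a proposed AI investment through the four design-thinking
    stages described above: intent, data sourcing, evaluation, rollout."""
    intent: str                      # why invest, and how it ties to strategy
    data_sources: list = field(default_factory=list)  # what data, how collected
    evaluated: bool = False          # data sources + model effects reviewed
    rollout_plan: str = ""           # how the effort will be rolled out

    def ready_to_build(self) -> bool:
        # Only proceed once every stage has been addressed.
        return bool(self.intent and self.data_sources
                    and self.evaluated and self.rollout_plan)

# Hypothetical project, for illustration only.
review = AIProjectReview(intent="Reduce claim-triage backlog, tied to ops strategy")
print(review.ready_to_build())   # False: data, evaluation, rollout still open

review.data_sources = ["historical claims (opt-in)", "agent notes (consented)"]
review.evaluated = True
review.rollout_plan = "staged pilot with two regional teams"
print(review.ready_to_build())   # True: all four stages addressed
```

The point of the structure is the gate: the checklist refuses to call a project ready until intent is tied to strategy and the evaluation and rollout stages have actually been worked through.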
In the evaluate phase, we embed frameworks for systemic empathy
called "tech ethics by design" on an as-needed basis.
This is based on three different steps.
The first step is called "layers of effect".
Looks kind of like this, right?
These would be your primary and your secondary effects, both intended and known.
And this third one is actually tertiary effects, unintended and possibly known effects of your AI model.
And this is where the team of people doing this workshop together
might come up with ideas about what potential harm could look like, right?
So, it's really important you've got the right people in the room.
The second step for tech ethics by design is called "dichotomy mapping".
And in this step, you take these ideas around potential tertiary effects, again unintended and possibly known,
and you split them up into what could be potentially beneficial, and what could be potentially harmful?
And then the third step is called "ethical hacking".
In this step, the first thing to do as an organization is think about
what are your principles for artificial intelligence?
Again, what is your Ethos?
What are you going to stand by?
The second part is: given these principles, given these values,
what are the rights of the individual?
What are the rights of the end user?
And then, given those rights and a particular harm,
which might have come up in the prior step,
how would your team mitigate that harm,
so that you're intentionally designing to protect against it?
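The three tech-ethics-by-design steps can be sketched as a small data-flow. The step names (layers of effect, dichotomy mapping, ethical hacking) come from the talk; the example effects, principles, rights, and mitigation text are hypothetical placeholders.

```python
# Step 1: layers of effect — primary and secondary effects (intended,
# known) plus tertiary effects (unintended, possibly known).
effects = {
    "primary":   ["faster claim triage"],
    "secondary": ["staff shift to complex cases"],
    "tertiary":  ["rural claimants flagged more often",
                  "new jobs reviewing model output"],
}

# Step 2: dichotomy mapping — split the tertiary effects into
# potentially beneficial vs. potentially harmful.
harmful = {"rural claimants flagged more often"}
dichotomy = {
    e: ("harmful" if e in harmful else "beneficial")
    for e in effects["tertiary"]
}

# Step 3: ethical hacking — state your principles (your ethos), derive
# the end user's rights, then design a mitigation for each harm.
principles = ["AI should augment, not replace, human judgment"]
rights = ["right to contest an automated decision"]
mitigations = {
    e: "human review and appeal path before any adverse decision"
    for e, kind in dichotomy.items() if kind == "harmful"
}

print(mitigations)  # only the potentially harmful effects need mitigations
```

The shape mirrors the workshop: harms surface in step 1, get sorted in step 2, and each one that lands in the harmful bucket must leave step 3 with a concrete mitigation.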
I cannot underscore enough how valuable this framework is;
it has really unlocked true epiphanies for teams.
In closing, we have the ability to set up the correct Ethos,
right, the ethical standards to augment humans and truly democratize artificial intelligence.
In order to achieve systemic equity in AI, we need to work with truly diverse teams, right?
And the way that these folks work together is to use design thinking
to tie AI models to business intent and crack the empathy code
well before any programming code is written.
Thank you.
If you like this video and series, please comment below.
Stay tuned for more videos that are part of the series and to get updates, please like and subscribe.