
Cultivating Inclusive, Consent-Driven AI Ethics

Key Points

  • Ethics, derived from the Greek “ethos,” shapes culture and underpins a consent‑based approach to AI, which IBM formalizes in its ethical principles.
  • Feeding AI with data obtained through explicit consent yields far better outcomes than using data collected without permission.
  • Building AI teams that are diverse in gender, race, ethnicity, age, neurodiversity, worldview, and skills reduces error rates and prevents elitist, exclusionary biases.
  • Gaining trust in AI is a sociotechnical challenge that requires systemic‑empathy frameworks and design‑thinking practices to create intentional, human‑augmenting systems.
  • To move AI projects beyond proof‑of‑concept, IBM uses a four‑stage design‑thinking workflow—intent, data sourcing, evaluation (with “tech ethics by design” and layered effect analysis), and rollout—ensuring alignment with business strategy and model trustworthiness.


**Source:** [https://www.youtube.com/watch?v=DK7w9-DGRO0](https://www.youtube.com/watch?v=DK7w9-DGRO0)
**Duration:** 00:07:03

## Sections

- [00:00:00](https://www.youtube.com/watch?v=DK7w9-DGRO0&t=0s) **Building Consent‑Driven, Inclusive AI** - The speaker outlines how an ethical culture rooted in consent, diversity, and systemic empathy is essential for trustworthy AI, referencing IBM’s principles and sociotechnical design practices.
- [00:03:08](https://www.youtube.com/watch?v=DK7w9-DGRO0&t=188s) **Framework for Ethical AI Evaluation** - The speaker outlines a strategic AI investment process that assesses data sources and model impacts using layered effect analysis, dichotomy mapping, and ethical hacking to embed tech‑ethics‑by‑design principles.
- [00:06:16](https://www.youtube.com/watch?v=DK7w9-DGRO0&t=376s) **Ethical AI Through Design Thinking** - The speaker emphasizes establishing ethical standards, leveraging diverse teams, and applying empathy‑driven design thinking to democratize AI before any programming begins.
## Full Transcript
0:00 Ethics is based on the Greek word Ethos. Culture is an expression of the Ethos, or the atmosphere that is established through ethics: the unwritten rules of a group of people. Our culture is able to establish a consent-based understanding of Artificial Intelligence. The outcomes of AI are so much better if we feed it with data given with consent, as opposed to just taking the data without consent. We at IBM have written these rules as part of our ethical principles.

0:44 Now, apart from consent, a culture that nurtures responsible AI truly values diversity and inclusivity. AI ethics, remember, is a team sport. Exclusion breeds elitism that, in turn, breeds the very toxic notion that one human being is better than another human being. We actually know via this mathematical model that the wider the variance, the more standard the mean. Or put another way, the more diverse the group of people trying to tackle a really complicated problem, the less chance for error. So it's really important to consider things like gender, race, ethnicity, age, neurodiversity, worldview, and skill set as you're starting your team.

1:39 So now that you've got this amazingly diverse team, what next? Remember, earning trust in artificial intelligence is not a technological challenge, it's a sociotechnological challenge. So it's really important to adopt frameworks for systemic empathy. We have a map, we have exercises, we have design thinking as a means to do this, and we use this approach to generate artificial intelligence that is intentional in its efforts to augment a human being. We have an amazing design thinking practice. Our design practice is based on what somebody is thinking, seeing, hearing, and doing: the very expression of culture.
2:35 Now, about 80% of efforts in artificial intelligence actually get stuck in proof of concept, and this is for a wide variety of reasons. Some of the top ones are that the investments in the AI aren't tied directly to business strategy, or that people simply don't trust the results of the model. So we actually use design thinking to walk both the C-suite and technologists through four different stages. We start with intent: what is the intent behind the investment in this AI model, and how is it tied directly to strategy? We identify the sources of data they have access to and how that data is being collected. We evaluate the data sources and the effects of the proposed AI model, and we plan how to roll out the effort.

3:43 In the evaluate phase, we embed frameworks for systemic empathy called "tech ethics by design" on an as-needed basis. This is based on three different steps. The first step is called "layers of effect". These would be your primary and your secondary effects, both intended and known. And the third layer is tertiary effects: unintended and possibly unknown effects of your AI model. This is where the team of people doing this workshop together might be coming up with ideas on what potential harm could look like. So it's really important you've got the right people in the room.

4:34 The second step for tech ethics by design is called "dichotomy mapping". In this step, you take these ideas around potential tertiary effects, again unintended and possibly unknown, and you split them up into what could be potentially beneficial and what could be potentially harmful.

5:02 And then the third step is called "ethical hacking". In this step, the first thing to do as an organization is to think about what your principles for artificial intelligence are.
5:18 Again, what is your Ethos? What are you going to stand by? These are your principles, or values. The second part is: given these principles and values, what are the rights of the individual, the rights of the end user? And then, given those rights and a particular harm, which might have come up in the prior step, how would your team mitigate against that potential harm, so that you're designing intentionally in order to protect against it?

6:06 I cannot underscore enough how valuable this framework is; it has really unlocked true epiphanies on teams.

6:16 In closing, we have the ability to set up the correct Ethos, the ethical standards to augment humans and truly democratize artificial intelligence. In order to achieve systemic equity in AI, we need to work with truly diverse teams. And the way these folks work together is to use design thinking to tie AI models to business intent and crack the empathy code well before any programming code is written.

6:51 Thank you. If you like this video and series, please comment below. Stay tuned for more videos that are part of the series, and to get updates, please like and subscribe.
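The three "tech ethics by design" steps described in the transcript can be sketched as a small data model: layers of effect capture primary, secondary, and tertiary effects; dichotomy mapping splits the tertiary effects into potentially beneficial and potentially harmful; and ethical hacking pairs each harm with a principle, a user right, and a mitigation plan. This is a minimal illustrative sketch; all class names, field names, and the example effects are assumptions for the exercise, not IBM's actual tooling or terminology beyond what the talk names.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    description: str
    layer: str                # "primary" | "secondary" | "tertiary"
    intended: bool
    potentially_harmful: bool = False  # judged by the workshop group

@dataclass
class Mitigation:
    principle: str            # an organizational AI principle (the "Ethos")
    right: str                # the end-user right it protects
    harm: str                 # the tertiary effect being mitigated
    plan: str                 # how the team designs against it

def dichotomy_map(effects):
    """Step 2: split tertiary (unintended) effects into potentially
    beneficial vs. potentially harmful lists."""
    tertiary = [e for e in effects if e.layer == "tertiary"]
    beneficial = [e for e in tertiary if not e.potentially_harmful]
    harmful = [e for e in tertiary if e.potentially_harmful]
    return beneficial, harmful

# Step 1: layers of effect for a hypothetical ticket-routing model.
effects = [
    Effect("Support tickets are routed faster", "primary", intended=True),
    Effect("Agents spend more time on complex cases", "secondary", intended=True),
    Effect("Non-native speakers may be deprioritized", "tertiary",
           intended=False, potentially_harmful=True),
]

beneficial, harmful = dichotomy_map(effects)

# Step 3: ethical hacking - tie each identified harm back to a
# principle and a user right, and design a mitigation intentionally.
mitigations = [
    Mitigation(
        principle="AI should augment, not replace, human judgment",
        right="Equal quality of service regardless of language",
        harm=h.description,
        plan="Audit routing outcomes by language cohort before rollout",
    )
    for h in harmful
]
```

The point of structuring the workshop output this way is that every mitigation is traceable: it names the harm it addresses, the right it protects, and the principle it enforces, which mirrors the talk's principles-to-rights-to-mitigation sequence.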