
Congressional Testimony on AI Ethics

Key Points

  • In May 2023, Christina Montgomery testified before Congress, marking the first major public debate on AI ethics and highlighting the urgency for trustworthy AI governance.
  • She defines AI ethics as a consistent set of moral principles that guide the responsible development, deployment, and use of AI to maximize benefits while minimizing risks and adverse outcomes.
  • Montgomery stresses that AI acts both as a “force multiplier” and a “risk multiplier,” requiring organizations to embed ethical guardrails institution‑wide rather than treating AI as a standalone liability shield.
  • Existing consumer‑protection, privacy, and emerging AI‑specific regulations already impose duties on companies, and proactively adopting ethical standards can help firms stay ahead of increasingly robust regulatory frameworks.
  • Competing regulatory philosophies surfaced during her testimony, notably proposals for a licensing regime that would restrict AI development to a few actors—potentially consolidating market power—versus more open, principle‑based approaches.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=n0WapyCr0tk](https://www.youtube.com/watch?v=n0WapyCr0tk)
**Duration:** 00:09:17

## Sections

- [00:00:00](https://www.youtube.com/watch?v=n0WapyCr0tk&t=0s) **Untitled Section**
- [00:03:08](https://www.youtube.com/watch?v=n0WapyCr0tk&t=188s) **Regulate AI Use, Not Technology** - The speaker advocates for context-specific, outcome-based regulation that holds AI creators accountable while promoting open, inclusive innovation, highlighting IBM and Meta's role in the AI Alliance and support for the US AI Safety Institute.
- [00:06:15](https://www.youtube.com/watch?v=n0WapyCr0tk&t=375s) **EU AI Act: Oversight, Fairness, Trust** - The speaker explains how the EU AI Act mandates human-in-the-loop controls, rigorous data provenance and fairness standards, bans discrimination, imposes fines up to €35 million or 7% of revenue, and serves as a GDPR-style global model while emphasizing that genuine ethical AI also requires corporate character and earned trust.

## Full Transcript
**0:02** In May of 2023 I was asked to testify before Congress. This was just a few months after the ChatGPT moment. Generative AI was new to the public. Lawmakers and regulators were scrambling to understand the implications. I didn't anticipate the attention this hearing would attract. I'd just spent three years building accountability for AI at IBM, trying to make sure that what's invented, used and sold was trustworthy. In a way, I took it for granted. But as I listened to the questions and other testimony that day, and heard calls for strict regulation to govern the behavior of AI companies, it clicked. Not everyone is ready for this. There would be a national debate, a global debate, and AI ethics was about to become the most important conversation of our time.

**0:44** Welcome to AI Academy. My name is Christina Montgomery. I'm the Chief Privacy and Trust Officer at IBM and Co-Chair of IBM's AI Ethics Board.

**0:53** There's a rich philosophical history around ethics, but I'm going to boil it down to this: ethics are a set of moral principles that guide decision-making. We all have instincts about what is right and wrong, but a consistent set of principles can help us work through complex decisions or novel scenarios. It seems like every day we hear something new that AI can do. So every day we have to revisit the question of what AI should do and when and where and how we should use it.

**1:21** AI ethics are the principles that guide the responsible development, deployment and use of AI, to optimize its beneficial impact while reducing risks and adverse outcomes. Like most technology, AI is a lever, a force multiplier allowing each individual to do a lot more than they could without a system, which is great. But the flipside is that AI is also a consequence multiplier, a risk multiplier.
**1:45** So as you scale AI in your business for greater reach and impact, you need to be thinking about AI ethics at an institutional level, so that everyone can operate from a shared set of principles with defined guardrails. And AI regulations are already here, either in standalone legislation or as part of existing consumer protection and privacy laws, for example. AI is not a shield to liability. You can't just blame AI if your company's hiring decisions discriminate, for example. By taking account of AI ethics, you can get ahead of regulations, which is good, because more robust regulation is coming.

**2:19** There are different regulatory philosophies competing right now, and these divergent views became apparent during my testimony last year. Some of the most visible players in the AI space are saying that we should regulate the fundamental technology of AI itself. That a licensing regime should be established to control what and how AI gets built and by whom, effectively dictating who can participate in the AI marketplace. This approach could consolidate the market around a small handful of companies. And while that's a winning proposition for companies with the resources to comply, it's a losing proposition for everyone else. An AI licensing regime would be a serious blow to open innovation. And from an ethical perspective, you have to ask whether it's just or fair for a few companies to have such an outsized influence on people's daily lives. Again, AI is going to touch every aspect of business and society, so shouldn't it be built by the many and not the few? And shouldn't we hear from not just the loudest voices, but from many voices? It's also just not very practical to regulate technology granularly in the face of rapid innovation.
**3:26** Before the ink is dry on a new piece of regulation, technologists will have rolled out many alternative approaches to achieve the same outcome. And it's the outcomes that really matter. That's why I support a regulatory approach based not on the restriction of core technology, but on the responsible application of technology. Regulate the use of technology, not the technology itself. Not all uses of AI carry the same level of risk, and because each AI application is unique, it's critical that regulation account for the context in which AI is deployed. We also believe that those who create and deploy AI should be accountable, not immune from liability. It's essential to find the right balance between innovation and accountability.

**4:11** Support for this regulatory perspective is one of the reasons IBM and Meta cofounded the AI Alliance, with a group of corporate partners, startups and academic and research institutions. It's why we joined the consortium to support the US AI Safety Institute at NIST. Whatever comes next for AI, it's going to be safer if it's open, transparent and inclusive. So you can have research universities; you can have regulators and independent third parties poking holes and testing. You can have an open community of experts from around the globe, different voices, different perspectives, all vetting the technology instead of one company saying, "no, trust me, it's safe."

**4:53** And while the debate around these competing regulatory approaches is still very active, we now have a practical example of a risk-based regulatory approach that I think is likely to be a model for the rest of the world. IBM has supported the EU AI Act for a few reasons. First, the law introduces a risk-based approach to regulating AI systems. Most generally available AI today, like AI-enabled video games or spam filters, is unregulated.
**5:21** Something like a chatbot is a limited-risk application and will have light-touch regulatory requirements. Some applications, like the creation of facial recognition databases through the untargeted scraping of facial images from the internet, or social scoring systems, pose a significant threat to human rights and are prohibited. And then you have activities and uses that pose some risk to human health, safety or fundamental rights but are allowed. That's where some business activities will fall, and those uses will face high standards for compliance.

**5:53** Some of the requirements are things you would probably expect. For example, there'll be a requirement for transparency that will require users be provided with clear and understandable information about the system's purpose, functionality and intended use. This includes information about any biases or limitations that may affect the system's performance. There'll be requirements for human oversight, such as human-in-the-loop systems, to ensure that AI systems remain aligned with human values and expectations. And there'll be standards for data quality and fairness. Data governance and data provenance are crucial for AI ethics. And that means understanding where the data used to train a model came from; ensuring you have the right to use it; ensuring that the data isn't biased and that it respects copyright law. These are all issues addressed by the Act.

**6:45** We talked earlier about AI not being a shield to liability. And the Act makes it clear these systems cannot be used to discriminate against people based on attributes like race, ethnicity, religion or sexual orientation, and it sets requirements for things like safety and security as well. You have to be able to demonstrate compliance with these standards or face serious consequences.
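The tiered structure described above can be sketched as a small lookup. This is an illustrative simplification only, not legal guidance: the tier names follow the transcript, and the use-case identifiers are hypothetical labels (the hiring example echoes the earlier discrimination discussion; the Act itself defines tiers far more precisely).

```python
# Illustrative sketch of the risk tiers described in the transcript.
# Use-case names are hypothetical labels, not terms from the Act itself.

RISK_TIERS = {
    "minimal": ["ai_video_game", "spam_filter"],        # largely unregulated
    "limited": ["chatbot"],                             # light-touch transparency duties
    "high": ["hiring_screen"],                          # allowed, strict compliance standards
    "prohibited": ["untargeted_face_scraping", "social_scoring"],  # banned outright
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, or 'unknown'."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "unknown"

print(classify("chatbot"))          # limited
print(classify("social_scoring"))   # prohibited
```

The point of the structure is the one the speaker makes: obligations scale with the context of deployment, not with the underlying technology.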
**7:06** Fines can be up to 35 million euros or 7% of a company's annual revenue, whichever is higher. And in the same way that the General Data Protection Regulation was landmark legislation for data privacy and protection, the EU AI Act is landmark legislation for AI. And also like the GDPR, this EU law will be influential in serving as a model for other jurisdictions.

**7:32** But there is more to ethics than compliance. There's your corporate character; there's good corporate citizenship; and there's trust. There's a saying that trust is earned in drops but lost in buckets. And it's absolutely true. Trust is central to our company's brand, and maybe the biggest part of my job is working to ensure that the technology IBM makes and uses, the things people interact with every day, are things they can trust.

**7:56** It's one thing to have ethical principles, but they're meaningless without a mechanism for holding yourself accountable. I propose that any organization using AI at scale needs an AI Ethics Board or equivalent governing mechanism. I Co-Chair IBM's Board, and I can't tell you how important it is to make your AI decisions in an environment of open consideration and debate, with a diverse group of others who are viewing the business through the lens of ethics, and who bring different backgrounds, domain expertise and experiences into that debate. On our Board, for example, we have lawyers, policy professionals, communications professionals, HR professionals, researchers, sellers, product teams and more. And then through that Board you work to build an ethics framework into your corporate practices, instill a culture of trustworthy AI, and ensure you have mechanisms to hold your company accountable.
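The "whichever is higher" penalty cap mentioned above is simple arithmetic; a minimal sketch, with amounts in euros and the revenue figures purely hypothetical:

```python
# Maximum fine as described in the transcript: the greater of a fixed cap
# or 7% of annual revenue. Revenue figures below are hypothetical.

FIXED_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07  # 7% of annual revenue

def max_fine(annual_revenue_eur: float) -> float:
    """Greater of the fixed cap or 7% of annual revenue."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * annual_revenue_eur)

# For a company with €1 billion in revenue, 7% (€70M) exceeds the €35M cap:
print(max_fine(1_000_000_000))  # 70000000.0
# For a company with €100M in revenue, the fixed €35M cap is the larger bound:
print(max_fine(100_000_000))    # 35000000
```

The crossover sits at €500 million of revenue, above which the percentage term dominates.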
**8:52** The specific use cases of AI in your businesses might be different than ours, but I bet that once you start defining your own principles and pillars, you'll find that we all have a lot in common. We all want to build strong, trusted brands. We all want to do the right thing. Because the future of ethical AI is something we all need to build together.