
Transparency in Open AI Governance

Key Points

  • The episode of “Smart Talks with IBM” explores the theme of openness in AI, examining its possibilities, misconceptions, and impact on industry and society.
  • Host Jacob Goldstein interviews Rebecca Finlay, CEO of the Partnership on AI, about the nonprofit’s role in fostering accountable AI governance through diverse stakeholder collaboration.
  • Finlay emphasizes that transparency is essential for responsibly scaling AI technologies and for building the infrastructure and community needed to support open‑source models.
  • Drawing on her experience at the Canadian Institute for Advanced Research, she explains how early, long‑term AI research evolved into today’s deep‑learning era and highlights emerging concerns about data bias and societal impact.
  • The conversation underscores the importance of open collaboration—such as with the AI Alliance—to develop resources and standards that guide the future development and deployment of AI.

**Source:** [https://www.youtube.com/watch?v=VNWXOYf73tI](https://www.youtube.com/watch?v=VNWXOYf73tI)
**Duration:** 00:34:19

Sections

  • [00:00:00](https://www.youtube.com/watch?v=VNWXOYf73tI&t=0s) **Open AI Governance with Rebecca Finlay** - Malcolm Gladwell introduces an episode where Jacob Goldstein talks to Partnership on AI CEO Rebecca Finlay about the importance of transparency, open‑source collaboration, and accountable governance in the rapidly evolving AI landscape.
  • [00:03:20](https://www.youtube.com/watch?v=VNWXOYf73tI&t=200s) **Building an Interdisciplinary AI Impact Program** - The speaker recounts launching a program in the early 2000s–2010s to study AI’s societal effects, uniting ethicists, lawyers, economists, and sociologists to tackle bias, job impacts, and the necessity of diverse perspectives in an increasingly divided world.
  • [00:06:30](https://www.youtube.com/watch?v=VNWXOYf73tI&t=390s) **Balancing Openness and Safety in AI** - The speaker describes collaborative working groups that create open frameworks, best practices, and resources for responsibly deploying large foundation models, while discussing the challenges of the open versus closed AI debate.
  • [00:09:38](https://www.youtube.com/watch?v=VNWXOYf73tI&t=578s) **Balancing Open AI Innovation & Safety** - The speaker outlines how an open‑innovation ecosystem for AI can combine transparent, peer‑reviewed research and thorough documentation with safeguards to ensure responsible downstream deployment.
  • [00:12:47](https://www.youtube.com/watch?v=VNWXOYf73tI&t=767s) **Responsible AI Deployment Framework** - The speaker outlines a responsibility framework for generative AI—covering consent, disclosure, and watermarking—showcasing collaborative pledges from tech firms, startups, civil society, and media, and describing how case studies and an online resource are being used to drive ethical deployment practices.
  • [00:15:56](https://www.youtube.com/watch?v=VNWXOYf73tI&t=956s) **Transparency as Foundation for Accountability** - The speaker argues that open disclosure about AI development, data protection, performance metrics, and auditing is essential for responsible, ethical deployment and serves as the first step toward full accountability.
  • [00:19:04](https://www.youtube.com/watch?v=VNWXOYf73tI&t=1144s) **Ethics, Policy, and AI Deployment** - The speaker outlines sociotechnical AI ethics, the need for context‑specific safeguards, and how the Partnership on AI’s frameworks for synthetic media and responsible foundation‑model deployment have shaped industry policies and practices.
  • [00:22:16](https://www.youtube.com/watch?v=VNWXOYf73tI&t=1336s) **Open Collaboration for Responsible AI** - The speaker outlines an AI Alliance that leverages open datasets, open technology, and global expertise to promote transparent, safe innovation, then identifies the biggest hurdle to responsible AI adoption as companies’ limited understanding of their own AI deployments and urges them to audit current AI use across products and services.
  • [00:25:29](https://www.youtube.com/watch?v=VNWXOYf73tI&t=1529s) **AI Innovation Requires Worker-Centric Regulation** - The speaker stresses that AI systems should be developed with workers at the center and that effective regulation—viewed as essential guardrails rather than obstacles—enables responsible innovation.
  • [00:28:43](https://www.youtube.com/watch?v=VNWXOYf73tI&t=1723s) **Transparency, Ethics, and Misconceptions in AI** - The speaker stresses the importance of openly disclosing AI use to clients, complying with legal requirements, and clarifies that AI is not inherently good or bad but is shaped by human choices, addressing common misunderstandings and the ethical questions that will define its future impact.
  • [00:31:45](https://www.youtube.com/watch?v=VNWXOYf73tI&t=1905s) **Open‑Minded Collaboration for AI Ethics** - The speaker stresses openness and cross‑sector dialogue as essential for the Partnership on AI’s effort to create ethical guardrails and foster responsible innovation.

Full Transcript
0:13Hello, hello. 0:14Welcome to Smart Talks with IBM, a podcast from Pushkin 0:17Industries, iHeartRadio, and IBM. 0:20I’m Malcolm Gladwell. 0:22This season, we’re diving back into the world of artificial intelligence, 0:26but with a focus on the powerful concept of “open”—its possibilities, 0:31implications, and misconceptions. 0:34We’ll look at openness from a variety of angles and explore how the concept 0:38is already reshaping industries, ways of doing business, and our 0:42very notion of what’s possible. 0:45In today’s episode, Jacob Goldstein sits down with Rebecca Finlay, the CEO 0:50of the Partnership on AI—a nonprofit group grappling with important 0:55questions around the future of AI. 0:57Their conversation focuses on Rebecca’s work bringing together a 1:01community of diverse stakeholders to help shape the conversation 1:05around accountable AI governance. 1:09Rebecca explains why transparency is so crucial for scaling 1:13the technology responsibly. 1:15And she highlights how working with groups like the AI Alliance can provide 1:19valuable insights in order to build the resources, infrastructure, and community 1:24around releasing open source models. 1:27So, without further ado, let’s get to that conversation. 1:38Can you say your name and your job? 1:40My name is Rebecca Finlay. 1:42I am the CEO of the Partnership on AI to Benefit People and Society, 1:46often referred to as “PAI.” 1:50How did you get here? 1:51What was your job before you had the job that you have now? 1:55I came to PAI about three years ago, having had the opportunity to work for 2:02the Canadian Institute for Advanced Research, developing and deploying 2:08all of their programs related to the intersection of technology and society. 2:14And one of the areas that the Canadian Institute had been funding since 1982 was 2:22research into artificial intelligence. 2:25Wow. 2:25Early. 2:26They were early. 
2:28It was a very early commitment and an ongoing commitment at the institute 2:34to fund long-term, fundamental questions of scientific importance in 2:40interdisciplinary research programs that, um, were often, uh, committed 2:47and funded—to—for well over a decade. 2:49The AI, Robotics, and Society program that kicked off the work at the 2:54institute eventually became a program very much focused on deep learning 3:01and reinforcement learning neural networks—all of the current iteration 3:07of AI, or certainly the pre–generative AI iteration of AI that led to this 3:13transformation that we’ve seen in terms of online search and all sorts of ways 3:19in which predictive AI has been deployed. 3:20So I had the opportunity to see the very early days of 3:25that research coming together. 3:27And when, in the early, sort of, 2000, 3:322010s, when compute capability came together with data capability through some 3:38of the internet companies and otherwise, and we really saw this technology start 3:43to take off, I had the opportunity to start up a program specifically 3:48focused on the impacts of AI in society. 3:52There was, as you know, at that time, some concerns both about the potential 3:57for the, the technology, but also in terms of what we were seeing around 4:01datasets and bias and discrimination and potential impact on future jobs. 4:07And so bringing a whole group of experts, whether they were ethicists 4:12or lawyers or economists, sociologists, into the discussion about AI was core 4:19to that new program and continues to be core to my commitment to 4:22bringing diverse perspectives together to solve the challenges and 4:26opportunities that AI offers today. 4:29So specifically, what is your job now? 4:31What is the work you do? 4:32What is the work that PAI does? 4:35I like to answer that question by asking two questions: First and 4:40foremost, do you believe that the world is more divided today than 4:45it ever has been in recent history? 
4:48And do you believe that if we don’t create spaces for very different 4:53perspectives to come together, we won’t be able to solve the challenges 4:57that are in front of the world today? 5:00My answer to both of those questions is yes, we’re more divided. 5:04And two, we need to seek out those spaces where those very different 5:10perspectives can come together to solve those great challenges. 5:14And that’s what I get to do as CEO of the Partnership on AI. 5:18We were begun in 2016 with a fundamental commitment to bringing together experts, 5:26whether they were in industry, academia, civil society, or philanthropy, 5:31coming together to identify what are the most important questions when we 5:36think about developing AI centered on people and communities, and then how 5:41do we begin to develop the solutions to make sure we benefit appropriately? 5:45So that’s a very big-picture set of ideas. 5:50Um, I’m curious, on a, sort of, more day-to-day level—I mean, you 5:53talk about collaborating with all these different kinds of people, 5:56all these different groups. 5:57What does that actually look like? 5:59Like, what are some specific examples of how you do this work? 6:02So right now we have about 120 partners in 16 countries. 6:08They come together through working groups that we look at through a 6:13variety of different perspectives. 6:14It could be AI, labor, and the economy. 6:18It could be, How do you build a healthy information ecosystem? 6:23It could be, How do you bring more-diverse perspectives into the inclusive 6:27and equitable development of AI? 6:30It could be, what are the emerging opportunities with these very, very 6:35large foundation model applications, and how do you deploy those safely? 6:40And these groups come together, most importantly, to say, what are the 6:43questions we need to answer collectively? 6:46So they come together in working groups. 
6:48I have an amazing staff team who “hold the pen” on synthesizing research and 6:53data and evidence, developing frameworks, best practices, resources, all sorts 6:59of things that we can offer up to the community, be they in industry or in 7:04policy, to say this is how we can—this is what “good” looks like, and this is 7:09how we can do it on a day-to-day basis. 7:11So that’s what we do. 7:11And then we publish our materials. 7:13It’s all open. 7:14We make sure that we get them into the hands of those 7:17communities that can use them. 7:19And then we drive and work with those communities to put them into practice. 7:23You used the word “open” there in describing your publications. 7:27Uh, I know in the world of AI, on the, sort of, technical side, there’s 7:31a debate, say, or discussion about, kind of open versus closed AI. 7:38And I’m curious how you, kind of, encounter that particular discussion. 7:43What is your view on open versus closed AI? 7:46So the current discussion between open and closed release of AI models 7:53came once we saw ChatGPT and other very large generative-AI systems 8:00being deployed out into the hands of consumers around the world. 8:05And there emerged some fear about the potential of these models to act 8:13in all sorts of catastrophic ways. 8:15So there were concerns that the models could be deployed with regard to, you 8:20know, different—development of viruses or biomedical weapons or even nuclear weapons 8:26or—through manipulation or otherwise. 8:29So this emerged—about over the last 18 months—this real concern that these 8:36models, if deployed openly, could lead to some level of truly catastrophic risk. 8:43And what emerged is actually that we discovered that—through a whole bunch 8:48of work that’s been done over the last little while—that releasing them openly 8:52has not led and doesn’t appear to be leading in any way to catastrophic risk. 
8:57In fact, releasing them openly allows for much more—greater—scrutiny and 9:03understanding of the safety measures that have been put into place. 9:07And so what happened was, sort of, the pendulum swung very much towards 9:12concern about really catastrophic risk and safety over the last year. 9:15And over the last year, we’ve seen it swing back as we learn more and more about 9:19how these models are being used and how they’re being deployed into the world. 9:24My feeling is we must approach this work openly. 9:30And it’s not just open release of models, or what we think of as 9:34traditional open source forms of model development, or otherwise. 9:39But we really need to think about how do we build an open innovation 9:42ecosystem that fundamentally allows both for the innovation to be shared 9:48with many people but also for safety and security to be rigorously upheld? 9:53So when you talk about this, kind of, broader idea of open innovation, 9:58beyond open source or, you know, transparency in models, like, what—what 10:03do you mean, sort of, specifically? 10:05How does that look in the world? 10:07So I have three particular points of view when it comes to open 10:11innovation, because I think we need to think both, both upstream, around 10:15the research that is driving these models, and downstream, in terms of 10:19the benefits of these models to others. 10:21So first and foremost, what we have known in terms of how AI has been developed—and 10:26yes, I had an opportunity to see it when I was at the Canadian Institute for Advanced 10:31Research—is a very open form of scientific publication and rigorous peer review. 10:38And what happens when we release openly is: you have an opportunity 10:42for the research to be interrogated to determine the quality and 10:46significance of that, but then also for it to be picked up by many others. 10:50And then secondly, openness for me is about transparency. 
10:55We released a set of very strong recommendations last year around the 10:59way in which these very large foundation models could be deployed safely. 11:04They’re all about disclosure. 11:06They’re all about disclosure and documentation, right? 11:09From the early days, pre–R&D development of these systems, right? 11:13In terms of thinking about what’s in the training data and how’s it 11:16being used, all the way through to postdeployment monitoring and disclosure. 11:22So I really think that this is important: transparency throughout. 11:25And then the third piece is openness in terms of who is around the table 11:29to benefit from this technology. 11:32We know that if we’re really going to see these new models having—being successful, 11:36deployed into education or healthcare or climate and sustainability, we need to 11:41have those experts and those communities at the table charting this and making sure 11:46that the technology is working for them. 11:48So those are the three ways I think about openness. 11:52Is there, like, a particular project that you’ve worked on that 11:55you feel, like, you know, reflects your approach to responsible AI? 12:01So there’s a really interesting project that we have underway at 12:04PAI that is looking at responsible practices squarely when it comes 12:09to the use of synthetic media. 12:12And what we heard from our community was that they were looking for a clear 12:17code of conduct about what does it mean to be responsible in this space? 12:22And so what happened is: we pulled together a number of 12:25working groups to come together. 12:26They included industry representatives. 12:28They also included civil society organizations like WITNESS, a number of 12:34academic institutions, and otherwise. 12:37And what we heard was that there were clear requirements that creators could 12:43take, that developers of the technology could take—and then also distributors. 
12:47So when we think about those generative-AI systems being deployed 12:51across platforms, and otherwise, and—we came up with a framework 12:55for what responsibility looks like. 12:57What does it mean to have consent? 12:59What does it mean to disclose responsibly? 13:02What does it mean to embed technology into it? 13:06So, for example, we’ve heard many people talk about the importance 13:09of watermarking systems, right? 13:11And making sure that we have a way to watermark them. 13:14But what we know from the technology is: that is a very, very 13:17complex and complicated problem. 13:19And what might work on a technical level certainly hits a whole new set 13:24of complications when we start labeling and disclosing out to the public about 13:28what that technology actually means. 13:30All of these, I believe, are solvable problems, but they all needed to have 13:34a clear code underneath them that was saying, This is what we will commit to. 13:39And we now have a number of organizations—many, many of the 13:42large technology companies, but also many of the small startups who are 13:47operating in this space, civil society and media organizations like the 13:50BBC and the CBC—who have signed on. 13:54And one of the really exciting pieces of that is that we’re now 13:58seeing how it’s changing practice. 14:00So a year in, we asked each of our partners to come up with a clear case 14:05study about how that work has changed the way they are making decisions, 14:10deploying technology, and ensuring that they’re being responsible in their use. 14:15And that is creating, now, a whole resource online that we’re able to 14:18share with others about what does it mean to be responsible in this place? 14:23There’s so much more work to be done. 14:25And the exciting thing is, once you have a foundation like this in place, 14:28we can continue to build on it. 14:30So much interest now in the policy space, for example, about this work as well. 
14:36Are there any specific examples of those, sort of, case studies or the, 14:40you know, real-world experiences that, say, media organizations had that are 14:45interesting, that are illuminating? 14:47Yes. 14:47So, for example, what we saw with the, with the BBC is that they’re developing 14:54a lot of content as a, as a public broadcaster, both in terms of their news 14:59coverage, but also in terms of some of the resources that they are developing, 15:03uh, for the British public as well. 15:05And what they talked about was the way in which they had used synthetic 15:09media in a very, very sensitive environment, where they were hearing 15:15from individuals talk about personal experiences, but wanted to have some 15:20way to change the face entirely in terms of the individuals who were speaking. 15:25So that’s a very complicated ethical question, right? 15:28How do you do that responsibly? 15:30And what is the way in which you use that technology, and most 15:34importantly, how do you disclose it? 15:36So their case study looked at that in some real detail, about the 15:40process they went through to make the decision responsibly to do what 15:44they chose—uh, how they intended to use the technology in that space. 15:49As you describe your work and some of these studies, the idea of 15:53transparency seems to be a theme. 15:57Talk about the importance of transparency in this kind of work. 16:00Yeah, transparency is fundamental to responsibility. 16:05I always like to say it’s not accountability in the—in a complete 16:08sense, but it is a first step to driving accountability more fully. 16:13So when we think about how these systems are developed, they’re often 16:17developed behind closed doors inside companies who are making decisions about 16:24what and how these products will work from a, from a business perspective. 
16:28And what disclosure and transparency can provide is some sense of the decisions 16:34that were made leading up to the way in which those, those models were deployed. 16:38So this could be ensuring that individuals’ private information was 16:43protected through the process and won’t be inadvertently disclosed, or otherwise. 16:48It could be providing some sense of how well the system performs against 16:53a whole level of quality measures. 16:55So we have all of these different types of evaluations and measures 16:58that are emerging about the quality of these systems as they’re deployed. 17:03Being transparent about how they perform against these systems is 17:06really crucial to that as well. 17:08We have a whole ecosystem that’s starting to emerge around 17:12auditing of these systems. 17:13So what does that look like? 17:15We think about auditors in all sorts of other sectors of the economy. 17:18What does it look like to be auditing these systems to ensure that they’re 17:22meeting all of those—both legal, but additional ethical requirements that 17:26we want to make sure that are in place? 17:29What are some of the hardest ethical dilemmas you’ve come 17:34up against in AI policy? 17:37Well, the interesting thing about AI policy—right?—is: what works very 17:42simply in one setting can be highly complicated in another setting. 17:47And so, for example, I have an app that I adore. 17:50It’s an app on my phone that allows me to take a photo of a bird and it will 17:55help me to better understand, you know, what that bird is, and give me all 17:59sorts of information about that bird. 18:01Now it’s probably right most of the time, and it’s certainly right enough 18:06of the time to give me great pleasure and delight when I’m out walking. 
18:10You could think about that exact same technology applied—so, for example, 18:14now you’re a security guard and you’re working in a shopping plaza and you’re 18:19able to take photos of individuals who you may think are acting suspiciously in 18:24some way, and match that photo up with some sort of a database of individuals 18:29that may have been found, you know, to have some sort of connection to other 18:34criminal behavior in the past, right? 18:35So what goes from being a delightful “Oh, isn’t this an interesting bird?” 18:40to a very, very creepy “What is this?” What does this say about surveillance 18:44and privacy and access to public spaces? 18:48And that is the nature of AI. 18:50So much of the concern about the ethical use and deployment of AI 18:55is how an organization is making the choices within the social 19:01and systemic structure they sit. 19:04So, so much about the ethics of AI is understanding: What is the use case? 19:09How is it being used? 19:11How is it being constrained? 19:13How does it start to infringe upon what we think of as the human 19:17rights of an individual to privacy? 19:21And so you have to constantly be thinking about ethics. 19:24What could work very well in one situation absolutely doesn’t work in another. 19:28We often talk about these as sociotechnical questions, right? 19:32Just because the technology works doesn’t actually mean that 19:36it should be used and deployed. 19:38What’s an example of where the Partnership on AI influenced changes 19:44either in policy or in industry practice? 19:48We talked a little bit about the framework for synthetic media and 19:52how that has allowed companies and media organizations and civil society 19:56organizations to really think deeply about the way in which they’re using this. 20:00Another area that we focused on has been around responsible deployment 20:07of foundation and large-scale models. 
20:09So as I said, we issued a set of recommendations last year that 20:13really laid out, for these very large developers and deployers of 20:18foundation and frontier models, what were—What does “good” look like?—right? 20:23From, uh, R&D through to deployment monitoring. 20:27And it has been very encouraging to see that that work has been 20:31picked up by companies and really articulated as part of the fabric of 20:36the deployment of their foundation models and systems moving forward. 20:41You know, so much of this work is around creating clear definitions of 20:44what we’re meaning as the technology evolves, and clear sets of responsibility. 20:49So it’s great to see that work getting picked up. 20:51The NTIA in the United States just released a, uh, uh, report on open 20:57models and the release of open models. 20:59Great to see our work cited there as contributing to that analysis. 21:03Great to see some of our definitions in synthetic media getting picked up 21:07by legislators in different countries. 21:09Really, just—it’s important, I think, for us to build capacity, knowledge and 21:13understanding in our policymakers in this moment, as the technology is evolving 21:19and accelerating in its development. 21:22What’s the AI Alliance and why did Partnership on AI decide to join? 21:27So you had asked about the debate between open versus closed models, um, and how 21:33that has evolved over the last year. 21:35And the AI Alliance was a community of organizations that came together 21:41to really think about, “Okay, if we support open release of models, 21:47what does that look like, and what does the community need?” And so 21:50that’s about a hundred organizations. 21:53IBM, one of our founding partners, is also one of the founding 21:56partners of the AI Alliance. 
21:58It’s a community that brings together a number of academic institutions, many 22:03countries around the world, and they’re really focused on, how do you build 22:09the resources and infrastructure and community around what open source in 22:14these large-scale models really means? 22:16So that could be open datasets. 22:19It could be open technology development, really building on that understanding 22:24that we need an infrastructure in place and a community engaged 22:28and thinking about safety and innovation through the open lens. 22:33This approach brings together organizations and experts from 22:37around the globe with different backgrounds, experiences, and 22:41perspectives - to transparently and openly address the challenges 22:46and opportunities that AI poses. 22:48The collaborative nature of the AI Alliance encourages 22:51discussion, debate, and innovation. 22:54Through these efforts, IBM is helping to build a community around 22:58transparent, open technology. 23:02So I want to talk about the future for a minute. 23:05I’m curious what you see as the biggest obstacles to widespread 23:10adoption of responsible AI practices. 23:14One of the biggest obstacles today is an inability—and really, a lack 23:20of understanding about how—to use these models and how they can most 23:25effectively drive forward a company’s commitment to whatever products 23:30and services it might be deploying. 23:33So I always recommend a couple of things for companies to really—to 23:37think about this and to get started. 23:39One is: think about how you are already using AI across all of your 23:45business products and services. 23:47Because already AI is integrated into our workforces and into our 23:52work streams and into the way in which companies are communicating 23:55with their clients every day. 23:57So understand how you are already using it, and understand how you are integrating 24:02oversight and monitoring into those. 
24:04One of the best and clearest ways in which a company can really understand how to use 24:09this responsibly is through documentation. 24:11It’s one of the areas where there’s a clear consensus in the community. 24:15So how do you document the models that you are using, making sure 24:19that you’ve got a registry in place? 24:21How do you document the data that you are using and where that data comes from? 24:25This is, sort of, the first system, first line of defense in terms of understanding 24:29both what is in place and what you need to do in order to monitor it moving forward. 24:34And then secondly, once you’ve got an understanding of how you’re already 24:37using the system, look at ways in which you could begin to pilot or 24:41iterate, in a low-risk way, using these systems to really begin to see 24:45how—and what structures you need to have in place—to use it moving forward. 24:50And then thirdly, make sure that you structure a team in place 24:54internally that’s able to do some of this cross-departmental monitoring, 24:59knowledge sharing and learning. 25:01Boards are very, very interested in this technology. 25:04So thinking about how you could have a system or a team in place 25:07internally that’s reporting to your board, giving them a sense of 25:11both, um, the opportunities that it identifies for you and the additional 25:15risk mitigation and management you might be putting into place. 25:18And then, you know, once you have those things into place, you’re 25:22really going to need to understand how you work with the most valuable 25:27asset you have, which is your people. 25:29How do you make sure that AI systems are working for the workers, making 25:34sure that they’re going into place? 25:35The most important and impressive implementations we see are those where 25:39you have the workers who are going to be engaged in this process central to 25:44figuring out how to develop and deploy it in order to really enhance their work. 
25:49It’s a core part of a set of shared prosperity guidelines 25:52that we issued last year. 25:55And then from the side of policymakers—um, how should policymakers think about the 26:03balance between innovation and regulation? 26:07Yeah, it’s so interesting—isn’t it?—that we always think of, you know, 26:10innovation and regulation as being two sides of a coin, when in fact so much 26:17innovation comes from having a clear set of guardrails and regulation in place. 26:24We think about all of the innovation that’s happened in 26:27the automotive industry, right? 26:29We can drive faster because we have brakes. 26:33We can drive faster because we have seat belts in place. 26:36So I think—it’s often interesting to me that we think about the two as 26:40being on either sides of the coin. 26:41But in actual fact, you can’t be innovative without 26:46being responsible as well. 26:49And so I think, from a policymaker perspective, what we have been really 26:53encouraging them to do is to understand that you’ve got foundational regulation 26:59in place that works for you nationally. 27:01This could be ensuring that you have strong privacy protections in place. 27:06It could be ensuring that you are understanding potential online harms, 27:11particularly to vulnerable communities, and then look at what you need to 27:14be doing internationally to being both competitive and sustainable. 27:20There’s all sorts of mechanisms that are in place right now at the 27:22international level to think about “How do we build an interoperable space for 27:27these technologies moving forward?” 27:29We’ve been talking in various ways about what it means to responsibly 27:36develop AI, and if you’re gonna boil that down, you know, the essential 27:41concerns that people should be thinking about—like, what are the key things 27:45to think about in responsible AI? 
[27:49] So if you are a company, if we’re talking specifically through the company lens when we’re thinking about responsible use of AI, the most important difference between this form of AI technologies and other forms of technologies that we have used previously is the integration of data, and the training models that go on top of that data.

[28:14] So when we think about responsibility, first and foremost you need to think about your data. Where did it come from? What consent and disclosure requirements do you have on it? Are you privacy protecting? You can’t be thinking about AI within your company without thinking about data. And that’s both your training data—but then once you’re using your systems and integrating and interacting with your consumers, how are you protecting the data that’s coming out of those systems as well?

[28:44] And then secondly: when you’re thinking about how to deploy that AI system, the most important thing you want to think about is, Are we being transparent about how it’s being used with our clients and our partners? So, you know, the idea that if I’m a customer, I should know when I’m interacting with an AI system; I should know when I’m interacting with a human.

[29:10] So I think those two pieces are the fundamentals. And then, of course, you want to be thinking carefully about, uh, you know, making sure that whatever jurisdiction you’re operating in, you’re meeting all of the legal requirements with regard to the services and products that you’re offering.

[29:26] Let’s finish with a speed round. Complete the sentence: In five years, AI will…

[29:34] Will drive equity, justice and shared prosperity if we choose to set that future trajectory for this technology.

[29:44] What is the number one thing that people misunderstand about AI?

[29:48] AI is not good and AI is not bad, but AI is also not neutral.
[29:57] It is a product of the choices we make as humans about how we deploy it in the world.

[30:05] What advice would you give yourself 10 years ago to better prepare yourself for today?

[30:14] Ten years ago, I wish that I had known just how fundamental the enduring questions of ethics and responsibility would be as we develop this technology moving forward. So many of the questions that we ask about AI are questions about ourselves and the way in which we use technology and the way in which technology can advance the work we’re doing.

[30:46] How do you use AI in your day-to-day life today?

[30:50] I use AI all day every day. So whether it’s my bird app when I go out for my morning walk, helping me to better identify birds that I see, or whether it is my mapping app that’s helping me to get more speedily through traffic to whatever meeting I need to go to, I use AI all the time. I really enjoy using some of the generative-AI chatbots, more for fun than for anything else, as a creative partner in thinking through ideas. And integrating it into all aspects of our lives is just so much about the way in which we live today.

[31:28] So people use the word “open” to mean different things, even just in the context of technology. How do you define “open” in the context of your work?

[31:39] So there is the question of “open” as it is applied to technology, which we’ve talked a lot about. But I do think a big piece of PAI is “open-minded.” We need to be open-minded truly to listen to, for example, what a civil society advocate might say about what they’re seeing in terms of the way in which AI is interacting in a particular community. Or we need to be open-minded to hear from a technologist about their hopes and dreams of where this technology might go, moving forward.
[32:12] And we need to have those conversations, listening to each other, to really identify how we’re going to meet the challenge and opportunity of AI today. So “open” is just fundamental to, uh, the Partnership on AI. I often call it an experiment in open innovation.

[32:33] Rebecca, thank you so much for your time.

[32:36] It is my pleasure. Thank you for having me.

[32:40] Thank you to Rebecca and Jacob for that engaging discussion about some of the most pressing issues facing the future of AI. As Rebecca emphasized, whether you’re thinking about data privacy or disclosure, transparency and openness are key to solving challenges and capitalizing on new opportunities.

[33:00] By developing best practices and resources, Partnership on AI is building out the guardrails to support the release of open-source models and the practice of post-deployment monitoring. By sharing their work with the broader community, Rebecca and PAI are demonstrating how working responsibly, ethically, and openly can help drive innovation.

[33:26] Smart Talks with IBM is produced by Matt Romano, Joey Fischground, Amy Gaines McQuade and Jacob Goldstein. We’re edited by Lidia Jean Kott. Our engineers are Sarah Bruguiere and Ben Tolliday. Theme song by Gramoscope. Special thanks to the EightBar and IBM teams, as well as the Pushkin marketing team.

[33:47] Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.

[34:00] I’m Malcolm Gladwell.

[34:08] This is a paid advertisement from IBM. The conversations on this podcast don’t necessarily represent IBM’s positions, strategies or opinions.