
Generative AI's Business Revolution with Gil

Key Points

  • Malcolm Gladwell introduces the “Smart Talks with IBM” podcast season focusing on how generative AI can act as a transformative multiplier for businesses.
  • He interviews IBM Research SVP Dr. Darío Gil, a 20‑year veteran of IBM’s research labs, to discuss the rise of generative AI and its implications for business and society.
  • Gil explains that while AI research dates back to the 1950s, the term “AI” has historically carried mixed reputations, oscillating between hype cycles and periods of skepticism due to limited successes.
  • He emphasizes that organizations that successfully leverage AI to create measurable value will become the dominant players in the near future.


**Source:** [https://www.youtube.com/watch?v=Si5rhjifbZs](https://www.youtube.com/watch?v=Si5rhjifbZs)

**Duration:** 00:52:02

Sections

- [00:00:00](https://www.youtube.com/watch?v=Si5rhjifbZs&t=0s) **AI, Business, and Generative Futures** - Malcolm Gladwell introduces a Smart Talks episode where IBM Research SVP Darío Gil discusses the rise of generative AI and its transformative potential for businesses and society.
- [00:03:09](https://www.youtube.com/watch?v=Si5rhjifbZs&t=189s) **IBM AI Evolution and Jeopardy Milestone** - Dario explains how the term “AI” faded and re‑emerged with deep learning, highlighting IBM’s long‑standing involvement—from the Dartmouth conference to early game‑playing research—and identifying the late‑2000s Jeopardy! project as the pivotal moment that brought modern AI into the spotlight.
- [00:06:15](https://www.youtube.com/watch?v=Si5rhjifbZs&t=375s) **From Backend to Frontline AI** - The speaker explains how foundation models have turned AI from invisible back‑end tools into an interactive, widely accessible technology—comparable to the web’s early democratizing surge—enabling anyone to build and use AI applications.
- [00:09:41](https://www.youtube.com/watch?v=Si5rhjifbZs&t=581s) **AI Democratization vs Value Creation** - Dario explains that although AI usage will become universal, lasting wealth and competitive advantage will accrue only to those who develop and embed AI, warning that merely being a user will widen the gap between haves and have‑nots.
- [00:13:27](https://www.youtube.com/watch?v=Si5rhjifbZs&t=807s) **Workflow Integration and Data Curation Hurdles** - The speakers argue that the chief obstacles to widespread AI adoption are ensuring equitable, diverse data sets, redesigning organizational workflows to embed AI naturally, and mastering data curation and external data integration.
- [00:16:53](https://www.youtube.com/watch?v=Si5rhjifbZs&t=1013s) **Three AI Impact Lenses** - Dario outlines three perspectives for advising colleges on AI—operational efficiency, curriculum and educational mission implications, and outward‑oriented student outreach via personalized chatbots—before Malcolm questions the feasibility of assigning traditional essays.
- [00:20:18](https://www.youtube.com/watch?v=Si5rhjifbZs&t=1218s) **Rethinking Education in the AI Age** - Malcolm recounts his father's switch from calculator‑based exams to conceptual problem‑solving, and Dario relates that historic shift to today's push for deeper thinking over mere button‑pressing in learning.
- [00:23:33](https://www.youtube.com/watch?v=Si5rhjifbZs&t=1413s) **AI Bridging Academic Silos** - A dialogue explores how AI can break down siloed structures in academia and corporations, fostering interdisciplinary collaboration and methodological innovation.
- [00:26:49](https://www.youtube.com/watch?v=Si5rhjifbZs&t=1609s) **Advising Writers on AI Contracts** - Dario suggests breaking AI issues into technical possibilities and their industry impacts, creating a taxonomy of capabilities to help writers understand what can be generated and how compensation should reflect those uses.
- [00:30:01](https://www.youtube.com/watch?v=Si5rhjifbZs&t=1801s) **Negotiating Tech Amid Industry Strikes** - The speakers debate how a writer‑studio strike flips power dynamics, consider internal studio strategies, and question the viability of multi‑year contracts given the rapid evolution of technological capabilities.
- [00:33:12](https://www.youtube.com/watch?v=Si5rhjifbZs&t=1992s) **Evaluating IBM AI Ops Investment** - Dario explains that IBM’s AI operations research team numbers about 3,500 scientists within a larger workforce of tens of thousands, and argues that assessing underinvestment requires separating the resources needed for technology creation from those needed for product and service deployment.
- [00:36:46](https://www.youtube.com/watch?v=Si5rhjifbZs&t=2206s) **People Problem in Tech Adoption** - Malcolm and Dario explain that the biggest hurdle in introducing new technologies to colleges is the human side—aligning faculty, administrators, and students through consulting, methodology, and clear problem definition before the technology itself can be effectively adopted.
- [00:40:08](https://www.youtube.com/watch?v=Si5rhjifbZs&t=2408s) **AI‑Driven Medical Career Guidance** - The speakers discuss using AI to advise on medical career choices, emphasizing how advances in computing and knowledge representation will reshape the scientific method and the skills most valued in medicine over the next decade.
- [00:44:41](https://www.youtube.com/watch?v=Si5rhjifbZs&t=2681s) **Rethinking Doctor Time with AI** - The speakers argue that the discussion should shift from fearing AI‑driven job loss to examining how AI can accelerate diagnoses, freeing doctors’ minutes for other essential, patient‑focused responsibilities.
- [00:47:50](https://www.youtube.com/watch?v=Si5rhjifbZs&t=2870s) **Rejecting Technological Determinism in Democracy** - The speaker argues that technology should not be seen as destiny‑defining, emphasizing that societal good is perpetually contested in democratic discourse and that tech creators bear responsibilities beyond their scientific roles.
- [00:51:13](https://www.youtube.com/watch?v=Si5rhjifbZs&t=3073s) **Podcast Credits and Sponsor Note** - A rundown of the production team, collaborators, and advertising disclaimer for the IBM-sponsored episode of Smart Talks with IBM.

Full Transcript
0:00Malcolm Gladwell: Alright, welcome everybody. You guys excited? Hello, hello. Welcome to Smart Talks  with IBM, a podcast from Pushkin Industries, 0:13iHeartRadio and IBM. I’m Malcolm Gladwell. This season, we’re continuing our conversations 0:19with New Creators— visionaries who are creatively  applying technology in business to drive change, 0:25but with a focus on the transformative power  of artificial intelligence and what it means 0:30to leverage AI as a game-changing  multiplier for your business. 0:35Today’s episode is a bit different than usual.  I was recently joined onstage by Darío Gil for 0:41a conversation in front of a live audience at  the iHeartMedia headquarters in Manhattan. Darío 0:46is the Senior Vice President and Director  of IBM Research—one of the world’s largest 0:51and most influential corporate research labs. We discussed the rise of generative AI and what 0:57it means for business and society. He also  explained how organizations that leverage 1:02AI to create value will dominate in the near  future. Okay, let’s get to the conversation. 1:10Malcolm Gladwell: Hello, everyone. Welcome. And I'm  here with Dr. Darío Gil. And I wanted to say, 1:18before we get started—this is something I said  backstage: that I feel very guilty today because, 1:24you're the arguably one of the most important  figures in AI research in the world, and we 1:32have taken you away from your job for a morning. It's like if, you know, Oppenheimer's wife in 1:391944 said, “Let's go and have a little getaway in  the Bahamas.” It's that kind of thing. You know, 1:46what do you say to your wife? “I  can't. We have got to work on this 1:50thing I can't tell you about.” She's like, “Get me out of Los Alamos.” “No.” So I do feel guilty. 1:56Um, we've set back research by about four hours. but I wanted to—you've been up with that, with 2:05IBM, for 20— Dario: Years. Twenty years this summer. 
2:08Malcolm: So—and how old were you when you—not  to give away your age, but you were how old 2:12when you started? Dario: I was 28. 2:13Malcolm: Okay. So I want to go  back to your 28-year-old self. Now, 2:17if I asked you about artificial intelligence,  I asked 28-year-old Darío, “What does the 2:23future hold for AI? How quickly will this new  technology transform our world?” et cetera, 2:29et cetera, what would 28-year-old Darío have said? Dario: Well, I think the first thing is that even 2:34though AI as a field has been with us for a  long time—since the mid-1950s—at that time, 2:40“AI” was not a very polite word to say,  meaning within the scientific community. 2:46People didn't use, sort of, that term. They  would have said things like, you know, maybe, “I 2:50do things related to machine learning,” right? Or  “statistical techniques, in terms of classifiers,” 2:55and so on. But AI had a mixed reputation, right?  It had gone through different cycles of hype and, 3:02it's also had moments of a lot of negativity  towards it because of lack of success. 3:09Um—and so I think that that will be the first  thing. We'd probably say, like, AI is like— what 3:13is that? Like, you know, respectable scientists  are not working on AI defined as such. And that 3:19really changed over the last 15 years only, right?  I would say with the advent of deep learning, 3:24over the last decade, is when that reentered  again the lexicon of saying “AI,” and that 3:29that was a legitimate thing, to work on. So I would say that that's the first thing—I 3:33think we would have noticed a  contrast 20 years ago. Yeah. 3:36Malcolm: So at what point in your 20-year  tenure at IBM would you say you kind of 3:42snapped into the present kind of “wow” mode? Dario: I would say, in, maybe the late 2000s. 3:53When IBM was working on the Jeopardy! project,  and just seeing the demonstrations of what could 4:02be done in question- answering; it— Malcolm: Literally, Jeopardy! 
is this 4:06crucial moment in the history of AI. Dario: You know, there had been a long 4:10and wonderful history, inside IBM on  AI. So, for example, in terms of, like, 4:16these grand challenges at the very  beginning of the field’s founding, 4:20which is this famous Dartmouth conference  that, actually, IBM sponsored, to create, 4:26there was an IBMer there called Nathaniel  Rochester, and there were a few others who, 4:32right after that—they started thinking  about demonstrations of this field. 4:36And then, for example, they created the first,  game to play checkers and to demonstrate that you 4:41could do machine learning on that. Obviously, we  saw later in the ’90s, like chess, that was a very 4:47famous example of that. That was Deep Blue. With  Deep Blue, right? And, playing with Kasparov. 4:52And then—but I think the moment that  was really—those other ones felt like, 4:56kind of like brute force, anticipating sort of  like moves ahead. But this aspect of dealing with 5:00language and question-answering felt different.  And I think for us internally and many others, 5:07was when—a moment of saying like, wow,  you know, what are the possibilities here? 5:11And then soon after that, connected to the sort of  advancements in computing and with deep learning, 5:16the last decade has just been an all-out,  you know, sort of like front of advancements, 5:20and that—and I just continue to be  more and more impressed. And the last 5:23few years have been remarkable, too. Yeah. 5:25Malcolm: I'm going to ask you three quick conceptual questions before we dig into it. Just so I sort of get a—we all get a 5:32feel for the shape of AI. Question number one  is, where are we in the evolution of this? So, 5:44the obvious que—we, we all suddenly are aware  of it, we're talking about it. What—can you give 5:48us an analogy about where we are in the kind  of likely evolution of this as a Technology? 5:54Dario: So I think we're on a significant  inflection point. 
That, it feels like 5:59the equivalent of the first browsers when they  appeared, and people imagined the possibilities 6:06of the internet—or more, imagined experiencing  the internet. The internet had been around, 6:11right, for quite a few decades. AI  has been around for many decades. 6:15I think the moment we find ourselves in is  that people can touch it, and they can— before, 6:19there were AI systems that were like behind  the scenes, like your search results, 6:23or translation systems. But they didn't have the  experience of like, this is what it feels like to 6:28interact with this thing. So, that's what I mean. I think maybe that analogy of the browser is 6:32appropriate because it's—all of a  sudden it's like, whoa, you know, 6:35there's this network of machines, and content can  be distributed, and everybody can self-publish. 6:41And there was a moment that—we all remember  that. And I think that that is what the world 6:44has experienced over the last nine months. So, and—but fundamentally, also what is 6:50important is that this moment is where the ease  of—the number of people that can build and use AI 6:56has skyrocketed. So over the last decade,  technology firms that had large research 7:03teams could build AI that worked really well,  honestly. But when you went down into, say, hey, 7:09can everybody use it? Can a data-science team in  a bank, go and develop these applications? And it 7:15was like more complicated. Some could do it, but  it was more—the barrier of entry was high. Now 7:19it's very different because of foundation  models and the implications that that has— 7:24Malcolm: With the moment  where the technology is being— 7:26Dario: Democratized. Being democratized. Frankly,  it works better, for classes of problems, 7:32like programming and other things. It’s  really incredibly impressive what it can do. 7:36So the accuracy and the performance of it is much  better. Yeah. 
And the ease of use and the number 7:40of use cases we can pursue is much bigger.  So that democratization is a big difference. 7:44Malcolm: You say, when you make an analogy  to the first browsers—if you, if we—to do 7:50another one of these time-travel questions,  back at the beginning of the first browsers, 7:54it's safe to say, many of the potential  uses of the internet and such—we hadn't 8:00even begun, we couldn't even anticipate. Dario: Right. Right. 8:02Malcolm: Exactly. So we're at the point where the future direction is largely unpredictable. 8:06Dario: Yes. Yeah, I think that is right, because it's such a horizontal technology that—  the intersection of the horizontal capability, 8:14which is about expanding our productivity  on tasks that we wouldn't be able to do 8:19efficiently without it—it has to marry, the  use cases that reflect the diversity of human 8:24experience and institutional diversity. So as more and more institutions said, 8:28you know, I'm focused on agriculture,  you know, to be able to improve seeds, 8:33in these kinds of environments, they'll find  their own context in which—that—matters that 8:37the creators of AI did not anticipate at  the beginning. So I think that that is, 8:41then—the fruit of surprises will be like, why, we  didn't even think that it could be used for that. 8:45And also, clever people will create new  business models associated with that. Like, 8:50it happened with the internet, of course, as well,  and that will be its own source of transformation 8:55and change in its own right. So I think all  of that is yet to unfold, right? What we're 8:59seeing is this catalyst moment of technology that  works well enough, and it can be democratized. 9:03Malcolm: Yeah. The next sort of conceptual  question: you know, we could loosely understand 9:09or categorize innovations, in terms of their  impact on the kind of, balance of power between 9:18haves and have-nots. Mm-hmm? 
Some innovations,  you know, obviously, uh, favor those who already 9:24have a—make the rich richer. Some—the—some,  it's a rising tide that lifts all boats, 9:30and some are biased in the other direction. They close the gap between. Is it possible 9:36to say, to predict, which of those  three categories AI might fall into? 9:41Dario: It's a great question. A first, observation  I would make on your first two categories is that 9:50it will be—both likely be true that the use of AI  will be highly democratized, meaning the number 9:55of people that have access to its power to make  improvements in terms of efficiency and so on 10:00will be fairly universal, and that the ones who  are able to create AI, may be quite concentrated. 10:09So if you look at it from the lens of who  creates wealth and value over sustained 10:15periods of time—particularly, say, in a  context like business—I think just being 10:20a user of AI technology is an insufficient  strategy. And the reason for that is, like, 10:27yes, you will get the immediate productivity  boost of, like, just making API calls and, 10:31that will be a new baseline for everybody.  But you're not accruing value in terms of 10:36representing your data inside the AI in a  way that gives you a sustainable competitive 10:41advantage. So what I always try to tell people  is, don't just be an AI user; be an AI value 10:46creator. And I think that that will have a lot of  consequences in terms of the haves and have-nots, 10:53as an example, and that will apply both to  institutions and regions and countries, etc. 10:58So I think it would be kind  of a mistake, right, to just 11:02develop strategies that are just about usage. Malcolm: Yeah. But to come back to that question 11:07for a moment, to give you a specific— suppose  I'm a, I'm an industrial farmer in Iowa with 10 11:15million in equipment, and blah, blah, blah. 
And  I'm comparing it to a subsistence farmer, someone 11:21in the developing world, who's got a cell phone, right. Over the next five years, whose, 11:27whose well-being rises by a greater amount? Dario: Yeah, I think, it's a good question, 11:34but it might be hard to do a one-to-one sort of  like attribution to just one variable in this 11:39case, which is AI. But again, provided  that you have access to a phone, right, 11:45and some way to be able to be connected. I do think—so for example, in that context, 11:51we've developed, we've done work with NASA,  as an example, to build geospatial models, 11:56using some of these new techniques. And  I think, for example, our ability to do 12:00flood prediction—I'll tell you an advantage of why  we'll be a democratization force in that context. 12:05Before, to build a flood model based on  satellite imagery was actually so onerous 12:11and so complicated and difficult that you would  just target to very specific regions. And then, 12:15obviously, countries prioritize their  own, right? But what we've demonstrated 12:19is actually you can extend that technique  to have like global coverage around that. 12:22So in that context, I would say it's  a force towards democratization—that 12:26everybody sort of would have access  if you have some kind of connectivity. 12:29Malcolm: That Iowa farmer might have a flood  model. The guy in the developing world definitely 12:34didn't, and now he's got a shot at getting one. Dario: Yeah, but now he has a shot at getting one. 12:37So there's aspects of it that—so long as we  provide connectivity and access to it—that 12:42there can be democratization forces. But I'll  give you another example that, that can be quite 12:46concerning, which is language, right? So there's  so much language, in English. 
And there is sort 12:54of like this reinforcement loop that happens,  that the more you concentrate—because it has 12:58obvious benefits for global communication  and standardization—the more you can enrich 13:03like base AI models based on that capability. If you have very resource-scarce languages, 13:09you tend to develop less powerful AI  with those languages, and so on. So one 13:14has to actually worry and, and focus on the  ability to actually represent, in that case, 13:21language is a piece of culture also in the AI  such that everybody can benefit from it too. 13:27So there's a lot of considerations  in terms of equity about the data, 13:31the data sets that we accrue, and what problems  are we trying to solve. I mean, you mentioned 13:36agriculture or healthcare and so on. If we only  solve problems that are related to marketing, 13:41as an example, that would be a less rich world in  terms of opportunity than if we incorporate many, 13:46many other broader sets of problems. Malcolm: Yeah. Who do you think—what do 13:50you think are the biggest impediments to the  adoption of, of AI as you would like—as you 13:56think AI ought to be adopted? I mean, if you would  look, what are the sticking points that you would— 14:01Dario: Look, in the end, I'm going to  give a nontechnological answer. The 14:05first one has to do with workflow, right? So even if the technology is very capable, 14:11the organizational change inside a company, to  incorporate into the natural workflow of people on 14:16how we work, is—it's a lesson we have learned over  the last decade is hugely important. Mm-hmm? So 14:22there's a lot of design considerations. There's  a lot of, how do people want to work, right? 14:28How do they work today? And what is the  natural entry point for AI? So that's 14:31like number one. 
And then the second  one is, you know—for the broad, uh, 14:37value-creation aspect of it—is the understanding  inside the companies of how you have to curate and 14:44create data, to combine it with external  data such that you can have powerful 14:49AI models that actually fit your needs. And that aspect of what it takes to actually 14:55create and curate the data for this modern AI—um,  it's still a work in progress, right? I think part 15:02of the problem that happens very often when I talk  to institutions is that they say, “AI, yeah, yeah, 15:06yeah, I'm doing it, I've been doing it for a long  time.” And the reality is that that answer can 15:12sometimes be a little bit of a cop-out, right? I know you were doing machine learning. You were 15:16doing some of these things, but actually the  latest version of AI, or what's happened with 15:21foundation models—not only is it very new, it's  very hard to do. And honestly, if you haven't 15:27been, assembling very large teams and spending  hundreds of millions of dollars of compute—in sum, 15:32you're probably not doing it right. You're doing  something else that is in the broad category. And 15:37I think the lessons about what it means to make  this transition to this new wave is still in early 15:42phases of understanding. Malcolm: So what would you say? I want to give you a couple of 15:45examples of people in real-world  positions of responsibility. 15:51Imagine I'm sitting right here. So imagine  that I am the President of a small liberal 15:55arts college. And I come to you and I say,  Darío, I keep hearing about AI. My college 16:00has— I'm making this much money. If—that  every year, my enrollment's declining, 16:09I feel like this maybe is an opportunity. What  is the opportunity for me? What would you say? 16:15Dario: So it's probably in a couple of segments  around that, right? 
All one has to do is, well, 16:21what is the implications of this technology inside  the institution itself, inside of the college, 16:26and how we operate? And, can we improve, for  example, efficiency? Like if you're having 16:31very low levels of, of sort of margin to be  able to reinvest, is, you know, you run IT, 16:38you run, infrastructure, you run many things  inside the college. What are the opportunities 16:43to increase the productivity or automate and drive  savings such that you can reinvest that money into 16:49the mission of education, right?—as an example. Malcolm: So number one is operational efficiency. 16:53Dario: Operational efficiency, is a big one. I  think the second one is: within the context of 16:58the college, there's implications for the  educational mission in its own right. How 17:01will—how does a curriculum need to  evolve, or not? What are acceptable 17:06use policies for some of these AI? I don't think—we've all read a lot 17:09about like what can happen in terms of exams  and, and so on, and cheating and not cheating, 17:13or what—are they actually positive elements  of it in terms of how curriculum should be 17:17developed? And professions? Sustain around  that. And then there's another, third, 17:21dimension which is the outward-oriented element  of it, which is like prospective students, right? 17:25So, which is, frankly speaking, a big  use case that is happening right now, 17:29which in the broader industry is called “customer  care” or “client care” or “citizen care.” So—and 17:33this question will be— education. Like, you  know, “Hey, are you reaching the right students?” 17:38Around that—that may apply to the college. How can you create for them, for example, 17:42an environment to interact with the college, and  answering questions? That could be a chatbot, 17:46or something like that, to learn about it. And  personalization. So I would say there's, like, 17:50at least three lenses with which  I would give advice, right? 
The— 17:53Malcolm: The second, let's pause on the second  one though, because it's really interesting. 17:57So I really can't assign an essay anymore, can I? Dario: Can I assign an essay? 18:03Malcolm: Can I say, “Write me a research paper and come back to me in three weeks?” Can I do that anymore? 18:08Dario: I think you can. Malcolm: How do I do that? 18:11Dario: I think you can. Look, this—so there's two questions around that. I think that if one goes and explains in the context, 18:19like, “What is it? Why are we here? Why are we  in this class? What is the purpose of this?” And, 18:24one starts with assuming, like an element of,  like, decency in people, or people are there, 18:28like, to learn, and so on, and you just give  a disclaimer: “Look, I know that one option 18:32you have is, like, just, put the essay question  and click ‘Go,’ and, like, and give an answer, 18:37you know? But that is not why we’re here, and that  is not the intent of what we’re trying to do.” 18:41So first I would start with the—sort of  like the norms of intent and decency, 18:47and appeal to those, as step number one. Then we  all know that there will be a distribution of use 18:52cases—that people like that will come in  one ear and come out of the other and do 18:56that. And,—so for a subset of that, I think the  technology is going to evolve in such a way that, 19:02we will have more and more of the ability to  discern—right?—you know when that has been 19:06AI generated, right? And, created. It won't be  perfect, right? But there's some elements that 19:12you can—imagine inputting the essay, and you  say, “Hey, this is like—it— .” And for example, 19:18one way you can do that, just to give you  an intuition, you could just have an essay, 19:21uh, that you write with pencil and paper at the  beginning. You get a baseline of what your writing 19:26is like. 
And then later, when you, generate it,  there'll be obvious differences around what kind 19:33of writing has been generated from the other. Malcolm: Yeah, but you've turned—it's—everything 19:37you're describing makes sense, but it greatly—in  this, in this respect, at least, it seems to 19:42greatly complicate the life of the teacher.  Whereas the other two use cases seem to kind of 19:47clarify and simplify the role, right? Suddenly,  reaching students, prospective students, sounds 19:55like I can do that much more kind of efficiently. Yeah, I can bring down administration costs, 19:59but the teaching thing is tricky. Dario: Well, until we develop the new norms, 20:05right? I know it's an abused analogy, but  calculators—we deal, we dealt with that too, 20:10right? And, it says, “Well—calculator. What  is the purpose of math? How are we going to 20:14do this?” and so on. And we have— 20:16Malcolm: Can I tell you my dad's calculator story? Dario: Yes, please. 20:18Malcolm: My father was a mathematician.  Taught mathematics at the University of 20:22Waterloo in Canada. And in the ’70s, when  people started to get pocket calculators, 20:27his students demanded that they be able  to use them. And he said no, and he—they 20:31took him to the administration and he lost. So he then changed. Completely threw out all 20:37of his old exams. Introduced new exams, where  there was no calculation. It was all like, 20:44“deep think,” you know. Figure out the problem  on a conceptual level and describe it to me. And 20:49they were all—students deeply unhappy that  he had made their lives more complicated. 20:53Dario: But it's to your point.  That's the point. To your— 20:56Malcolm: Point. Right. The result was probably  a better education. Right. He just removed the 21:02element that they could gain with their pocket  calculators. I suppose it's a version of that. 21:06Dario: I think it's a version of that.  
And so I think they will develop the 21:09equivalent of what your father did. And I think people say, you know what, 21:11it's like—these kinds of things, everybody's doing  it generically and none of it has any meaning 21:15because all you're doing is pressing buttons. And  like the intent of this was something which was to 21:19teach you how to write or to think or something.  There may be a variant of how we do all of this. 21:23I mean, obviously some version of that that  has happened is like, okay, we're all going 21:27to sit down and do it with pencil and paper and  no computers in the classroom, but there'll be 21:30other variants of creativity that people will  put forth to say, you know what? You know, 21:34that's a way to solve that problem too. Malcolm: But this is interesting, 21:37because—to stay on this analogy—we're  really talking about a profound rethinking, 21:43just—using a college as an example. A real  profound rethinking of the way—there's no 21:50part of this college that's unaffected by AI, (a).  (B), in one case, I've made everyone's job easier; 21:57in one case I've made—I'm asking us to really  rethink from the ground up what “teaching” means. 22:03In another case, I've automated systems  that I didn't think of. I mean, it's like, 22:07that's right. That's all—it's not all—that's  a lot to ask someone who got a PhD in medieval 22:12language and literature, 40 years ago. Dario: Yeah, but you know, I'll tell 22:16you a positive sort of development that I'm  seeing. The sciences around this, which is, 22:21you're seeing—as you see more and more examples  of applying AI technology within the context 22:27of like historians too as an example, right? When you have archival and—you know, and you have 22:32all these books, and being able to actually help  you as an assistant, right, around that. But not 22:36only with text now, but with diagrams, right? And,  uh, I've seen it in anthropology too, right? 
And, uh, in archaeology, with examples of engravings and translations and things. That can happen. So, as you see people in diverse fields applying these techniques to advance how to do physics or how to do chemistry, they inspire each other, right? And they say, how does it apply to my area? So as that happens, it becomes less of a chore of, like, my God, how do I have to deal with this? Instead, it's triggered by curiosity. It's triggered by—you know, there'll be faculty who say, you know what, "Let me explore what this means for my area." And they will adapt it to the local context—to the local, you know, uh, language, and the profession itself. So I see that as a positive vector.

It's not all going to feel like homework, you know? It's not going to feel like, "Oh my God, this is so overwhelming," but rather very practical: see what works. What have I seen others do that is inspiring? And what am I inspired to do? How is this going to help my career? I think that's going to be an interesting question for, you know, those faculty members, for the students and professionals.

Malcolm: Sorry, I'm gonna stick with this example alone, because it's really interesting. I'm curious—following up on what you just said—one of the most persistent critiques of academia, but also of many corporate institutions, um, in recent years has been "siloing," right? Different parts of the organization going off on their own and not speaking to each other. Is a real potential benefit of AI that it's a simple tool for breaking down those kinds of barriers? Is that an elegant way of sort of summing that up?
Dario: I really think—and I was actually just having a conversation with a provost very much on this topic very recently, exactly on that—which is: there's all this appetite, right? To collaborate across disciplines. There are a lot of attempts toward that goal, right? Creating interdisciplinary centers, creating dual-degree programs or dual-appointment programs. But actually, a lot of progress in academia happens by methodology too. Right? When some methodology gets adopted—the most famous example of that is the scientific method—it also provides a way to speak to your colleagues across different disciplines.

And I think what's happening in AI is linked to that. Within the context of the scientific method, as an example, the methodology by which we do discovery—the role of data, the role of these neural networks, of how we actually find proximity of concepts to one another—is actually fundamentally different than how we've traditionally applied it.

So, as we see people across more professions applying this methodology, it is also going to give them some element of common language with each other, right? And in fact, in this very high-dimensional representation of information that is present in neural networks, we may find amazing adjacencies or connections of themes and topics in ways that the individual practitioners cannot describe, but that will yet be latent in these large neural networks.

We are going to suffer a little bit from causality—from the problem of, like, "Hey, what's the root cause of that?" Because I think one of the unsatisfying aspects of this methodology is that it may give you answers for which it doesn't give you good reasons for where the answers came from.
And then there will be the traditional process of discovery, of saying: if that is the answer, what are the reasons? So we're gonna have to do this sort of hybrid, uh, way of understanding the world. But I do think that common layer of AI is a powerful new thing.

Malcolm: Yeah. A couple of random questions that come to mind as you talk. In the writers' strike that just ended in Hollywood, one of the sticking points was how the studios and writers would treat AI-generated content—would writers get credit if their material was somehow the source for AI? But more broadly, did the writers need protections against the use of—. I could go on. You know what? You probably were familiar with all of this. Had you been—I don't know whether you were, but had either side called you in for advice during that? Had the writers called you and said, "Dario, what should we do about AI? And how should that be reflected in our contract negotiations?" What would you have told them?

Dario: The way I think about that is that I would divide it into two parts. Pieces. First is: what's technically possible, right? And anticipate scenarios, like, what can you do with voice cloning? For example, there's been, um, dubbing, right? Let's just take that topic. Around the world, there were all these folks who would dub people in other languages. Well, now you can do these incredible renderings; I mean, I don't know if you've seen them, where, you know, you match the lips—it's your original voice, but speaking any language that you want. That's the thing. So basically that has a set of implications around it. I mean, just to give an example.
So I would say: create a taxonomy that describes technical capabilities that we know of today, and applications to the industry, and examples like, "Hey, I could film you for five minutes and I could generate two hours of content of you, and I don't have to—you know, if you get paid by the hour, obviously I'm not paying you for the other thing." So I would say "technological capability," and then map, with their expertise, the consequences of how it changes the way they work, or the way they interact, or the way they negotiate, and so on. So that would be one element of it.

And then the other one is a non-technology-related matter, which is an element of—almost of distributive justice. It's like, who deserves what? Right? And who has the power to get what? And then that's a completely different discussion. That is to say, well, if this is the scenario of what's possible, you know, what do we want? And what are we able to get? I think that's a different discussion, which is as old as life.

Malcolm: Which one do you do first?

Dario: I think it is very helpful to have an understanding of what's possible and how it changes the landscape, uh, as part of a broader discussion—right?—and a broader negotiation. Because you also have to see the opportunities, because there will be a lot of ground to say, "If we can do it in this way, and we can all be that much more efficient in getting this piece worked on or this filming done..." But we have a reasonable agreement about how we—both sides—benefit from it, right? Then that's a win-win for everybody, right? So I think that would be a golden triangle, right?

Malcolm: Here's my reading, and I would like you to correct me if I'm wrong. And I'm likely to be wrong. Uh, when I looked at that strike, I said: if they're worried about AI—the writers are worried about AI.
That seems silly. It should be the studios who are worried about the economic impact of AI. In the long run, doesn't AI put the studios out of business long before it puts the writers out of business? I only need the studio because the costs of production are sky-high and overwhelming. Whereas if I have a tool that introduces massive technological efficiencies to the production of movies, then why do I need a studio? Why would they be the scared ones?

Dario: Or maybe—or maybe you need, like, a different kind of studio.

Malcolm: A different kind of studio?

Dario: A different kind of studio.

Malcolm: What do you mean? In this strike, the frightened ones were the writers, not the studios. Wasn't that backwards?

Dario: I haven't thought about it. But the implications of it—it goes back to what we were talking about before. The implications, because they're so horizontal—it is right to think about it. Like, what does it do to the studios as well, right? But the reason why that happens is the order of negotiations, or who first got concerned about it and did something about it—right?—which is in the context of the strike. Um, you know, I don't know what equivalent conversations are going on inside the studios and whether they have a war room saying what this is going to mean for us, right? It doesn't get exercised through a strike, but maybe through a task force inside the companies about what they are going to do, right?

Malcolm: Well—and to go back to your thing, you said the first thing you do is make a list of what the technological capabilities are. But don't technological capabilities change every—? I mean, they do. You're racing ahead so fast. So you can't—can you have a contract?
Again, I'm sorry for getting into the weeds a little here, but this is interesting. You can't have a five-year contract if the contract is based on an assessment of technological capabilities in 2023. Because by the time we get to 2028, it's totally different, right?

Dario: Yeah, but where I was going is that there are some abstractions around that. It's like: what can we do with my image, right? If I get the general category—that my image can be reproduced, that content can be generated, and so on—then let's talk about the abstract notion of who has rights to that, or whether we both get to benefit from it. If you get that straight, then yes, the nature of how the image gets altered or created will change underneath, but the concept will stay the same. And, uh, so I think what's important is to get the categories right.

Malcolm: Yeah. Yeah. If you had to think about the biggest technological revolutions of the postwar era—the last 75 years—we can all come up with a list. Actually, it's really fun to come up with a list. I was thinking about this when we were, you know—containerized shipping is my favorite. The green revolution. The internet. Where is AI in that list?

Dario: So I would put it first. In that context that you put forth, since World War II, undoubtedly computing as a category is one of those trajectories that has reshaped, right, our world. And within computing, I would say the role that semiconductors have had has been incredibly defining. I would say AI is the second example of that, as a core architecture, uh, that is going to have an equivalent level of impact. And then the third leg I would put to that equation will be quantum. Quantum information. I like to summarize it as: the future of computing is bits, neurons, and qubits.
And it's that idea of high-precision computation—the world of neural networks and artificial intelligence, and the world of quantum. And the combination of those things is going to be the defining force of the next 100 years in that category of computing. But it makes the list for sure.

Malcolm: If it's that high up on the list—this is a total hypothetical—if you were starting over; if you were starting at IBM right now—would you say, "Oh, our AI operations actually should be way bigger"? Like, how many thousands of people are working for you?

Dario: So within the research division, uh, it's about 3,500 scientists.

Malcolm: So in a perfect world—if it's that big, isn't that too small a group?

Dario: Yeah. Well, that's just the research division. I mean, across IBM overall, there are tens of thousands of people working on that.

Malcolm: But I mean—so, starting from the beginning: we've got a technology that you're ranking with computing, you know, up there with it in terms of a world changer. So what I'm basically asking is: are we underinvested in this future?

Dario: So yeah, it's a good question. What I would say is that I think we should segment. How many people do you need on the creation of the technology itself? And what is the right size of research and engineers and compute to do that? And how many people do you need in the application of the technology—to create better products, to deliver services and consulting, and then ultimately to diffuse it through, you know, sort of all spheres of society? And the numbers are very different, and that is not different than anywhere else.
I mean, if you give examples—since you were talking about the context of World War II—how many people does it take to create an atomic weapon, as an example? It's a large number. It wasn't just Los Alamos; there were a lot of people in Oakland. It's a large number, but it wasn't a million people, right? Um, so you could have highly concentrated teams of people that, with enough resources, can do extraordinary scientific and technological achievements. And that, by definition, is always going to be 1 percent compared to the total volume that it's going to require to then deal with it.

Malcolm: Yeah. But the application side is almost infinite.

Dario: That's exactly—so that is where, in the end, the bottleneck really is. With, you know, thousands of scientists and engineers, you can create world-class AI. Right? You don't need 10,000 to be able to create the large language model and the generative model and so on. But you need thousands, and you need, you know, a very significant amount of compute and data. You need that. The rest is, "Okay, I build software," "I build databases," or "I build a software product that allows you to do inventory management," or "I build a photo editor," and so on. Now, that product incorporating the AI, modifying it, expanding it, and so on—well, now you're talking about the entire software industry. So now you're talking about millions of people, right, who are required to bring AI into their products. Then you go a step beyond the technology creators in terms of software and you say, well, okay, now what? The skills to help organizations go and deploy it in the Department of, you know, the Interior, right? And then, okay, now you need consultants and experts and people to work there to integrate it into the workflow.
So now you're talking about many tens of millions of people around that. So I see it as these concentric circles. But to some degree, in many of these core technology areas, just saying, "Well, I need a team of a hundred thousand people to create AI, or a new transistor, or a new quantum computer"—it's actually a diminishing return, right? In the end, too many people connecting with each other is very difficult.

Malcolm: But on the application side—just think of our example of that college. Just the task of sitting down with a faculty and working with them to reimagine what they do with this new set of tools in mind, with the understanding that the students coming in are probably going to know more about it than they do—that alone—I mean, that is a Herculean people problem.

Dario: It's a people problem. Yeah, that's why I started with the barriers to adoption. I mean, in the context of IBM, as an example—that's why we have a consulting organization, IBM Consulting, that complements IBM Technology, and the IBM Consulting organization has over 150,000 employees. Because of this question, right? Because you have to sit down and say, okay, what problem are you trying to solve? What is the methodology we're going to use? And here are the technology options that we can bring to the table. In the end, the adoption across, uh, our society will be limited by this part. The technology is going to make it easier and more cost-effective to implement those, uh, solutions. But you first have to think about what you want to do, how you're going to do it, and how you're going to bring it into the life of—in this context—a faculty member, or, uh, you know, the administrator and so on in this college, right?
Malcolm: That Hollywood notion, I thought, was really interesting—that in a Hollywood strike, you have to have this conversation about distributive justice, about how do we—that's a really hard conversation, right, to have. So this brings me to my next point, which is—we were talking backstage. You have two daughters, one in college, one about to go to college.

Dario: That's right.

Malcolm: So, they're both science minded.

Dario: Yeah.

Malcolm: So tell me about the conversations you have with your daughters. You have a unique conversation with your daughters because your advice to them is influenced by what you do for a living.

Dario: Yes, it's true.

Malcolm: Did you warn your daughters away from certain fields? Did you say, "Whatever you do, don't be"—you know?

Dario: No, no, no, no. That's not my style. I mean, for me, no. I try not to be preachy about that. So for me it was just about showing by example the things I love, right? And the things I care about. And then bringing them to the lab and seeing things, and then the natural conversations about things I'm working on, or interesting people I meet. So, to the extent that they have chosen that—and obviously this has an influence on them—it has been through seeing it, perhaps through my eyes, right? And what you see me do, and that I like my profession. Right?

Malcolm: But one of your daughters, you said, is thinking that she wants to be a doctor. But being a doctor in a post-AI world is surely a very different proposition than being a doctor in a pre-AI world. Have you tried to prepare her for that difference? Have you explained to her what you think will happen to this profession she might enter?

Dario: Yeah.
I mean, not in, like, you know, an incredible amount of detail, but yes—at the level of understanding what is changing; this information lens with which you can look at the world, and what is possible, uh, and what it can do; what is our role and what is the role of the technology, and how that shapes things at that level of abstraction, for sure. But not at the level of, "Don't be a radiologist, you know, because this is what we want for you."

Malcolm: I was going to say, if you're unhappy with your current job, you could do a podcast called Parenting Tips with Dario, which is just "an AI person gives you advice on what your kids should do," based on exactly this. Like, "Should I be a radiologist? Dario, tell me." It seems to be a really important question.

Dario: Yeah.

Malcolm: Let me ask this question in a more—I'm joking, but in a more serious way. I don't mean to use your daughter as an example, but let's imagine we're giving advice to somebody who wants to enter medicine. A really useful conversation to have is: what are the skills that will be most prized in that profession fifteen years from now, and are they different from the skills that are prized now? How would you answer that question?

Dario: Yeah. I think, for example—this goes back to: how is the scientific method, in this context the practice of medicine, going to change? I think we will see more changes in how we practice the scientific method and so on—as a consequence of what is happening with the world of computing and information: how we represent information, how we represent knowledge, how we extract meaning from knowledge as a method—than we have seen in the last 200 years.
So therefore, what I would strongly like to encourage is not, "Hey, use this tool for doing this or doing that," but rather, in the curriculum itself, an understanding of how we do problem solving in the age of data and data representation and so on; that needs to be embedded in the curriculum of everybody. That applies, I would say, actually quite horizontally, but certainly in the context of medicine and the sciences, for sure.

And to the extent that that gets ingrained, it will give us a lens so that, no matter what specialty they go into in medicine, they will say: actually, the way I want to tackle improving the quality of care is—in addition to all the elements that we have practiced in the field of medicine—this new lens. Are we representing the data the right way? Do we have the right tools to be able to represent that knowledge? Am I incorporating that with my own knowledge in a way that gives me better outcomes, right? Do I have the rigor of benchmarking and quality of results? So that is what needs to be incorporated.

Malcolm: In a perfect world, if I asked you and your team to rewrite the curriculum for American medical schools, how dramatic a revision is that? Are we tinkering with 10 percent of the curriculum or with 50 percent of it?

Dario: I think there would be a subset of classes that is about the method—the methodology. What has changed. Like, having this lens to understand it. And then within each class, that methodology will represent something that is embedded in it, right? So it will be substantive, but it doesn't mean replacing the specialization and the context and the knowledge of each domain. But I do think everybody should have sort of a basic knowledge of the horizontal, right? What is it?
How does it work? What tools do you have, what is the technology, and, like, you know, what are the dos and don'ts around that. And in every area, you say, "That thing that you learned? This is how it applies to, uh, anatomy, and this is how it applies to radiology," if you're studying that. "Or this is how you apply it in the context of discovery—right?—of cell structure, and this is how we can use it." Or "protein folding, and this is how it does—." So that way, you'll see a connective tissue throughout the whole thing.

Malcolm: Yeah. I would add to that. It's also this incredible opportunity to do what doctors are supposed to do but don't have time to do now. They're so consumed with figuring out what's wrong with you that they have little time to talk about the implications of the diagnosis. And what we really want is—if we can free them of some of the burden of what is actually quite a prosaic question, "What's wrong with you?", and leave the hard human things: should you be scared or hopeful? What do you need to do? Let me put this in the context of all the patients I've seen. That conversation, which is the most important one, is the one that gets squeezed out, it seems to me. So if we were reimagining the curriculum of med school, I would add—with, by the way, very little time; maybe we have to add two more years to med school, and that's not gonna be popular—the whole thing about bringing back the human side: now that I can give you ten more minutes, how do you use those ten more minutes?

Dario: But that reconceptualization you just did is exactly what we should be doing around that. Because I think the debate as to, "Well, am I gonna need doctors or not?" is actually not a very useful debate.
But rather this other question: "How is your time being spent? What problems are you getting stuck on?" I mean, I generalize this with the obvious observation that if you look around at our professions, our daily lives, we have not run out of problems to solve. So as an example: hey, if I'm spending all my time trying to do diagnosis, and I could do that ten times faster, and it actually allowed me to go and, um, you know, take care of the patients and all the next steps and what we have to do about it—that's probably a trade-off that a lot of doctors would take, right? Yeah. And then you say, well, to what degree does it allow me to do that? And I can do these other things that are critically important for my profession. So when you actually become less abstract, and we get past the futile conversation of "Oh, there are no more jobs and AI's gonna take all of them," which is kind of nonsense, you go back and say: in practice, in your context, right, for you—what does it mean? How do you work? What can you do differently? Actually, that's a much richer conversation. And very often we would find that there's a portion of the work we do where we say, "I would rather do less of that. This other part I like a lot. And if it is possible that technology could help us make that trade-off, I'll take it in a heartbeat." Now, poorly implemented technology can also create another problem. You say, hey, this was supposed to solve things, but the way it's being implemented is not helping me, right? It's making my life much more miserable, or I've lost the connection to how I used to work, etc. So that is why design is so important. That is why workflow is also so important in being able to solve these problems.
But it begins by, you know, going from the intergalactic to the reality of it—of that faculty member in the liberal arts college, or a practitioner of medicine in a hospital, and what it means for them, right?

Malcolm: Mm-hmm. Yeah. What struck me, Dario, throughout our conversation is, um, how much of this revolution is nontechnical. 'Cause to say, "You guys are doing the technical thing here, but the real revolution is going to require a whole range of people doing things that have nothing to do with software, that have to do with working out new human arrangements"—talking about that, I mean, I keep coming back to the Hollywood strike thing: that you have to have a conversation about our values as creators of movies; how are we going to divide up the credit, and the—

Dario: Exactly right!

Malcolm: Like, that's a conversation about philosophy.

Dario: It's in the grand tradition of why a liberal education is so important, in the broadest possible sense, right? There's no common conception of the good, right? That is always a contested, uh, dialogue that happens within our society. And technology is going to fit in that context too, right? So that's why, personally, as a philosophy, I'm not a technological determinist. Right? And I don't like it when colleagues in my profession, right, start saying, "Well, this is the way the technology is going to be, and by consequence, this is how society is going to be." I'm like, that's a highly contested claim, and if you want to enter into the realm of politics or other realms, go and stand up on a stool and discuss whether that's what society wants. You will find a huge diversity of opinions and perspectives, and that's what makes, you know, uh, in a democracy, the richness of our society.
And in the end, that is going to be the centerpiece of the conversation. What do we want? You know, who gets what? And so on. And that is—actually, I don't think it's anything negative. That's as it should be. Because in the end, it's anchored in who we want to be as humans, as friends, family, citizens, and we have many overlapping sets of responsibilities, right? And as a technology creator, my only responsibility is not just as a scientist and a technology creator; I'm also a member of a family, I'm a citizen, and I have many other things that I care about. And I think that sometimes, in the debate, the technological determinists start butting into what is the realm of justice and society and philosophy and democracy. And that's where they get the most uncomfortable, because it's like, "I'm just telling you, like, you know, uh, what's possible." And when there's pushback, it's like, yeah, but now we're talking about how we live. And how we work, and how much I get paid or not paid. So that technology is important. Technology shapes our conversation. But we're gonna have the conversation with a different language. As it should be. And technologists need to get accustomed to that—if they want to participate in that world, with its broad consequences, hey, get accustomed to dealing with the complexity of that world. Of politics, society, institutions, unions, all that stuff. And, you know, you can't be whiny about it—"They're not adopting my technology." That's what it takes to bring technology into the world.

Malcolm: Yeah, well said. Thank you, Dario, for this wonderful conversation. And thank you to all of you for coming and listening.

Dario: Thank you.

Malcolm: Dario Gil transformed how I think about the future of AI.
He explained to me how huge a leap it was when we went from chess-playing models to language-learning models. And he talked about how we still have a lot of room to grow. That's why it's important that we get things right.

The future of AI is impossible to predict. But the technology has so much potential in every industry. Zooming in on an academic or a medical setting showed just how close we are to the widespread adoption of AI. Even Hollywood is being forced to figure this out. Institutions of all sorts will have to be at the forefront of integration in order to unlock the full power of AI thoughtfully and responsibly. Humans have the power and the responsibility to shape the tech for our world. I, for one, am excited to see how things play out.

Smart Talks with IBM is produced by Matt Romano, Joey Fischground, David Zha, and Jacob Goldstein. We're edited by Lidia Jean Kott. Our engineers are Jason Gambrell, Sarah Bruguiere, and Ben Tolliday. Theme song by Gramoscope.

Special thanks to Andy Kelly, Kathy Callaghan, and the EightBar and IBM teams, as well as the Pushkin marketing team.

Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.

I'm Malcolm Gladwell.

This is a paid advertisement from IBM.