
IBM‑MIT Lab Builds AI Foundations

Key Points

  • Malcolm Gladwell introduces “Smart Talks with IBM,” focusing on how AI acts as a game‑changing multiplier for businesses, with guest Dr. David Cox, IBM’s VP of AI models and director of the MIT‑IBM Watson AI Lab.
  • Cox explains his dual role: leading the MIT‑IBM Watson AI Lab—a formal academic‑industry partnership launched in 2017 whose roots trace back to AI’s 1950s origins—and overseeing IBM’s development of large “foundation” generative models.
  • The MIT‑IBM collaboration highlights a long‑standing history, tracing back to IBM engineer Nathaniel Rochester’s 1956 Dartmouth workshop that coined “artificial intelligence,” showcasing how the partnership has evolved to push the frontiers of AI research.
  • Cox breaks down foundation models in accessible terms and describes how these massive generative AI systems enable new capabilities across industries, from product design to data analysis.
  • He also discusses the practical impacts of AI on business operations, the future of work, and how enterprises can leverage these models to drive innovation and competitive advantage.


**Source:** [https://www.youtube.com/watch?v=kLBdpNsXh1A](https://www.youtube.com/watch?v=kLBdpNsXh1A)
**Duration:** 00:38:52

## Sections

- [00:00:00](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=0s) **AI Foundations and Business Impact** - Malcolm Gladwell introduces Dr. David Cox, IBM Research VP and director of the MIT‑IBM Watson AI Lab, to discuss the evolution of AI, foundation models, and how artificial intelligence is reshaping business, work, and design.
- [00:03:50](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=230s) **IBM‑MIT AI Strategy Partnership** - David Cox explains how the IBM‑MIT collaboration places them at the forefront of AI research—especially foundation models—while pursuing broader societal impacts such as climate change mitigation and advanced materials development.
- [00:07:04](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=424s) **From Data to Self‑Supervised AI** - David Cox outlines how the surge in digitized data and faster computers paved the way for breakthroughs, culminating in self‑supervised learning that removes the need for labeled data and fuels the creation of increasingly powerful models such as ChatGPT.
- [00:10:06](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=606s) **Foundation Models: The AI Oven Analogy** - Jacob Goldstein likens foundation models to a versatile oven that replaces single‑purpose AI tools, emphasizing their broad utility and how they drastically cut the labor needed to build and deploy AI solutions.
- [00:13:13](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=793s) **Beyond Dams: Foundation Model Power** - David Cox likens traditional automation to building a dam only on large rivers—ignoring countless smaller data “puddles”—and explains that foundation models, trained on massive unlabeled corpora, serve as a universal base that lets users efficiently address any downstream task without starting from scratch.
- [00:16:32](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=992s) **Specialized Small Models Outperform Giants** - David Cox describes how a modest 2.7‑billion‑parameter model trained on biomedical literature surpasses far larger models in that domain, highlighting gains in efficiency, cost, and domain expertise while illustrating the industry's trade‑off between generic big models and task‑specific tools.
- [00:20:01](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=1201s) **AI Hallucinations and False Confidence** - The speakers illustrate how language models can confidently fabricate information—such as a completely invented biography—showing that the persuasive tone of AI outputs can easily mislead users into accepting inaccurate content as true.
- [00:23:03](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=1383s) **Beyond Sentiment: AI Summarization Use Cases** - David Cox explains how modern LLMs extend traditional AI tasks like sentiment analysis to more complex applications such as automatically condensing customer call transcripts and meeting notes for quicker insight.
- [00:26:09](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=1569s) **Conversational AI Transforming Code Generation** - David Cox envisions AI dramatically reshaping software development by allowing users to describe requirements in natural language and have the system generate, reason about, and execute code autonomously.
- [00:29:17](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=1757s) **AI's Ubiquitous Business Revolution** - The speaker argues that AI will permeate every product, cutting costs and creating new opportunities for firms while freeing workers from tedious tasks, drawing parallels to past technological shifts like mechanized agriculture and email.
- [00:32:24](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=1944s) **Watsonx Enables AI-Driven Creative Enterprise** - The speaker outlines how watsonx unifies foundation models, tooling, and data management to let businesses safely harness AI’s power, while emphasizing the creative research role that fuels innovative enterprise solutions.
- [00:35:27](https://www.youtube.com/watch?v=kLBdpNsXh1A&t=2127s) **AI as a Natural Resource** - David Cox envisions AI becoming an abundant, resource‑like tool that seamlessly augments human work, creating both job displacement challenges and unprecedented opportunities for intellectual productivity.

## Full Transcript
0:02 Malcolm Gladwell: Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio and IBM. I’m Malcolm Gladwell. This season, we’re continuing our conversation with New Creators, visionaries who are creatively applying technology in business to drive change — but with a focus on the transformative power of artificial intelligence and what it means to leverage AI as a game-changing multiplier for your business. Our guest today is Dr. David Cox, VP of AI models at IBM Research, and director of the MIT-IBM Watson AI Lab, a first-of-its-kind industry-academic collaboration between IBM and MIT, focused on the fundamental research of artificial intelligence. Over the course of decades, David Cox has watched as the AI revolution steadily grew from the simmering ideas of a few academics and technologists into the industrial boom we are experiencing today. Having dedicated his life to pushing the field of AI towards new horizons, David has both contributed to and presided over many of the major breakthroughs in artificial intelligence. In today’s episode, you’ll hear David explain some of the conceptual underpinnings of the current AI landscape—things like foundation models (in surprisingly comprehensible terms, I might add). We’ll also get into some of the amazing practical applications for AI in business, as well as what implications AI will have for the future of work and design. David spoke with Jacob Goldstein, host of the Pushkin podcast What’s Your Problem? A veteran business journalist, Jacob has reported for the Wall Street Journal, the Miami Herald, and was a longtime host of the NPR program “Planet Money.” Okay! Let’s get to the interview.

2:05 Jacob Goldstein: Great. So tell me about your role at IBM.

2:07 David Cox: So I wear two hats at IBM. So, one, I'm the IBM director of the MIT-IBM Watson AI Lab. So that's a joint lab between IBM and MIT where we try and invent what's next in AI. It's been running for about five years. And then more recently, I started as the vice president for AI models, and I'm in charge of building IBM's foundation models—building these big models, generative models, that allow us to have all kinds of new, exciting capabilities in AI.

2:35 Jacob Goldstein: So I want to talk to you a lot about foundation models, about generative AI, but before we get to that, let's just spend a minute on the IBM-MIT collaboration. Where did that partnership start? How did it originate?

2:50 David Cox: Yeah, so actually, it turns out that MIT and IBM have been collaborating for a very long time in the area of AI. In fact, the term “artificial intelligence” was coined in a 1956 workshop that was held at Dartmouth, and it was actually organized by an IBMer, Nathaniel Rochester, who led the development of the IBM 701. So we've really been together in AI since the beginning. And as you know, AI kept accelerating more and more and more. I think there was a really interesting decision to say, “Let's make this a formal partnership.” So IBM in 2017 announced that it'd be committing close to a quarter billion dollars over 10 years to have this joint lab with MIT—and we located ourselves right on the campus, and we've been developing very, very deep relationships where we can really get to know each other, work shoulder to shoulder, you know, conceiving what we should work on next and then executing the projects. And it's really, you know—very few entities like this exist between academia and industry. It's been a really fun last five years to be a part of it.

3:53 Jacob Goldstein: And what do you think are some of the most important outcomes of this collaboration between IBM and MIT?

3:59 David Cox: Yeah, so we're really kind of the tip of the spear for IBM's AI strategy. So we're really looking at what's coming ahead. And, you know, in areas like foundation models, as the field changes, MIT people—faculty, students and staff—are interested in working on “What's the latest thing?” “What's the next thing?” We at IBM Research are very much interested in the same. So we can kind of put out feelers: interesting things that we're seeing in our research, interesting things we're hearing in the field—we can go and chase those opportunities. So when something big comes, like the big change that's been happening lately with foundation models, we're ready to jump on it. That's really the purpose. That's the lab functioning the way it should. We're also really interested in how do we advance the AI that can help with climate change or build better materials and all these kinds of things that are, you know, a broader aperture, sometimes, than what we might consider just looking at the product portfolio of IBM. And that gives us a breadth where we can see connections that we might not have seen otherwise. We can think of things that help out society and also help out our customers.

5:07 Jacob Goldstein: So the last, whatever, six months, say, there has been this wild rise in the public's interest in AI, right? Clearly coming out of these generative AI models that are really accessible—you know, certainly ChatGPT, language models like that, as well as models that generate images, like Midjourney. I mean, can you just sort of briefly talk about the breakthroughs in AI that have made this moment feel so exciting, so revolutionary, for artificial intelligence?

5:40 David Cox: Yeah. You know, I've been studying AI basically my entire adult life. Before I came to IBM, I was a professor at Harvard. I've been doing this a long time and I've gotten used to being surprised. It sounds like a joke, but it's serious. Like I'm getting used to being surprised at the acceleration of the pace. Again, it tracks, actually, a long way back. You know, there's lots of things where there was an idea that just simmered for a really long time. Some of the key math behind this—the stuff that we have today, which is amazing—there's an algorithm called “back propagation,” which is sort of key to training neural networks. That's been around since the ’80s, in wide use. And really, what happened was it simmered for a long time, and then enough data and enough compute came. So we had enough data because, you know, we all started carrying multiple cameras around with us. Our mobile phones have all these cameras, and we put everything on the internet and there's all this data out there. We caught a lucky break that there was something called a “graphics processing unit,” which, you know, turns out to be really useful for doing these kinds of algorithms—maybe even more useful than it is for doing graphics. (They're great at graphics too.) And things just kept kind of adding to the snowball. So we had deep learning, which is sort of a rebrand of neural networks that I mentioned, from the ’80s. And that was enabled, again, by data, because we digitized the world, and compute, because we kept building faster and faster and more powerful computers. And then that allowed us to make this big breakthrough. And then, you know, more recently, using the same building blocks, that inexorable rise of more and more and more data met a technology called “self-supervised learning.” The key difference there: in traditional deep learning—you know, for classifying images, like, “Is this a cat or is this a dog?” in a picture—those technologies require supervision. So you have to take what you have and then you have to label it. So you have to take a picture of a cat and then you label it as a cat. And it turns out that, you know, that's very powerful, but it takes a lot of time to label cats and to label dogs. And, you know, there's only so many labels that exist in the world. So what really changed more recently is that we have self-supervised learning, where you don't have to have the labels. We can just take unannotated data. And what that does is it lets you use even more data. And that's really what drove this latest, sort of, rage. And then all of a sudden we start getting these really powerful models, and then—really this has been, you know, again, simmering technologies, right? This has been happening for a while, and progressively getting more and more powerful. One of the things that really happened with ChatGPT and technologies like Stable Diffusion and Midjourney was that they made it visible to the public. You know, you put it out there; the public can touch and feel and they're, like, “Wow, not only is there a palpable change,” and “Wow, I can talk to this thing, wow, this thing can generate an image.” Not only that, but everyone can touch and feel and try. My kids can use some of these AI art generation technologies. And that's really just launched—you know, it's like a slingshot that's propelled us into a different regime in terms of the public awareness of these technologies.

9:03 Jacob Goldstein: You mentioned earlier in the conversation “foundation models,” and I want to talk a little bit about that. I mean, can you just tell me, you know, what are foundation models for AI and why are they a big deal?

9:16 David Cox: Yeah. So this term, “foundation model,” was coined by a group at Stanford, and I think it's actually a really apt term.
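The self-supervised idea David describes, deriving the training signal from raw data itself rather than from human-written labels, can be sketched in a deliberately tiny form. Everything below is illustrative only: real models use large neural networks, while this toy "model" is just a next-word counter. The key property is the same, though: the "label" (the following word) comes for free from unlabeled text.

```python
from collections import Counter, defaultdict

# Toy self-supervised objective: predict the next word. No human
# labeling is needed; the corpus supplies its own targets.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1          # the "label" is just the word that follows

def predict_next(word):
    """Most frequent continuation seen in the unlabeled corpus."""
    return nxt[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> on
```

Scaling this idea up (a transformer instead of a counter, web-scale text instead of one sentence) is, roughly, what lets models learn language structure without anyone labeling cats and dogs.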
Because remember, I said, you know, one of the big things that unlocked this latest excitement was the fact that we could use large amounts of unannotated data; we could train a model. We don't have to go through the painful effort of labeling each and every example. You know, you still need to have your model do something you want it to do. You still need to tell it what you want to do. You can't just have a model that doesn't, you know, have any purpose. But what a foundation model is—it provides a foundation, like a literal foundation. You can sort of stand on the shoulders of giants. You can have one of these massively trained models and then do a little bit on top. You know, you could use just a few examples of what you're looking for and you can get what you want from the model. In some cases, you don't need examples at all. So just a little bit on top now gets you the results that used to take a huge amount of effort, you know, to get from the ground up to that level.

10:14 Jacob Goldstein: I was trying to think of an analogy for sort of foundation models versus what came before. And I don't know that I came up with a good one, but the best I could do was this; I want you to tell me if it's plausible. It's like before foundation models, you had these sort of single-use kitchen appliances: a waffle iron if you wanted waffles, or a toaster if you wanted to make toast. But a foundation model is like an oven with a range on top. So it's like this machine and you could just cook anything with this machine.

10:47 David Cox: Yeah, that's a great analogy. They're very versatile. The other piece of it too is that they dramatically lower the effort that it takes to do something that you want to do. And sometimes—I used to say, about the old world of AI, I would say, you know, the problem with automation is that it's too labor intensive. Which sounds like I'm making a joke.

11:09 Jacob Goldstein: Indeed. Famously, if automation does one thing, it substitutes machines or computing power for labor. Right? So what does it mean to say AI is, or automation is, too labor intensive?

11:21 David Cox: It sounds like I'm making a joke, but I'm actually serious. And what I mean is that the effort it took in the old regime to automate something was very, very high. So if I need to go and curate all this data, collect all this data, and then carefully label all these examples, that labeling itself might be incredibly expensive and time consuming. And we estimate anywhere between 80 to 90% of the effort it takes to field an AI solution is actually just spent on data. So that has some consequences, which is, the threshold for bothering: you know, if you're going to only get a little bit of value back from something, are you going to go through this huge effort to curate all this data? And then when it comes time to train the model, you need highly skilled people that might be expensive or hard to find in the labor market. Are you really going to do something that's just a tiny little incremental thing? No, you're going to do only the highest-value things that warrant that level of investment.

12:20 Jacob Goldstein: Because you have to essentially build the whole machine from scratch. And there aren't many things where it's worth that much work to build a machine that's only going to do one narrow thing.

12:30 David Cox: That's right. And then you tackle the next problem and you basically have to start over and—there are some nuances here. Like for images, you can pretrain a model on some other task and change it around. So there are some examples of this kind of nonrecurring cost that we had in the old world too, but by and large, it's just a lot of effort. It's hard. It takes a large level of skill to implement. One analogy that I like: think about it as, you know, you have a river of data running through your company or your institution; traditional AI solutions are kind of like building a dam on that river. You know, dams are very expensive things to build. They require highly specialized skills and lots of planning. And, you know, you're only going to put a dam on a river that's big enough—that you're going to get enough energy out of it that it was worth your trouble. You're gonna get a lot of value out of that dam if you have a river like that, you know, a river of data. But actually, the vast majority of the water, you know, in your kingdom isn't in that river; it's in puddles and creeks and babbling brooks. And, you know, there's a lot of value left on the table, because it's like, well, there's nothing I can do about it. It's just that that's too, you know, low value. So it takes too much effort. So I'm just not going to do it. The return on investment just isn't there. So you just end up not automating things because it's too much of a pain. Now, what foundation models do is they say, “Well, actually, no; we can train a base model, a foundation that you can work on.” We don't have to specify what the task is ahead of time. We just need to, like, learn about the domain of data. So if we want to build something that can understand English language, there's a ton of English-language text available out in the world. We can now train models on huge quantities of it. And then it learns the structure. It learns how language works—you know, a good part of how language works—on all that unlabeled data. And then when you roll up with your task—you know, “I want to solve this particular problem”—you don't have to start from scratch. You're starting from a very, very, very high level. So that just gives you the ability to—you know, now all of a sudden everything is accessible. All the puddles and creeks and babbling brooks and kettle ponds. You know, those are all accessible now. And that's very exciting. It just changes the equation on what kinds of problems you could use AI to solve.

14:52 Jacob Goldstein: And so foundation models basically mean that automating some new task is much less labor intensive. The sort of marginal effort to do some new automation thing is much lower because you're building on top of the foundation model rather than starting from scratch.

15:08 David Cox: Absolutely.

15:10 Jacob Goldstein: So that is, like, the exciting good news. I do feel like there's a little bit of a countervailing idea that's worth talking about here. And that is the idea that, even though there are these foundation models that are really powerful, that are relatively easy to build on top of, it's still the case, right, that there is not some one-size-fits-all foundation model. So, you know, what does that mean and why is that important to think about in this context?

15:38 David Cox: Yeah. So we believe very strongly that there isn't just “one model to rule them all.” There's a number of reasons why that could be true. One, which I think is important and very relevant today, is how much energy these models can consume. So these models, you know, can get very, very large. So one thing that we're starting to see, or starting to believe, is that you probably shouldn't use one giant sledgehammer model to solve every single problem. You know, like we should pick the right-size model to solve the problem.
We shouldn't necessarily assume that we need the biggest, baddest model for every little use case. We're also seeing that, you know, small models that are trained to specialize on particular domains can actually outperform much bigger models. So bigger isn't always even better.

16:32 Jacob Goldstein: So they're more efficient, and they do the thing you want them to do better as well.

16:36 David Cox: That's right. So, Stanford, for instance—a group at Stanford trained a model. It was a 2.7 billion parameter model, which isn't terribly big by today's standards. They trained it just on the biomedical literature. You know, this is the kind of thing that universities will do. And what they showed was that this model was better at answering questions about the biomedical literature than some models that were, you know, 100 billion parameters—many times larger. So it's a little bit like asking an expert for help on something versus asking the smartest person you know. The smartest person you know may be very smart, but they're not gonna beat expertise. And then, as an added bonus, this is now a much smaller model. It's much more efficient to run. It's cheaper. So there's lots of different advantages there. So I think we're gonna see a tension in the industry between vendors that say, “Hey, this is the one big model,” and then others that say, “Well, actually, there's, you know, lots of different tools we can use that all have this nice quality that we outlined at the beginning.” And then we should really pick the one that makes the most sense for the task at hand.

17:43 Jacob Goldstein: So there's sustainability—basically, efficiency. Another kind of set of issues that come up a lot with AI are bias and hallucination. Can you talk a little bit about bias and hallucination—what they are and how you're working to mitigate those problems?

17:58 David Cox: Yeah. So there are lots of issues still. As amazing as these technologies are—and they are amazing; let's be very clear, lots of great things we're gonna enable with these kinds of technologies—bias isn't a new problem. So, basically we've seen this since the beginning of AI: if you train a model on data that has a bias in it, the model is going to recapitulate that bias when it provides its answers. So if all the text you have is more likely to refer to female nurses and male scientists, then you're going to get models that—for instance, there was an example where a machine-learning-based translation system translated from Hungarian to English. Hungarian doesn't have gendered pronouns; English does. And when you asked it to translate, it would translate “they are a nurse” to “she is a nurse,” and it would translate “they are a scientist” to “he is a scientist.” And that's not because the people who wrote the algorithm were building in bias, coding in, like, “Oh, it's gotta be this way.” It's because the data was like that. We have biases in our society, and they're reflected in our data and our text and our images everywhere, and then the models—they're just mapping from what they've seen in their training data to the result that you're trying to get them to give, and then these biases come out. So there's a very active program of research—we do quite a bit of it at IBM Research and MIT, but also all over the community, in industry and academia—trying to figure out, “How do we explicitly remove these biases? How do we identify them? How do we know? How do we build tools that allow people to audit their systems to make sure they aren't biased?” So this is a really important thing. And, you know, again, this was here since the beginning of machine learning and AI. But foundation models and large language models and generative AI just bring it into even sharper focus, because there's just so much data and it's sort of baking in all of these different biases we have. So that's absolutely a problem that these models have. Another one that you mentioned was hallucinations. So even the most impressive of our models will often just make stuff up. You know, the technical term that the field has chosen is “hallucination.” To give you an example, I asked ChatGPT to create a biography of David Cox at IBM. And, you know, it started off really well. It identified that I was the director of the MIT-IBM Watson AI Lab and said a few words about that. And then it proceeded to create an authoritative but completely fake biography of me, where I was British, I was born in the UK, I went to British universities.

20:44 Jacob Goldstein: The authority, right? It's the certainty that is weird about it, right? It's dead certain that you're from the UK, et cetera.

20:52 David Cox: Absolutely, yeah. And it has all kinds of flourishes, like “I won awards in the UK.” So yeah, it's problematic, because it kind of pokes at a lot of weak spots in our human psychology, where if something sounds coherent, we're likely to assume it's true. We're not used to interacting with people who eloquently and authoritatively, you know, emit complete nonsense. Like—we could debate about that.

21:20 Jacob Goldstein: Yeah. We can debate about that, but yes, its sort of blithe confidence throws you off when you realize it's completely wrong, right?

21:28 David Cox: That's right. And we do have a little bit of a Great and Powerful Oz sort of vibe going sometimes, where we're like, “Well, you know, the AI is all-knowing, and therefore whatever it says must be true.” But these things will make up stuff very aggressively. And—you know, everyone can try asking it for their bio. You'll always get something that's of the right form, that has the right tone, but, you know, the facts just aren't necessarily there. So that's obviously a problem. We need to figure out how to close those gaps, fix those problems. There's lots of ways we could use them much more easily.

22:04 Malcolm Gladwell: I’d just like to say, faced with the awesome potential of what these technologies might do, it’s a bit encouraging to hear that even ChatGPT has a weakness for flamboyant, if fictional, versions of people’s lives. And while entertaining ourselves with ChatGPT and Midjourney is important, the way ordinary people use consumer-facing chatbots and generative AI is fundamentally different from the way enterprise businesses use AI. How can we harness the abilities of artificial intelligence to help us solve the problems we face in business and technology? Let’s listen on as David and Jacob continue their conversation.

22:45 Jacob Goldstein: We've been talking in a somewhat abstract way about AI and the ways it can be used. Let's talk in a little bit more of a specific way. Can you just talk about some examples of business challenges that can be solved with this kind of automation we're talking about?

23:03 David Cox: Yeah, so the sky's the limit. There's a whole set of different applications that these models are really good at. And basically, it's a superset of everything we used to use AI for in business. So, you know, the simple kinds of things are like, “Hey, if I have text and I have, like, product reviews and I want to be able to tell if these are positive or negative, let's look at all the negative reviews so we can have a human look through them and see what was up.” Very common business use case. You can do it with traditional deep-learning-based AI. So there's things like that, that are, you know—it's very prosaic, sort of. We're already doing it. We've been doing it for a long time. Then you get situations that were harder for the old AI. Like if I want to compress something. Say I have a chat transcript—like a customer called in and they had a complaint. They call back. Okay, now a new person on the line needs to go read the old transcript to catch up. Wouldn't it be better if we could just summarize that? Just condense it all down? Quick little paragraph. You know, “Customer called, they were upset about this,” rather than having to read the blow-by-blow. There's just lots of settings like that, where summarization is really helpful. “Hey, you have a meeting.” I'd like to just automatically have that meeting—or that email, or whatever—condensed down so I can really quickly get to the heart of the matter. These models are really good at doing that. They're also really good at question answering. So if I want to find out how many vacation days I have, I can now interact in natural language with a system that has access to our HR policies, and I can actually have a multiturn conversation, like I would have with an actual HR professional or customer service representative.
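The transcript-condensing use case David describes can be illustrated with a deliberately simple extractive sketch. Real LLM summarizers are abstractive (they write new text); this toy only picks the single most representative sentence, and the transcript string is invented for the example.

```python
# Toy extractive "summarizer": keep the sentence whose words are, on
# average, most frequent across the whole transcript. Only an
# illustration of condensing, not how LLM summarization works.
import re
from collections import Counter

transcript = ("Customer called about a late order. "
              "The order was placed two weeks ago. "
              "They want a refund if it does not ship this week.")

sentences = re.split(r"(?<=\.)\s+", transcript)
vocab = Counter(re.findall(r"[a-z]+", transcript.lower()))

def score(sentence):
    """Average corpus frequency of the sentence's words."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return sum(vocab[w] for w in words) / len(words)

summary = max(sentences, key=score)
print(summary)  # -> Customer called about a late order.
```

An LLM would instead generate a fresh one-paragraph digest ("Customer called, they were upset about this"), but the business value, skipping the blow-by-blow, is the same.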
So a big part of what this is doing is it's 24:57putting an interface—you know, when we think of computer interfaces, we're usually thinking 25:00about UI (user interface) elements, where I click on menus and there's buttons and all 25:05this stuff. Increasingly now we can just talk. Just in words, you can describe what you want. 25:11You want an answer, ask a question, you want to sort of command the system to do something, 25:17rather than having to learn how to do that, clicking buttons, which might be inefficient. 25:20Now we can just, sort of, spell it out. 25:22Jacob Goldstein: Interesting, right? The graphical user interface that we all sort of default to—that's not, like, the state of nature, 25:28right? That's a thing that was invented and just came to be the standard way that we interact 25:33with computers. And so you could imagine, as you're saying, like, chat, essentially, 25:38chatting with the machine could, could become a sort of standard user interface, just like 25:43the graphical user interface did, you know, over the past several decades. 25:47David Cox: Absolutely. And I think those kinds of conversational interfaces are going to 25:50be hugely important for increasing our productivity. It's just a lot easier if I don't have to 25:57learn how to use a tool, or I don't have to kind of have awkward, you know, interactions 26:01with my computer; I can just tell it what I want and it can understand. It could potentially 26:04even ask questions back to clarify, and—have those kinds of conversations. 26:09That can be extremely powerful. And in fact, one area where that's gonna, I think, be absolutely 26:14game-changing is in code, when we write code. Programming languages are a way for us to 26:22sort of— between our very sloppy way of talking and the very exact way that you need 26:28to command a computer to do what you want it to do. They're cumbersome to learn.
They 26:33can—you know, you create very complex systems that are very hard to reason about. And we're 26:37already starting to see the ability to just write down what you want and the AI will generate 26:41the code for you. And I think we're just going to see a huge 26:44revolution of—like, we just converse. You know, we can have a conversation to say what 26:48we want and then the computer can actually not only do fixed actions and, and, and do 26:53things for us, but it can actually even write code to do new things, you know, and, and 26:57generate the software itself. Given how much software we have, how much, 27:00um, just like craving we have for software—like we'll never have enough software in our world—the 27:06ability to have AI systems as a helper in that, I think we're going to see a lot 27:11of, a lot of value there. 27:13Jacob Goldstein: So if you, if you think about the different ways AI might be applied to business — 27:18I mean, you've talked about a number of the sort of classic use cases. What are some of the more “out there” use cases? 27:25What are some, you know, unique ways you could imagine AI being applied to business? 27:30David Cox: Yeah, there's—um, really the sky's the limit. I mean, we have one project 27:34that I'm kind of a fan of, where we actually were working with a mechanical engineering 27:39professor at MIT, working on a classic problem of how do you build linkage systems. 27:45Which are like, you know—imagine bars and joints and motors, you know, the things that— 27:49Jacob Goldstein: Building a thing, building a physical machine of some kind. 27:53David Cox: Yeah, like, real, like, metal and, you know, and— 27:57Jacob Goldstein: Nineteenth century. Just old-school industrial revolution. 28:00David Cox: Yeah. Yeah. But, but, you know, the little arm that's holding up my microphone 28:04in front of me, the cranes that build your buildings, you know, parts of your engines—this 28:08is like classical stuff.
It turns out that, you know, humans, if you want to build an 28:12advanced system, you decide what, like, curve you want to create. And then a human, together 28:18with a computer program, can build a five- or six-bar linkage. And then that's kind 28:23of where you top out. It just gets too complicated to— 28:25Jacob Goldstein: Huh. David Cox: We built a generative AI system 28:28that can build 20-bar linkages. Like arbitrarily complex. So these are machines that are beyond 28:33the capability of a human to design themselves. Another example: we have AI systems 28:40that can generate electronic circuits. You know, we, we had a project where we were working, 28:43where we were building better power converters, which allow our computers and our devices 28:48to be more efficient. Save energy. You know, less carbon output. 28:53I think the world around us has always been shaped by technology. If you look around, 28:58you know, just think about how many steps and how many people and how many designs went 29:01into the table and the chair and the lamp. It's really just astonishing. And that's already 29:08the fruit of automation and computers and those kinds of tools. But we're going to see 29:12that increasingly be, you know—and it's just going to be everywhere around us. 29:17Everything we touch is going to have been helped in some way to get, get to you, by AI. 29:23Jacob Goldstein: You know, that is a pretty profound transformation that you're talking about in business. How do you think about 29:29the implications of that, both for the business itself and also for, for employees? 29:35David Cox: Yeah. So I think for businesses, this is gonna cut costs, make new opportunities, 29:42delight customers. You know, like, there's just, you know—it's sort of all upside, right? 29:48For the workers, I think the story is mostly good too. You know, like—how many 29:53things do you do in your day that you'd really rather not, right?
And we're used to having 30:00things we don't like automated away. We didn't—if you didn't 30:04like walking many miles to work, like, you can have a car and you can drive there. Or 30:09we used to have a huge fraction, over 90%, of the US population engaged in agriculture, and then we mechanized it. 30:16Now very few people work in agriculture. A small number of people can do the work of a large number of people. 30:20And then, things like email, they've led to 30:23huge productivity enhancements, because I don't need to be writing letters and sending 30:27them in the mail. I can just instantly communicate with people. 30:30We just become more effective. Like, our jobs have transformed. Whether it's a physical 30:36job like agriculture or it's—whether it's a knowledge worker job, where you're sending 30:40emails and, and communicating with people and coordinating teams, we've just gotten better. 30:45And, you know, the technology has just made 30:47us more productive. And this is just another example of that. Now, you know, there are 30:51people who worry that, you know, “We'll be so good at that that maybe jobs will be 30:56displaced,” and that's, that's a legitimate concern, but just like now, in agriculture, 31:01you know, it's not like suddenly we had 90% of the population unemployed. You know? People 31:06transitioned to, to other jobs. And the other thing that we found too is that our appetite 31:13for, for doing more things is—as humans, is sort of insatiable. So even if we can dramatically 31:20increase how much, you know, one human can do, um, that doesn't necessarily mean we're 31:24going to do a fixed amount of stuff. There's an appetite to have even more. So we're going 31:28to, you know, continue to grow—grow the pie. So I, I think at least, uh, certainly 31:31in the near term, you know, we're going to see a lot of drudgery go away from work. 31:35We're going to see people be able to be more effective at their jobs.
You know, 31:40we will see some transformation in, in jobs and what they look like, but we've seen that 31:45before. And, the technology at least has the potential to make our lives a lot easier. 31:50Jacob Goldstein: So IBM recently launched watsonx, which includes watsonx.ai. 31:57Tell me about that. Tell me about, you know, what it is and the new possibilities that it opens up. 32:02David Cox: Yeah. So, watsonx is obviously 32:05a bit of a new branding on the Watson brand. T. J. Watson—that was the founder of IBM. 32:14And our AI technologies have had the Watson brand; watsonx is a recognition that 32:21there's something new. There's, there's something that actually has changed the game. 32:24We've gone from this old world of, “Automation is too labor intensive” to this new world 32:29of possibilities, where it's much easier to use AI. And what watsonx does is it brings 32:36together tools for businesses to harness that power. So, watsonx.ai includes foundation 32:44models that our customers can use. It includes tools that make it easy 32:48to run, easy to deploy, easy to experiment. There's a watsonx.data component, which allows 32:55you to sort of organize and access your data. So what we're really trying to do is give 33:00our customers a cohesive set of tools to harness the value of these technologies and at the 33:07same time be able to manage the risks and the things that you have to keep an eye on 33:12in an enterprise context. 33:14Jacob Goldstein: So we talk about the guests on this show as, as “New Creators,” 33:19by which we mean people who are creatively applying 33:23technology in business to drive change. And I'm curious how, how creativity plays a role 33:31in the research that you do. 33:33David Cox: I honestly—I think the creative aspects of this job—this is what makes this work exciting.
I should say, the folks who 33:43work in my organization are doing the creating and I get to sort of— 33:47Jacob Goldstein: You're, you're doing the managing, so they could do the creating. Yeah. 33:51David Cox: I'm helping them be their best. I do—I still get to get involved—in the 33:56weeds of the research, as much as I can. But there's something really exciting about inventing. 34:03One of the nice things about doing invention and doing research on AI in industry is it's 34:09usually grounded in a real problem that somebody is having. You know, a customer wants to solve 34:14this problem, it's losing money, or there could be a new opportunity. You identify that 34:19problem and then you, you, you build something that's never been built before, to do that. 34:25And I think that's honestly the adrenaline rush that keeps all of us in this field. 34:30How do you do something that nobody else on earth has, has done before or tried before? So that, 34:36that kind of creativity—and, and there's also creativity as well in identifying what 34:40those problems are, being able to understand the places where, you know, the technology 34:47is close enough to solving a problem, and doing that matchmaking between problems that 34:53are now solvable. And in AI, where the field is moving so fast, there's this constantly 34:58growing horizon of things that we might be able to solve. 35:02So that matchmaking, I think, is also a really interesting, creative problem. So, I think 35:08that's why it's so much fun. And it's a fun environment we have here too—it's people 35:13drawing on whiteboards and, you know, writing on pages of math. And, uh— 35:18Jacob Goldstein: Like in a movie, like in a movie. 35:21David Cox: Yeah, straight from central casting. 35:22Jacob Goldstein: You're drawing on—the drawing on the window, writing on the window with a Sharpie. 35:26David Cox: Absolutely. 35:27Jacob Goldstein: So, let's close with the really long view.
How do you imagine AI and people working together 20 years from now? 35:40David Cox: Yeah, it's really hard to make 35:43predictions. The vision that I like, actually—this came from an MIT economist named David Autor— 35:54was, “Imagine AI almost as a natural resource.” You know, it's like, we have—we 36:01know how natural resources work, right? Like there's an ore we can dig up out of the 36:04earth that comes from, you know—kind of springs from the earth. 36:08We usually think of that in terms of physical stuff. With AI, you can almost think of it as, like, 36:11“There's a new kind of abundance, potentially 20 years from now, where not only can we have 36:17things we can build or eat or use or burn or whatever. Now we have, you know, this ability 36:22to do things and understand things and do intellectual work.” And I think we can get 36:27to a world where automating things is just seamless; we're surrounded by the capability 36:33to augment ourselves—to, to get things done. And you could think of that in terms of, like, 36:39“Well, that's going to displace our jobs, because eventually the AI system is going 36:42to do everything we can do.” But you could also think of it in terms of, like, “Wow, 36:47that's just so much abundance that we now have. And really, how we use that abundance 36:51is, is sort of up to us.” You know, like, when you can—writing software is super easy 36:55and fast, and anybody can do it. Just think about all the things you can do now. 36:59Like, think about all the new activities. Think about all the ways we could use that 37:03to enrich our lives. That's where I like to see us in 20 years: you know, we can do just 37:10so much more than we were able to do before. 37:13Jacob Goldstein: Abundance. Great. Thank you so much for your time. 37:17David Cox: Yeah. It's been a pleasure. Thanks for inviting me. 37:20Malcolm Gladwell: What a far-ranging, deep conversation. I'm mesmerized by the vision David just described.
37:27A world where natural conversation between mankind and machines can generate creative solutions to our most complex problems. 37:35A world where we view AI not as our replacements, but as a powerful resource we can tap into to exponentially boost our innovation and productivity. 37:47Thanks so much to Dr. David Cox for joining us on Smart Talks. We deeply appreciate him sharing his huge breadth of AI knowledge with us, 37:56and for explaining the transformative potential of foundation models in a way even I can understand. 38:03We eagerly await his next great breakthrough. 38:07Smart Talks with IBM is produced by Matt Romano, 38:10David Zha, Nisha Venkat, and Royston Beserve, with Jacob Goldstein. We’re edited 38:16by Lidia Jean Kott. Our engineers are Jason Gambrell, Sarah Bruguiere, and Ben Tolliday. Theme song by Gramoscope. 38:25Special thanks to Carly Migliori, Andy Kelly, Kathy Callaghan, and the EightBar and IBM teams, as well as the Pushkin marketing 38:34team. Smart Talks with IBM is a production of Pushkin 38:37Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the 38:43iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.