

Will Million‑GPU Clusters Arrive?

Key Points

  • Industry leaders agree that a one‑million‑GPU cluster is unlikely to appear in the next three years, citing a forthcoming reset in ROI expectations that will drive more pragmatic scaling strategies.
  • AI companies have historically chased scale by amassing ever more data and compute, a formula that has fueled massive growth in data‑center demand and projected $250 billion in infrastructure spending by 2030.
  • Experts warn that usable training data is approaching saturation, meaning larger models no longer benefit proportionally from additional data and prompting a shift toward higher inference‑time compute.
  • This evolving landscape will shape discussions about the next wave of AI developments, including the role of agents and the path toward artificial general intelligence.



**Source:** [https://www.youtube.com/watch?v=GP4UrwbzLT8](https://www.youtube.com/watch?v=GP4UrwbzLT8)
**Duration:** 00:39:07

## Sections

- [00:00:00](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=0s) **Skepticism About Million‑GPU Clusters** - Experts on the Mixture of Experts podcast argue that a one‑million‑GPU cluster is unlikely within three years, foreseeing a reset in scaling expectations and a shift toward more rational, energy‑aware AI infrastructure.
- [00:03:08](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=188s) **Questioning the Scale-First Paradigm** - The speaker debates whether to keep relying on ever‑larger data and compute for machine learning, recognizing past successes and personal enthusiasm for the engineering challenges of scaling, while urging a reassessment of the motivations and limits of this approach.
- [00:06:12](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=372s) **Compact Distributional Learning for AI** - The speaker argues that AI should emulate animal-like efficient learning by building and updating compact causal distributions instead of relying on massive observational data, advocating a return to reinforcement‑learning–style approaches exemplified by AlphaZero and AlphaFold.
- [00:09:15](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=555s) **Debating the Limits of AI Scaling** - Panelists argue that merely increasing model size won't sustain performance gains, questioning whether scaling has already hit a wall and whether 2025 will mark the point where larger models cease to be the primary driver of AI advancement.
- [00:12:17](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=737s) **Brain Size, Architecture, and Human Intelligence** - A neuroscientist explains that human cognitive superiority stems from a distinctive mix of brain architecture, scaling, and environmental factors rather than sheer brain size.
- [00:15:23](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=923s) **Inference Compute Scaling & Overbuild** - The speakers argue that as businesses realize ROI from AI inference, demand will drive massive investment in inference hardware, leading to a temporary over‑building of data‑center capacity similar to early internet infrastructure.
- [00:18:25](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=1105s) **Integrating AI Agents into Products** - The panel debates hiring dedicated AI sales teams versus embedding AI agents directly into products, stressing seamless integration, realistic expectations, and the gap between current capabilities and lofty promises.
- [00:21:50](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=1310s) **Beyond Prompt: Building Controllable AI Workflows** - The speaker cautions that relying on lengthy prompts to direct AI agents is unsustainable and argues for explicit control points, system‑ and model‑level rules, and bounded autonomy to achieve robust, reliable real‑world automation.
- [00:24:53](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=1493s) **Practical LLM Agents for Workflow Automation** - The speaker argues that realistic, task‑specific LLM agents, not full AGI, will become increasingly useful for automating spreadsheet and other application workflows over the next year.
- [00:28:02](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=1682s) **AI Headlines, AGI Definition, and Progress** - The speaker argues that current AI hype eclipses the unclear definition of AGI, emphasizing incremental, domain‑specific integrations such as coding assistants while cautioning that true general intelligence remains distant and its development will not follow a simple linear trajectory.
- [00:31:25](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=1885s) **Debating AGI Timelines and Hype** - The speaker questions optimistic AGI predictions, contrasting genuine belief with possible marketing hype, using financial market analogies and referencing Anthropic's Dario Amodei.
- [00:34:33](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=2073s) **Realist Concerns Over LLM Impact** - Panelists discuss the dangers of unrestricted LLMs, the debate over AI timelines, and how customers weigh adoption benefits against existential worries.
- [00:37:39](https://www.youtube.com/watch?v=GP4UrwbzLT8&t=2259s) **Responsible AI Strategy for Enterprise** - The speaker stresses that enterprises must adopt AI with robust protocols, safety measures, and governance, pressuring model providers to build responsible tooling rather than merely accelerating development.

## Full Transcript
[0:00] Will we see a one-million-GPU cluster opening up sometime in the next three years? Kate Soule is a director of technical product management at Granite. Kate, welcome back to the show. What do you think?

[0:10] No, I really don't think so.

[0:12] Anthony Annunziata is director of AI Open Innovation. Anthony, welcome to the show for the first time. What's your take?

[0:18] I don't think so either.

[0:20] And then we've got a very special guest: Naveen Rao is VP of AI at Databricks, I think our first external guest on Mixture of Experts. Naveen, what do you think?

[0:29] Unlikely. I think there will be a reset in terms of expectations and ROIs, and that's probably going to drive a little more rationality into building this out.

[0:38] All right. All that and more on today's Mixture of Experts.

[0:46] I'm Tim Hwang, welcome to Mixture of Experts. Each week MOE brings you the insights you need to navigate the ever-changing, ever-unpredictable world of artificial intelligence. Today we're going to be talking about 2025, what the future holds for agents, and what the future holds for AGI, but first let's talk about the future of scale. AI companies have basically been chasing scale; unless you've been living under a rock, that won't be something that's unfamiliar to you. And where that has been most prominent is in data centers and power. McKinsey just came out with a report that estimated global demand for data centers could triple by 2030, with generative AI driving huge increases in energy consumption. And their estimate, which is mind-boggling, is that spend on this infrastructure will be $250 billion by 2030. So, Kate, maybe I'll kick it to you first.
[1:38] Can you give our listeners a little bit of intuition for why all these companies are chasing scale, and why that's been important to the history of AI so far?

[1:45] Yeah, sure thing, Tim. If you think of how these models have been trained and have evolved over time, it's basically been a really simple formula: take as much data as you can get, add as much compute as you have access to, and train a model for as long as you can afford in order to maximize performance, then ship it. So to date, the recipe for scale has been a mixture of getting more data and getting more compute, and obviously that's going to continue to drive costs and potentially drive demand for data centers. I think there are going to be some interesting things that start to emerge, though, that may break some of the trends we've seen. For one, we're just running out of data. We're seeing that all the data is being used no matter the model size, performance is no longer scaling proportionally with size, and there's only so much data out there that's worth training on. We're also seeing a lot more compute starting to be spent at inference time instead of just training time. As we continue to max out what we can pre-bake into the model as it trains, we're starting to ask whether there are other places, like when the model runs inference, where we could spend some extra compute to try and boost performance. So that also might start to break some of those trends.

[2:58] That's great. Well, Naveen, maybe I'll turn to you, because I know in your opening comment, when we were talking a little bit before the show, you were saying that, hey, look, maybe scale is not all you need, right?
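Kate's "simple formula" of more data plus more compute can be made concrete with a common rule of thumb from the scaling-law literature: training a dense transformer costs roughly 6 FLOPs per parameter per token. The sketch below is illustrative only; the 6·N·D approximation is an estimate, and the 70B-parameter / 2T-token figures are hypothetical numbers, not ones from the episode.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough transformer training cost: ~6 FLOPs per parameter per token.

    The 6*N*D rule of thumb is an approximation from scaling-law
    analyses, not an exact operation count.
    """
    return 6.0 * n_params * n_tokens

# Hypothetical run: a 70B-parameter model trained on 2T tokens.
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs")  # on the order of 8.4e23
```

Plugging in larger parameter or token counts shows why "just scale it up" translates directly into data-center demand: cost grows linearly in both factors, so 10x on each axis is 100x the compute.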
[3:08] And that we're going to have to really evaluate how we do machine learning. And to take Kate's comment there: why shouldn't we believe it, given it's been working so far? Why shouldn't it keep working? It feels like we had these huge successes just doing the dumb thing, which is add more data and add more compute. Why is now different?

[3:29] Well, I think you also have to look at the motivations. I was a scale maximalist for a long time. I started the first AI chip company back in 2014, and we built it for scale from day one; it was designed to be a scale-out sort of thing. And I'll offer a different explanation. Yes, everything Kate said is correct, but there's also another motivation. As an engineer, it's a really freaking cool problem to scale something bigger and bigger and bigger. It's just cool, and I've been seduced by that myself: oh, this is cool, I want to build that. And there are interesting challenges that get presented each time. The latency starts to matter; how do I deal with that? Can I come up with new strategies? So it's one of these things that's an intellectual pursuit: I'm going to keep going bigger and bigger and bigger. And it is a cool problem, it is a fun problem, but at some point you've got to solve problems, not just for their own sake. And I think that's where we've come to now. Like Kate said, we have run out of data, but also the paradigm of simply trying to train on more data isn't going to yield more results. And I'm happy to go into why.
These things are essentially conditional probability estimators, and you can never uncover every conditional probability in the data you have. I've said it many times: you will get to the eventual heat death of the universe before you uncover all of those. So there is always going to be some return from getting bigger and adding more data, but it's diminishing for real-world applications. So you need a new paradigm.

[5:04] Yeah, for sure. And do you want to talk a little bit about what you think that new paradigm is? In some ways it's the multi-billion-dollar question. But while we're speculating around 2025, I'm curious if you've got intuitions: if not this, if it actually turns out the data is failing us, which is kind of a crazy thing to say, then what comes after data? Data is what we know, in some sense, if you think about trying to train these models.

[5:27] Yeah, I think there are several facets to it. On the algorithmic side, it's intuitive. If anyone's been around a child learning, or even tried to train an animal, you don't train it through exhaustive observation. You don't put a kid in front of every observation of how to do a task and then expect them to learn it; that's exactly what we're asking of an AI model right now. Instead, we actually do it through trial and error: performing something and getting a reward or an anti-reward for the performance.

[6:01] You're talking reinforcement learning.

[6:03] Yeah, that's a big part of it. We do some form of reinforcement learning with neural networks. To be clear, it's kind of a weak version, but it is there.
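The trial-and-error loop Naveen describes (act, receive a reward, update an estimate) is the core of reinforcement learning, and can be sketched with a minimal epsilon-greedy bandit. This is an illustrative toy under made-up assumptions (three arms with invented reward means, Gaussian noise), not anything discussed on the show:

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, eps=0.1, seed=0):
    """Learn action values from reward feedback alone, with no labeled data.

    Each pull of an arm yields a noisy reward, and estimates are updated
    incrementally: the trial-and-error loop the transcript contrasts with
    purely observational training.
    """
    rng = random.Random(seed)
    n = len(true_means)
    estimates = [0.0] * n
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n)                               # explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])      # exploit
        reward = true_means[arm] + rng.gauss(0.0, 1.0)           # noisy feedback
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates

est = epsilon_greedy_bandit([0.1, 0.5, 0.9])
print(max(range(3), key=lambda a: est[a]))  # the best arm (index 2) is usually identified
```

The point of the sketch is the contrast Naveen draws: the learner never sees a dataset of "correct" actions, only the consequences of its own behavior, yet its estimates converge on the useful distribution.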
[6:12] So this concept does exist, but it's also predicated on this huge set of distributions that has been trained upon. What animals tend to do is actually be much more efficient. They observe some, they build some baseline distributions, and then they act and update these distributions kind of all at once. I think something toward that end is going to be the answer. There's no doubt in my mind it has to work that way, because this way we can be much more compact with our representations. We can actually discern causality. Causality may or may not exist from a physics standpoint, but the reality is, it's a more compact way to describe how the world tends to work, right? And so I think this is something that has to be uncovered in our models. We can't make it just hugely observational; it's not going to work.

[6:56] Yeah, that's super fascinating, and it's actually kind of funny to think that the recent history of deep learning is kind of out of order, right? I remember in the AlphaGo era it was like everything's going to be reinforcement learning, and then that sort of petered out as all of these other approaches had success. But you're almost saying we've got to get back to that, that that was actually true.

[7:14] I actually think AlphaFold and AlphaZero were very much on the right track. I think we didn't have the scale part figured out yet, but honestly, I think it was the right approach.

[7:24] I'd present also a complementary perspective, which is maybe a simple one. Research is hard. Research in AI is hard. When you find something that works, people jump on it and they run, right?
[7:36] So that's what happened a couple of years ago, and when that happens there's almost an irrational exuberance in the research community. Sometimes we think decisions are made more deeply than that, but sometimes you just find something that works, and you push it as hard as you can until it stops working as well, or until other things catch up, including costs and ROI.

[7:56] 100%.

[7:57] Yeah, for sure. And I think what's interesting, I mean, your title is specifically about open innovation, and I do think that is one thing to talk a little bit about. Traditionally, and by traditionally I mean the last 36 months, a lot of the breakthroughs have required access to this really big computer that no one else has access to. And I wonder, Anthony, what your predictions are about how these dynamics change. If scale is no longer the thing that really gets the breakthrough, are there just more opportunities elsewhere now? Are there going to be more people who can advance the state of the art without necessarily having access to a million-GPU cluster?

[8:33] Yeah, I think so. Just taking a little bit of what Naveen was saying: innovation at the architectural level, innovation at the feedback level, innovation in how AI systems are built. There's a huge opportunity for that in the open community and in universities, in players that I think have been left on the sidelines and have struggled to catch up with the scale story, just because of the centricity of compute. I think we're going to see even more of that, and I think it's really important. I think the other side of it is the product of a couple of years of just pushing ahead really hard.
It's that we have great open models out there that are very capable, and you've already seen a flourishing of innovation with them. But there's a lot more to go just with what we've built already, and with what's going to continue to come out.

[9:18] For sure. I want to force the panel to make some concrete predictions here. I think one of the interesting things about scale is there's always the dream: if you flip over a few more cards, maybe the model is just going to get that much better. And so it feels like the gas could run out of the scale car before we realize that scale is broken. But I'm curious: is 2025 the year where scale sort of breaks, where we're just like, actually, it turns out this is not going to work anymore?

[9:43] I think it already broke.

[9:44] You think it already broke? Why? Why do you think that?

[9:46] Show me a bigger model than GPT-4. No one built one. And there's a good reason for it, right? They probably have built one, but it didn't do anything all that special.

[9:56] Right, right.

[9:57] And I think that's been the issue. I think it's already...

[9:59] I asked for hot takes, and it feels like you're really delivering for us in the opinion department.

[10:02] Yeah, there you go, right? Show me something bigger than 1.6 trillion parameters. Not to say that there won't be a way that that yields advantages, but there's got to be more to it. Scale is not the only ingredient; you need that plus something else, and maybe then you'll get some superintelligence or whatever you want to call it, but we haven't cracked that yet.

[10:21] Kate, Anthony, I don't know if you'd agree: has scale already failed?
Are we already living in a kind of post-scale world, basically?

[10:28] I mean, I think there's an important part of the story that we haven't covered yet, which is that part of the advantage of scale right now is being able to boost the performance of smaller models. So maybe how far we can push the top of the spectrum has been maxed out to some degree, just on pure size alone, but I think there's still a lot more to talk about in terms of how to scale the amount of performance you can pack into fewer and fewer parameters at the smaller model sizes: using those large models as teacher models, as synthetic data generators, and in our AI feedback workflows in order to better improve smaller models. So we've seen a trend, right? If you look at what a model could do last year with 70 billion parameters, or a hundred billion, or a trillion parameters, you can do many of those same tasks in fewer than 8 billion parameters today. I don't think we've maxed out that curve of downsizing and packing more and more performance into smaller and smaller models.

[11:31] Yeah, the commercial dynamics of that are really interesting, because the rhetoric has often been: we're going to train this massive model and then we're going to sell an API against it, basically that it's going to be an external phenomenon. You're almost presaging a world where each of the big labs will have their gigantic model, but it'll kind of be for internal purposes, for minting things: the smaller models that are really where the commercial action is.
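The teacher-student setup Kate describes is commonly implemented as knowledge distillation: the small model is trained to match the large model's output distribution. Below is a minimal sketch of the temperature-softened soft-label loss; the logits and temperature are made-up illustrations, and real pipelines combine this with a hard-label loss and train at scale.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    A sketch of the teacher-student objective described above: the
    student is penalized for diverging from the teacher's distribution.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches its teacher's logits incurs zero loss.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
```

The temperature softens both distributions so the student also learns the teacher's "dark knowledge" about relative probabilities of wrong answers, which is one reason distilled small models punch above their parameter count.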
[11:53] I think there's this huge competitive advantage that model providers have simply by having their own in-house large model to boost and create the smaller models that everyone's actually going to use. No one wants to run a trillion-parameter model for real tasks at inference time, as cool as it is. It's cool, and everyone wants to say they have it, but no one wants to actually use it in real-world applications, right? It's the smaller models that will be much more cost-effective.

[12:16] I'll offer another set of data. I'm a neuroscientist from grad school, and I like to look at biology as a blueprint for many of these things, because over 4 billion years of evolution, some interesting things came about. If you look at brains, scale was not all you needed. Humans do not have the largest brains in the animal kingdom. Brains do scale with body size, so blue whales have the largest brain by mass, and very likely the most neurons as well. Dolphins have very large brains too, and elephants. So there are lots of mammals that have larger brains than us but clearly haven't had the same impact on the world that we've had. There are several reasons for that, but I actually argue there are some architectural differences in their brains that lead to this. We came up with the right mix of scale, architecture, and environment to actually build human intelligence.

[13:17] Yeah, I like that. It's almost like the adage: superintelligence is not all you need, right?
It's basically like, yeah, you might have a huge brain, but actually its impact may be quite limited in some ways.

[13:26] Yeah, that's a whole other topic I'd love to dive into if you want, but I don't even know what the hell superintelligence is, right? How do we even define this? I mean, I have some definitions, but everyone's talking about, oh, it's a foregone conclusion, it's happening in two years. Guys, we haven't even solved regular intelligence. You can't even define it.

[13:45] So we will definitely get to that. I guess, Anthony, maybe I'll turn to you on predictions and we'll close out this segment. I would just observe that the contracts to build these massive data centers are happening now, right? Regardless of what's happening in scale land, hardware and data centers are certainly scaling. So are we going to see, in 36 months, these huge facilities just kind of mothballed? Are big empty data centers the future?

[14:13] No, I don't think that's going to happen. I think there'll be some correction, but I think it'll be a smooth correction. Also, I think what's really important is that we've focused a bit on the training part of scaling, right? The scaling of deployment, whether it's medium-sized models, small models, or APIs to big models, will absolutely depend on the availability of cloud data centers. So I think the trend is reasonable; maybe it's inflated a bit, but I don't think it's going to...

[14:39] I would agree with that as well.
[14:41] So if we look, again, at where there are opportunities to add something that's not scale into the equation to try and improve and boost performance, I think we're starting to see there's a lot more innovation we can do with a single model, regardless of its scale, at runtime. Allowing it to run multiple times and generate multiple answers is a very basic example, in order to boost performance for any given inference. And if that trend continues, then we have a much larger population that's going to be driving up inferencing costs, and they only have to pay for their small fraction of it. Versus training, where you have to get these big model providers to dump billions of dollars into building these compute centers. But if everyone can start to see that lift and have their own ROI they can take advantage of, I think that's going to continue to drive investment in inference-time compute, even more so than what we have today. Any given API call could cost 10 times what it does today just because it could be worth it from a performance-gain perspective.

[15:43] Yeah, for sure. I think that'll be really interesting to see: you build these big data centers saying we're going to do the mother of all training runs, and it's like, actually, we need it for inference.

[15:53] So, yeah, I think that's a whole scaling of AI, practically, that really just started, and I fully agree with Kate. There are some parallels to the internet build-out as well. I think there was a lot of talk around the 2000 timeframe, when the stock market crashed, that, oh my God, did we overbuild a bunch of network infrastructure, blah, blah, blah.
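The "run it multiple times, generate multiple answers" idea Kate mentions is often implemented as self-consistency or best-of-n sampling: sample several answers and keep the most common one. A toy sketch, where the stand-in "model" is just a seeded random function invented for illustration, not a real model call:

```python
import random
from collections import Counter

def self_consistency(sample_answer, n_samples=16):
    """Spend extra inference-time compute: sample many answers, majority-vote.

    `sample_answer` stands in for one stochastic model call; real systems
    would sample a full reasoning chain per call and vote on final answers.
    """
    votes = Counter(sample_answer() for _ in range(n_samples))
    answer, _ = votes.most_common(1)[0]
    return answer

# Toy stand-in model: right 60% of the time, otherwise a random digit.
rng = random.Random(42)
noisy_model = lambda: "42" if rng.random() < 0.6 else str(rng.randrange(10))
print(self_consistency(noisy_model, n_samples=25))
```

This is also why inference demand scales the way the panel describes: every extra sample is another full forward pass, so a quality boost at answer time is paid for directly in compute per API call.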
[16:15] And, you know, in the fullness of time, none of that was true. It was underbuilt, if anything. But it took a few years. There was an overbuild for a short period, maybe two or three years, until all the demand caught up. And I think you're absolutely right, that's probably where we're going to end up. It's not going to be like these data centers just lie fallow, but there's going to be a bunch of articles that say everything was overbuilt, the bubble's burst, and then in two years it'll all make sense.

[16:42] Wouldn't that be a nice change? High availability of GPU compute at reasonable prices.

[16:46] I think it's already true, honestly.

[16:48] It's true, it's true.

[16:50] Dream the impossible dream.

[16:56] I'm going to move us on to our second segment. If there is one word that has characterized enterprise AI in 2024, it has been agents. Agents, agents, agents. Even on this show, it's become a little bit of an in-joke that agents need to come up at least once during the course of the episode. And there is news out that Salesforce is planning on hiring one thousand salespeople to support its push into the agents market. As we get into November here and start thinking about 2025, I just want to ask: is the future really agents? Am I going to have to hear more about agents in 2025? I guess, Naveen, I'll kick it over to you first: how do you think this market is going to evolve? Is hiring a thousand salespeople justified here?

[17:44] Well, honestly, no, because I know the state of the art of where agents are. But it's a great headline, and that's what they do.
[17:50] I mean, Salesforce is great at this, and it's fine. It's going to make them try to appear as more of a big AI player, and that's what they're going for with that statement. So I think it'll serve their needs. I don't think it's actually necessary, because when an agent's really good, when an agent really works, you're not going to have to do much to sell it. Honestly, it'll just automate things. But we're not there yet, and I think that's where the hype is a little bit ahead. And again, it's going to be one of these things: there'll be a big disillusionment in the next two years, and then it'll come back slowly and actually be super useful in three or four years. That's kind of how this is all going to work.

[18:25] So you have to join us in November 2025, and then maybe Naveen will be like, eh, maybe.

[18:30] Yeah, exactly. But I think you're not going to need a thousand people just to focus on agents. It's going to be something that's amazing for the products, people will use it, and their sales infrastructure should be able to handle such a thing. I mean, at Databricks we have similar problems. We've actually gone through this decision: should we hire a whole bunch of people to sell AI, or should we try to layer it into the product? We've actually done a mix. We haven't hired a thousand, but we have hired some people. And it comes with mixed success, because what you need to do is really integrate it into how people use the tools and make it somewhat invisible, and then it will sell itself.

[19:07] Yep, for sure. Anthony, I see you nodding vigorously.
[19:09] Maybe I can just ask you to go a little deeper into what Naveen said: the promises right now are not necessarily matching up with where we are. I'm curious if you've got thoughts on where the gaps are at the moment.

[19:21] Yeah, a few thoughts. So look, first: what is an AI agent? What is an agent in general? There's a large spectrum of what that means. I think if you look at some of the announcements, like the one you referenced, the use of "agent" is a relatively early version in terms of the level of automation and task execution. If by agent we mean a chat experience that has a bit more of a lookup and search capability, the ability to ask questions to get the right data, a little bit more interactive, a little bit more implicit reasoning, then I think we've already seen that, and I think it'll steadily and incrementally grow. If instead we're talking about an agent where you give it a goal and it'll go off and interact with a large variety of systems and execute without any supervision: no way, right? There are so many steps with compounded error across that whole environment. We can't even get high-accuracy basic Q&A in many industry domains yet; no way we're going to get that level of automated agent execution. So I think, like any story, there's a piece of it that's valid and true and will grow, and there's a long tail of research that has to be done to get to the full fruition of what an agent might mean.

[20:36] Yeah, for sure. And it's kind of funny.
I mean, one of the adages about AI is always that we don't know what we're talking about. People just say "AI" to refer to everything. Like, do you mean linear regression? That's not AI, you know? And it sounds like, Anthony, you're arguing that that's almost happening in the agent market, where the word has become so broad that it's like, are you just talking RAG? Because if that's the case, then sure, agents exist.

That's right. I agree. There's a definite stretching of the definition here.

Okay, maybe I can ask you to jump in. Anthony had these two pictures of the world. One is: is it just a chatbot that looks things up for you? And the other is: you tell the agent to do something and it does the whole thing in the real world. Are you kind of an optimist? Do you think we're going to get to that second vision? Or is that way, way off from your point of view?

Yeah. So, I'm pretty skeptical of the broad definition of agent as it exists today. An agent is really just a long prompt right now. It's a multi-page prompt where you're asking a model very nicely to do five different things, and to always think in a specific order, and to call APIs a specific way, and it works pretty well. Pretty well, but, you know, it is not controllable. There's no real thought yet, in my mind, on what control points need to be inserted along an agentic workflow in order to have any degree of robustness and reliability deployed out in the world.
And ultimately, I think there's a lot of work to do to transition from "here's a four-page word vomit of everything I want an agent to do, and it goes off and does the thing" to "here is a very controllable program that I've executed, with very clear rules, some of which are at the system level and some at the model level, that can go out and execute a series of tasks within a certain degree of freedom, not unlimited." And I worry that right now everyone's just so amazed that if I give a model four pages' worth of instructions, it can do a reasonable job. I mean, I can't read four pages' worth of instructions and remember everything I'm supposed to do, so, yeah, it's really impressive, and I see a lot of excitement and hype being built around it. But if we're not careful, we're just going to keep going down this road of how do I cram more and more instructions into this prompt, and not really focus on what control points are needed for AI-enabled workflows to be automated out in the world, and whether chat even needs to be a part of them. "Agent" also, I think, really connotes having a conversation or a dialogue, and I think a lot of the opportunities for AI, and where we're going to be incentivized to build AI, are not necessarily chat based. So I think there's just a lot of evolution that's going to be needed for agents to really find their application and actually get traction.

Yeah, it's pretty interesting to hear that. The chat thing might actually turn out to be a kind of mistake of history, and the long-term evolution of this stuff may be that it's actually a bad interface for it.
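The contrast drawn here, a monolithic multi-page prompt versus a controllable program with explicit control points, can be sketched in a few lines. This is a hypothetical illustration only: the `call_model` stub, the validators, and the workflow steps are all invented for the example and do not reflect any real product or API.

```python
# Hypothetical sketch: an "agentic" task decomposed into explicit steps,
# each followed by a control point that validates the output before the
# workflow is allowed to continue. The model call is a deterministic stub
# so the sketch is runnable.

def call_model(instruction: str, payload: str) -> str:
    """Stand-in for an LLM call (not a real API)."""
    return f"{instruction}:{payload}".lower()

def validate_nonempty(output: str) -> bool:
    return len(output.strip()) > 0

def validate_no_api_calls(output: str) -> bool:
    # Example of a system-level rule: this step may not emit tool invocations.
    return "call(" not in output

# Each step = (instruction, list of control-point checks).
WORKFLOW = [
    ("summarize the ticket", [validate_nonempty]),
    ("draft a reply", [validate_nonempty, validate_no_api_calls]),
]

def run_workflow(payload: str) -> list[str]:
    outputs = []
    for instruction, checks in WORKFLOW:
        out = call_model(instruction, payload)
        if not all(check(out) for check in checks):
            # Control point tripped: halt instead of compounding the error
            # into the next step.
            raise RuntimeError(f"step '{instruction}' failed validation")
        outputs.append(out)
        payload = out  # feed the validated output to the next step
    return outputs

results = run_workflow("Customer reports login failure")
print(results)
```

The point of the structure is that each step's output is checked before it feeds the next step, so an error stops the workflow rather than compounding across many unsupervised actions.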
Well, I agree. If I'm writing an email, I don't want to talk to somebody multiple times about what the email should have. I want to have just a short little box I put some info in, and an email comes out.

I mean, chat has been an obsession in AI for decades, right? It's like a life definition. Yeah, and these kinds of things. Eliza.

That's what I was thinking. That's right. Yeah. Well, again, it's a little bit like Naveen was saying earlier: it feels really cool. That's actually a really strong motivator, for sure.

It is. And I think part of this is also that we haven't built the models that do what I was saying, right, about actually trying to uncover causality. You can't build something that has quote-unquote agency unless it understands the intrinsic causal nature of the world: I do this, and that happens. These models don't have that. They can pick up on patterns and extract different sorts of patterns, but they don't actually understand causal relationships.

So, Naveen, I know you're on the show for the first time. I'm starting to get a sense of your vibe, which is that you're grumpy about AI. I'm wondering if I can push you on that.

I'm actually very hopeful. I've devoted the last 15 or 18 years of my life to this field. I'm not grumpy. I'm just a realist.

Well, in the spirit of realism, can I push you on your predictions around agents in 2025? What's the bull case? What do you think the most impactful thing on agents is going to be in the next 12 months, if anything?
Yeah, I think if you narrow the definition a bit, we actually get something that's super useful, right? To be clear, an LLM, the thing that can summarize and do all the things that they do, is actually super useful. That doesn't mean it's AGI or whatever. I kind of hate that term, but it is something that is super useful. And I think, as Kate said, the interface is not necessarily a chatbot. What I want is something that, when I'm in an Excel spreadsheet, can impute values or describe things; there are so many ways you can add value to those experiences. That's what we can do now. And so, being able to automate: okay, I want to copy all these cells and then apply this formula across the rows. There are all these kinds of tasks that we do. If I could just say, hey, do this for me, that's an agentic workflow, if you will, but it's not thinking on its own. I'm telling it what to do; it just has to carry it out within the framework of the app. So I think that's what we're going to see more of. Inside Databricks, we're seeing a lot of this now. In fact, we've been using LLMs and generative AI to improve the experience of Databricks itself, like actually finding bugs in your SQL code and being able to fix them for you, or propose a fix. These kinds of things are big time savers. So I think that's what we're going to see in 2025: more of that. It is going to drive demand for compute and everything, but it's not superintelligence. That's what I'm grumpy about, maybe, if you want to put a point on it.
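The spreadsheet example describes narrow, user-directed automation: the system doesn't decide anything on its own, it just carries out a stated operation across the data. A toy sketch of that shape, with an invented `apply_formula` helper standing in for whatever the app would expose (nothing here is a real spreadsheet API):

```python
# Toy sketch of narrow, user-directed "agentic" automation: the system
# maps a user's request ("apply this formula across the rows") onto a
# predefined spreadsheet-style operation and executes it for every row.
rows = [
    {"price": 10.0, "qty": 3},
    {"price": 4.5, "qty": 2},
]

def apply_formula(rows, column, formula):
    """Add `column` to every row by evaluating `formula(row)`."""
    for row in rows:
        row[column] = formula(row)
    return rows

# "Hey, do this for me": total = price * qty, applied across the rows.
apply_formula(rows, "total", lambda r: r["price"] * r["qty"])
print(rows)
```

The model's job in such a workflow would only be to translate the plain-language request into the `column` and `formula` arguments; the execution itself stays inside the framework of the app.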
I'm glad you said that, Naveen, because it's always a good sign when a panelist says, I really dislike that term. Moving us on to the third segment of today: let's talk a little bit about superintelligence and AGI.

This is the last segment I wanted to focus on as we look towards 2025, and of course it's part of the narrative of where AI is going. The Information reported that OpenAI is seeing the rate of improvement in GPT slowing over time. And I think I caught an interview with Ilya earlier where he basically said, hey, maybe progress is actually slowing down. I wanted to put those rumblings, or concerns, next to some of what we're hearing from leaders in the industry. Sam Altman wrote a blog post where he predicted that superintelligence is potentially a thousand days away. Anthropic recently warned that these systems are advancing so quickly that we need serious, targeted regulation in the next 18 months. So, Anthony, I'll kick it to you first: what are we to make of this? Is AGI on the way? How do we square a lot of what we've been talking about this episode, which is that it's going to get harder, with some pretty strong claims that, hey, we're about to have ultra-powerful systems in the next thousand days?

Look, we're talking about it, so the headlines work, right? It's a compelling topic. It attracts the public's attention. It's like the superhero obsession, or whatever you want to call it.
I think a lot of it is that, right? Look, where are we today? I don't even know what a working definition of AGI is at this point. I can propose my own, but I think what we're going to start to see really matter is more and more ways that AI is integrated and embedded and helps in specific contexts. Naveen mentioned some; certainly coding assistants, embedded coding assistants, have made a lot of progress. That's an early set of use cases where we've seen lots of utility, and we'll see a lot more of that. In terms of when AI reaches some level of general intelligence, even if we take a definition of that being equivalence to the human capacity not only to know things but to reason and to perceive, I mean, that's a very long way off, I'd say.

Yeah, I guess it depends on what you define as a long way off. I think we will get there. It's just going to be harder than we think. Everybody's perception is very linear: okay, this thing has been going, and every year it gets better and better, so in two more years we're going to have this other thing. That's not actually how these technologies seem to evolve, and it never really has been. We always overestimate the technology in the short term but underestimate it in the long term, because these things actually work on exponentials. A 5 or 10 percent improvement year on year adds up a lot, very fast, once you get to about year seven. So I think what we're going to see is that in 10 years we very well might have something that does reason and actually does understand causality.
My prediction has been that within 30 years there's a 95 percent chance we will solve that. Within 10 years, I think it's like a 30 percent chance. So my bounds are 10 to 30 years from now, but I think that's not that long, right?

So you're kind of saying there are people alive today who will definitely see that.

Yeah, totally. Right. Which I think is very cool. But it's not something that's going to happen next year. I think that's just a hype train, to be honest with you. We haven't solved fundamental problems yet. We will see around that precipice a year ahead of time pretty clearly, and right now it's not super clear. So to me, it doesn't feel credible to say that.

Well, I think we're also conflating a lot of things, like cause-and-effect and causal understanding versus superintelligence. There are causal models out in the world today that can help break down and isolate cause-and-effect relationships, particularly in areas like drug discovery, and they're widely used. So are we just talking about getting models to better understand causal reasoning, or are we talking about sentience in every stretch of the world, a model that has a personality and goes off and does things of its own will, so to speak? In general, I think those aspirations are really more about marketing, and I don't think there are even necessarily the right economic incentives to develop that, versus developing cause-and-effect reasoning and better tools for handling language and doing different tasks.

Absolutely.
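The compounding claim made a moment ago, that a steady 5 to 10 percent annual improvement "adds up a lot" by year seven, can be checked with quick arithmetic, since constant year-on-year gains multiply rather than add:

```python
# Quick check of the compounding claim: a steady 5% or 10% annual
# improvement, applied multiplicatively, after 7 and 10 years.
for rate in (0.05, 0.10):
    for years in (7, 10):
        factor = (1 + rate) ** years
        print(f"{rate:.0%}/yr over {years} years -> {factor:.2f}x")
```

At 10 percent per year, that compounds to roughly 1.95x after seven years and about 2.59x after ten, which is the gap between linear intuition and exponential behavior the speaker is pointing at.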
You know, I think the next three to ten-plus years is more realistic.

Yeah, there's this adage in financial markets that the market can be irrational longer than you can stay solvent, and I was joking with a friend recently: AGI can be imminent longer than you can stay solvent. It's just around the corner, everybody, believe me, it's just around the corner. I guess, Anthony, to go back to you, I want to challenge the idea that it's potentially all just marketing. One of the really interesting comments came out of that essay by Dario Amodei, who runs Anthropic, which we were talking about a few episodes ago, writing about the future of AI and how it's going to change the world. A lot of people say, ah, marketing. But some people looked up his writings from when he was a grad student, and he was still writing about this stuff, right? And I do think that's an interesting thing I'd love your thoughts on: it almost feels like, in order to look past all of the current problems with the technology, you almost have to be a true believer in some sense. And in some ways I actually don't know if it is marketing coming out of some of these companies. I think they genuinely believe it is imminent. I don't know how you think about that.

I think AI is going to change the world. I think it's going to change it incrementally, practically, and pretty quickly. And it already is.
In all the practical ways we've talked about: specific applications, integrated with software, integrated with capabilities that we want assistance with. No, I wouldn't say people are disingenuous. I just think there's this continued cultural obsession with intelligent super-anything, right? And it's interesting, and it's fun. Another, more negative side of that is the whole existential debate, which has hopefully started to die back, I think. But you saw it a year and a half ago especially, with a lot of heat. I'd say it's just such a natural attractor that it's hard not to bring it up. But look, maybe I'm just too much of a pragmatist. I try to focus on all the ways that AI is actually helping and will help, every day, like on this podcast probably before too long. To me, that's how the world changes, not with some superintelligence.

Well, and I think what's interesting is, I agree with you. I don't think they're being disingenuous. I think people really believe it, and that's fine. But we want to pull back and contextualize a bit. Do you care that the airplane was invented in 1903 instead of 1910? Does it really matter? It doesn't, right? We're splitting hairs a little bit. Why am I right and someone else wrong? It actually doesn't matter. Whether it's three years, as Dario says, or ten years, if you look back over 50 years it doesn't matter, right? Because of exponentials, you know? So I think it's okay. It's okay that we're exuberant and we believe.
I also think some of Anthropic's warnings, so to speak, and the need for safety and better understanding of these problems, aren't necessarily predicated on the arrival of superintelligence. Dumb intelligence can be pretty dangerous if it's out in the world, right, if we're starting to give LLMs all these API calls and the ability to impact the world and pull real data into their decision making. So, as we talk about it being genuine, I think from that perspective it absolutely is true, and something everyone should be aware of, regardless of whether this is quote-unquote AGI or superintelligence or not.

Yeah, for sure. I think these last few comments are really interesting, because all three of you would picture yourselves as realists in the world of AI. But where we're almost landing is: look, we're all agreed this technology is going to be a huge deal. We're just splitting hairs over whether it's going to be ten years, or twenty years, or, you know, two months from now, which I think is a pretty interesting outcome.

Maybe the final comment I'll throw in, because I'm curious to get all of your thoughts on this: to talk about realism, all three of you are talking to customers that are in the market, people that basically need to wake up in the morning and ask, is this technology going to be better than what I currently use in my stack, and should I implement it? Do you hear from customers, like, and by the way, Anthony, should I be worried that this technology is going to destroy the world?
I'm kind of curious how much of this is chin-stroking media discussion, how much of it actually influences real enterprise decisions and discussions happening on the ground, or whether those are basically two completely separate worlds in some sense.

Certainly, lots of customers are concerned and ask questions about accuracy, about trust in systems, about how to implement specific use cases with a high quality of output that they can trust in deployment, that they can save money or make money on and not have a big liability with. And there are lots of challenges across the board in all sorts of domains: in health, finance, legal, and many, many areas. I hear very little, if anything, in the way of questions about helping AI destroy the world, these big existential questions of, if I deploy AI, am I going to contribute to the robot army that takes over humanity? None of that, right? It's very practical. It's business focused, as it should be. That's what I hear.

That's so interesting to me. We think about the AI discussion as being one block, but in practice it's actually these pretty distinct fora in which these discussions are happening. Okay, Naveen, if you've got thoughts on this, on what you're hearing from customers, and whether this AGI stuff even registers at all.

Yeah, I think Anthony nailed it. It's very practically grounded. That being said, I think the motivations are such that nobody wants to be the one who didn't jump on the train and made the company get left behind, whatever that company is, right?
And so there's a lot of top-down push for getting AI, coming even from the boards. I've spoken to multiple boards of very large public companies, and this discussion is front and center there. And it's really about: this is the next technology transition, and we have to be part of it. No one's really talking about whether it's going to take over the world or whatever. It's just, how do we craft a strategy such that we are part of this new world?

Yeah, I'd echo both of those statements. Overall, what I'm really optimistic about, honestly, is that a lot of the conversations with enterprises are about how do I take advantage, but also how do I make sure I've got the right protocols, the right control points, the right safety measures in place. Their bottom line is ultimately at risk with the deployment, and I think that provides a lot of really helpful and healthy pressure on model providers to develop the solutions that are needed for a more responsible and governed approach, versus just build as far and as fast as you can. So I think that is ultimately going to help us create a lot of the tooling that's needed, so that it isn't necessarily the concern that AGI is going to take over the world. We will hopefully have built the right controls and processes to have a very well-governed AI system.

Yeah, I can't think of a better note to end on than that, Kate. So, thank you. I'm going to wrap us up for today. Kate, as always, thanks for coming on the show. Really appreciate it. And Anthony and Naveen, we hope to have you on the show again in the future. Thanks for joining us.
If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.