Learning Library


Video lMrOvPloJ0o

Key Points

  • Machine learning’s inherent probabilistic nature guarantees a persistent error rate, highlighting the need for breakthroughs beyond current technologies to achieve truly human‑like conscious decision‑making.
  • The “Mixture of Experts” podcast episode brings together experts Olivia Bjek, Chris Hay, and Mihai Criveti to discuss the week’s AI headlines, including radiology advances, manifold research, and a major IBM‑Anthropic partnership.
  • Recent AI news features AMD’s multi‑billion‑dollar chip supply deal with OpenAI (including a potential 10% equity stake), the integration of synthetic diamond for superior chip heat dissipation, IBM’s Project Bob boosting developer productivity by 45%, and Peloton’s AI‑powered “IQ” trainer offering real‑time workout guidance.
  • OpenAI’s release of Agent Kit introduces a new user‑friendly agent builder and updates to its evaluation platform, marking a significant step forward in the rapidly evolving “agent” ecosystem.
  • The episode emphasizes that while AI tools are accelerating productivity and expanding capabilities across industries, the field still faces fundamental challenges in achieving reliable, human‑level decision making.


# Video lMrOvPloJ0o

**Source:** [https://www.youtube.com/watch?v=lMrOvPloJ0o](https://www.youtube.com/watch?v=lMrOvPloJ0o)
**Duration:** 00:43:43

## Sections

- [00:00:00](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=0s) **Limits of Machine Learning & AI News** - The excerpt begins with a commentary on the probabilistic nature of machine learning versus human decision‑making, then introduces the “Mixture of Experts” podcast and its expert panel before previewing upcoming discussions on radiology AI, the IBM‑Anthropic partnership, OpenAI’s Agent Kit, and AMD’s chip deal with OpenAI.
- [00:03:04](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=184s) **From Codegen to Low‑Code Evolution** - The speaker explains how IBM’s acquisition of DataStax (bringing Langflow) and tools like CrewAI and LangGraph illustrate a shift from pure generative code‑generation toward low‑code, visual builders that make AI agent development accessible even to those without deep programming expertise.
- [00:06:45](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=405s) **Limits of Visual Programming Paradigms** - The speaker argues that while visual tools such as UML, Scratch, and Node‑RED are attractive, they fail to scale across large, heterogeneous legacy systems, and the advent of AI does not alter this fundamental limitation.
- [00:10:00](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=600s) **Simplifying AI Agent Development** - OpenAI’s agent builder enables non‑technical users to create agents through visual workflows while exposing the underlying TypeScript/Python SDK for programmers, mitigating complexity even with features like the Common Expression Language.
- [00:13:10](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=790s) **Enterprise AI Agent Lifecycle Discussion** - The speaker highlights OpenAI’s lucrative subscription base, then discusses IBM’s partnership with Anthropic and introduces a structured agent development life cycle for securely deploying enterprise AI agents.
- [00:16:28](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=988s) **Building an Evolving Agent Ops Framework** - The speaker outlines how IBM and partners co‑created a guide, stressing continuous development, cross‑industry adoption, evaluation challenges, and integration of new tools into a unified Agent Development Life Cycle (ADLC) platform.
- [00:19:49](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=1189s) **Personal vs Enterprise Agent Automation** - The speaker argues that while AI agents today mainly automate individual tasks, real industry impact will come from enterprise‑wide workflows where smaller, purpose‑built models often outshine merely larger ones, as illustrated by a video game generating NPC backstories locally rather than querying a massive central model.
- [00:23:45](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=1425s) **Explaining Gradient Explosions with Manifolds** - The speaker outlines how deep neural networks can suffer gradient explosions during training and why visualizing the loss landscape as a curved manifold (rather than a flat plane) helps keep the model’s weights stable.
- [00:27:34](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=1654s) **Manifolds as a Path to Stable AI** - The speakers reflect on an early 2016 study, discuss how better use of manifolds could improve model training, fine‑tuning, and overall AI stability, and speculate on the practical implications of such advances.
- [00:31:31](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=1891s) **AI in Radiology: Myth vs Reality** - The speaker critiques the hype that computer‑vision will replace radiologists, using a recent investigative article to show how the expected disruption has not materialized as predicted.
- [00:35:24](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=2124s) **AI Threat to Radiology** - Olivia debates whether advancing AI agents will replace radiologists, emphasizing unresolved trust concerns, data bias pitfalls, and the current limitations of machine learning despite improving diagnostic accuracy.
- [00:38:47](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=2327s) **Debating AI Reliability vs Human Judgment** - Two participants argue over whether probabilistic machine learning can ever replace human decision‑making, especially in high‑stakes situations.
- [00:42:33](https://www.youtube.com/watch?v=lMrOvPloJ0o&t=2553s) **Human Oversight in AI Radiology** - The speaker warns that AI diagnostic tools without radiologist supervision are vulnerable to cyber‑threats and asserts that ultimate accountability, context, and communication must remain with human clinicians.

## Full Transcript
Machine learning is fundamentally probabilistic and humans are not. So I think there is always going to be an error rate with machine learning techniques as they have currently been developed. There would have to be some kind of other advance, some kind of other technology, to act like a human and have actual conscious decision making. It's actually a huge engineering challenge.

>> All that and more on today's Mixture of Experts.

[Music]

I'm Tim Hwang, and welcome to Mixture of Experts. Each week, MoE brings together a panel of brilliant, funny, and, in the case of Chris Hay, somewhat unhinged technical experts to discuss and debate the week's news in artificial intelligence. This week, we've got an incredible panel: Olivia Bjek, senior staff dev advocate; Chris Hay, distinguished engineer; and Mihai Criveti, distinguished engineer, Agentic AI. All right, we've got another packed episode again this week. We're going to talk about radiology, we're going to talk about manifolds, we're going to talk about a huge partnership between IBM and Anthropic, as well as OpenAI's Agent Kit release. But first, we've got Aili with the news.

[Music]

Hey everyone, I'm Aili McConnon, a tech news editor for IBM Think. I'm here with a few AI headlines you might have missed this week. Chipmaker AMD signed a deal to supply OpenAI with billions of dollars worth of chips. In exchange, OpenAI could get up to a 10% stake in AMD. Are diamonds a computer chip's new best friend? Companies are starting to embed tiny pieces of synthetic diamond into chips, because diamonds are exceptionally good at moving heat. These chips could ultimately help data centers generate less heat, currently a big waste of electricity.
This week, IBM introduced Project Bob, a new set of tools for developers to automate complex processes like code development. When 6,000 IBM developers tested Project Bob, their productivity increased on average by 45%. Are you trying to get in shape before the holidays? Bike maker Peloton has introduced Peloton IQ, an AI‑assisted feature that acts like your personal trainer. It gives you feedback on your form and suggestions for weights and workout plans. Want to dive deeper into some of these topics? Subscribe to the Think newsletter linked in the show notes. And now back to the episode.

All right, so let's just dive into it. The big news of the week is OpenAI's release of Agent Kit, which is really a two-part announcement. They talked a little bit about the new agent builder they've created, which is a clean user experience for designing agents, as well as a number of updates to its evaluation platform. Olivia, maybe I'll turn it to you: how big of a deal is this? I feel like we've really been in a year of agents; it's been a running joke that we say "agent" repeatedly every single episode. But this builder seems to be maybe the first attempt, in my mind, to really make some of this technology broadly accessible if you're not technical. Is that right?

>> I don't know if it's the first attempt, but it's certainly a very strong one. IBM recently acquired DataStax, which brings Langflow. And Langflow, I think, brings a lot of that low-code builder experience that you see in the Agent Kit into open source, and that's something that we're starting to integrate into a lot more IBM products as well.
And then before that, we see things like CrewAI and LangGraph. LangGraph, I would agree with you, requires much more of a technical edge. But with CrewAI, as long as you can write some code, and almost anyone can write some code now, especially basic agent code with generative AI, you can see people getting agents working out of the box pretty easily.

>> There's, I think, a fun evolution here, and you touched on something I would love to get your thoughts on. I saw this and thought it was really funny, because with all the stuff that's been happening in codegen, I think we originally had a vision that you'll just tell the machine what you want, it'll generate the code, and everybody will be able to code; that's essentially the direction we're going in. This almost seems like a step back, or maybe not a step back, but a step in a different direction: to say, okay, even in a world of codegen, we will still need a graphical, no-code interface for managing all this stuff. And I'm curious, do you buy one approach or the other, or do you feel it's still unclear which one's going to win out over time?

>> Yeah. So, I actually have some pretty strong feelings on this, because I've been involved in AI in one form or another for about the last 15 years. And pretty much every time we have a big evolution, people say the machine learning code is going to replace everything. And it's been 20, 30 years; realistically, we've had AI in some form since the 1960s, and it still hasn't happened. It's still there.
There was a paper about 10 years ago from Google with a very pretty little diagram showing that only about 10% of the code of any machine learning system was actually machine learning. And I think that still holds true in reality. So if you're asking the LLM to do all of your compute, you're essentially saying, "Okay, I would like the most expensive part of my system to do everything," even if you have a plan in mind. So, let's say you wanted it to check your email every morning and look through and find the highest priority items in there. You probably don't want the LLM choosing its own adventure every morning; you probably want it to do essentially the same thing every day. So agent frameworks are really the only way we can mix that kind of classic deterministic programming with something that's a little bit more probabilistic like this.

>> Mihai, maybe I'll kick it over to you next. I saw the OpenAI Agent Kit experience getting some critique online, where people were saying that this way of doing things, a bunch of blocks all wired together, is maybe not the most efficient or easiest way of managing this stuff. So, to build off what Olivia is saying: even if we accept this graphical paradigm, I'm curious about your view. Is this blocks-and-wires style, in your opinion, the way we're going to go about visualizing this, or are there other ways that you think will ultimately be the dominant way we do this?
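Olivia's email-triage example above can be sketched in a few lines: the daily routine stays deterministic, and only one narrow step delegates to a model. This is a minimal illustration, not any particular framework's API; `llm_score` stands in for a real LLM call, and the `[newsletter]` filter is a made-up rule.

```python
# A sketch of the pattern Olivia describes: the control flow
# (fetch, filter, rank, cut off) is deterministic and runs the same
# way every morning; only the scoring step is probabilistic.

def triage_inbox(emails, llm_score, top_n=3):
    """Return the top_n highest-priority subjects from a list of emails."""
    # Deterministic: drop obvious noise before spending LLM compute.
    candidates = [e for e in emails if not e["subject"].startswith("[newsletter]")]
    # Probabilistic: the injected model call assigns each message a priority.
    scored = [(llm_score(e["subject"]), e) for e in candidates]
    # Deterministic: same ranking and cutoff rule every day.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e["subject"] for _, e in scored[:top_n]]
```

Because the surrounding control flow is fixed, swapping in a cheap stub for `llm_score` is enough to test the whole routine end to end.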
>> I think code is there for a reason, and the fact that we now have AI doesn't change that in any manner. We've tried to do this before with UML, if you remember, back in the day when we used to think that everything could be designed and developed just using diagrams. There were a lot of products, including from IBM, Rational Rose and all those kinds of things, with the promise of writing UML as code: you translate from UML to code, from code to UML, and you design everything visually. And it requires specific paradigms, like object-oriented languages, for this to work. And it sort of worked for a bit. But once you went beyond single-developer, small projects, extended into legacy projects, or wanted to integrate with different systems and different programming languages, a lot of the paradigms broke. We see this with agents as well, where you have platforms like n8n and Langflow and Flowise, which take the same paradigm as the Agent Kit from OpenAI now, where you have this visual programming language. It's very attractive, because even as kids, for example, you typically learn programming visually. If you've done turtle programming, or Scratch, which is a visual IDE for kids where you learn to program using these exact same blocks, or even Node-RED, an open source project from maybe 15 years ago that taught you how to compose things for the Internet of Things using visual programming languages: it's very attractive. You can get started very quickly. The problem is scaling. Once you go beyond a single contributor, once your project goes beyond a single workflow, once you need to integrate with existing systems, once you need to do things like version control and management, it becomes very, very unwieldy.
So I would say it has its place, especially for things like one-off automations, where it's one and done. But if you want to build a real application, I still believe there's a strong case that you should be writing code to begin with.

>> Oh, Chris, you want to jump in?

>> Of course I want to jump in. I'm going to rebut that a little bit, Mihai. I think, to your point, I'm going to cover two things. The first is that I think this is really about making agents available to everybody. And to your point about Scratch, the average human being can go and create agents now in a very, very simple way. It really doesn't take more than 5 to 10 minutes to learn how to use the tool and be able to create an agent very quickly. And actually, if we think about the workflows and the things people want to be able to do, it is clicky-clicky, right? It's like: I want to connect to my Google Mail, and there's a connector available for this; I want to write something to Google Sheets, and there's a connector available for this. And then, if we think of enterprise systems for a second, the reality is we want to be able to protect our inputs, to be able to put guardrails on either side of your inputs as well. So that's two or three blocks, and we have to do that today in enterprise compute anyway. So there's real value in somebody being able to quickly stick a guardrail in there, maybe do a little bit of routing, and even do multi-agent, right? Because it's just a matter of sticking another agent in there and then connecting them up.
I think you're pushing the bounds, to your point, Mihai: if you are not a coder, and you don't know about tools like Langflow and so on, you don't really have good ways of doing that as a mass consumer. So I think you're going to get better agents from consumers there. To your point about programmers, I completely agree: things get sufficiently complex, and then you're like, I really need a code representation. But at the same time, what's quite nice about the way OpenAI has done the agent builder is that it's actually built on top of the Agents SDK. So if we look underneath the hood of the workflow, you actually have your Agents SDK code there, which can be in TypeScript or Python, and you can just take that representation. So you can start with the workflow, but then you can source control it as code; there's nothing stopping you from doing that. So I think what they've done is quite clever, and one of the things they've really done well is take away the complexity. One of the things I initially panicked at was: oh my goodness, they've introduced Common Expression Language, right, so if-then-elses, and I thought there was no way the average human being would be able to understand that. But what they've done there, which is really good, especially for structured outputs, is they just put an LLM in the way: you describe the schema you need, and it will generate it for you. And you're like, actually, that is a pretty cool technique. So there is complexity in there, but I think what they've done to make it easier is quite useful. So I am excited.
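The structured-output idea Chris describes, declaring the fields you need and then checking the model's reply against them, can be sketched as a small guardrail. This is an illustrative helper under assumed names, not the Agent Builder's actual API; real systems would validate against a full JSON Schema rather than a field list.

```python
import json

# A rough sketch of a structured-output check: instead of hand-writing
# expression-language rules, declare the fields you expect and verify
# the model's JSON reply actually contains them before using it.

def parse_structured(raw_reply, fields):
    """Parse a model reply and verify it matches the declared shape."""
    data = json.loads(raw_reply)  # raises ValueError if the reply is not JSON
    missing = [f for f in fields if f not in data]
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return data
```

A caller can then retry or fall back whenever the parse raises, which is the deterministic side of the guardrail.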
But to your point, it's just nowhere near as powerful as things like Langflow, for example.

>> Mihai, maybe a final question to you on this. Olivia, you called out my original prompt on this, saying, well, hold up, this is not the first attempt in the space; there are a bunch of other players. But obviously, whenever OpenAI does something, it's an 800-pound gorilla moving around. Mihai, do you have a thought on how the business strategy here evolves? If you're not OpenAI and you're seeing them do this, what's the next move if you're really trying to compete in this no-code agent builder ecosystem?

>> I think what OpenAI has going for them is volume. They have the market; they have hundreds of millions of users and consumers, and they can afford to be slow: to introduce features that don't necessarily try to solve everything, but where what they do solve has a good user experience and is easy enough for these users to understand. And eventually this will make its way toward enterprise systems, where the same consumers want to do, for example, citizen automation. We've seen this before with things like Power Automate, or Excel macros, or even Lotus Notes applications, where the premise was the same: any citizen within the organization can go in, drag and drop a couple of things, write a bit of automation, and do something that's a one-off task, or even turn it into an enterprise application. So I do see how some of these systems could make their way toward enterprise as well. And I do see a business play there.
But I think for OpenAI, it's sufficient just to keep their current monetization: hundreds of millions of users all paying $20 a month is going to be quite substantial.

[Music]

Well, this is great, and a good way to flow into the next topic I want to cover today. Some other really big news, and Mihai, we'll stay with you, because I think you were directly involved in this: IBM has announced a strategic partnership with Anthropic, to integrate Anthropic into a bunch of its tools and methodology. And one of the things that really stuck out to me, and we've talked about it on the show before, is this idea of creating a guide for how you should securely deploy enterprise AI agents. Specifically, the idea that we're going to have this thing called the agent development life cycle, which is going to be a structured way that people should go about doing this. And I find this so interesting, because it feels like we're always talking about agents, but we always focus on the technology.
It feels like this is maybe one of the first things I've seen (although, Olivia, keep me honest again if it's not actually the first) where there's now starting to be a lot more thinking about, okay, what's the whole set of business processes that needs to fit around this technology. So, I don't know if you were directly involved in the ADLC guide work, but whether you were or weren't, I'd be curious to get your thoughts.

>> Yeah, I was working on the ADLC guide as well. One of the things that came out of that exercise was that agents do need to have their own process, similar to the software development life cycle, but it needs to deal with the probabilistic nature of large language models and take into account things like testing, because testing of AI agents needs to be done in a different way, for example through evals. In fact, I believe Agent Kit, which was released by OpenAI, touches on that as well: one of the components it adds to the mixture is evals. How are you going to ensure that the outcomes of these agents are correct? Either inline, so as the agent executes, it can go, oops, that's the wrong result, I'm going to go back and retry; or after the agent has executed, I can take 100 agent executions and look at the accuracy and some of the numbers for these agents. Having a structured, governed process around this, for planning, coding, building, testing, releasing, deploying, operating, and monitoring the life cycle of agents, is important for enterprises. And many of the non-functional requirements, things like encryption and security and governance, and all the things that
come with traditional enterprise software, need to be weaved into this approach. We've seen that AI agents and AI development are quite immature today, and one of the projects we've started in this space is the ContextForge MCP Gateway, which supports A2A, supports MCP, and provides support for the agent development life cycle and agent ops. So we see a similar need. We see that these are things enterprises are saying must happen before any of these agents are allowed to touch production systems.

>> That's really interesting. So, Mihai, where does it go next in terms of this development, now that the guide is out? Is part of this now testing it in the field, or how do we develop this out? I'm really interested in how these processes become industrywide; we're talking about adoption now.

>> To build this guide, a lot of folks from IBM had to come together: folks from consulting, from technology, from research. We've collaborated with Anthropic as well. And we've leveraged our experience in customer engagement: we've looked at healthcare clients where we've implemented agents, we've looked at telco clients, we've looked at banking clients. But it's just a start. This needs to be an ongoing, evolving process and document. It needs to reflect all the latest and greatest changes in fields like evals, for example. Do you trust another agent to evaluate your agent? Does an LLM that evaluates itself have any bias in that evaluation?
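The after-the-fact eval Mihai describes, judging an agent over a batch of recorded executions rather than a single run, reduces to a small scoring loop. A minimal sketch, assuming each execution is recorded as an (output, expected) pair; real evals would use richer graders than exact-match comparison.

```python
# Offline eval sketch: replay recorded agent executions against
# expected outcomes and report aggregate accuracy, so a flaky agent
# is judged on a batch of runs rather than one lucky sample.

def batch_accuracy(executions):
    """executions: list of (agent_output, expected_output) pairs."""
    if not executions:
        return 0.0
    correct = sum(1 for got, want in executions if got == want)
    return correct / len(executions)
```

An inline eval would instead run this kind of check inside the agent loop and trigger a retry on failure, which is the other mode Mihai mentions.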
So all these things need to flow back into the ADLC, and we need to see tools and technologies develop around this: better evals; better components such as the MCP Gateway; and components that support the development life cycle itself, like Project Bob, which we've also announced as part of this. All of this needs to come together into one cohesive agent ops platform and ADLC process.

>> Olivia, I'm curious about the market evolution that a strategic partnership like this signals. I remember, I feel like it was only 24 months ago, there was a vision that it was going to be one model to rule them all, right? Eventually, one AI company creates such a powerful, capable model that everything else follows suit. But it does seem like not just Anthropic but all the big AI players are thinking about how partnerships work in this space, which suggests to me that a company like Anthropic is saying: well, we want to get into enterprise, we don't really know a whole lot about enterprise, IBM knows enterprise, so we have to partner. So does this mean the industry is ultimately going to be characterized by a lot more partnerships over time, rather than this original model where one AI company ends up controlling everything? It feels like this is going to be more multipolar than we thought. Is that the right way of thinking about what's happening here?
>> Yeah, I think it's an interesting point, especially in the context of the agent development questions you were asking before. Basically, I think what we're seeing is that LLM engineering is a lot more complicated than we initially thought. As you said, originally we kind of thought: hey, we'll just hand all our problems off to an LLM; the LLM is clearly just as good as a person; we can have it do anything and it's all going to work out great. And I think we're finding that that's not 100% true. There are also some theoretical limits that I think we're starting to bump up against when it comes to sheer LLM training. So what that means is that either people need to partner, or they need to get really good, each in their own realm, at solving all of the same problems Mihai is discussing. I've recently been looking at this space as it pertains to the use cases for agents, and I think it really divvies up into two things: either you're automating a piece of your own work, or you're automating a process for the business. Right now, a lot of the agent world has focused on how you automate things for yourself, and that's cool; that's a little bit of labor saving, and it's definitely going to be helpful for people. And yet, at the same time, it's not actually going to transform any industries until we start building workflows that help entire enterprises and entire businesses do something better, and more reliably, than they were doing before.
When it comes to things like that, it means that just throwing a bigger and larger model at it isn't always the answer. Sometimes we also see cases where a smaller model is the answer. For example, and I'll give something that's totally off-the-wall relative to super enterprise use cases, there's a video game I've been playing where, for some of the non-player characters, they're starting to generate background stories using generative AI. Do they really want to be sending requests for all of those randomly generated characters in the world back to a big hosted core model? No, probably not. The compute level for that makes no sense, especially in a video game, which is already really taxing on the GPU. So what you want there is a usable LLM that's really small. That means each company that has been playing in this model space has different strengths. Not all models are going to fit all use cases. And so that means, yes, partnerships have to be the future. Companies talking to each other has to be the future. Honestly, when have we seen a point in technology when that hasn't been true? I don't see any reason to believe that this bit of technology is so different from history.

>> Definitely. All right, well, let's end the segment with two quick questions. Mihai, if people want to learn more about this, where should they go?

>> So, we have the white paper out, which you can find on IBM's website. Just search for "Architecting Secure Enterprise AI Agents with MCP." I think it's a great read.

>> Not that you're biased.

>> Honestly, I'm not being biased.
Um, and many 22:32of these problems don't have mature 22:35solutions. So, if you want to innovate 22:37in this space, this is perfect. 22:39>> Yeah. Great. And Olivia, final question 22:41quick for you is, what's the video game 22:43you've been playing? 22:44>> Uh, Enzoy. 22:46>> Okay, great. You should check it out. 22:47I'm going to check it out. That sounds 22:48awesome. 22:53>> All right, I'm gonna move us on to our 22:54third topic of the day. Um, we have not 22:57talked very much about thinking 22:58machines. Um, so we've talked about SSI 23:00and we've talked about a number of 23:02companies that have kind of spawned out 23:04of former OpenAI leadership. Um, and 23:06thinking machines, if you haven't been 23:08tracking it, uh, is Mera Morades, who's 23:10the former OpenAI CTO's, uh, startup, 23:13uh, and fundamental kind of research 23:15lab. Um, and I want to cover it just 23:17because they they put out a piece fairly 23:18recently that I thought was very 23:20interesting and I think is worth diving 23:22into and kind of like explaining and 23:24parsing through a little bit more. Um 23:26the name of the blog post is called 23:27modular manifolds and you can find it on 23:29the thinking machines uh website. Um and 23:32I guess maybe to start uh Chris, we 23:34haven't gone to you because I wanted to 23:36save you to be the leading person on 23:38this segment. What is a manifold 23:40exactly? 23:41>> What what are you doing to me Tim? What 23:45are you No pressure. No pressure. 23:47>> I am not smart enough to answer this 23:49question. So I think the quick version 23:51of this is when we are training models, 23:54right? These models are really really 23:56kind of deep and we're throwing the 23:58entire internet worth of data at them 24:01and then there's lots and lots of 24:03layers, right? So it's called deep 24:05learning and the good news is the reason 24:07it's called deep learning is because the 24:08layers go very very deep. 
But as these models train and churn away, what happens is that the weights of the models change. When we are training the model, small shifts in those weight updates can potentially send the model off and cause a gradient explosion, which basically trashes the model. And the reason that's happening, and please don't ask me questions beyond this, is that the model is effectively moving along the gradient on a flat surface. The bigger these shifts, the more likely you're going to get that explosion where it goes off. What a manifold is doing is constraining things to run on a curvature rather than a flat plane, kind of like the Earth in that sense. So rather than exploding off and going off into space when you make those shifts, you're staying within that surface, and that keeps the model on track. That's effectively what's going on. The best analogy I can come up with is probably gravity. Think of an astronaut floating around in space: push them in one direction and they could just go off into deep space, and you'll never see them again. So what we do is say, "Aha, we've got a little tether on them, and we'll pull them back in." That's effectively what you're doing when you're training: the astronaut floats off, and you pull them back with these quick adjustments.
But in my analogy, what deep learning with manifolds gives us is that we don't need to pull the astronaut back in, because gravity is keeping the astronaut within the planetary space and they're not going to float off into deep space. So I think I've done a great job of confusing the listeners even more. And if you want to read my book on deep learning for idiots who don't know anything, feel free to check it out on Amazon.
>> That was great. I mean, that was brilliant. Again, I was telling all the guests before we started recording that this is a hard topic, and I'm really interested in how we explain it, because I think so often it's easy to get caught up in what the apps are doing or what the latest features are. There's some real fundamental research still ticking along in the background that we don't talk about enough.
>> And Tim, just before we jump on, I actually think that's one of the interesting things Thinking Machines is doing. They're taking a really different approach from all the other labs. If you saw their paper on, for instance, how to make LLM inference deterministic as opposed to non-deterministic, one of their blog posts, it feels as if they are fundamentally going back to each part of the training process, challenging the assumptions that we have, and releasing these mini papers that explain how they're trying to improve things at the micro level. And I think that's interesting and great.
And I'm sort of excited to ask: if all of these things add up, what is their model going to look like at some point, assuming they get there? So I think they're taking an interesting approach, one that's a little more scientific and engineering focused.
>> Yeah, absolutely, I totally agree. I was reading it, and I actually have my NeurIPS 2016 mug here, and it was kind of a weird throwback. I was like, "Oh wow, this is much more early-days research," like, let's just talk about the mathematics of this representation for 90 minutes. And that throwback is, I agree with you, Chris, very interesting, something quite distinct as an approach from a lot of the players in the space. Olivia, Chris has done his valiant best, I think, to explain what manifolds are in the training process, and if we can say anything about this blog post, it's that it's attempting to find a way to do this better. So do you want to speculate a little? Let's abstract away all the technical complexity for a moment. If we're able to stabilize and use manifolds more effectively, what does that mean for AI? What does it mean for training? What does it mean for fine-tuning? What are the practical implications, if you're just someone listening to this thinking, "space people being roped back," what does that mean for day-to-day AI?
>> Oh, goodness.
I'm not going to say that I know the answer, but just speculating, using some very old knowledge, I'd say it basically looks like models staying on target better: models not drifting off during training quite as much, and maybe not being as influenced by the latest data they're seeing. But honestly, I'm not quite sure.
>> If you look up close, even the Earth looks flat. And I think the idea here is that you're going to constrain things along that locally flat plane. It kind of reminds me of a book I really like, Foundation, where they don't predict every single event to set humanity on its course. They actually design the pathways in which civilization is likely to move, and then chaos itself becomes somewhat predictable. I think the same is happening here: instead of just throwing random weights around a very high-dimensional space, you let them wander around a very well-defined surface, take a sphere, where the manifold itself is predictable and up close looks like a flat plane. So it leads to, I would say, a more predictable path for chaos, if that makes sense.
>> I think training is really hard.
As much as we want it to be a science, and it is a science, there's a lot of variation when you're doing it, and anybody who's ever fine-tuned a model will know this. I'm a part-time fine-tuner, a hack in the evenings, and the sorts of things you need to think about are: what is the learning rate for this? How am I going to put the data in there to get the best output from my model? And it's hard to get that right. And actually, to the point, as you're trying to mix all this together, if you set the learning rate too high and it's aggressive, that's the sort of thing that will cause the gradients to explode. What we have seen in the past, especially with these big labs, and you'll have heard it before, is that you do a big training run, and about three months in it blows up. Something was wrong, the training run blew up, and you just lost a few million dollars. That sort of thing still happens today. It costs a lot of money to do these training runs, and it costs a lot if things go off the rails. So by being able to stabilize things in that way and have things become more predictable, the cost comes down, and it means we are going to get better AI in the future, to the point where they're trying to make things a little bit more predictable and deterministic. I think that's a big saving: we're going to get more AI, quicker, better, more reliably.
>> Yeah, absolutely.
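The dynamic Chris describes, an aggressive learning rate blowing a run up versus a manifold constraint keeping the weights pinned to a bounded surface, can be sketched with a toy example. This is only an illustration of the general idea, not Thinking Machines' actual method; the quadratic loss, learning rate, and unit-sphere radius here are arbitrary choices for demonstration:

```python
import numpy as np

# Toy sketch: minimize ||w||^2 with an oversized learning rate.
# Plain gradient descent diverges (a "gradient explosion"); projecting
# the weights back onto a fixed-radius sphere after every step keeps
# them bounded, which is the spirit of manifold-constrained training.

def loss_grad(w):
    return 2.0 * w  # gradient of the loss ||w||^2

def train(steps, lr, project=False, radius=1.0):
    w = np.ones(4)
    for _ in range(steps):
        w = w - lr * loss_grad(w)        # ordinary gradient step
        if project:
            w = radius * w / np.linalg.norm(w)  # pull back onto the sphere
    return w

free = train(steps=20, lr=1.5)                 # each step multiplies w by -2
pinned = train(steps=20, lr=1.5, project=True) # each step ends on the sphere

print(np.linalg.norm(free))    # explodes to roughly 2e6
print(np.linalg.norm(pinned))  # stays at the sphere radius, 1.0
```

Unconstrained, the oversized step doubles the weights (with a sign flip) every iteration and the run diverges; with the projection, every step ends back on the unit sphere, so the iterates stay bounded no matter how aggressive the learning rate is.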
Yeah, that's kind of how I thought about it: if you think about these training runs as rocket launches, it has to be a really precise thing or else it breaks at huge cost, kind of like launching itself into space, like you were saying.
>> I think you brought it back to my analogy, Tim.
>> Yes, exactly. You're welcome. That's where I was headed. All right, last topic of the day I want to talk about. There's a great publication called Works in Progress. They do a bunch of interesting investigative reporting and research on technology and how it happens. And this story, entitled "The Algorithm Will See You Now," which was published a few weeks ago by an author named Deena Mousa, really caught my eye. The article does a pretty simple thing. It basically says: look, a few years ago there was a lot of hype that computer vision technologies were going to replace all radiologists. The idea was: what does a radiologist do? They look at a scan, they try to find anomalies in that scan, and they label that scan. So surely, with computer vision, radiologists were going to be the first job to get replaced in the AI revolution. And the article says, okay, let's take a look at what happened. What's interesting, and I'll quote directly here, is that things have actually moved not in a flatline direction but in the other direction for radiologists. Quote: "Demand for human labor is higher than ever."
In 2025, American diagnostic radiology residency programs offered a record 1,208 positions across all radiology specialties, a 4% increase from 2024, and the field's vacancy rates are at an all-time high. And in 2025, radiology was the second-highest-paid medical specialty in the country, with an average income of $520,000, over 48% higher than the average salary in 2015. So this is a really interesting anomaly, and I think it violates a lot of our anticipations and assumptions about what AI is going to do to the job market. Mihi, I see you already coming off mute, so I'll let you get the hot take in first.
>> No, I was just saying, wow, I'm in the wrong profession. What was that salary?
>> Yes, $520,000. You should have become a radiologist.
>> But why is this? Not only is the compensation going up, but demand is going up, even at the same time that computer vision models are incredible now, right? So what do you think is happening here?
>> I think part of it is just the human interfaces. No matter how good an AI model is going to be at doing the job of a radiologist, it can only work through the interfaces provided to it. It doesn't have senses. It can't speak, can't interact with the patient, can't interact with other doctors, can't leverage its previous expertise with that particular patient. And unless something has either been written down or been built into an agent where every single input and output is defined, it's not going to have the same data as a real human doctor. And who's going to write that data down? Are you going to have nurses running around saying, "Oh, can you describe everything? Does this hurt when I touch you here?" while the model says, "Could you please give me a bit more about your background? Has your mother had similar issues?" So I think part of it is establishing the right interfaces and establishing trust, and no matter how you look at it, an AI model on its own is not going to be able to fill that role. Maybe it can enhance, give a second opinion, or work together with a radiologist to provide, I would say, a second verification layer on their assessment. But I don't think we're at a point where we can say that, no matter how good these models are individually, they're going to be able to replace all the human-interface things these folks are doing.
>> Olivia, do you have any reflections on this piece? I guess one of the questions, building off what Mihi said, is: okay, maybe the history here is that computer vision didn't have this effect, but even Mihi said it himself, we didn't have an agent that would go ahead and collect all this context and be this interface. Is this maybe a temporary phenomenon? Could I say, look, as agents get better, radiologists really are going to be in trouble?
>> I think that's hard to say, especially because you're not just talking about diagnostic accuracy; you're also talking about trust, and whether or not we can actually put critical decision-making in the hands of machines. And I think, as a society, we absolutely have not settled on an answer to that question. Classically, I think the limitation of machine learning to solve these problems has been the bigger issue.
I think the first time I saw the "can we replace radiologists" claim, a lot of the points made were that the scans being used all tended to come from the same hospital. So they had certain artifacts on them that were causing the model to use absolutely the wrong features to differentiate. More recently, when you look at the way LLMs are doing things, yes, they're able to get that kind of diagnostic accuracy a little bit better. But in general, you're always going to end up doing the low-hanging fruit with machine learning before you're able to replace experts. So if we want to build systems that we trust, those systems should only be doing things they have extremely high confidence in, and the difficult cases should be left to humans, who have the ability to bring in more data than you could ever build sufficient integrations for in an agent. If we were talking about a world where we truly have AGI, then maybe it's a different story. But I don't think LLMs today, and granted, this is a personal opinion that is widely debated in the field, I don't think LLMs today represent that AGI.
>> I think you're spot on about today, Olivia, but I think there comes a point when we know the AI is better than the humans, and then we should be handing over some of that. And I know that sounds really harsh, and it's definitely not an IBM view, it's a Chris view, we're going to clarify that right now. But let's imagine this for a second. Let's say you want to play a game of chess, right, Tim? Who would you rather play at chess?
Would you rather play Magnus Carlsen, or would you rather play Stockfish? Who would you rather play?
>> I mean, if I had a choice, I guess Magnus Carlsen, right?
>> Great. Now, the future of humanity: we're going to have a chess game against some alien that has come from a different planet, and it's either Magnus Carlsen or Stockfish that's going to take them on. And by the way, whoever loses that game, the planet is gone. Are you choosing Magnus or Stockfish?
>> Uh, I guess Stockfish then, right?
>> Exactly. And therefore, I think there comes a point where, if you're using AI for entertainment, that's fine, or you're using it for productivity. But there comes a point where, if something is better and life is hanging in the balance, you should be using the absolute best tools at your disposal to solve that problem. And I agree with you completely, Olivia, 100%: we are not there just now. But there will come a point when we are, and then the moral question will come around to whether we should be putting that in the hands of the AI versus the human, because the AI is going to get it right more times than the human. We're not there yet, but that question is coming.
>> I think I fundamentally disagree, and I think it's because machine learning is fundamentally probabilistic and humans are not. So I think there is always going to be an error rate with machine learning techniques as they have currently been developed, and I think it is highly unlikely that we are ever able to fully account for that.
There would have to be some other advance, some other technology, that would allow an LLM-type technology or a machine-learning-type technology to act like a human and make actual conscious decisions. So I don't think it's a matter of what's best. Radiology is fundamentally subject to interpretation to a certain degree. I've recently had to get scans of various things, and the thing I've noticed is that one radiologist will look at the scan and say, "Hey, I'm noticing this one thing," and another radiologist will look at the same scan and say, "Hey, I see something different." And one of my doctors pointed out recently, as we were looking at a spinal image, "I'm actually not sure personally whether this is T7 or T8, because I don't know which one we're looking at." The only way you could know is by talking to the original imagers and asking: where was this person positioned? How can you tell the difference between these vertebrae? So maybe you can imagine an agent being able to do that, but the amount of data you have to pull in to make that decision as well as possible is actually not as trivial as we think. It's actually a huge engineering challenge to bring in sufficient data to replace radiologists.
>> It's context, communication, and accountability. But we also have to look at bugs, vendors, and hackers.
For example, one of the healthcare systems here got hacked, and they took down the whole system for more than a year. Every system had to be disconnected, because most of the systems were still running on Windows XP, and the moment they got networked they got crypto-locked, along with all the radiology equipment. Everything had to go through human doctors, disconnected from the network: you would go in, do something, write it down, take the note, rip it out, give it to the next doctor, and so on. So even if we wanted to do this today, I think it would take more than 50 years to roll it out in a well-governed way to all the hospitals in major well-developed countries, to the point where it can be reliable enough. And if you don't have your radiologists to fall back on, those systems are going to continue to remain vulnerable to hackers, and to vendors who are going to say, "Oops, we're going to turn off your radiologist if you don't pay your token bills." So I see a system where it's going to be hybrid: the AI is going to provide a review or help with things like triage, measure objects, or give a second opinion, but the accountability, the context, and the communication will fall into the hands of the human.
>> Well, we'll have to see. I think we're going to check in, basically, if we're still running at that point, in a few years. I do want to revisit this again and see where we are, because I think everybody here has put out quite different visions of what might happen in the future, and I think it points to just how uncertain all of this ultimately is. Well, that's all the time we have for today.
Mihi, Chris, good to have you on the show as always. Olivia, hope to have you back at some point. Thanks for joining, all you listeners. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.