OpenAI's Open-Source Shift Debate

Key Points

  • The Mixture of Experts podcast introduced its latest episode, featuring experts Chris Hay, Kaoutar El Maghraoui, and newcomer Bruno Aziza to discuss rapid AI developments.
  • The panel highlighted several breaking stories, including Genie 3, Claude Code rate limiting, Mark Zuckerberg’s “superintelligence train,” and the headline news of OpenAI’s release of two open‑source models (120B and 20B parameters).
  • Kaoutar noted that OpenAI is balancing competitive pressure to open up with ethical responsibilities to contain powerful capabilities, suggesting a cautious but possible shift toward openness.
  • Bruno emphasized that the open‑source move reflects a broader industry trend aimed at expanding enterprise engagement, though he stopped short of predicting full openness by 2030.
  • Chris expressed a dissenting view, indicating that not all experts agree on whether OpenAI will transition to an open‑source model in the near future.

**Source:** [https://www.youtube.com/watch?v=Dtr0scHQVXc](https://www.youtube.com/watch?v=Dtr0scHQVXc)
**Duration:** 00:43:31

Sections

  • [00:00:00](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=0s) **AI News Roundup: GPT‑OSS & More** - Tim Hwang’s Mixture of Experts podcast previews the week’s biggest AI stories—gpt‑oss, Genie 3, Claude Code rate limits, and Mark Zuckerberg’s superintelligence push—joined by a panel of leading tech experts.
  • [00:03:10](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=190s) **OpenAI's Open‑Source Dilemma** - The speakers debate whether OpenAI will fully open‑source its models by 2030, weighing profitability and competitive advantages against the risk of losing market share to emerging open‑source AI alternatives.
  • [00:06:14](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=374s) **Defensive Hybrid Strategy & Branding** - The speaker urges firms to adopt hybrid, open‑source AI models and diversify beyond consumer‑only offerings as a defensive move against competition, while highlighting the importance of branding in a market trending toward vertically integrated, Apple‑like AI ecosystems.
  • [00:09:19](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=559s) **Edge AI vs. Backend Giants** - The speaker contrasts lightweight, consumer‑grade models that run quickly on edge devices with massive multimodal back‑end systems, noting the former’s speed and platform flexibility but limited competitiveness, while highlighting brand benefits and the continued need for large backend models.
  • [00:12:23](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=743s) **DeepMind Unveils Genie 3** - The speakers discuss DeepMind’s new Genie 3 model, which generates immersive 3‑D worlds from textual descriptions, and debate whether it represents a groundbreaking research advance or merely an impressive demo.
  • [00:15:28](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=928s) **Consumer vs Enterprise Video Generation** - The speaker debates whether AI‑driven video and 3D world generation will stay a professional, enterprise‑only capability or evolve into an everyday consumer tool, referencing recent product launches and seeking expert perspective.
  • [00:18:39](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=1119s) **Immersive AI Worlds for Enterprise** - The speaker envisions AI‑driven, infinite 3‑D environments as the next evolution for corporate training, onboarding, and sales communication, while cautioning that the required compute could make the solution very expensive.
  • [00:21:47](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=1307s) **AI, Quantum Hype and Claude Code Limits** - The speaker compares AI to a “machine God,” speculates about quantum breakthroughs for gaming, then critiques Anthropic’s new rate‑limit policy on Claude Code for $200‑per‑month pro users.
  • [00:24:52](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=1492s) **Optimizing AI Model Costs** - The speakers discuss leveraging hardware improvements, adaptive token caching, and continuous software optimization to lower the high subscription fees of AI models, framing it as a race to keep pricing sustainably low.
  • [00:27:57](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=1677s) **Costly AI Coding Agent Overload** - The speakers discuss how running many AI coding agents across Claude and ChatGPT leads to soaring expenses, constant rate‑limit hits, and strategic decisions about model usage and subscription tiers.
  • [00:31:11](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=1871s) **Balancing AI Performance and Cost** - The speaker outlines the technical hurdles of delivering fast, affordable generative AI at scale—such as batching, compiled execution, and tiered routing—while emphasizing the steep expense of AI‑driven search versus traditional methods and urging careful assessment of use‑case value.
  • [00:34:16](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=2056s) **Comparing Corporate Superintelligence Visions** - The conversation highlights differing approaches to superintelligence among OpenAI, Meta, and Anthropic, with Bruno noting varied interfaces like Meta’s glasses, privacy concerns, and the expansive data collection underlying each strategy.
  • [00:37:23](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=2243s) **Multi‑Device Future and Subscription Robots** - The speaker argues that brain implants, glasses, neck wearables and phones will coexist, with smart glasses taking a leading role, while subscription‑based robots costing around $2,000 a month will handle errands in a multi‑device economy.
  • [00:40:31](https://www.youtube.com/watch?v=Dtr0scHQVXc&t=2431s) **Skepticism Over Meta's AGI Hype** - The speaker voices cautious doubt that current AGI announcements are more marketing than breakthrough, while acknowledging Meta's hardware push—especially AR glasses—as a potential platform for a personal AI operating system.

Full Transcript
[0:00] It's going to be crazy time. That's the world I live in already. So I'm now just jealous that I couldn't... I could have been running agents in the background 24/7. The window of opportunity where people were literally just taking money out of, you know, Dario's pocket. Basically, that could have been me. It could have been me.

[0:18] You could have been a big star, Chris. All that and more on today's Mixture of Experts.

[0:28] I'm Tim Hwang and welcome to Mixture of Experts. Each week, MoE brings together a panel of the smartest and wittiest voices in technology to explain, debate, and analyze our way through the truly overwhelming wave of news each week in artificial intelligence. Today, I'm joined by a stellar crew: Chris Hay, Distinguished Engineer and CTO of Customer Transformation; Kaoutar El Maghraoui, Principal Research Scientist and Manager for Hybrid Cloud Platform; and joining us for the very first time is Bruno Aziza, Vice President, Data, AI and Analytics Strategy. We have a packed episode today, and in fact, we're going to be publishing early, given all the news that's come in, I think, in literally the last 72 hours. We're going to talk about Genie 3, Claude Code rate limiting and Zuck getting on the superintelligence train. But first, let's talk about the big news of the week, which is gpt-oss.

[1:20] So let's get into gpt-oss. I really want to bring up this topic because I think this is a little bit like GPT-5, where it's been rumored that OpenAI has been working on this for months and months and months, and it is finally here. The quick recap of the news is that they've released two open source models, a 120B model and a 20B model.
[1:41] And just to do our usual quick round-the-horn question, the one that I want to get from all the panelists is basically: how big of a trend is this? And the question is, in the next five years, will OpenAI have transitioned fully into being an open source play versus a proprietary model play? Kaoutar, what do you think?

[1:57] Maybe. I think so, because OpenAI is walking this tightrope between competitive pressure to open up and also, ethically, the responsibility to keep, you know, these dangerous capabilities contained.

[2:10] It's a good thought. Bruno, predictions for 2030: is OpenAI fully open source at that point?

[2:17] Well, first, thanks for having me. I'm really excited to be part of this crew. Second, it's really hard to predict the future, so I'm not going to take a chance on that. But what I will say is that I think it's indicative of a trend here in this market, where OpenAI, I think, is seeing the opportunity to do something a little different and probably get access to the enterprise issues that we deal with. And so I think overall for the industry, it's a great move.

[2:41] Great. And Chris, finally, what do you think?

[2:43] No.

[2:44] Okay. So we've got a real difference of opinion here. Chris, as usual, being a little spoiler on it. Chris, do you want to put forward the argument? Like, this open source thing... I don't know, some people have been saying this is just marketing for them. I don't know if you agree with that take.

[2:59] I don't think it's just marketing. I think the open weight models are really, really important, and I love gpt-oss. They have done a fantastic job, so I expect them to continue on that trend. I hope they continue on that trend. But will they go fully open source by 2030?
[3:21] I doubt it, because they're going to want to keep the big models to themselves. They want to keep their competitive advantage, and they want to be able to make money. But I applaud the move.

[3:33] Kaoutar, this is a little bit of a dangerous move, right? I think, to Chris's point, ultimately OpenAI needs to make money, and it sure feels like releasing these very, very performant oss models really does kind of compete with their core products, right? Like, aren't some of their customers going to just adopt OpenAI open source rather than having to pay them? How sustainable do you think this sort of thing is?

[3:57] I think maybe they were pressured to do these things, because OpenAI stopped short here, with no access to training data, architecture design or ecosystem-level tooling. So they may win, you know, the short-term market share, but long-term influence is the question in this global AI world, where competing open models and governance-backed open initiatives are already filling that void. So it is, I think, an issue with the competition, because we've already seen Mistral and, you know, Meta, and their models are also doing very well. So if they keep everything closed, they might lose that competitive edge. But again, here, they don't give access to the training data or the ecosystem level. And I think providing the open weights is very important, you know, because they can also get researchers to play with their models and fine-tune. So that is, I think, an important play for them to have, and they will continue on that. Whether they will become fully open source eventually... I kind of agree with Chris. Maybe not fully open source.
[5:10] They might maybe adopt a hybrid strategy because of this competitive pressure. Because if the US firms don't open up completely, DeepSeek and the Chinese alternatives can dominate the open AI ecosystem, not just technologically but also culturally. So there is, you know, this legitimacy pressure that is kind of pushing them to open source some of their models. And the open weight release is a great initiative, I feel.

[5:34] Yeah. Bruno, I see you nodding.

[5:36] Yeah, yeah. I'm going to provide maybe a little bit of a different perspective on this, because, you know, I spend a lot of time with customers, and I think the future is hybrid, right? There's not going to be one model to rule them all. There's not going to be one deployment model. And often we're asked to make these choices, one or the other. I think the enterprise is going to get both. And here's why, in a way, it could be interpreted as a good move from OpenAI, at least for the customers. One is they have to do this offensively, right? You've got a lot of consumers that are familiar with the model, and they're using it all day long, kind of like when we started using the iPhone. And now they have the opportunity of: well, now you could be in the enterprise, you can use that model for yourself. And so I think it's a great way for them to lean on the familiarity of consumers using their model. So I think that's one. The second one, I think, as you were saying, is defensive: if they don't do this, there's a lot of competition in this space, and they might lose what could be, in fact, a very profitable market in the enterprise today. I mean, doing this for the consumers, as everybody knows, is a fairly expensive game.
[6:41] And so I think they ought to do it for their business, to diversify their approach and really focus on what we know the future is, which is not just cloud, and it is not just closed. It is hybrid, and it's hybrid forever.

[6:53] Yeah, for sure. I mean, that defensive point I think is worth pushing on. Chris, I think it's a really interesting development. I did want to talk a little bit about kind of the branding, in some sense, of these models. Like, what I expect, what I hear, is that you're going to be able to get gpt-oss on watsonx pretty soon, right? Which is pretty different from the way I was thinking about this market evolving, which is that we were going to see a little bit more of what happened in mobile, right, where OpenAI was going to be the Apple. They even hired Jony Ive, right? Like the Apple of AI, where everything is going to be vertically integrated, and you can only touch their models through their infrastructure. It seems like that wall has fallen. It seems like we're not moving towards Apple world in the market for AI. Is that a good way of thinking about it? I don't know.

[7:42] I mean, the way I like to think about this one is: these models are very specific to consumer-grade hardware, right? They have been specifically designed for that. So if we take the 20 billion parameter model, it's designed to run on a machine with 16 GB of memory and a single GPU, and they've even quantized it down to that level. And then even the 120 billion parameter model is designed to run on a single A100 card, so you can go and fine-tune that and not have to split it across multiple cards. And it's actually even designed to run on a high-end MacBook Pro.
[8:19] I'm running that on my machine at home. Not my IBM-issued machine, but my personal one, to be clear. Take that how you want, IBM. So it is designed specifically for that, and they've had to make trade-offs as well. If we look at the model there, you can see it's a text-only model; it's not a multimodal model in that sense. It is specific to the English language, and it is focused on code and it's focused on agents, as you can imagine. So it's hugely designed, one, to be fast, but it's also designed for tool calling. And in that sense it gives away a lot. It's a reasoning model, which again is a great move, because it gives away a lot of the architecture that they're running in the back end there.

[9:12] And actually, the fact is it's a mixture of experts model, where we get to see the number of experts that they have. And the number of active experts is actually tiny. The number of parameters that are active at any point is really tiny, which gives you such fast speed. So this is a model that's specifically designed for consumer-grade hardware and edge devices, etc. You know, it's designed for our usage. And now if we compare that to the back-end models that they're running: they're multimodal, they're going to be much larger, they're going to span across multiple H100s. They're going to be multi-language, and they're going to have a lot more data put in there as well. So I don't think these models that run on our machines are going to be competitive with their back-end systems. Now, it's great that they have equivalence to kind of the o3 and o4 minis, which is great, but again, I think they're not giving away a lot in that sense. So we get to feel good.
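Chris's point about the tiny active-parameter footprint is easy to make concrete with back-of-the-envelope arithmetic. The totals and active counts below are the figures OpenAI published for gpt-oss (roughly 21B total / 3.6B active for the 20B model, and 117B total / 5.1B active for the 120B model); treat them as approximate, and the helper function as an illustrative sketch rather than anything from the episode.

```python
# Why a mixture-of-experts (MoE) model is fast on modest hardware: the router
# selects only a few experts per token, so the "active" parameter count is a
# small fraction of the total. Parameter figures below are approximate values
# from OpenAI's gpt-oss announcement and are used purely for illustration.

def active_fraction(active_params: float, total_params: float) -> float:
    """Share of the model's weights that participate in a single forward pass."""
    return active_params / total_params

# gpt-oss-20b:  ~21B total parameters,  ~3.6B active per token (4 of 32 experts)
# gpt-oss-120b: ~117B total parameters, ~5.1B active per token (4 of 128 experts)
for name, total, active in [("gpt-oss-20b", 21e9, 3.6e9),
                            ("gpt-oss-120b", 117e9, 5.1e9)]:
    print(f"{name}: {active_fraction(active, total):.1%} of weights active per token")
```

Only around 4 to 5 percent of the 120B model's weights run on any given token, which is why generation stays fast; note that all 117B weights still have to fit in memory, so the single-A100 fit Chris mentions comes from quantization rather than from the expert routing.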
[10:15] We're going to be able to go and build agentic systems. You're going to build up great brand affinity with OpenAI, but ultimately you're probably going to still be using some of the large models on the back-end system. So I think it's a great move, and I absolutely applaud that you can run it on other people's platforms as well. I think that's going to come through. So yeah, I'm excited.

[10:39] And if you look at the enterprise usage patterns, you're now going to get a lot of people that have been experimenting with the models, the OpenAI models, that can take this home, or into their own environment, and actually optimize for cost in their own environment. So I actually think for customers, there's a lot of upside to this.

[10:59] Definitely, I agree. Basically, I think it is a very strategic move. I agree with what Bruno and Chris said, because they're facing, you know, this increasing competition from the powerful open source models. So when they release these things, they can, first, recapture goodwill from the open source community; second, compete directly with these other open source players; and third, drive adoption of their technology on a wider scale, like Chris and also Bruno mentioned, especially in enterprises that have strict data privacy and security requirements and need on-premise solutions. And it's still going to be on OpenAI rails, so it's really important for them to, you know, continue that adoption. And I think it's also about framing the narrative around democratic AI, and reinforcing US leadership in the field, which is a politically savvy move here, because we don't want all the open source models to come from other countries. So I think they're trying also to reinforce US leadership here.
[12:02] So this open-weight distinction, I think, is crucial, because OpenAI is still trying to maintain a level of control and competitive advantage with their proprietary models like GPT-4, while still trying to reap the benefits of a more open approach. So it's like a have-your-cake-and-eat-it-too strategy.

[12:21] Yeah, we're going to have to see whether or not they can walk this tightrope. It's going to be very, very interesting to see.

[12:29] I'm going to move us on to our next topic. I really wanted to cover this very, very interesting thing that just popped up, I think, just earlier this week. So DeepMind launched a blog post describing the latest edition of an open world generative model they call Genie 3, and I super encourage you to go online and look it up. I could describe it, but words are not going to do a good job describing it. I'm going to attempt to anyway, to set up the discussion. The Genie generation of models that DeepMind has been working on is, I think, for me, a truly magical demo, where the idea is you sort of describe what you want, and then it basically creates an immersive 3D world that you can sort of walk around in and navigate in, on demand, basically. Which, as someone who played a lot of video games growing up, is a truly wild idea: that you can basically just say, I would like this kind of virtual environment, and that virtual environment just appears out the other side. Bruno, maybe I'll toss it to you. This is a very impressive demo. Why is this important from a research standpoint? Is this kind of just a toy, or should we actually be more focused on this for more reasons than that?

[13:39] It's a really cool demo. It's a big deal.
[13:41] I'll admit I'm a little biased, of course, because I just came from Google, so I do experiment with a lot of the Google technology, and I think this move to immersive generative models for video like this is a big deal. It's a big deal on a few dimensions. One is the way that we experience information. I don't know about you all, but I use NotebookLM to prepare for some of my conversations, and NotebookLM is going to get a video mode. When I present information... I just did a presentation this weekend for one of my kids, and I used a video model from Google inside slides in order to communicate. So the way we get influence, and how we consume and communicate, I think is huge. Now, this model, I guess, is not available yet, which is a little bit different, right? When it's available, I guess we'll all get to play with it, but that's not atypical of them. And it's going to have an impact not just on how we communicate and consume, but on how you think about the experience in movies and games. And you can change on the spot what the experience is going to be. And so I can't wait to see what people are going to do with it, because beyond just the consumer aspect of it, I also see our ability to communicate more effectively and experience different things, and I think it's not going to take very long before it changes the way we think about information.

[15:09] Bruno, if you are using PowerPoint to communicate with your kids, then IBM is the right company for you. Welcome aboard.

[15:18] I try to influence them any way I can. You know, when words are limited and language isn't efficient, I've got to use images and videos.

[15:28] That's right. I mean, just imagine. It's like puppy play: say it after me. Puppy. Next slide. Dog.
[15:36] As you see in the next slide.

[15:39] Well, I think this is a question that I had, and this is, I think, a discussion that's been playing out in the video gen space as well, which is: is there a consumer market for video gen? Like, ultimately I could see, right, Bruno, game designers using this, and, you know, VR designers using this. But on a day-to-day basis, do we think consumers are going to want to be able to just generate 3D worlds on the fly, in the same way that, do they really want to generate video kind of on the fly? That seems to be a really big question to me. And I know, for example, Grok just announced their video generation feature, and they're selling it as, oh, this is the new Vine, right? Like, if you like short-form video social media, this is it, but on demand, I guess. Chris, the question to you is just: do you think ultimately this kind of tech is an enterprise thing, for professionals that are designing movie and game experiences, or do you really envision a world where this is consumer? Like, you log on to your computer and say, I'm going to just type in the game that I want to play, and the computer generates it.

[16:37] I think this is so transformational that Google should change their company name immediately to jump on this trend. That's what I think should happen. Maybe some kind of metaverse... Meta. I don't know. I don't know.

[16:50] I actually do think this is really important. Right? Which is that I think 3D is the natural next space for AI models, because you're going to want to interact with things. And as you're imagining new things, you can start to say, okay, I'm going to want my code to run over here.
[17:08] Or maybe you're an architect: you want to design your building, and you want to see how it looks. Maybe you're designing your kitchen: you want to bring those models straight in and imagine how it's going to look, and here's the placement there. So I think there is an enterprise case for this. I think there's a consumer case too, actually. Just hanging out: I want some chill vibes, etc., I want to customize this space for me, play some music that's generated, of course, and then just immerse yourself in these spaces. So I really do believe that reality is here. And again, even simple things like this podcast: the four of us are in, you know, 2D spaces, but we could be in the same space together, interacting with each other and throwing things and all that sort of thing. So I do think 3D is the space that is going to become super important.

[17:54] I guess, with Veo 3, the demos looked incredible. What wasn't clear to me was how real-time that was. I don't know if that video was sped up or not. So that would be one thing. The other thing is, I imagine that the amount of compute used to generate those scenes is probably, you know, heating up small countries as we speak. So I think that's probably going to be the kind of blocker there. But it's very early in this technology, and I imagine that it's going to improve in time, and it's going to get faster, and it's going to be cheaper to run, in the same way as LLMs have done over time. So I'm excited about this. This is really where I think the world is going to go. And I think it's going to lead to the personification of our AI helpers and all that sort of thing.
[18:52] This is where I want to be. You know, think about today in the enterprise: the world of training. How many employees are happy about the training programs they have to attend and the videos, or the onboarding experience, or even internal communication? Right? If you're in sales and you want to communicate and pump up your sales team, this is going to open a whole new type of world, I think, that we haven't seen just yet. So it's really a big deal, I think, in how we consume, but also in how we influence and give people an experience that is very different from what they've gotten in the 2D kind of model that we're in today.

[19:25] I did want to pick up on this question of cost. So I saw the demo and immediately was messaging my friend, being like, oh, imagine this future world where you, like, subscribe, and it's a massively multiplayer online world, but it's infinite. You can go in any direction, right? Because the computer just keeps generating more world for you to explore. And we're like, oh, that'd be incredible. And then my friend was like, well, you know, the problem is it's going to cost you $1,000 a month, it's going to cost you $2,000 a month, because the amount of compute you need to generate this at any level of, you know, eye-popping detail is still very, very expensive. And so I guess I have a question for you on just how quickly you think the costs will come down. It's kind of relevant to whether or not this will become a consumer thing, or really even becomes a thing where, you know, Chris, you're almost casually like, oh, I just want a virtual environment for a meeting that we're going to have. You know, that kind of implies a cost of producing these things which is way cheaper than where we are right now.
[20:19] Do you think those costs will come down quickly, or is it actually a pretty hard problem from here to mass distribution at a pretty inexpensive cost?

[20:27] Yeah, I think... I mean, just with regular generative AI, we're already, right now, kind of struggling with the cost of inference and inference scaling. So there's a lot of effort, you know, to reduce that cost for generative AI, because even inferencing right now, with the token generation and so on, and the sequential nature of that... it is a tough problem to solve. And, you know, how long is it going to take for that cost to go down? I think it might take some time. I don't know if quantum technology can help with some of these things, to accelerate some of these simulations, and, you know, quantum machine learning. That would be really cool, once we get to quantum advantage and useful quantum. So, yeah, I don't know exactly how long. I think there might be maybe some breakthroughs in hardware and memory technologies and so on, and in the bandwidth issues that we're facing right now with generative AI. But for sure, this is very exciting. And I think it's a glimpse of the future of content generation, where anyone can become a game designer, a world builder, with a simple text prompt. And of course, if done right, and done cheaply, I can see this becoming huge in both the consumer space and the enterprise space.

[21:47] Yeah, I love this comment about, like, well, we may need quantum to get this to work. You know, it's almost like with AI, we've created this machine God, and we're like, well, we really need you to generate recipes for, you know, cooking dinner.
And it's a little bit like, in order to get these video games to work, we really need quantum, these massive, massive technological leaps, to achieve something very everyday, which I think is very important in its own way.

Let's move to the next topic. This is actually following in some ways on the theme we've been talking about. I think the way to introduce this is to say: I love Claude Code. One of my favorite technologies of this era has been Claude Code. And Claude Code turns out to have these power users who are running Claude Code agents 24/7, 365, many instances all at the same time. And this really fascinating policy change took place where Anthropic basically came out and said: look, if you're on Pro, if you're on our Max plan, our $200-a-month plan, what we're going to do is implement some rate limits. You can only get a certain amount of access to our models, a certain amount of access to Claude Code, a certain amount of access to our base models. And this obviously created a little bit of a clamor. But the main thing I wanted to talk about, which I think is a really fun discussion, is this: when $200-a-month plans hit the market, people were like, this is crazy, who's going to spend this kind of money on not one but multiple services at this rate? People then said, okay, the reason you do that is because you have to pay for the cost of all this infrastructure; it turns out to be really expensive if it's not VC-subsidized. How I read this rate limiting is: even once you raise the price to $200, it's still really hard to make this sustainable because of how much people use AI. Is this sustainable?
What's the real cost that we'll eventually need to pay for the dollars and cents to work out on these proprietary models?

I think there are a few things here. First of all, we have all used Claude Code. It's a terrific model. And I think it's kind of a victim of its own success, to some extent, right? If you look at the stats they shared, they've had seven outages. So clearly this is being a victim of your own success. And this new limiting model is only going to affect 5% of people, so in fact most people won't even see it. Now, if I look at my own usage and the usage of my customers, it is a challenge, because it's so good, you just use it all the time. And if you do the math on the cost per token, it's really going to be difficult for these models, the better they get at giving you the answer, to figure out how they're going to monetize this. So I think we're getting to that level: they're trying to figure out where the monetization is, and there's clearly pressure to get there. I like what you said earlier: will you be able to figure it out through optimization, through hardware? We've seen this before. In the analytics world, I did a startup that had something called adaptive caching. I'm waiting to see when we're going to have token caching, and we're going to start seeing ways that we can use software to optimize the interface between the query, the request, and the infrastructure cost of it. And I guess that's what's probably going to happen next for these models.

Yeah, totally.
I mean, what you're describing is, I think, a really interesting catch, and I'm curious about any thoughts on this. It's almost like a race against time, right? What I mean by that is: either all of us on a call are paying $4,000 a month for a subscription, or they figure out a way to optimize it to keep the cost sustainably lower. And by sustainably lower, we might mean $200-plus still. Kaoutar, is that kind of the world? Who's going to win that race? Are we going to end up in a world where it's like, yeah, you just have to pay $4,000 a month for this?

No, I think it's going to be an ongoing optimization effort. These companies have all this massive infrastructure, so they're continuously trying to reduce their own costs, and of course that's going to be reflected in their prices. So what Anthropic did, I think, was necessary and inevitable, because it shows a sign of the maturation of the AI market. The drama around this rate limiting I think is a bit overblown, because it's a simple matter of economics. These models are incredibly expensive to run, and a small number of super-users can make these fixed-price subscriptions unprofitable. So it's the end of the free lunch. The early days of generative AI were characterized by a land grab for users, where companies were offering generous free tiers to really grab the user base. But now that the market is more established, companies are really focusing on profitability.
And so it means more restrictions, more tiers, and clearer connections between price and usage. So I think in this market for pro users, where the $200 price is creating a clear distinction between casual and professional users, it's not just about more usage but also access to more powerful models, new features, better support, and so on, and that comes with a price. But of course, it's going to be an ongoing race: how do we optimize these models, like Bruno mentioned? The caching of tokens, all of these techniques that we're also leveraging in research, figuring out how you do the KV cache optimizations and the token optimizations and all of these things together, in vLLM and other infrastructures. That's going to be really important to continue to advance, to be able to lower the cost, whatever it takes.

I want to talk a little bit about what this implies, not on the provider side. We've been talking about a lot of infrastructure, cost-benefit, profits. Chris, do you want to talk a little bit about what this implies about usage? It's a little bit crazy to me, because what Anthropic is suggesting is that there are people who are running just a truly mind-boggling number of coding agents, to the point where it's kind of breaking their bank. Is that the right way of reading it? I don't know if your experience with Claude Code is similar, where you're spinning up 100 instances to run 24/7 for you?

No, I'm on the 80-bucks plan for Claude Code, and I was thinking about the 200-bucks plan.
And, you know, if I thought I could have done that, I would have done that. I would have run my agents in the background all the time. So now I'm not going to upgrade to the 200 plan, because I already have stress with Claude. I've got the 200 bucks on ChatGPT, but I refuse to bring that down, just in case GPT-5 comes anytime soon. So I'm just stressed all of the time. I hit the Claude rate limits all the time today. In fact, I know what's going to happen on the 200-buck plan. It's going to go like this: oh, I was using Opus to answer this question, but I really feel that Sonnet could answer this question, I'm going to give it to Sonnet. And then you'll be like, oh, I'm just creating unit tests now, I'll give that across to ChatGPT, it can do the unit tests, I'm not going to burn my tokens on Claude with that one. And you're going to start to really think about which model you're using for which use case, et cetera, and then you're going to automate that within Claude Code and say, okay, now I want this routed over here, I want this over there. It's going to be crazy time. That's the world I live in already. So I'm just jealous that I couldn't... I could have been running agents in the background 24/7, during the window of opportunity where people were literally just taking money out of, you know, Dario's pocket, basically. That could have been me.

It could have been me. You could have been a big star.

Chris isn't that good, though. I mean, I just want to double-click on what Chris just said. Shouldn't we be thinking about optimization and not wasting resources? It feels to me like an exchange in value, right? If I'm going to pay $80, I don't want to waste that.
And I know somebody else is paying for it. But at some point, and I think about this a lot for enterprise customers, they're trying to hit that line of: am I getting more value than what I'm paying for? And I think Anthropic is trying to figure out where that is. Where's the ceiling? Where's the floor?

But what I would say is: I'm paying for artificial intelligence, I don't want to be the one thinking about that. That's the AI's job.

I mean, actually, this is worth mentioning... oh, sorry, Kaoutar, I'll let you in. The original vision was intelligence "too cheap to meter," right? That was the Sam Altman slogan. We're in a world right now which looks very different from that. But sorry, Kaoutar.

Yeah, I think with these optimizations, of course, the end users don't need to think about these things. But for the companies delivering these platforms, it's really crucial. These custom optimizations, I think, are a multi-front battleground: things like KV cache management through reuse, compression, quantization, pruning, speculative decoding, disaggregated storage. These techniques are like top-tier weapons here. And how do you combine that with batching and compiled execution and tiered routing and all of that? There are a lot of techniques here, and it's not an easy thing, because with generative AI you have prefill and decoding, which add additional complexity, especially as context lengths keep increasing. So I think the winners here will deliver scalable, fast, and affordable AI at real-world volume.
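[Editor's note] The KV cache idea Kaoutar mentions can be made concrete with a minimal sketch: a toy, dimension-4, single-head attention decode loop in plain Python. The matrices, sizes, and names here are purely illustrative, not any production system's code; the point is only that without a cache, step t re-projects keys and values for all t prefix tokens, while a cache projects exactly one new key/value pair per step.

```python
# Toy sketch of KV caching in autoregressive decoding (illustrative only).
import math
import random

random.seed(0)
D = 4  # toy hidden size

def rand_matrix():
    return [[random.gauss(0, 1) for _ in range(D)] for _ in range(D)]

Wq, Wk, Wv = rand_matrix(), rand_matrix(), rand_matrix()

def project(x, W):
    """x @ W for a single length-D vector."""
    return [sum(x[i] * W[i][j] for i in range(D)) for j in range(D)]

def attend(q, Ks, Vs):
    """Single-query softmax attention over the cached keys/values."""
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(D) for k in Ks]
    m = max(scores)
    ws = [math.exp(s - m) for s in scores]
    z = sum(ws)
    return [sum(w * v[j] for w, v in zip(ws, Vs)) / z for j in range(D)]

def decode_no_cache(xs):
    """Recompute all prefix K/V at each step: 2*t K/V projections at step t."""
    outs, projections = [], 0
    for t in range(1, len(xs) + 1):
        Ks = [project(x, Wk) for x in xs[:t]]
        Vs = [project(x, Wv) for x in xs[:t]]
        projections += 2 * t
        outs.append(attend(project(xs[t - 1], Wq), Ks, Vs))
    return outs, projections

def decode_with_cache(xs):
    """Append one K/V per step and reuse the rest: 2 K/V projections per step."""
    Ks, Vs, outs, projections = [], [], [], 0
    for x in xs:
        Ks.append(project(x, Wk))
        Vs.append(project(x, Wv))
        projections += 2
        outs.append(attend(project(x, Wq), Ks, Vs))
    return outs, projections

tokens = [[random.gauss(0, 1) for _ in range(D)] for _ in range(6)]
slow, n_slow = decode_no_cache(tokens)
fast, n_fast = decode_with_cache(tokens)
# Identical outputs; 42 K/V projections without the cache vs 12 with it.
assert all(abs(a - b) < 1e-9 for rs, rf in zip(slow, fast) for a, b in zip(rs, rf))
assert n_slow == 42 and n_fast == 12
```

Production serving stacks such as vLLM manage this cache at scale, adding paging, prefix reuse across requests, and quantization of the cached tensors, which is where the techniques in Kaoutar's list come in.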
And I think it's still an ongoing competition that these companies delivering these services need to figure out.

And you don't even have to think about a difficult use case like code; just think about the use case of searching. I think I read in The Economist that the cost of a search using OpenAI or Claude is seven times more than a search on Google. So there's a real impact in using the tool poorly. And so I think it's a good thing that we're all thinking about what's the value versus the cost here, and what's the best use case for the tool.

I don't know, because I get my information back from OpenAI and Claude with the answer to my question. Whereas, in order to find out the thing I want to find out on Google, I may be clicking on 50 different links and seeing adverts for, you know, "let me convince you..."

Yeah. Have you used AI Mode? Now, I'm telling you, go on Google's AI Mode, and use AI Mode more. There's a clue in the title, AI Mode, and guess what's running behind on their GPUs. It's very good. So the same thing is happening. It's very good.

All right, I'm going to move us on before we get into a protracted battle about AI Mode, which we should talk about at some point, but it's not on the agenda for today's MoE.

All right, I'm going to move us on to our final topic. There's this joke that I feel is circulating around the AI space where every time someone wants to make a big proclamation around AI, they publish a single-serving website that has their essay on it. And finally, I think after every other major figure in AI has done this, Zuck is finally out with his.
So he released a short essay called "Personal Superintelligence," and in many ways it is Meta's vision of where all this technology is going. And I do want to read a quick paragraph to set the scene a little bit, though I think the whole thing is worth reading. Zuck writes, quote: "As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be." And I want to bring this one up because obviously Zuck is a major voice. They've been on a tear recently building their superintelligence lab; we talked about Scale just a few weeks ago. And so, Bruno, maybe I'll kick it to you. I think the question I want us to start with is: everybody wants to do superintelligence, but, and you're going to have to explain what you're doing in just a second, it seems like there are many different visions of superintelligence emerging. And I do want to spend some time to talk about the differences emerging between what OpenAI is trying to achieve, what Meta is trying to achieve, and what Anthropic is trying to achieve. They really seem like different visions of superintelligence. But Bruno, I'll let you explain your shades and respond to the question.

I think you're right. I think there are multiple ways to think about it.
I'm wearing the Meta glasses, which I love because they give me a different interface than my phone. And I think there's no doubt that, in the future, the phone will become obsolete as the way we interface with data and the rest of the world. There are some issues around privacy, I think, to solve for this. You can post online in real time, and people might not like that. I was reading that it collects about 32 out of 35 possible data types and data points, so this is super powerful in comparison to what you had. And so I definitely like where he's going. If you think about OpenAI, I think OpenAI's approach is about productivity and power, in a way. Anthropic, I think, is onto this safety-first topic, where we need to talk about governance, safety, attribution of content. And what Meta is after is: how do you give people superpowers? And I definitely think that sometimes we think about this trend the wrong way. We think about simple use cases, like searching, or maybe doing something better than Google. But there's so much more. For instance, I consume and understand content much, much better. I might actually be slower, but I'll actually be better. Like I talked about NotebookLM earlier: NotebookLM gives me a mind map, it gives me a podcast, it gives me a synthesis of a paper that I might not read but might end up understanding better, because I have a different interface.
So definitely this idea of using gen AI to augment us as humans, to make us more useful, that I like as the way to think about it, versus the rest of the narrative you see in the industry, which is about a conflict between the human and the machine, which I don't believe in.

Yeah, for sure. And actually, you bringing up the sunglasses is pretty interesting, because, Chris, another way of looking at this is: there are these essays, but we can also think a little bit about what next-gen technologies these companies are demoing. And it is kind of interesting to me that Meta's like, we've got the sunglasses, that'll be the form factor for maybe AI in the future. And there's the Jony Ive thing; we've already mentioned him, so maybe worth returning to. They were talking about this device with no screen, right, that's going to be the sort of future. I guess among all these visions, is there one that really sticks out to you, as someone who's very deep in the space?

I think they all have their place. I really do think they all have their place, right? Whether you're sticking something in your brain, whether you're sticking something in your glasses, whether you're sticking something around your neck, or whether you're just interacting with your phone, I think all of these things have a place. I think we'll know when we know. And I hate that answer, in that sense, but I think we'll know when we know. And I do think the glasses probably have a big role. We're all used to wearing glasses anyway, or some of us are.
I think this enhancement, as you go around and have a look at something, is going to make a lot of sense. I don't think the phone's going away, though, because I think you're still going to pick it up and read things. I just think we're going to be in a multi-device world, and we're all going to be spending a lot of money. I do like the robotic subscription cost of $2,000 a month. Everything's $200 a month, you know, and if I get a $200-a-month robot, I'm going to be sending that thing running around all the time. It's like: run, robot, I want my money's worth. You go to the shop and get me this, get me that. My glasses, I'm going to wear all the time.

By the way, I have a pair of those glasses, Bruno, and I put them on, and I do conference calls with them, and it's really nice in the summer. You wander around, pretty good. But people think you're nuts. They think you're speaking to yourself. People give me money in the street because they think I'm a nutter.

So you've monetized the use of your Meta glasses, basically.

Yeah. Maybe when more people use it. But think about it: if somebody saw you using your phone, I don't know, hundreds of years ago, they would think you're nuts.

Yeah, but, you know, it's even more nuts, because now we're not even going to be speaking to people. You're going to be like, I'm speaking to my AI. And they're like, "Ah, yeah, your AI? Okay, that's nice."

But don't you run into this already? I mean, I don't have hair, so when I wear those, people see I'm on the phone. But if I had long hair, you could imagine that I'm talking to myself. So I think we've crossed that already.
I'll give you one use case that I think will be very useful. My mother is French, right? I'm French, and my mother-in-law is American. Neither of them speaks the other's language. I would love to have subtitles here so that they can talk to each other and understand. That's where I think a use case like the glasses is super helpful, right? And I agree with you, Chris, I do phone calls with them. I wouldn't listen to music with them, because it's just not the quality that I want. But it's great, every once in a while, to be on a phone call without having to put something in your ear. So I think they're really good for this.

Yeah, and podcasts. If you want to listen to Mixture of Experts, put on your Meta glasses and listen through your temples. Kaoutar, maybe I'll turn to you to give you the last word here. I think in the past, when we've talked about the superintelligence topic, you've tended to be a little more skeptical on all things superintelligence. I feel like it's a good time to ask, especially with all the announcements this week: are you feeling the AGI? Do you feel like we're on a superintelligence path, or do you still think that essays like this... or maybe you're a little skeptical, that it's mostly marketing?

Of course, I still have some skepticism here. Some of it seems to me like marketing, because this is a rebranding of existing ideas and a way to distract from Meta's other challenges. So there is some truth to this: superintelligence is a buzzy term, and the essay is light on concrete details. However, it does provide a clear and compelling vision. I think everybody who hears it agrees it's a nice vision.
And it's distinct from what we're hearing from other major players, so it's really fascinating to see how this vision evolves as the technology matures. But one thing here: the hardware is key. This vision of a personal superintelligence is basically linked to Meta's hardware ambitions. If AR glasses become the next major computing platform, Meta, of course, will be in a prime position to own what we might call the operating system of personal AI. So who's going to win that battle? Which devices? Of course, we're going to be surrounded by all kinds of devices, like Chris said: maybe something in your brain, something around your neck, something in your hand, or a robot walking with you. So who's going to own that? Or are we just going to be in a hybrid world where you have multiple OSes, or personal AI computers and devices, running around, and everybody prefers certain devices over others? So it's going to be an interesting play to watch, how these things all evolve. Are we going to converge onto one single platform, like the phone? Sometimes I feel it's one unifying platform we're all using, but are we getting into a world of a multitude of devices, pretty diverse, fit per person, or are we still going to see some convergence? It's going to be an interesting thing to see.

Totally. Yeah. The idea of multi-layered personal superintelligence is both interesting and also very funny. It's kind of like the idea that, in the future, I have hyper-intelligent sunglasses; at the same time, I have a hyper-intelligent watch and a hyper-intelligent phone, and they all come from different companies, right?
And there will actually be this very funny period where, I don't know, maybe my Apple Watch is like, "those glasses always get it wrong," trying to undermine one another.

Or even something you could design yourself using AI: a device that you design, that you can wear, that could be super personalized, all generated by AI in a virtual 3D world. So it's going to be interesting.

Well, this is great. That's all the time we have for today. Kaoutar, Chris, good as always to have you on the show. And, Bruno, hopefully we'll have you back sometime here on MoE. Thanks to all you listeners for joining us. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere, and we'll see you next week on Mixture of Experts.