
When Quantum Computing Hits Consumer Devices

Key Points

  • Blake points out that quantum tech has already crept into consumer experiences, citing a demo of a quantum‑powered game running on a phone.
  • Volkmar predicts quantum computing will reach consumer devices mainly via cloud‑connected services, accelerating once clear‑cut applications deliver real benefits.
  • Chris offers a tongue‑in‑cheek forecast that quantum will both appear on consumer hardware and remain unavailable there simultaneously.
  • The show’s host frames the episode as a pivot from AI to quantum, noting the recent surge of hype and media coverage despite the field’s cyclical visibility.
  • Blake notes that roughly two years ago quantum entered a new era with the launch of the first practical quantum computer, marking a shift from pure research toward emerging applications.


# When Quantum Computing Hits Consumer Devices

**Source:** [https://www.youtube.com/watch?v=iYRdhSEGpg4](https://www.youtube.com/watch?v=iYRdhSEGpg4)
**Duration:** 00:45:22

## Sections

- [00:00:00](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=0s) **Predicting Quantum Consumer Adoption** - Experts debate the timeline for quantum computing reaching consumer devices, citing early demos, cloud‑linked implementations, and wildly differing forecasts.
- [00:03:07](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=187s) **Seeking Quantum Utility and Advantage** - Once a quantum device can perform tasks beyond classical simulation, a state termed quantum utility, the current effort is to pinpoint valuable, real‑world problems where this quantum advantage translates into faster, cheaper, or better outcomes.
- [00:06:12](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=372s) **Quantum Advantage and AI Overlap** - Efforts to scale quantum simulations of larger molecules toward parity with classical methods, and whether a real intersection exists between quantum computing and AI, with two avenues: using quantum hardware to enhance AI and applying AI techniques to accelerate quantum research.
- [00:09:19](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=559s) **AI‑Powered Quantum Transpilation & Code Assistant** - How reinforcement‑learning‑driven AI passes enhance quantum circuit transpilation, and how a fine‑tuned watsonx code assistant embedded in the IDE helps developers write Qiskit programs.
- [00:12:28](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=748s) **Quantum-Generated Data for AI Training** - Using fast quantum simulations to produce training data for neural networks, enabling AI-driven approximations of quantum phenomena without continuous quantum hardware.
- [00:15:40](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=940s) **Quantum‑AI Hybrid Simulation Paradigm** - Quantum computers as accelerators alongside classical methods, using AI to identify promising regions of parameter space and then applying quantum simulations for detailed analysis of those selected problems.
- [00:19:03](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1143s) **Quantum Computing Progress Without Error Correction** - A warning against the belief that quantum computers are useless until full error correction is achieved: current machines already execute circuits beyond classical simulation, and continual scaling of performance will unlock practical utility.
- [00:22:12](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1332s) **Federated Tool Access for AI Agents** - How AI agents can orchestrate external tools, like diagram generators, compilers, and deployment services, through federated marketplaces, enhancing coding environments while debating whether standardization or agent intelligence will drive this integration.
- [00:25:18](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1518s) **Anthropic's Potential as AI Standard** - Anthropic may become the de‑facto standard for model integration because its value rises from ecosystem compatibility, while OpenAI has not taken the lead in defining such standards.
- [00:28:25](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1705s) **CoreWeave IPO and AI Compute Niche** - CoreWeave's evolution from crypto‑mining infrastructure to an AI‑focused cloud provider, its close NVIDIA partnership and upcoming IPO, and whether a specialist AI compute firm can succeed alongside the dominant cloud giants.
- [00:31:29](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1889s) **CoreWeave's Cluster Edge Over Cloud Providers** - CoreWeave's focus on delivering ready‑to‑use, tightly coordinated GPU clusters, without the complexity of virtual private clouds, gives it a practical advantage over traditional cloud giants whose infrastructure is built around loosely coupled individual machines ill‑suited for large‑scale AI training.
- [00:34:34](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=2074s) **Shifting AI Compute Market Landscape** - Whether AI pre‑training, fine‑tuning, and inference will remain cloud‑centric or migrate to powerful desktop devices, and how incumbents like AWS and Azure might respond.
- [00:37:38](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=2258s) **AI Chip Competition and Market Scale** - The speed of AI chip development, market sizes from billions to trillions, and how large inference clusters and custom chip designs are reshaping an underserved compute market.
- [00:40:47](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=2447s) **Sesame Voice Model Breakthrough** - The speakers marvel at a new voice system, now smooth, low‑latency, and eerily human‑like enough to fool a spouse, and debate whether it finally delivers on the long‑promised, game‑changing conversational experience.
- [00:43:51](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=2631s) **Emotional AI Model Breakthrough** - The hosts laud a new voice model that convincingly captures human‑like emotions, discuss how mathematically encoding such affect could be transformative, and tease its upcoming open‑source release and demo.

## Full Transcript
How many years do you think it'll be until quantum computing finds its way into a consumer device?

Blake Johnson is a Distinguished Engineer and Quantum Engine Lead. Blake, welcome to the show for the very first time. What do you think?

So I might say that in some ways it's already happened, right? Some of the early explorations with quantum were kind of fun, with games, and I think I've seen a demo of a quantum-powered game on a phone.

Volkmar Uhlig is Vice President, AI Infrastructure Portfolio Lead. Volkmar, welcome back. Do you have a prediction here?

Quantum itself in a phone will be a big fridge. So I guess it's connected over the internet. And I can see that once there are actual applications which get the benefit out of it, they will go very fast.

And finally, last but not least, Chris Hay is Distinguished Engineer and CTO of Customer Transformation. Chris, I can usually rely on you for the wildest estimates. What do you think here?

I think it will be available on a consumer device, and not available on a consumer device, at the same time.

All right. All that and more on today's Mixture of Experts.

I'm Tim Hwang and welcome to Mixture of Experts. Each week, MoE helps you navigate the biggest headlines in technology with a set of brilliant minds from research, product, engineering, and more. As always, we have a slew of AI news to get through. We're going to talk about Anthropic's new Model Context Protocol, CoreWeave filing to go IPO, and a new voice demo from a company called Sesame. But uniquely today, we're actually going to step a little bit adjacent to our usual AI topic to talk about quantum, because we have Blake here on the show with us.
Blake, maybe I can just have you kick it off a little bit. If you've been reading the headlines, quantum's weird. It kind of disappears from the headlines, and then occasionally it just comes back in force and you see quantum headlines all the time. And I think one of the reasons we wanted to have you on the show is that we're in, like, quantum spring: everybody's talking about it suddenly. But to the question I opened with, it's sometimes hard to get a sense of how close or far this technology is from becoming something that we practically feel the impacts of as, you know, a consumer, or even an enterprise. So maybe a good place to start, if you want to quickly give us a capsule, is to cut through the hype. Where are we now? Are we very close? Is quantum nigh? Or is it still in this kind of basic research-and-development world?

Yeah, I think something quite interesting has happened in the past two years or so, where quantum's entered a really new era, right? The first quantum computers that IBM put online were really educational tools, research tools. They were in some ways about teaching quantum computing, teaching quantum mechanics, and useful to students and educators and researchers. And that was limited by the size of the computation we could execute before the quantum noise overwhelmed the situation and you were left with just noise. But in the last couple of years, we've finally arrived at the state where we can do computations with our most powerful quantum computers
that we cannot simulate with brute-force classical simulation. This is a moment that at IBM we refer to as quantum utility. And you at least need this property for something to be useful at all, because if I can get similar results with a classical computer, then I don't need a quantum one, right? So we're finally in this regime where I can do something kind of unique on the quantum device. And now I would say the hunt is on to connect that power to an application that someone really cares about, that matters and has value. So we're now in this sprint towards actually taking these devices and finding quantum advantage, which is that moment when we can do something faster, cheaper, or better.

Yeah, and what are the most promising areas for that? Because it sounds like the technology is almost looking for its demo. It's like, okay, here's a place where it's really better than traditional computers, and you have to find a kind of quantum-shaped problem, if you will.

Sure, right. I mean, there are things that we know about if we had the most powerful machines that could do arbitrary-sized computations. There are things for which people can write down mathematical proofs that this has better scaling behavior with a quantum algorithm than a classical algorithm. In particular, there's simulating nature; Richard Feynman's original idea for the quantum computer came through thinking about the problems of simulating nature.
And so that has applications in chemistry, materials design, drug discovery, and so on. That becomes a rich area. But then you also have mathematical problems with structure. This is where you find things like factoring, for instance, or machine learning. And then you have optimization problems, where we have weaker mathematical proofs about the advantage, but because of its importance to business, it still deserves and gets a lot of attention.

So, in terms of where we're placing our bets, I think there are a number of areas which we think are ripe for early quantum advantage. In particular, this issue of simulating nature, particularly simulating the time dynamics of a quantum system, seems like it's very possible. And chemistry is another area where the field has had its ebbs and flows, in terms of people being very excited and optimistic and then finding pessimism again, because they dug into the problem and said, oh, it's harder than we thought; actually, it's really difficult. But that again had a really cool moment last year. By combining the power of quantum and classical computing, something that IBM calls quantum-centric supercomputing, where you're splitting a problem apart and having the quantum computer and classical computer really work together, we were able, for the first time, to really make headway on the problem of chemistry with quantum, and show that we could finally be competitive with classical methods for certain kinds of molecules.
And now we're expanding to larger molecules and trying to show again that we can actually reach parity with the state-of-the-art methods in the classical world. And then of course the hope is that, by pushing hard enough, we finally enter that territory of advantage.

Yeah, that's really exciting. So I want to make sure we bring Chris and Volkmar in, but one last question to get us there. MoE typically focuses on AI, and because of the hype cycle, I'm in lots of conferences where people are always saying "quantum and AI," and it's right up there with blockchain and all the other hype technologies munged together in one big blob. And I guess the question for you is: is there actually an overlap between AI and quantum here? If so, what is it? A lot of our listeners work in machine learning day in, day out, and it's been voiced that quantum might have this overlap, but from where I'm sitting it's still very unclear what that would be. I'm curious what people are talking about in your world, and whether there is actually a genuine, interesting overlap here.

I think there are two interesting directions to think about, right? You can think about using quantum to make AI better, and you can think about using AI to make quantum better. And the two pictures look pretty different today, in the sense that a lot of the early hope was about using quantum for AI.
And this, I think, is very interesting, but we know a lot less. It wasn't until a couple of years ago that we could find one of these formal mathematical proofs that there was something you could definitely do better with a quantum machine, and it kind of required a contrived sort of quantum data set. And of course, usually where people are applying AI, they're applying it to classical data. So that's an area where what we have available is more about heuristic methods, things which are harder to make proofs about. And yet we definitely have a different computational paradigm, so can you do something better with it? I would say the jury is still out. People are definitely trying, and we're partnering with startups and other companies for which that is their focus. In fact, you can use a sort of quantum-powered AI tool on our quantum platform.

The other direction, though, applying AI to quantum, is definitely happening now, and we're really finding a lot of value in that. In particular, there are two new tools that we just released last year that are directly enabled by AI. One addresses a problem you have when executing a quantum program: you start with some sort of description, and in quantum computing your program ends up taking the form of a quantum circuit, and you need to optimize that circuit so that it will run with the best performance or quality on the quantum hardware. And so we build a kind of compiler, with a tool called Qiskit, that does that compilation task, or transpilation task, to optimize your circuit.
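The pattern-recognizing circuit reductions Blake describes can be illustrated with a toy "transpiler pass." This is only a sketch of the idea, not Qiskit's actual implementation: real passes operate on a DAG representation of the circuit and apply far richer rewrite rules than the single one shown here.

```python
# Toy peephole pass: adjacent identical self-inverse gates on the same
# qubits compose to the identity and can be removed. This is the simplest
# instance of the "recognize a pattern, apply a reduction" idea.

SELF_INVERSE = {"h", "x", "z", "cx"}  # gates that cancel in adjacent pairs

def cancel_adjacent_pairs(circuit):
    """Remove adjacent identical self-inverse gates acting on the same qubits.

    `circuit` is a list of (gate_name, qubit_tuple) pairs.
    """
    out = []
    for gate in circuit:
        if out and out[-1] == gate and gate[0] in SELF_INVERSE:
            out.pop()  # the adjacent pair cancels to the identity
        else:
            out.append(gate)
    return out

circ = [("h", (0,)), ("h", (0,)), ("cx", (0, 1)),
        ("x", (1,)), ("x", (1,)), ("cx", (0, 1))]
print(cancel_adjacent_pairs(circ))  # -> [] : every gate cancels in turn
```

An AI-driven pass, as described in the episode, would learn which rewrite patterns to look for rather than having them hand-coded as above.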
Last year, though, we upgraded that transpilation technology with AI-powered passes, using a reinforcement learning technique to automatically build optimization passes that recognize patterns in circuits and find good reductions. So that was one novel piece we introduced. And the other was very much in the category of generative AI: we took our watsonx Code Assistant tool and fine-tuned it with quantum programming patterns, based on Qiskit problems and Qiskit tutorials and so on, to build what we call the Qiskit Code Assistant. This lives directly in your development environment, helps you learn quantum programming, and helps the developer write quantum programs directly in their development environment.

Well, actually, I want to bring in Volkmar and Chris here. Volkmar, maybe I'll turn to you first, because it feels like we can talk a little bit about that quantum-for-AI overlap. You work with enterprises; you think a lot about AI infrastructure, the hardware that we need to do AI. I'm curious on two fronts. First, are there customers starting to say, "Hey, I keep reading about this quantum thing, are you guys going to support that soon?"
And then I guess the second question is: is that in the long-term forecast for infrastructure? Do you say, well, we predict in four or five years we really need to have these quantum computers online? Or is that not really how you think about long-term planning here?

Yeah, I don't think there is a clear path to unify the two. But on the flip side, there are these different compute paradigms showing up. So I think we are going away from the traditional "here's an x86 box, you know, go and hang yourself," and we are getting more into a world where the compute capacity is much more specialized for specific tasks. So we have AI computers now; we will talk about this later, there are whole companies which are just saying, okay, we only focus on AI capacity. And I think similarly with quantum, there will be a bunch of players which have quantum computers online.

The way I see this, and this goes back to my prior life in self-driving cars: we always had the issue that when you build an AI model, you in fact need some ground truth you train it with. And we are seeing this now also coming in large language models. The beginning of large language models was like, okay, let's just download the internet. That's my ground truth, right? And I do next-token prediction. In self-driving, you need massive-scale observational data. So now, if you think about where we are heading, the model is an approximation of that reality.
So if you go into biology or chemistry or physical phenomena, what you need is a good sample set, which can then be trained into a network to act as an approximation. Now, if producing that data for training a neural network would take decades or centuries, that's where I can see a quantum computer being extremely useful, because you can now say: let's use a quantum computer to explore the solution space, because it's fast, produce a bunch of data, and then use that data to train a neural network as an approximator. And now you can actually work without the quantum computer; you can look at phenomena on a desktop machine. At the company before my last company, my head of data science actually came from CERN, and CERN has been doing this for decades now: they are training neural networks which are just physics approximations. And so this is where I think the two things can really come together.

Totally, yeah. I think that's an application in the AI-to-quantum space. Blake, you talked a little bit about training a code assistant to be able to program in a language that's specialized for quantum. Volkmar, you're raising another interesting thing: in order to take advantage of a bunch of these applications, we might just need to be able to generate data, and AI is effectively a way to get there, it seems. So, Chris, you've been uncharacteristically quiet. I'm curious if you have a view on all this, in terms of future prospects for quantum.
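Volkmar's pipeline, using an expensive resource once to generate training data, then querying a cheap learned approximator afterwards, can be sketched in a few lines. Everything here is a hypothetical stand-in: the "simulator" is just a cheap classical function so the sketch runs anywhere, and a simple interpolator stands in for a trained neural network.

```python
import bisect
import math

# Stand-in for the expensive simulation (in Volkmar's picture, a quantum
# computer exploring a solution space). Cheap here so the sketch is runnable.
def expensive_simulator(x):
    return math.sin(x)

# Step 1: run the expensive resource once, offline, to produce training data.
xs = [i * 0.1 for i in range(64)]
ys = [expensive_simulator(x) for x in xs]

# Step 2: a trivial surrogate (linear interpolation standing in for a
# trained network) that answers queries without touching the simulator.
def surrogate(x):
    i = min(max(bisect.bisect_right(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Step 3: query the surrogate on a "desktop machine"; no simulator needed.
print(abs(surrogate(1.23) - math.sin(1.23)) < 1e-2)  # True: close approximation
```

The CERN practice Volkmar mentions follows the same shape: a slow, high-fidelity physics simulation generates samples, and a fast learned model replaces it at query time.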
And I guess specifically, as someone who plays with models a lot, there's an interesting story here about models getting more specialized with time. But I'm curious about your take on all this.

I was really quiet because I feel really dumb on this subject when we've got some super smart guys talking about quantum, and I'm like, oh my goodness, I'm just not there. But actually, I will take my super dumb approach here, which is, to your point: you are going to need AI to be able to interact with quantum machines, because, you know, guess what AI is really good at? Explaining things like you're a six-year-old. And I personally am going to need that if I'm going to program a quantum computer, and I'm going to need to vibe code it, right? So I think that is definitely a path with code assistants: vibe coding quantum.

A probably more serious one in my mind, and again, I'm just thinking out loud at the moment: if I think about what quantum's really good at, and again, I really don't understand quantum, it's really all about probability, and it's all about things like error correction, and it's all about sampling. And if we think about what AI is about, it's really about probability, it's about next-token prediction, and it's about sampling. So, in a world where we have two very separate and different things which are really focused on probability, sampling, and essentially prediction, I can't help thinking that in some way, shape, or form, these things are going to come together, whether that's AI helping quantum predict better or quantum helping AI predict better.
But I think there is a Venn diagram somewhere which brings these things together. If you ask me any further questions on this, please don't, Blake, because then I am just going to look really super dumb. But I think there's something there.

There is definitely... I mean, do you want to respond to that? There's kind of a fun take there, I think.

I think maybe you can combine a little bit of what Volkmar and Chris have added here. We don't see a world where quantum replaces classical computing, right? It's an accelerator for certain kinds of problems. And something we see really exciting prospects in for the future is the convergence of bringing different methods together. You see this pattern already widely used in the computational science field, where people want to study some sort of system, but the computational space is so overwhelmingly large that they don't know where to start. And so they'll use AI models to try to identify interesting regions of parameter space, and then plug in their detailed simulation model with non-AI methods. And an obvious upgrade to that pattern is to plug in a quantum simulation model for the detailed simulation of the AI-identified interesting problems or interesting feature space. And so I think the future really isn't quantum or AI; it's quantum and AI.

Yeah, and I think it's particularly interesting in the context of the history of computing.
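The hybrid pattern Blake outlines, a cheap screening model over a large parameter space followed by an expensive detailed simulation on the few promising candidates, can be sketched as follows. Both functions here are hypothetical stand-ins: in the upgraded pattern Blake describes, the detailed step would be a quantum simulation rather than a classical one.

```python
# Illustrative hybrid pipeline: AI-style surrogate screens the space,
# the expensive simulator runs only on the shortlist.

def cheap_surrogate_score(params):
    # Fast screening model (stand-in: a simple heuristic peaking at 3.7).
    return -(params - 3.7) ** 2

def detailed_simulation(params):
    # Expensive high-fidelity simulation, affordable only for a few points.
    return {"params": params, "energy": (params - 3.7) ** 2}

search_space = [i * 0.5 for i in range(20)]  # 0.0, 0.5, ..., 9.5

# Step 1: screen everything cheaply; keep the top 3 candidates.
shortlist = sorted(search_space, key=cheap_surrogate_score, reverse=True)[:3]

# Step 2: spend the expensive simulator only on the shortlist.
results = [detailed_simulation(p) for p in shortlist]
best = min(results, key=lambda r: r["energy"])
print(best["params"])  # 3.5, the grid point closest to the optimum at 3.7
```

The economics are the point of the design: if the detailed simulation costs a thousand times more per evaluation than the surrogate, screening first turns an infeasible sweep into a handful of affordable runs.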
Right? You started out with all these devices, and then you said, oh, we're going to converge towards a general computing device. And now, in 2025, it feels like we're suddenly saying, well, we're going to have special hardware just for AI, and quantum might also be a specific platform that we use to explore certain kinds of problems. There's this re-divergence, I guess, in some ways, from the general-purpose model where one kind of hardware platform will do everything.

So I think there was a paper by Google, I don't know, maybe 10 or 15 years ago, and it was an allegory to the Watson statement that the world needs five computers. And so Google said, you know, the world needs five computers. And they said, there's the general-purpose computer as we know it, and then there's search, and then we don't know what the other three are. So I'm tracking now, and I think number three is probably the AI training supercomputer, which has a very different structure. And then number four is probably quantum. And who knows what number five is.

Yeah, the fifth unknown.

Yeah. These are just very different compute patterns, and you're trying to find an optimization, and the moment you can optimize something by a factor of, let's say, a thousand or ten thousand, then it's worthwhile to completely relook at the architecture. And I think quantum is one of these things: if you can do something in minutes which takes a hundred years, it's worthwhile looking at a completely different architecture.

Yeah, absolutely.
And so it's hard to say these are not replacements, because they are just so different in their design space. And then they can solve that one problem very, very well.

Well, Blake, producer Hans told me you're going to have to cover all of quantum in 15 to 20 minutes, so I think we have done the best we can. I know you need to go, but before we let you go, one final question. You've done a really great job, I think, of parsing out what's important, what's not, and what's happening on an ongoing basis. How can our audience cut through all the noise in quantum news? What's the important news to pay attention to, what should people be reading, if you have any final parting recommendations on that front?

I would caution our listeners that there is a narrative out there that quantum computers can't do anything until we have error correction. And certainly, the most general-purpose algorithms we know of are large computations, and we need systems that can execute really large circuits. But we're already in this realm where we can execute circuits that we can't simulate. And I think it's actually harder to believe that nature doesn't permit anything useful to be done between now and something which is a billion times larger. So I think the thing to pay attention to is the steady march of progress in the performance of the machines, as people build up the fundamental ingredients to do larger and larger computations.
[19:50] Because what we can do with these machines is going to be directly connected to the scale of computation that we can reliably execute.

Yeah, that's great to keep in mind. Well, Blake, thanks for joining us and spending some time this morning. Hopefully we'll get you back on a future episode, because I'm very sure there's going to be more quantum news this year.

[20:13] Well, that was great. I'm going to move us on to our next topic. The thing that was dominating all of my group chats and my machine learning and AI social media this week was the Model Context Protocol released by Anthropic — MCP for short. The way Anthropic describes it, quoting from their website: MCP "provides a universal open standard for connecting AI systems with data sources." And people have just been frothing at the mouth about how excited they are about MCP.

Chris, let me turn it over to you. When I read something like "universal open standard for connecting AI systems with data sources" — are they just talking about APIs? Why is MCP important, and what do you think about the release?

Okay, so MCP has been around for a little bit of time. What's actually made it super cool is that it's now hooked up into some of the editors, like Cursor or Cline — which is my particular favorite in this case — so you can access MCP from there. What's cool about MCP? Underneath the hood, it's just JSON-RPC — remote procedure calls. So there's nothing magical there, but what they have done is absolutely standardize it, and they standardized it in probably three ways, which is important.
[21:27] Number one is that you can expose your resources. Resources are things like a database schema, or maybe your GitHub schema, and then you can go and look at an individual file. The second one, which is probably the most important, is tool calling. I can say: these are the tools I have available, these are the parameters you need, and then I can go and execute those tools. Why is that important? Because traditionally we've been using a thing called function calling, and with function calling you need the functions to be on your machine locally. With MCP, I can have servers serve up the tools that are available, and they can be hosted in different locations — they could be remote — and therefore I can start to mix and match and do cool things.

Coming back to the Cursor example: there might be a tool server that has sequence diagrams, Mermaid diagrams, or bar charts, and I can say, "Hey, I've coded something up — go generate an architectural diagram for it." Or maybe there's a compiler: go compile this. Or maybe there's an MCP server for AWS, and I can just say: go deploy this piece of code I've built and put it on the server there. So this ability to access tools in a federated fashion — and everybody's building marketplaces around this — really starts to supercharge the models and the coding environments, because I'm no longer restricted to working with my own code; now I've got access to my tools and ecosystems, and I can mix and match them together.
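To make Chris's point concrete, here is a minimal sketch of what an MCP-style exchange looks like on the wire. The `tools/list` and `tools/call` method names follow the MCP specification (it is JSON-RPC 2.0 underneath, as he says); the `render_diagram` tool, its schema, and its arguments are hypothetical examples invented for illustration.

```python
import json

# 1. The client asks an MCP server which tools it serves.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server replies with tool names plus a JSON Schema for each
#    tool's parameters -- this is what lets a model invoke tools it
#    has never seen before, including ones hosted remotely.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "render_diagram",  # hypothetical tool
            "description": "Render a Mermaid diagram for a code base",
            "inputSchema": {
                "type": "object",
                "properties": {"source": {"type": "string"}},
                "required": ["source"],
            },
        }]
    },
}

# 3. The model (via the client) invokes the tool by name, with
#    arguments conforming to the advertised schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "render_diagram",
        "arguments": {"source": "graph TD; A-->B"},
    },
}

print(json.dumps(call_request, indent=2))
```

The key design point is step 2: because the server advertises each tool's parameter schema, the tool does not have to live on the caller's machine — which is exactly the federation Chris describes.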
[23:01] And more importantly, the model and the agent are orchestrating and in control of that, which makes it super, super cool.

Were we always going to end up here? There was a very hyped dream of agents 12 or 18 months ago: eventually the agents just get good enough that they can do this integration without any special standard. But maybe that was always a pipe dream — we were always going to have to rely on some standardization to let these agents use these tools effectively, versus the agent just getting smart enough and the problem being solved out of the box.

Yeah, we were always going to end up here, and I think there are a few shifts in the technology that got us there. Number one is MCP. But before that, we really needed function calling — a standardized way for models to know how to interact with APIs. And you also needed a thing called structured output. One of the things models have been really bad at in the past: if you ask a model to generate a piece of text, it can generate it in whatever format it likes, which is fine. But if I'm dealing with an API and there's a schema behind it, I don't want the model hallucinating the schema it emits. It needs to be exact to make that interaction work. And the last one that's really important is context length.
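Chris's structured-output point can be sketched as follows: a client calling an API on a model's behalf has to reject output that doesn't parse or doesn't match the expected schema exactly. The `city`/`units` parameter names are made up for illustration; this is a hand-rolled check, not any particular library's validator.

```python
import json

# Hypothetical schema for a tool's arguments: field name -> expected type.
EXPECTED_KEYS = {"city": str, "units": str}

def validate_tool_args(model_output: str) -> dict:
    """Parse model output and check it against the expected schema.

    Raises ValueError on any deviation -- the 'hallucinated schema'
    failure mode: free-form or slightly-off output is useless to an API.
    """
    try:
        args = json.loads(model_output)
    except json.JSONDecodeError as err:
        raise ValueError(f"not valid JSON: {err}") from err
    for key, typ in EXPECTED_KEYS.items():
        if key not in args:
            raise ValueError(f"missing required field: {key}")
        if not isinstance(args[key], typ):
            raise ValueError(f"wrong type for field: {key}")
    extra = set(args) - set(EXPECTED_KEYS)
    if extra:
        raise ValueError(f"unexpected fields: {sorted(extra)}")
    return args

print(validate_tool_args('{"city": "Berlin", "units": "metric"}'))
```

Structured-output modes in modern model APIs effectively push this guarantee into decoding itself, so the model cannot emit anything that fails a check like this.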
[24:22] The working short-term memory that the models have — if it's too small, you're not going to be able to do a lot with agents in the first place, because you're working with tiny paragraphs of information. Whereas now the models are all 128k context as standard, 256k for some of the larger, newer models, and in the millions for some of the Google models. So the working memory the models have is huge. They understand how to do function calling, and now we've standardized this at the API level, the same way REST was standardized for microservices. This opens things up into the tool marketplace — and, as I was saying the other week, the next logical step is agent marketplaces, and then agents collaborating with each other. So this step of MCP and tool marketplaces is really just the first step; there's more to come.

Well, my question for you is: do you think Anthropic has the juice? This is a battle of standards. They're throwing out their standard and it's already very popular — it's integrated into a bunch of editors — but it's by no means certain that the model creator is the one that gets to define the standard. I can imagine a world where, say, the database vendors are the ones that say: if you want to talk to us, this is the protocol you're going to use. So in this competitive landscape, do you think Anthropic has the edge — that this will eventually be the base standard for everybody?
[25:50] I think we should ask the question differently. Anthropic is hitting a point where the value of their model is higher if it can interface with an ecosystem — because they cannot build, or house, every application under the sun. So what they're doing is opening up access to information that is potentially locked away, and making their own model more useful. Someone has to drive the standard, and OpenAI hasn't been stepping up to the task right now, so it's coming from somewhere else. That's how I see it.

If you look at what OpenAI did a year, a year and a half ago, they just said: point us at the Swagger API and we'll integrate with it. That was their answer. So I think we're getting to a point — and this is an indicator — where models will be much more autonomous, not constantly human-supervised. With the initial models, the interface language to the model was English. Now suddenly we're saying: not really — it's remote procedure calls. It's computers talking to computers. And if two models want to talk to each other, should they communicate through English, or should they have some gibberish of their own? We are now enabling software to be directly invoked from the model, effectively giving models access to the rest of the world. We're already accessing search — that's deeply integrated — so models go out, do research, and go to search engines.
[27:34] But search engines are still written for humans — the queries you send are natural language. So I think this is just the next logical step. You also need to standardize in order to do quality assurance. If you don't have a standard, how do you say your model is actually doing the right thing? Without those standards, it just becomes very, very hard. Because, as Chris said, the model gives you some tokens back and sometimes they're malformed, so you want a unit test which says: no, this is actually correct syntax. Those things all happen through standardization, and then suddenly things can talk to each other. It's a natural progression — I'm surprised it hasn't happened before.

That's right. It's maybe even a little delayed.

[28:24] Well, great. I'm going to move us on to our next topic. One of the big news stories of the week, at least for me, was CoreWeave. If you haven't been watching the AI hardware and AI cloud space, CoreWeave is one of the most exciting upstarts in it. They originally started, I think, as a crypto infrastructure company, building specialized clouds for crypto mining. Then they noticed AI was going to be a big market and went all in on AI. They've also benefited from a very close relationship with NVIDIA, with early access to a lot of the next-generation chips as they've come out. As a result, CoreWeave has grown hugely.
[29:06] And it is now filing to go IPO. Volkmar, maybe I'll turn to you — I think you're the obvious person to respond first. I'm interested in how you think about the market for companies that specialize in AI compute. Because there are these 10,000-pound gorillas that really dominate the cloud market, and it's interesting to believe that a company specializing in just this one area can survive and become its own gigantic company. What do you think the prospects are for these more specialized compute companies — and specialized AI compute in particular?

So this goes back to the thing we talked about ten minutes ago: five computers. And I think there is a wave of new companies entering the cloud space to serve that specific niche of supercomputers. If you look at what CoreWeave is doing, they're not just giving you AI capacity; they very specifically give you an AI training cluster. When you go to CoreWeave, you're not buying 10,000 H100s — you're buying 10,000 H100s wired up into a single supercomputer, and they are running that single supercomputer for you. At IBM, for example — we announced this a while back — we have a relationship with CoreWeave, and we're running training jobs on CoreWeave.
[30:41] And it's simply a very natural progression in these compute demands to say: I don't want the asset on my books, or I don't want to build the in-house capability to operate these very, very large machines. So there's an economy of scale, similar to the cloud, in operating this and getting really good at it. And CoreWeave is probably one of the leading companies — they are really, really good at their job. So I think it's a natural progression, but on the flip side, a new market will evolve of these high-performance-computing hosting companies.

Now the big question is how the traditional ones — Azure, Google, and AWS — are doing in this world. So far it seems CoreWeave has an edge, because they're saying: we don't worry about virtual private cloud networks internally and so on — we just give you a computer, and it has a lot of GPUs in it.

It's still a little counterintuitive to me. You think about the deep capital that an Azure has access to — it feels like they would just say, sure, we can offer that too; we have way more money than these smaller providers. So it almost feels like — Chris, Volkmar, if you have opinions on this — the additional edge CoreWeave has is that there's something about deploying these clusters that's so unique in terms of know-how that it's actually pretty difficult for the traditional players to just shove these companies out of the space.
[32:17] Is that the right way of reading what's going on?

If you look at traditional cloud vendors, they are renting out thousands of individual computers. Their DNA is not clusters — not making a thousand machines work in concert. Their approach is: I have a thousand machines, they all kind of limp along, once in a while one fails, and I give you another one. For training workloads, that's not sufficient. You need a thousand machines that actually stay up, and any fault or any network congestion has quite a dramatic impact. NVIDIA had a lot of challenges with HBM, for example: if one of these GPUs has an HBM issue — and there were silent high-bandwidth-memory errors and corruptions — your training job will either fail, stop making forward progress, or forget what it just learned. So one of the things CoreWeave does is monitor literally every wire in their cluster: the connectivity from the CPU to the PCIe switch, from the PCIe switch down to the card, all the links; they deal with link flapping and so on, all to keep that one computer up. The traditional approach in the cloud is: just take it offline and hand you a different one. So there's a different DNA you need as an operator to run these machines, and it's pervasive in your control plane, your monitoring infrastructure, everything — because in the cloud, you typically take one machine offline and nothing happens.

Chris, do you want to jump in with your hot take?
[33:56] I think I've said this before, but if I think about Bitcoin: everybody started on CPUs, then moved to GPUs, then to FPGAs, and then to ASICs. For inference, we're seeing the exact same thing — everybody's building their own inference chips. So what does that leave? It leaves training compute: I'm going to train models. And the very thing we were literally discussing last week on the podcast is: is the era of pre-training dead? We've moved into reasoning models. We're going to take a base pre-train — so there will perhaps be a few companies doing massive training runs at that point — and then everybody's going to be doing this inference-time compute. So where is the market? There's a big market at the moment, but does it stay there?

And then you've got to look at what's going on in the desktop market. Look at yesterday: Apple, with their new M4 Mac Studios, where you'll be able to take something like the DeepSeek-R1 model and run it on your desktop. And we discussed on the podcast a few months ago the NVIDIA boxes where you'll be able to train. So the fine-tuning market — does that stay in the cloud, or does it move to devices that people have, or somewhere else entirely? And then, thinking of the cloud providers, back to your point: do you think AWS and Azure are going to let somebody else eat their lunch? They're going to be like: No! We're going to build that capability in ourselves, and we're going to squish you.

Yeah.
[35:37] I mean, that shifts basically to reasoning models. It's interesting to believe it basically favors the incumbents, because an inference world looks a little less like the training world — effectively, the meta moves back to: it's broken, just pull it out, put a new one in. You don't think so?

Yeah — no, post-training is now much more than the pre-training, simply because it's a mix of training and inferencing. The complexity of that post-training phase is sometimes now 5 to 10x more expensive than the pre-training. In post-training, you still have your model and your training live in a cluster, but your loss function just went from a couple of milliseconds to a couple of minutes. That's the actual challenge here. So there's a bigger balance between how much you spend on your training cost versus how much infrastructure you need to have live to do your loss-function calculation. And the loss-function calculation still needs the weights of what you just trained. So these are very, very large training runs, which now have more of a mixed workload: the computational cost has shifted, but the fundamental problem — that you need a big HPC machine — hasn't.

So from my perspective, the big question is whether the market is big enough that Google, Amazon, and Microsoft say: this is so critical to us — because otherwise the workloads move to these esoteric vendors and become a drag — that we won't allow it. And then there are effectively two options for them: buy or build.
[37:27] And if you look across Google, Amazon, and Microsoft, the heads of their training clusters are all ex-HPC people. So they have the talent in house. Now it's a question of how fast they move, and whether they see this as a market that's big enough. If you look at CoreWeave, it's a couple of billions; if you look at Microsoft, it's a couple of trillions. That's three orders of magnitude of market cap. So it may just not be so critical right now — or, now that these companies are coming online, the big players will have to go and do it themselves. We'll see. I think, Chris, you're right: the chances that trillion-dollar businesses take out billion-dollar businesses are extremely high, and their negotiating power is better. But fundamentally, what we're seeing is that a different compute paradigm allowed this market to exist, and it was underserved. Because it was underserved, this company exists. Now let's see if the incumbents close the gap.

I agree with that — it is definitely an underserved market. But as I said, every single one of these companies, whether an AI provider or a cloud provider, is invested in designing and building their own chips to bring down the cost. And Volkmar, I totally agree with you: latency on inference is key, and the biggest focus at the moment is getting the inference chips right. So, to my point, it comes down to having big, massive clusters. And yes, it takes a lot of data, and these are big training runs on big clusters to do these post-tuning phases.
[39:09] But the reality is that it's a different mix of workload, and therefore there are new techniques, where those guys are really just saying: here's my big cluster, go for it. So I just can't help thinking that anyone who is an AI model provider is really going to be investing in that space themselves, with their own chips and their own infrastructure. I get your buy-versus-build point, but as you say, it's small numbers at the moment, and I just don't see the big cloud providers being prepared to hand over that much cash to a third party.

We could go on at length about this, and I actually do want to return to it, because I think it's very interesting how the infrastructure landscape will look under all of these pressures. And this third path — where the really big companies basically go, "Ah, what's a few billion dollars?" and leave the market alone — is definitely a path I'd never thought of before. Though we shall see.

[40:14] Well, great. For the last segment, because we only have a minute, I wanted to quickly touch on a news story that popped up. We'll mention it because it was, again, widely chattered about online. There's a startup called Sesame, launched by one of the Oculus co-founders, that has been working on synthetic voice.
[40:32] And they released a demo that, at least for me personally, has crossed what they argue is the uncanny valley of voice interfaces. To be totally clear: I don't really use voice on OpenAI, I don't really use voice on Anthropic, but this is the first time a demo made me go: okay, yeah, this is almost smooth enough that it does feel like interacting with a human. Maybe we can do a quick around-the-horn: Chris, Volkmar, if you've played around with it — is it worth people checking out, or is it overhyped? Are we finally there from a voice standpoint? Quick takes before we close out the episode.

Oh my goodness, that model got me in trouble with my wife. I put it on at about 11 o'clock at night just to interact with it, and my wife was like, "Who are you speaking to? I hear a woman's voice." And I was like, oh my goodness — I had to switch the thing off, because it was so realistic I just couldn't talk to it anymore. So that model is incredible. They've actually solved a few things. They've solved the latency problem, and they've solved the utterance problem — the silence, the waiting before the model comes back. It is truly a natural interaction. You feel it — you feel as if you're talking to somebody at the other end. So I think this is going to change everything. You know, I did a thing about contact centers, about latency, and so on.
[42:05] You don't think we're going to see those types of models powering customer-service experiences in the future?

Absolutely, this is coming down the road, and they're going to kick off agents to do workflows. The model is incredible, and anybody who has interacted with any other voice model before and thought, "Ah, it's not quite there yet," or, "Oh, that's terrible" — go check out this model, because what they've done is incredible.

All right. Volkmar, parting shots. Hyped? Overhyped?

I agree with Chris — it's amazing. I tried it in the office, so my wife was not listening. It's very interesting because it shows the other end of the spectrum. We have these military-style Siri conversations, which command you around while you drive, and this was really smooth — chatty, friendly, funny. So now we have both ends of the spectrum, and we can populate all the points in between. You can make these models for pretty much any human interaction. I can't wait for the day when I call any airline and it doesn't tell me I have to wait — one airline told me it was two hours and 40 minutes until the next agent could talk to me — and they just pick up the phone and talk to me. So I think this is a great extension to the spectrum. And it's also nice that someone is kind to you when you're driving the wrong way.

Yeah, also critical. It might be too sassy for the airline scenario, though. You call up and go, "My flight is delayed," and it goes:
[43:45] "Ah, but did you actually get there on time, Volkmar? Did you plan enough?" So maybe it's a little too chatty for that scenario. But I think it's a really good way to see where we can go — it's like: if you can do that, you can do anything. It's really a different emotional state they've managed to capture. And I think the really interesting part is how they express that: how could they build a model that gets those kinds of emotions into it and expresses them mathematically? If you get that dial, then that dial is the powerful part.

I agree with you — that word "emotional" is probably the most important one, because that was my real point when I interacted with the model. It was like: oh my goodness, this feels real. There was that feeling — and no other voice model has been able to do that. So I think this is something different. They're very open about the techniques, and in fact I think it's getting open-weighted pretty soon as well. So I think this is just a game-changer.

Well, you heard it here first — check out the Sesame demo. And that's all the time we have for today. Chris, Volkmar, thanks for joining us as always. We'll have to do the duo show again at some point. And thanks to all you listeners for tuning in to Mixture of Experts. If you like what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere, and we will see you next week here on MoE.