When Quantum Computing Hits Consumer Devices
Key Points
- Blake points out that quantum tech has already crept into consumer experiences, citing a demo of a quantum‑powered game running on a phone.
- Volkmar predicts quantum computing will reach consumer devices mainly via cloud‑connected services, accelerating once clear‑cut applications deliver real benefits.
- Chris offers a tongue‑in‑cheek forecast that quantum will both appear on consumer hardware and remain unavailable there simultaneously.
- The show’s host frames the episode as a pivot from AI to quantum, noting the recent surge of hype and media coverage despite the field’s cyclical visibility.
- Blake notes that roughly two years ago quantum entered a new era with the launch of the first practical quantum computer, marking a shift from pure research toward emerging applications.
Sections
- Predicting Quantum Consumer Adoption - Experts debate the timeline for quantum computing reaching consumer devices, citing early demos, cloud‑linked implementations, and wildly differing forecasts.
- Seeking Quantum Utility and Advantage - The speaker explains that once a quantum device can perform tasks beyond classical simulation—a state termed quantum utility—the current effort is to pinpoint valuable, real‑world problems where this quantum advantage translates into faster, cheaper, or better outcomes.
- Quantum Advantage and AI Overlap - The speaker outlines efforts to scale quantum simulations of larger molecules toward parity with classical methods and then explores whether a real intersection exists between quantum computing and AI, highlighting two avenues: using quantum hardware to enhance AI and applying AI techniques to accelerate quantum research.
- AI‑Powered Quantum Transpilation & Code Assistant - The speaker explains how reinforcement‑learning‑driven AI passes enhance quantum circuit transpilation and how a fine‑tuned Watsonx code assistant, embedded in the IDE, helps developers write Qiskit programs.
- Quantum-Generated Data for AI Training - The speakers discuss using fast quantum simulations to produce training data for neural networks, enabling AI-driven approximations of quantum phenomena without continuous quantum hardware.
- Quantum‑AI Hybrid Simulation Paradigm - The speaker proposes that quantum computers will act as accelerators alongside classical methods, using AI to identify promising regions of parameter space and then applying quantum simulations for detailed analysis of those selected problems.
- Quantum Computing Progress Without Error Correction - The speaker warns against the belief that quantum computers are useless until full error correction is achieved, noting that current machines already execute circuits beyond classical simulation and that continual scaling of performance will unlock practical utility.
- Federated Tool Access for AI Agents - The speaker explains how AI agents can orchestrate external tools—like diagram generators, compilers, and deployment services—through federated marketplaces, enhancing coding environments while debating whether standardization or agent intelligence will drive this integration.
- Anthropic's Potential as AI Standard - The speaker argues that Anthropic may become the de‑facto standard for model integration because its value rises from ecosystem compatibility, while OpenAI has not taken the lead in defining such standards.
- CoreWeave IPO and AI Compute Niche - The speakers discuss CoreWeave’s evolution from crypto‑mining infrastructure to an AI‑focused cloud provider, its close NVIDIA partnership and upcoming IPO, and question whether a specialist AI compute firm can succeed alongside the dominant cloud giants.
- CoreWeave’s Cluster Edge Over Cloud Providers - The speaker explains that CoreWeave’s focus on delivering ready‑to‑use, tightly coordinated GPU clusters—without the complexity of virtual private clouds—gives it a practical advantage over traditional cloud giants whose infrastructure is built around loosely coupled, individual machines ill‑suited for large‑scale AI training.
- Shifting AI Compute Market Landscape - The speakers debate whether AI pre‑training, fine‑tuning, and inference will remain cloud‑centric or migrate to powerful desktop devices, and how incumbents like AWS and Azure might respond.
- AI Chip Competition and Market Scale - The speakers debate the speed of AI chip development, contrast market sizes from billions to trillions, and argue that large inference clusters and custom chip designs are reshaping an underserved compute market.
- Anthropic Voice Model Breakthrough - The speakers marvel at Anthropic’s new voice system—now smooth, low‑latency, and eerily human‑like enough to fool a spouse—and debate whether it finally delivers on the long‑promised, game‑changing conversational experience.
- Emotional AI Model Breakthrough - The hosts laud a new voice model that convincingly captures human‑like emotions, discuss how mathematically encoding such affect could be transformative, and tease its upcoming open‑source release and demo.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=iYRdhSEGpg4](https://www.youtube.com/watch?v=iYRdhSEGpg4)
**Duration:** 00:45:22

## Section Timestamps

- [00:00:00](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=0s) **Predicting Quantum Consumer Adoption**
- [00:03:07](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=187s) **Seeking Quantum Utility and Advantage**
- [00:06:12](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=372s) **Quantum Advantage and AI Overlap**
- [00:09:19](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=559s) **AI‑Powered Quantum Transpilation & Code Assistant**
- [00:12:28](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=748s) **Quantum-Generated Data for AI Training**
- [00:15:40](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=940s) **Quantum‑AI Hybrid Simulation Paradigm**
- [00:19:03](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1143s) **Quantum Computing Progress Without Error Correction**
- [00:22:12](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1332s) **Federated Tool Access for AI Agents**
- [00:25:18](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1518s) **Anthropic's Potential as AI Standard**
- [00:28:25](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1705s) **CoreWeave IPO and AI Compute Niche**
- [00:31:29](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=1889s) **CoreWeave’s Cluster Edge Over Cloud Providers**
- [00:34:34](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=2074s) **Shifting AI Compute Market Landscape**
- [00:37:38](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=2258s) **AI Chip Competition and Market Scale**
- [00:40:47](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=2447s) **Anthropic Voice Model Breakthrough**
- [00:43:51](https://www.youtube.com/watch?v=iYRdhSEGpg4&t=2631s) **Emotional AI Model Breakthrough**

## Full Transcript
How many years do you think it'll
be until quantum computing finds
its way into a consumer device?
Blake Johnson is a Distinguished
Engineer and Quantum Engine Lead.
Uh, Blake, welcome to the
show for the very first time.
What do you think?
Uh, so I might say that in some
ways it's already happened, right?
Some of the early explorations with quantum
were kind of fun, with games, and those
you can kind of make available anywhere.
So I think I've seen a demo of a
quantum powered game on a phone.
Volkmar Uhlig is Vice
President, AI Infrastructure Portfolio Lead.
Volkmar, welcome back, uh, do you have a prediction here?
Quantum itself in a phone would be a big fridge.
So I guess it's connected over the internet.
Um, and I can see that once there are
actual applications which get the benefit
out of it, they will go very fast.
And finally, last but not least, uh,
Chris Hay is Distinguished Engineer
and CTO of Customer Transformation.
Chris, I can usually rely on
you for the wildest estimates.
Uh, what do you think here?
I think it will be available on a
consumer device, uh, and not available
on a consumer device at the same time.
All right.
All that and more on today's Mixture of Experts.
I'm Tim Hwang and welcome
to Mixture of Experts.
Each week, MoE helps you navigate the
biggest headlines in technology with a
set of brilliant minds from research,
product, engineering, and more.
As always, we have a slew
of AI news to get through.
We're going to talk about Anthropic's new
Model Context Protocol, CoreWeave filing to go IPO.
And a new voice demo from
a company called Sesame.
But uniquely today, we're actually going to
kind of like step a little bit adjacent to our
usual AI topic to talk about quantum because
we have Blake, uh, here on the show with us.
Um, Blake, maybe I can just have you
kind of kick it off a little bit.
You know, if you've been reading the headlines.
You know, quantum's weird.
It kind of like disappears from
the headlines and then occasionally
it just like comes back in force.
And you just see quantum
headlines like all the time.
And I think one of the reasons we wanted
to have you on the show is that like
we're in, like, quantum spring.
Like everybody's talking about it, uh, suddenly.
But, to my question that I opened
with, it's sometimes hard to get a sense
of how close or far this technology
is from becoming something that we
practically feel the impacts of as,
you know, a consumer or even,
I guess, an enterprise.
Um, and so I guess maybe a good place to start
if you want to kind of just quickly give us
a capsule is, you know, cut through the hype.
Like where are we now?
Like are we very close?
Is quantum nigh?
Or is it like still, you know, in
this kind of basic research, research
and development sort of world?
Yeah, I mean, I think something quite
interesting has happened in the past,
uh, two years or so, where quantum's
really entered a new era, right?
The first quantum computers that IBM
put online were really educational
tools, research tools, right?
Like they're about in some ways about
teaching quantum computing, teaching
quantum mechanics, uh, and, and useful to
students and educators and researchers.
Um, and that was, you know, limited by
the size of the computation we
could execute before the quantum
noise overwhelmed the situation and you
were kind of left with just noise.
Uh, but in the last couple of years,
we're now finally at the state
where we can do computations with
our most powerful quantum computers
that we cannot simulate with
brute-force classical simulation.
Uh, so there, um, this is a moment that,
that at IBM we refer to as quantum utility.
Um, and you can see that you
at least need this property for something
to be useful at all, because if I can
get similar results with a classical computer,
then I don't need a quantum one, right?
So we're finally in this regime where I can do
something kind of unique on the quantum device.
Um, but now, I would say, the
hunt is on to connect
that power to an application that
someone really cares about, that matters and
has value. And so we're now in
this sprint towards actually taking
these devices and finding quantum advantage.
Which is that moment when we can do
something faster, cheaper, or better.
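For a sense of why circuits beyond brute-force classical simulation matter, here is a back-of-the-envelope sketch (an illustration added here, not from the episode): a full state-vector simulation of an n-qubit circuit must store 2^n complex amplitudes.

```python
# Back-of-the-envelope arithmetic (illustrative, not from the episode):
# a brute-force state-vector simulation of an n-qubit circuit stores
# 2**n complex amplitudes, each 16 bytes in double precision.

def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to hold a full n-qubit state vector."""
    return (2 ** n_qubits) * 16

for n in (30, 50, 100):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits -> {gib:.3e} GiB of state vector")
# 30 qubits already need 16 GiB; 50 qubits need ~16 million GiB;
# 100 qubits exceed any conceivable classical memory.
```

The exponential growth is the whole story: each added qubit doubles the memory, so somewhere past roughly 50 qubits the brute-force approach stops being a matter of buying more hardware.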
Yeah, and what are the kind of most,
uh, promising areas for that, right?
Because I guess almost it's like, it sounds
like the technology is almost looking for
its demo, or it's like, okay, here's a place
where it's like really better than traditional
computers, um, uh, and, and you have to find a
kind of a quantum shaped problem, if you will.
Sure, right. I mean, there are things
that we know about if we had,
like, the most powerful machines that
could do arbitrary-sized computations.
There are things where people can write down
mathematical proofs that this has
better scaling behavior with a quantum
algorithm than a classical algorithm.
In particular, right, there's simulating nature,
which is like, Richard Feynman's original idea
for the quantum computer came through thinking
about the problems of simulating nature.
And so that like has applications in chemistry,
materials design, drug discovery, and so on.
That's, that becomes like a rich, uh, area.
But then you also have, uh, problems,
uh, mathematical problems with structure.
This is where you find things
like, uh, factoring, for
instance, um, or machine learning.
And then you have kind of optimization problems,
where we have sort of weaker mathematical
proofs about the advantage, but still,
because of its importance to business, it
deserves and gets a lot of attention.
So, I mean, in terms of where we're
placing our bets, I mean, I think there's
a number of, uh, areas which we think
are ripe for early quantum advantage.
In particular, this issue of
simulating nature, particularly simulating
the time dynamics of a quantum
system, seems like it's very possible.
And chemistry is another area where the
field's kind of had its ebbs and flows,
with people being very excited and
optimistic and then finding pessimism
again once they dug into the problem:
oh, it's harder than we thought,
actually it's really difficult.
But that area had a really cool
moment last year.
By combining the power of quantum and
classical computing, something that
IBM calls quantum-centric supercomputing,
where you're splitting a problem
apart and having the quantum computer and
classical computer really work together,
we were able, for the first time,
to really make headway on the
problem of chemistry with quantum,
and to show we could finally
be competitive with classical methods
for certain kinds of molecules.
And now we're sort of expanding to
larger molecules and trying to show
again that we can kind of actually sort
of reach parity with, uh, the state of
the art methods in the classical world.
And then of course the hope is that,
you know, by pushing hard enough, we
finally enter that territory of advantage.
Yeah, that's really exciting.
So, I want to make sure we bring Chris and
Volkmar in, but, you know, I think one last
question maybe to kind of get us there.
Like, so, MoE typically focuses on AI.
Um, and uh, you know, I think because of
the hype cycle, I'm in lots of conferences
where people are always like, quantum and
AI and like, you know, it's right up there
with like blockchain and like kind of like
all the other hype technologies are kind of
like munged together in one big kind of blob.
Um, and I guess maybe the question
for you is like, is there actually an
overlap between AI and quantum here?
Like if so, what is it?
Um, you know, I think a lot of our
listeners kind of work in, you know,
machine learning day in day out.
Um, and you know, I think it's been kind of
voiced that quantum might have this overlap,
but I think from where I'm sitting, it's
like still very unclear what that would be.
But I'm curious about like what people are kind
of talking about in your world, and if there
is actually a genuine interesting overlap here.
I think there's two interesting
directions to think about, right?
You can think about using quantum
to make AI better, and you can think
about using AI to make quantum better.
Um, and I think the two
pictures look pretty different today,
in the sense that, you know, a lot of the
early hope was about using quantum for AI.
And this, I think, is very interesting,
but we know a lot less about it, right?
Like, um, it wasn't until a couple years
ago that we could find one of these kind
of formal mathematical proofs that there
was something that you could definitely do
better with a quantum machine, but it kind of
required a contrived sort of quantum data set.
And of course, usually where people
are applying AI, they're
applying it to classical data.
And so like, uh, that's an area where what we
have available is more about heuristic methods,
things which are harder to make proofs about.
And yet we have definitely a
different computational paradigm.
So can you do something better with it?
I would say the jury is still out.
People are definitely trying.
Uh, and we're partnering, uh, with,
with startups and other companies that
are, um, that like that is their focus.
Um, in fact, you can, you can use a
sort of a quantum powered AI tool,
um, on our, uh, quantum platform.
The other direction, though: the moment
for applying AI to quantum is definitely
now, and we're really
finding a lot of value in that.
In particular, there are two
new tools that we just released last
year that are directly enabled by AI.
One is this: one of the problems you have
when executing a quantum program is that you
start with some sort of description; in
quantum computing, your program ends up
taking the form of a quantum circuit,
and you need to optimize that circuit so
that it will run with the best performance
or quality on the quantum hardware.
And so we built a kind of compiler,
with a tool called Qiskit, that does that
compilation task, or transpilation task,
to optimize your circuit.
Last year, though, we upgraded that
transpilation technology with
AI-powered passes, using a
reinforcement learning technique to
automatically build optimization
passes that recognize patterns in
circuits and find good reductions.
So that was sort of one, uh, sort of
novel piece that we, we introduced.
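The pattern-reduction idea described here can be sketched with a toy peephole pass. To be clear, this is a hand-written illustration, not IBM's reinforcement-learning passes, and the list-of-tuples circuit format is invented for the example:

```python
# Toy peephole optimizer illustrating "recognize patterns in circuits and
# find good reductions." One hand-written rule: two identical self-inverse
# gates in a row cancel (H*H = I, X*X = I, CX*CX = I). A circuit is a list
# of (gate_name, qubits) tuples -- an invented format for this sketch.

SELF_INVERSE = {"h", "x", "cx"}

def peephole_optimize(circuit):
    """Return a reduced circuit with adjacent self-inverse pairs removed."""
    out = []
    for gate in circuit:
        name, _qubits = gate
        # If this gate is self-inverse and identical to the gate just
        # emitted, the pair is the identity: drop both. The stack-style
        # pop also catches pairs exposed by earlier cancellations.
        if out and name in SELF_INVERSE and out[-1] == gate:
            out.pop()
        else:
            out.append(gate)
    return out

circuit = [("h", (0,)), ("h", (0,)), ("cx", (0, 1)),
           ("x", (1,)), ("x", (1,)), ("cx", (0, 1))]
reduced = peephole_optimize(circuit)
print(reduced)  # -> [] : every gate cancels pairwise
```

A real transpiler applies many such passes (plus routing and basis translation); the RL-trained passes in the episode effectively learn which reductions to apply where, instead of relying on hand-written rules like this one.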
And the other was very much in
the category of generative AI: we took
our watsonx Code Assistant tool
and fine-tuned it with
quantum programming patterns, based
on Qiskit problems and
Qiskit tutorials and so on, to build
what we call the Qiskit Code Assistant.
So this lives directly in your development
environment and helps you learn
quantum programming, and helps the
developer write quantum programs
directly in their development environment.
Well, actually, I want to bring
in Volkmar and Chris here.
I mean, Volkmar, maybe I'll turn to
you first because it feels like we can
kind of talk a little bit about that,
um, sort of quantum for AI overlap.
You know, you work with enterprises, you think
a lot about AI infrastructure, like what's
just like the hardware that we need to do AI.
Like, I'm curious both, like, on two fronts,
like, I guess whether or not you have, like,
there are customers starting to be like,
here, I keep reading about this quantum thing,
are you guys going to support that soon?
And then I guess the second question is, like,
is that kind of, like, in the sort of long
term forecast for infrastructure to say, oh,
well, you know, we predict in four or five
years, you know, we really need to have these
quantum computers online, or if that's not
really kind of in how you guys think about
this sort of, like, long term planning here.
Yeah, I don't think that there
is a clear path to unify the two.
Um, but on the flip side, you know,
like, there are these different
compute paradigms which are showing up.
So I think we are going away from
the traditional, hey, here's an x86
box, go knock yourself out.
Uh, and we are getting more into a world
where, um, the compute capacity is much
more specialized for specific tasks.
Um, and so, you know, we have AI
computers now. We will talk about this later,
but there are whole companies which are
just saying, okay, we only focus on AI capacity.
Um, I think similarly with quantum,
there will be a bunch of players
which have quantum computers online.
The way I see this is, and this, you know, goes
back to my prior life in self driving cars.
We always had the issue that
when you build an AI model,
you in fact need some
ground truth to train it with.
And, and we are seeing this now also
coming in large language models.
And the beginning of large language models was
like, okay, let's just download the internet.
That's my ground truth, right?
And, and I do next token prediction.
So in self driving, you need massive
scale, um, you know, observational data.
So now, if you think about where we are
heading: the model
is an approximation of that reality.
So if you go into biology or chemistry
or, you know, physical phenomena,
what you need is a good sample
set, which can then be trained into
a network to act as an approximation.
Now, suppose that producing the data for
training a neural network would take,
you know, decades or centuries.
That's where I can see a quantum computer
being extremely useful because you can now
say, let's use a quantum computer to explore
the, uh, the solution space because it's fast,
produce a bunch of data, and then use that data
to train a neural network as an approximator.
And now you can actually work without the
quantum computer and look at, you know,
phenomena on a desktop machine.
At the company before my last company,
my head of data science actually
came from CERN, and CERN has been
doing this for decades now: they train
neural networks which are just
physics approximations.
And so this is where I think the
two things can really come together.
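The pattern Volkmar describes, generate data with an expensive resource once, then train a cheap approximator, can be sketched in a few lines. Everything here is a stand-in: `expensive_simulation` plays the role of the quantum computer (or physics simulator), and piecewise-linear interpolation plays the role of the neural network:

```python
import bisect
import math

# Toy sketch of the surrogate-model pattern: run an expensive simulator
# once to generate training data, fit a cheap approximator, then answer
# later queries without the expensive resource. Both the simulator and
# the surrogate are illustrative stand-ins, not a real quantum workload.

def expensive_simulation(x):
    # Pretend each call takes hours on a quantum computer.
    return math.sin(3 * x) * math.exp(-0.1 * x)

# 1. Generate training data with the expensive resource (done once).
xs = [i * 0.05 for i in range(201)]        # parameter grid on [0, 10]
ys = [expensive_simulation(x) for x in xs]

# 2. "Train" a cheap approximator; piecewise-linear interpolation
#    stands in for a neural network here.
def surrogate(x):
    i = bisect.bisect_right(xs, x)
    if i <= 0:
        return ys[0]
    if i >= len(xs):
        return ys[-1]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# 3. Query the surrogate cheaply, with no simulator in the loop.
err = max(abs(surrogate(x) - expensive_simulation(x))
          for x in (0.123, 4.56, 7.89))
print(f"max surrogate error at test points: {err:.4f}")
```

Once trained, the surrogate runs on a desktop machine, which is exactly the CERN-style workflow described above: the expensive physics happens once, at data-generation time.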
Totally.
Yeah. I think it's an application in
kind of like the AI-to-quantum space.
You know, Blake, you talked a little bit
about, oh, well, you know, we trained
a code assistant, right, to be able
to program specifically in a
language that's specialized for quantum.
I guess Volkmar, you're raising another
thing which is pretty interesting:
in order to take advantage of
a bunch of these applications, we might just
need to be able to generate data, and
AI is effectively a way to get there,
it seems like.
So Chris, you've been
uncharacteristically quiet.
I'm curious if you have a view on,
on all this, uh, in terms of kind of
like future prospects for quantum.
And I guess specifically, like as someone
who kind of plays with models a lot, there's
an interesting story here about models
getting more specialized with time, um, but
I'm curious about your take on all this.
I was really quiet because I feel really dumb
on this subject when we've got some super smart
guys talking about quantum, and I'm like,
oh my goodness, I'm just not there.
But actually, I will take my super dumb
approach here, which is, I think,
to your point: you are going to need AI to be
able to interact with quantum machines.
Because, you know, guess what AI is really good
at: explaining things like you're a six-year-old.
And I personally am going to need
that if I'm going to program a quantum computer,
and I'm going to need to vibe code it, right?
So I think that is definitely a path
with code assistants: vibe coding quantum.
A probably more kind of serious one
in my mind, and again, I'm just sort
of thinking out there at the moment.
If I think about what quantum's really good
at, and again, I really don't understand
quantum, um, but it's really all about
probability, and it's all about things like
error correction, and it's all about sampling.
And if actually, if we think about what AI is
about, it's really about probability, it's about
next token prediction, and it's about sampling.
So, in a world where we have two very separate
and different things, which are really focused
on probability, sampling, and essentially
prediction, I can't help thinking that in some
way, shape, or form, these things are going
to come together, whether that's AI helping
quantum predict better or quantum
helping AI predict better. But I think there
is a Venn diagram somewhere which brings these
things together. If you ask me any further
questions on this, please don't, Blake, because
then I am just going to look really super dumb.
But I think there's something there.
I mean, there is definitely, I mean,
like, do you want to respond to that?
Like, there's kind of a fun take there, I think.
I think maybe you can kind of combine a little
bit of what Volkmar and Chris have added here.
Like, you know, we don't see a
world where quantum replaces
classical computing, right?
Like it's, uh, it's an accelerator
for certain kinds of problems.
And something we see really exciting
prospects for in the future is
the convergence of
bringing different methods together.
Um, and it's kind of like you see this paradigm,
this pattern already, uh, widely used in the
computational science field where people will,
will want to study some sort of system, but the
computational space is so, uh, overwhelmingly
large that they don't know like where to start.
And so they'll use AI models to try to
identify interesting regions of parameter
space and then plug in their detailed
simulation model with non-AI methods.
But now, something that's kind of an
obvious upgrade to that pattern is to plug in
a quantum simulation model for the detailed
simulation of AI-identified interesting
problems or interesting feature space.
And so I think the future
really isn't quantum
or AI, it's quantum and AI.
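This hybrid loop can be sketched minimally, with hypothetical stand-ins throughout: a cheap, noisy "AI" score filters the whole parameter space, and the expensive "quantum" evaluation runs only on the shortlisted candidates.

```python
import math
import random

# Minimal sketch of the hybrid pattern: a cheap surrogate scores the full
# parameter space; the expensive, accurate simulation runs only on the
# most promising candidates. Both functions are illustrative stand-ins.

random.seed(0)

def cheap_ai_score(x):
    # Fast, noisy estimate of how interesting parameter x is
    # (stands in for an AI model's prediction).
    return math.cos(x) + random.uniform(-0.3, 0.3)

def detailed_quantum_simulation(x):
    # Expensive, accurate evaluation (pretend this needs a quantum computer).
    return math.cos(x)

# 1. Cheap scan over the full parameter space.
candidates = [i * 0.01 for i in range(629)]            # covers [0, 2*pi)
ranked = sorted(candidates, key=cheap_ai_score, reverse=True)

# 2. Expensive simulation only on the top handful of candidates.
top = ranked[:10]
best = max(top, key=detailed_quantum_simulation)

print(f"best parameter {best:.2f}, detailed value "
      f"{detailed_quantum_simulation(best):.3f}")
```

The design point is the budget split: 629 cheap calls, only 10 expensive ones, and the noisy filter still steers the expensive evaluations toward the genuinely interesting region.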
Yeah, and I think it's, it's
particularly interesting, I think,
in the context of, um, sort of like,
uh, the kind of history of computing.
Right, where you started out with all these
devices, and then you're like, oh, we're going
to converge towards a general computing device.
And then kind of like, it feels like in
2025 now, we're suddenly like, well, you
know, like we're going to have a special
hardware just for AI and like also quantum
might be like a specific platform that we
use to explore certain kinds of problems.
There's kind of this like redivergence, I
guess in some ways from like the kind of
like general purpose sort of model that
you'll have like kind of one sort of hardware
platform that kind of will do everything.
So I think there was a paper by Google,
I don't know, it was like 10, 15 years
ago, and it was like an allegory
to the Watson statement that the world,
you know, needs five computers.
And so Google said, you know,
the world needs five computers.
And they said, you know, there's a
general purpose computer as we know it.
And then they said there's search and then
we don't know what the other three are.
And so I'm, I'm like tracking now.
And I think so number three is
probably the AI training supercomputer,
which is very different structure.
And then number four is probably quantum.
And so who knows what number five is.
Yeah, the fifth unknown.
Yeah.
This is just very different compute patterns
and you're trying to find, you know, an
optimization and the moment you can optimize
something by a factor of, let's say a thousand
or 10, 000, then it's worthwhile to actually
completely relook at the architecture.
And I think quantum is one of these things:
if you can do something, you know,
in minutes which otherwise takes a hundred years,
it's worthwhile actually looking at
a completely different architecture.
Yeah, absolutely.
And so it's, it's hard to say like, oh, you
know, these are not replacements because they
are just so different in their design space.
And, um, and then they can solve
that one problem very, very well.
Well, Blake, uh, producer Hans was like, you're going to have to cover all of quantum in 15 to 20 minutes. So I think we have done the best we can.
I know you need to go, but before we let you go, one final question. You've done a really great job, I think, of parsing out what's important, what's not, and what's happening on an ongoing basis. I guess the question for you is: how can our audience cut through all the noise in quantum news? What's the important news to be paying attention to, what should people be reading? Just if you have any final parting recommendations on that front.
I guess I would caution our listeners that there is a narrative out there that quantum computers can't do anything until we have error correction.
Um, and certainly the most general-purpose algorithms we know of are large computations, and we need systems that can execute really large circuits. But we're already in this realm where we can execute circuits that we can't simulate. Um, and I think it's actually harder to believe that nature doesn't permit anything useful to be done between now and something which is a billion times larger.
Um, and so I think the thing to pay attention to is the steady march of progress in the performance of the machines, as people build up the fundamental ingredients to do larger and larger computations. Because what we can do with these machines is going to be directly connected to the scale of computation that we can reliably execute.
Yeah, that's great to keep in mind.
Well, Blake, thanks for joining us and spending some time this morning, and hopefully we'll get you back on a future episode, because I'm, uh, I'm very sure that there's going to be more quantum news this year.
Well, that was great.
I'm going to move us on to our next topic.
Um, so the thing that was dominating all of
my group chats and my machine learning AI
social media this week was the Model Context
Protocol released by Anthropic, or the MCP
for short.
Um, and the way Anthropic describes it, I'm just going to quote from their website, is that "MCP provides a universal open standard for connecting AI systems with data sources".
And people have just been frothing at the
mouth on how excited they are about MCP.
Chris, let me turn it over to you.
When I read something like "universal open standard for connecting AI systems with data sources", are they just talking about APIs? Like, why is MCP important? And I'm curious what you think about the release.
Okay, so MCP has been around for a little bit of time. What's actually made it super cool, though, is that it's now hooked up into some of the editors, like Cursor or Cline, which is my particular favorite in this case. So you can then go and access MCP from there.
Um, what is cool about MCP? Underneath the hood, it is just JSON-RPC, so remote procedure calls, right? So there's nothing magical there, but what they have done is absolutely standardize it, and they standardized it in probably three ways, which is important.
Number one is that you can expose your resources. Resources are going to be things like, maybe it's a database schema, maybe it's your GitHub schema, and then you can go and look at an individual file. Uh, and then the second one, which is probably the most important one, is tool calling. So I can say: these are the tools that I've got available, these are the parameters that you need, and then I can go and execute those tools.
Why is that important?
Because traditionally, we've been
using a thing called function calling.
And the thing about function
calling is you need the functions
to be, uh, on your machine locally.
But with MCP, I can have these servers
serve up the tools that are available, and
they can be hosted in different locations,
they could be remote, and therefore I can
start to mix and match and do cool things.
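To make the mechanics Chris describes concrete, here's a minimal sketch of what that tool discovery and remote tool call look like on the wire, in Python. The `tools/list` and `tools/call` JSON-RPC method names come from the MCP specification; the `get_weather` tool itself is a made-up illustration, not a real server.

```python
import json

# MCP is JSON-RPC 2.0 under the hood. A client first asks a server
# which tools it exposes, then invokes one by name with arguments.

# Request: discover the tools a server serves up.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Example response: the server advertises each tool with a name and
# a JSON Schema describing the parameters it needs.
# (The get_weather tool here is a hypothetical illustration.)
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Fetch current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# Request: execute a tool. The model (or agent) fills in the arguments
# based on the advertised schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

print(json.dumps(call_request, indent=2))
```

Because these messages are plain JSON-RPC, the server answering them can live anywhere, locally or remotely, which is exactly what enables the mix-and-match federation described above.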
So, coming back to the Cursor example, there might be a tool server that has got, uh, sequence diagrams, mermaid diagrams, or bar charts, and therefore I can say, "Hey, you know, I've coded up something," and then I'm going to say, "Hey, go and generate me an architectural diagram for this."
Or maybe there's a compiler, go compile this.
Or maybe, uh, there's an MCP server for AWS.
And then I would just say, go deploy
this piece of code that I've built
and put it on the server there.
So actually this ability to access tools in a federated fashion, and everybody's building marketplaces around this, really starts to supercharge the models and the coding environments. Because I'm no longer just restricted to working with my code; now I've got access to my tools and ecosystems, and I can mix and match them together. And more importantly, the model and the agent are orchestrating and in control of that, which makes it super, super cool.
Were we always going to end up here? I know there was a very hyped dream of agents, I don't know, 12, 18 months ago, which is that eventually the agents just get good enough that they can do this integration without having any special standard. But I guess, I don't know, maybe that was always kind of a pipe dream, right? Like, we would always have to rely on some standardization to allow these agents to effectively use these tools, versus the agent just gets smart enough and the problem gets solved out of the box.
Yeah, we were always going to end up here, and I think there are a few shifts in the technology that have got us there. So number one, MCP is a good one. But actually before that, we really needed function calling; we needed a standardized way for models to know how to interact with APIs.
Um, and you also needed a thing called structured output. So one of the things that models have been really bad at in the past is, if you ask a model to go generate me this piece of text, it can just generate it in whatever format it wants, you know, which is fine. But the problem is that if I'm dealing with an API and there's maybe a schema behind it, I don't want it hallucinating the schema in its output. It needs to be exact to be able to make that interaction.
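A small sketch of what that exactness means in practice, using only the Python standard library; the order-like schema and the `validate` helper here are hypothetical illustrations, not from any particular API:

```python
import json

# A hypothetical API expects this exact shape; a free-form model reply
# that invents or omits fields would break the integration.
REQUIRED_FIELDS = {"name": str, "quantity": int}

def validate(raw: str) -> dict:
    """Parse a model's reply and check it against the expected schema."""
    obj = json.loads(raw)  # raises ValueError if not valid JSON
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in obj:
            raise ValueError(f"missing field: {field}")
        if not isinstance(obj[field], ftype):
            raise ValueError(f"wrong type for {field}")
    return obj

# A well-formed, schema-exact reply passes...
ok = validate('{"name": "widget", "quantity": 3}')

# ...while a reply with a hallucinated schema is rejected.
try:
    validate('{"item": "widget", "count": "three"}')
except ValueError as e:
    print("rejected:", e)
```

Structured-output modes in model APIs essentially move this guarantee upstream: the model is constrained to emit only replies that would pass such a check.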
And then the last one that's really important is the context length, right? The working short-term memory that the models have. If it's too small, you're not going to be able to do a lot with agents in the first place, because you're working with tiny paragraphs of information, right? Whereas now the models are all sort of 128k by standard, 256k for some of the larger, newer models, and in the millions for some of the Google models as well. So the working memory the models have is huge.
So they understand standards, they know how to do function calling, and now we've standardized this at an API level in the same way that REST was standardized for microservices, et cetera. So this now opens up the tool marketplace, and then, as I was saying the other week, the next logical step is going to be agent marketplaces, and then we're going to have agents collaborating with each other. So this step of MCP and tool marketplaces is really the first step, but there's more to come.
Well, my question for you is: do you think Anthropic has the juice, right? This is kind of a battle of standards. They're throwing out their standard, and it's already very popular; it's integrated into a bunch of editors. But it's by no means certain that the model creator is the one that defines the standard. Like, I can imagine a world where, I guess, the databases are the ones that say, okay, well, if you want to talk with our kind of protocol, this is the standard that you're going to use.
I guess the question for you is, in this competitive landscape, do you think Anthropic's got the edge, that this will eventually be the base standard for everybody?
I don't think that this is, I think we should ask the question differently. So I think Anthropic is hitting a point where the value of their model is higher if it can interface with an ecosystem, right? Because they cannot build every application under the sun or house every application under the sun. And so what they are doing is opening up their access to information, which is potentially locked away, and making their own model, you know, more useful.
And so someone has to drive the standard.
OpenAI right now hasn't been, you
know, stepping up to the task.
So it's coming from somewhere else.
That's how I see it.
Um, now, if you look at what OpenAI did almost a year ago, a year and a half ago, they just said: point us at the Swagger API and we just integrate with it, right? And that was their answer.
And, um, so I think we are getting to a point where it's an indicator that models will actually be much more autonomous, not constantly human supervised. If you look at the traditional initial models, the interface language to the model is English. And now suddenly we're saying, well, not really, it's remote procedure calls to computers. So it's computers talking to computers.
And, uh, you know, there's already the question: if two models want to talk to each other, should they communicate through English? Or should they have some gibberish of their own they could talk in, right?
So we are now just enabling software to be directly invoked from the model, um, and so making the models effectively access the rest of the world. You know, we're already accessing search, so that's deeply integrated. Models go out and do research, and they go to search engines. Um, but search engines are still kind of written for humans, so the queries you are sending are kind of natural language.
But I think this is just the next logical step.
Um, also you need to standardize
to actually do quality assurance.
And so if you don't have a standard, how do you
say my model is actually doing the right thing?
And so if you don't have those standards,
it just becomes very, very hard to do that.
Um, because, as Chris said, the model just gives you some tokens back, and sometimes they are malformed. And so you want to actually have, you know, a unit test which says: no, this is actually correct syntax.
And, you know, and so those things
all happen through standardization.
And then suddenly, you know,
things can talk to each other.
So it's, it's a natural progression.
I'm surprised it hasn't happened before.
That's right. If anything, it's actually maybe a little bit delayed.
Well, great.
I'm going to move us on to our next topic.
Um, one of the big news stories of the week, at least for me, was CoreWeave, which, if you haven't been watching the AI hardware, AI cloud space, is one of the most exciting, I would say, upstarts in the space. Um, they originally started, I think, as a sort of crypto infrastructure company, building specialized clouds for crypto mining.
Um, I think they noticed that AI was
gonna be a big market and they've
kind of like gone fully in on AI.
Um, and they've benefited, I think from the
fact that they have a very close relationship
with NVIDIA and have had kind of early
access to, um, a lot of the kind of like next
generation chips as they've been coming out.
And as a result, CoreWeave has grown hugely.
Um, and it is now filing to IPO. And I guess, Volkmar, maybe I'll turn it to you.
I think you're the obvious person to respond initially to this. I'm interested in how you think about the market for companies that specialize in AI compute, because, you know, thinking about it, I'm like, well, there are these 10,000-pound gorillas in the space that really dominate the cloud market. Um, it's interesting to believe that, hey, a company that just specializes in this one area can survive and become its own gigantic company, right? Um, but I guess I'm curious what you think the prospects are for these more specialized compute companies, and specialized compute in AI in particular.
So this goes back to the thing we talked about, like, 10 minutes ago.
Yeah, there's five computers.
Five computers, and I think there is a wave of new companies coming out which are entering the cloud space to serve that specific niche of, uh, supercomputers.
So if you look at what CoreWeave is doing, they are not only giving you AI capacity, but very specifically an AI training cluster. And so when you go to CoreWeave, you are not buying 10,000 H100s; you're buying 10,000 H100s wired up into a single supercomputer, and they are running that single supercomputer for you.
And so at IBM, for example, we announced it a while back: we have a relationship with CoreWeave, and we are running training jobs on CoreWeave. Um, and it's simply a very natural progression in these compute demands to say, you know, I don't want to have the asset on the books, or I don't want to build the in-house capability to operate these very, very large machines.
And so there's an economy of scale, similar to the cloud, in how to operate this and get really good at it.
And I think CoreWeave is probably
one of the leading companies.
They are really, really good at their job.
And so, um, I think it's a natural progression.
Um, but on the flip side, a new market will evolve of these, you know, high-performance computing hosting companies. So now the big question is: the traditional ones, like Azure, Google, and AWS, how are they doing in this world? And it seems so far CoreWeave has an edge here, because they are saying: we don't worry about virtual private cloud networks internally, et cetera. We are just giving you a computer, and it just has a lot of GPUs in it.
It's still a little counterintuitive to me. You think about the deep amount of capital that an Azure has access to, and it kind of feels like they would be like, sure, we can just offer that too; we just have way more money to do that than these smaller providers. So it almost feels like, I don't know, uh, Chris, if you've got an opinion on this, or Volkmar, it seems like the additional edge they have is that there's something about deploying these clusters which is sort of unique, I guess, in terms of know-how, that makes it actually pretty difficult for these maybe traditional players to easily shove these other companies out of the space. Is that the right way of reading what's going on?
If you look at traditional cloud vendors, um, they are renting out thousands of individual computers. So their DNA is not clusters, um, not making a thousand machines work in concert. Their approach is: I have a thousand machines and they all kind of limp along, and once in a while one fails and I give you another one.
And if you look at training workloads, that's not sufficient. You need a thousand machines which actually stay up. And so any fault or any network congestion has quite a dramatic impact. Um, you know, NVIDIA had a lot of challenges with HBM. So if one of these GPUs has an HBM issue, and there were silent high-bandwidth memory errors and corruptions, your training job will just, you know, either fail, or not make forward progress, or forget what it just learned.
And so, uh, one of the things CoreWeave is doing is monitoring literally every wire in their cluster. So the connectivity from the CPU to the PCIe switch, from the PCIe switch down to the card, all the links, et cetera; they deal with link flapping, et cetera, to keep that one computer up. And the traditional approach in the cloud is like, yeah, just take it offline and give you a different one. And so there's a different DNA you need to have as an operator to actually operate these machines. And this is pervasive in your control plane and your monitoring infrastructure, et cetera. Because in the cloud, typically you just take one machine offline and nothing happens.
Chris, you wanna jump in with your hot take?
I think I've said this before, but if I think about Bitcoin: everybody started on, uh, you know, CPUs, then they moved to GPUs, then they moved to FPGAs, and then they moved to ASICs, et cetera. For inference, we're seeing the exact same thing. Everybody's building their own inference chips, et cetera. So what does that leave you with? It leaves you with training compute, right?
I'm going to train models. And the very thing we were literally discussing last week on the podcast: is the era of pre-training dead, right? So actually we've moved into kind of reasoning models. We're going to take a base pre-train, so there are going to be a few companies doing massive training runs at that point, perhaps, and then everybody's going to be into this sort of inference-time compute.
So, so where is the market?
There is a big market at the moment,
but does that market stay there?
Um, and then you've got to look at what's going on in the desktop market. So if we look at yesterday, Apple with their new M4 Mac Studios, where you're going to be able to take something like the DeepSeek-R1 model and run that on your desktop.
And then we discussed on the podcast a few months ago the NVIDIA boxes where you're going to be able to train.
Uh, so the fine-tuning market, is that going to stay on the cloud, or is that going to be on devices that people have, or is it going to be anywhere else? And then, if I think of the cloud providers, back to your point: do you think AWS and Azure are going to let somebody else eat their lunch? They're going to be like: No! We are going to put that capability in ourselves, and we're going to squish you.
Yeah. I mean, that shifts basically to reasoning models. It's interesting to believe that it basically favors the incumbents, because I guess an inference world looks a little bit less like the training world. Effectively, the meta moves back to: well, it's broken, just pull it out, put a new one in. You don't think that?
Yeah. No, the post-training is now much more than the pre-training, um, simply because it's a mix of training and inferencing. And the complexity of that post-training phase is such that it's sometimes now 5 to 10x more expensive to do the post-training. Now in post-training, you still have your model, your training, live in a cluster. But your loss function just went from a couple of milliseconds to a couple of minutes.
So that's the actual challenge here. And so there is a bigger balance between, you know, how much you have in your training cost versus how much infrastructure you need to have live to do your loss-function calculation. Now, the loss-function calculation still needs the weights of what you just trained. So these are very, very large training runs, which now have more of a mixed workload, because the computational cost has shifted, but the fundamental problem that you need a big HPC machine hasn't.
And so, I mean, from my perspective, the
big question is, is the market big enough
that Google, Amazon, and Microsoft are
saying, this is so critical to us, because
otherwise the workloads move to these
esoteric vendors, and then they will have
a drag that we don't want to allow this.
And then there are effectively two options for them: buy or build, right? And if you look across Google, Amazon, and Microsoft, their heads of the training clusters are all ex-HPC guys.
So they have the talent in house.
So now it's a question,
how fast are they moving?
And then, you know, do they see this
as a market which is big enough?
If you look at CoreWeave, you know, it's a couple of billion. If you look at Microsoft, it's a couple of trillion. So there are three orders of magnitude, you know, of market cap between them. So, like, it may just not be so critical right now.
Or, now that those companies are coming online, they will have to go and do it themselves. So I think we will see. Um, I think, Chris, you're right: the chances that, you know, trillion-dollar businesses take out billion-dollar businesses are extremely high. And their negotiating power is better; we'll see.
But I think, fundamentally, what we
are seeing is that there's a different
compute paradigm, which allowed that
market to exist, and it was underserved.
And because it was underserved,
this company exists.
So now let's see if they close the gap.
I agree with that. It is definitely an underserved market. But as I said, every single one of these companies, whether you're an AI provider or a cloud provider, is invested in designing and building their own chips to bring down the cost. And Volkmar, I totally agree with you that latency on inference is key; actually, the biggest focus at the moment is getting the inference chips right.
So, actually, to my point about having big, massive clusters: yes, it does take a lot of data, and it is big training runs on big clusters to do these kinds of, uh, you know, post-tuning phases. But the reality is it's a different mix of workload, right, in that sense. And there are new techniques, where those guys are really just saying: here's my big cluster, kind of go for it.
So I just can't help thinking that anyone who is an AI model provider is really going to be investing in that space themselves, with their own chips, their own infrastructure. And I get your buy-versus-build point of view, but as you say, it's kind of small numbers at the moment. And I just don't see the big cloud providers being prepared to hand over so much cash to a third party in that sense.
Uh, we could go on at length on this, and I actually do want to return to this point, because I think it's very interesting how the landscape of infrastructure is going to look with all of these pressures. Um, and this kind of third path, where the really big companies are basically like, ah, what's a few billion dollars? and they just leave the market alone, is definitely a path that I'd never thought of before. Um, though we shall see.
Well, great.
Well, I think for the last segment, because we only have a minute, I just wanted to quickly touch on a news story that popped up. Um, we'll mention it just, I think, because it was, again, widely chattered about online.
Uh, there's a startup called Sesame,
um, which was launched by one of the
Oculus co-founders that's been working
on sort of, uh, synthetic voice.
Um, and they released a demo that, at least personally for me, has kind of gotten over what they argue to be the uncanny valley of voice interfaces. To be totally clear, right, I don't really use voice on OpenAI, I don't really use voice on Anthropic, but this is the first time where a demo was like, okay, yeah, this is almost getting smooth enough to the point where it does sort of feel like interacting with a human. Maybe we can just do a quick around-the-horn: you know, Chris, Volkmar, if you guys have played around with it, do you think it's worth people checking out, or is it overhyped? Do you think we're finally there from a voice standpoint? Just kind of quick takes before we close out the episode.
Oh my goodness, that model got
me in trouble with my wife.
I put that model on at about 11 o'clock at night just to interact with it, and my wife's like, who are you speaking to? I hear a woman's voice. And I was like, oh my goodness... and I had to switch that thing off, right, because it was so realistic. I was just like, oh my goodness, I can't talk to this anymore.
So that model is incredible.
Actually, they have solved a few things. They've solved the latency problem. They've solved the kind of utterance problem: just, you know, the silence, the waiting before the model comes back. It is truly a natural interaction, and you feel it. You feel as if you're talking to somebody else at the other end.
So...
I think this is going to change everything. You know, contact centers; I did a thing about contact centers, about latency, et cetera.
So you think that we're going to see those types of models powering customer service experiences in the future?
Absolutely, this is coming down
that road and they're going to
kick off agents to do workflows.
The model is incredible, and I think anybody who has interacted with any other voice model before and thought, ah, it's not quite there yet, or, oh, that's terrible: go check out this model, because what they've done there is incredible.
All right.
Volkmar, parting
shots.
Hyped?
Overhyped?
I, I agree with Chris.
It's amazing.
I tried it in the office.
Um, so my wife was not listening to it.
Um, no, I think the, the, it's very
interesting because it shows the
other end of the spectrum, right?
So we have these kind of military-style Siri conversations, you know, which command you around while you drive.
And this was really smooth, like chatty, you know, friendly, funny. And so I think now we have two ends of the spectrum, and now we can populate all the other points. And so now you can make these models, you know, for pretty much any human interaction. So I can't wait for when I call into, you know, any airline and it doesn't tell me that I need to wait. Um, one of the airlines told me it's two hours and 40 minutes until the next agent can talk to me; instead, they could just pick up the phone and talk to me.
So I think this is really, this is
a great, great extension, um, to,
to the, uh, to the spectrum here.
And, you know, it's also good that someone is nice to you when you're driving the wrong way.
Yeah, also critical.
It might be too sassy for the airline scenario. You know, you call up and go, my flight is delayed, and it goes: Ah, but did you actually get there on time, Volkmar? Did you, you know, did you plan enough? And so maybe it might be a little bit too chatty for that scenario.
I think it's a really good, uh, way to see, you know, where we can go. It's like: if you can do that, you can do anything. It's really a different emotional state they managed to capture. And so I think the real interesting part is how they express that. You know, how could they make a model where they could get those types of emotions into the model and express them mathematically? And I think if you get that dial, then that dial is the powerful part.
I agree with you, actually; that word "emotional" is probably the most important one, because that was my real point when I interacted with the model. It was like, oh my goodness, this feels real. It was really weird, just that feeling that you had, and no other voice model has been able to do that. So I think this is something different. Um, they're very open about the techniques, and in fact, I think it's getting kind of open-weighted pretty soon as well. So, um, I think this is just a game changer.
Well, uh, you heard it here first.
You should check out the Sesame demo.
Uh, and that's all the time we have for today.
Uh, Chris, Volkmar, thanks for joining us as always.
We'll have to do the duo
show again at some point.
Um, and thanks to all you listeners
for tuning into Mixture of Experts.
Uh, if you liked what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere, and we will see you next week here on MoE.