Google Antitrust, AI Safeguards, Quantum Shift
Key Points
- The episode opens with host Brian Casey introducing the “Mixture of Experts” panel, featuring AI experts Kaoutar El Maghraoui, Gabe Goodhart, and Mihai Criveti, to discuss current AI developments.
- The team highlights several headline AI stories: OpenAI’s new safeguards for detecting emotional distress in teens, IBM and AMD’s partnership to blend quantum and classical computing for supercomputing, Amazon’s “Lens Live” visual shopping tool, and Starbucks’ AI‑driven inventory‑reorder system.
- The main discussion centers on the latest clarity regarding Google’s antitrust case and its implications for the tech and AI landscape.
- Additional topics slated for the episode include Anthropic’s recent funding round, rising skepticism about AI, the concept of “AI bears,” and concerns about a potential AI winter.
Sections
- AI Futures and Antitrust Update - In the opening of the “Mixture of Experts” podcast, host Brian Casey introduces a panel of AI leaders to contrast utopian and pessimistic visions of AI, preview discussions on Anthropic’s raise, AI skepticism, bears, an AI winter, and the latest developments in Google’s antitrust case.
- Google Antitrust Case Update - The speaker outlines the 2020 antitrust lawsuit against Google—dubbed the biggest tech case since U.S. v. Microsoft—explains how AI developments shaped a more conservative ruling, and notes that Google will keep Chrome and Android and may continue paying to remain the default search platform.
- The Power of Software Defaults - The speaker stresses how ubiquitous default settings drive everyday software use, benefiting firms like Google while also offering savvy users a chance to customize and influence the broader AI‑search landscape.
- Default AI Partnerships Shape Search Power - The speaker argues that default settings in AI assistants amplify platform influence, as choices like Siri routing users to OpenAI or Anthropic can instantly shift the balance of power in AI-driven search.
- AI Search Evolution and Funding Gaps - The speaker argues that while large search engines will continue to dominate due to extensive real‑time data and varied user contexts (phone vs. desktop), smaller AI providers lack funding and up‑to‑date data, so a significant shift toward AI‑driven search is unlikely for another few years.
- AI vs Search: Browser as Portal - The speaker examines how exclusivity agreements hinder merging AI‑generated information with interactive web experiences, questioning whether browsers remain the main distribution model as users constantly shift between traditional search and AI assistants.
- Beyond Tokens: AI Tool Integration - The speaker explains that large language models only generate tokens and depend on surrounding systems to perform actions like invoking code or accessing the internet, emphasizing that we are still in the early stages of building interfaces that turn models into functional agents.
- Anthropic's Series F and Market Position - The speakers discuss Anthropic’s recent Series F funding amid a broader trend of late‑stage private rounds with soaring valuations, and highlight Anthropic’s niche focus on code‑centric AI compared to competitors pursuing broader, full‑stack models.
- Assessing GPT‑5 Reception and Anthropic’s Coding Edge - The participants examine the initially lukewarm response to GPT‑5, its shifting sentiment, and question how durable Anthropic’s advantage is in the AI development and coding market.
- AI Development Tools Lower Barriers - The speakers argue that superior tooling and user experience—exemplified by AI‑powered platforms like Replit, Cursor, and Vibe Coding—are key to attracting novice developers by removing setup friction, representing a substantial growth opportunity.
- Rapid Tool Switching After Failures - Developers quickly abandon underperforming AI/code platforms for alternatives whenever incidents occur, underscoring low switching costs and the need for reliable, cost‑effective workflows.
- The Expectations Game of AI - The speaker contends that AI debates are fueled by projected utopian or dystopian visions, but emphasizes that AI is entering a phase‑change toward everyday indispensability, making its loss feel as jarring as a broken web browser.
- Cheaper AI Tokens Fuel Enterprise Integration - The speaker emphasizes ultra‑low pricing like Nano's $0.05 per million tokens as a catalyst for moving AI from simple consumer chat tools into deeper, workflow‑embedded applications.
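The per-token pricing in the last point is easy to make concrete with a little arithmetic. In this sketch, the $0.05-per-million-token rate is the figure quoted in the episode, but the workload numbers (calls per day, tokens per call) are invented assumptions for illustration:

```python
# Back-of-the-envelope cost of embedding an LLM in a workflow at the
# ultra-low rate quoted in the episode ($0.05 per million tokens).
# The workload figures below are illustrative assumptions, not from the episode.

PRICE_PER_MILLION_TOKENS = 0.05  # USD

def monthly_cost(calls_per_day: int, tokens_per_call: int, days: int = 30) -> float:
    """Estimated monthly spend for a workflow that makes LLM calls."""
    total_tokens = calls_per_day * tokens_per_call * days
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# 10,000 calls/day at 2,000 tokens each is 600M tokens/month, about $30.
print(f"${monthly_cost(10_000, 2_000):.2f}")
```

At prices like this, per-request cost stops being the limiting factor, which is the argument for pushing AI deeper into workflows.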
Full Transcript

**Source:** [https://www.youtube.com/watch?v=Qw8GOzs3Z0g](https://www.youtube.com/watch?v=Qw8GOzs3Z0g) **Duration:** 00:52:12

- [00:00:00](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=0s) **AI Futures and Antitrust Update**
- [00:03:04](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=184s) **Google Antitrust Case Update**
- [00:06:27](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=387s) **The Power of Software Defaults**
- [00:10:47](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=647s) **Default AI Partnerships Shape Search Power**
- [00:14:51](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=891s) **AI Search Evolution and Funding Gaps**
- [00:18:00](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=1080s) **AI vs Search: Browser as Portal**
- [00:21:54](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=1314s) **Beyond Tokens: AI Tool Integration**
- [00:27:09](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=1629s) **Anthropic's Series F and Market Position**
- [00:30:15](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=1815s) **Assessing GPT‑5 Reception and Anthropic’s Coding Edge**
- [00:34:16](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=2056s) **AI Development Tools Lower Barriers**
- [00:40:17](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=2417s) **Rapid Tool Switching After Failures**
- [00:43:50](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=2630s) **The Expectations Game of AI**
- [00:51:32](https://www.youtube.com/watch?v=Qw8GOzs3Z0g&t=3092s) **Cheaper AI Tokens Fuel Enterprise Integration**
There are those that project into a
utopian future where AI really plays the
role of humans in roles that humans
don't like to play. There's those that
also see that exact same future and get
extremely pessimistic and worried about
it. All that and more on Mixture of
Experts.
[Music]
All right. Hello everyone. I am Brian
Casey. It is back to school week this
week and Tim's kid gave him a fever
immediately. So, you are all stuck with
me and I think that's a situation we can
all as parents, at least for
those of you that uh understand and
get ready for in terms of the fall
season. So, you're all stuck with me
today. Uh welcome to mixture of experts.
Every week we bring together a panel of
experts, technologists, product leaders
to talk about the latest news in uh in
AI. And today I'm joined by a great
crew. Uh we have Kaoutar El Maghraoui who is
the principal research scientist and
manager for the hybrid cloud platform.
Uh we have Gabe Goodhart, chief
architect for AI open innovation and
Mihai Criveti who is the distinguished
engineer for Agentic AI. As always we
have a packed episode this week. We'll
be talking about Anthropic's uh recent
raise. Uh we'll be talking about
skepticism, AI bears and an AI winter.
But we're going to start off with the
biggest story of the week, which was we
finally got some clarity on where
Google's antitrust case was going to
land. Before we get into that though,
I'm going to turn things over to Aili
McConnon, uh, who's going to take us
through just a couple of the other top
stories of the week. So, over to you.
>> Hey everyone, I'm Aili McConnon. I'm a
tech news writer with IBM Think. Before
we dive into today's main episode, I'm
here with a few AI headlines you may
have missed this busy week. First up,
OpenAI has added new safeguards to
ChatGPT so it can better detect emotional
distress in teens and other users in
crisis and guide them to the real world
support they need. And our next news
item, IBM and the semiconductor company
AMD are teaming up. They're combining
the power of quantum computing with
traditional computing and AI to create
quantum-centric supercomputing.
Meanwhile, your back to school shopping
may look a little bit different. Amazon
has just launched Amazon Lens Live. So,
next time you're out shopping, you can
hold your phone, point it at an object,
and see some matching products that you
can swipe through. And last, but not
least, you may never need to worry again
that Starbucks is going to run out of
your favorite vanilla cold cream or
caramel drizzle. Why? Starbucks baristas
will soon only need a tablet to scan
store and supply shelves, and the
built-in AI tools will identify which
items are running low and then
automatically reorder them. Want to dive
deeper into some or all of these topics?
Subscribe to the IBM Think newsletter
linked in the show notes. And now back
to our episode.
[Music]
So, we'll kick things off today by
talking about the Google antitrust case.
Um, for those of you who have been
following this closely, uh, a lot of
people have considered it the biggest
case, uh, antitrust case in tech since
the 1998 case, U.S. v. Microsoft. So,
um, some have even called it like a
blockbuster, um, case. And for those of
you who have been following or may not
have been following, this case actually
kicked off and it's obviously about um,
Google's search business, but it kicked
off in the year 2020, so 5
years ago. Um, which feels like an
eternity. Um and I think one of the most
interesting aspects of the case is just
how much has changed over the last 5
years in this particular space.
Obviously uh the reason
this show exists is because of how much
has happened in AI. Um, and it turns out
that that actually played a significant
role in the ruling that ultimately came
down, which took what I think was
described in the market as a more
conservative approach than the more
dramatic um results could
have been. And so what actually ended up
happening um there was a lot of
discussion about whether Google was
going to be forced to divest things like
Chrome or Android. Uh whether they would
still be able to pay um to be the
default in things like browsers and
devices. Um and where the ruling
ended up coming down was that Google is
going to keep um Chrome and Android as
part of this. They will continue to be
able um to pay to uh be the
default uh search engine for platforms
like uh browsers and devices. But
there's a few things that did come down
as well as part of that which is um so
one some of the exclusivity agreements
that Google has today, they can't pursue
those um in the way that they used
to and there's also limited data sharing
um with some of their potential
competitors um in the space. Um the
intention there I think is to just make
it easier for um other entrants and you
know more dynamic marketplace um
essentially. Um but to go back to the
point about this case originated in 2020
uh and it's now 2025 and one of the big
reasons why it was a more conservative
ruling was because of everything that's
happened in AI. I'm just going to read a
couple uh a couple lines from the
document here. Um, one: generative
AI technologies pose, quote unquote, a
threat to the primacy of
traditional internet search. Um the
money flowing into the space and how
quickly it has arrived is astonishing.
These companies are already in a better
position both financially and
technologically to compete with
Google than any traditional search
company has been in decades um
potentially. So the whole category sort
of changed um over the last 5 years and
that's really what I want to talk about
is just kind of where we are in this
space between you know the search market
the AI market the way that they're
converging. Um and so let's just start
with maybe defaults and the extent to
which you all believe they're still um
and how important they are in this space
because if I were to make maybe just a
counterpoint on this um and Gabe maybe
I'll start with you on this one. Um,
ChatGPT as far as I know has hundreds
of millions of users and is one of the
fastest growing if not the fastest
growing products of all time. If
defaults were really that big of a deal,
how is it growing uh this quickly? And
are these defaults as important and
powerful as people say they are or you
know is all this happening anyways?
>> Yeah. Okay. Let's start with a question
about defaults. Um, how many different
applications did you use today before
logging into this uh this session?
Probably a couple hundred. How many
individual settings do you think each of
those applications has buried in its
configuration profile? Probably tens of
thousands in total. Defaults in software
engineering are one of the great
unsung heroes of us actually being able
to use software in our daily lives. And
so something as big as your default
entry point into the internet is a big
deal. Even if for those of us that are
enthusiasts, you're likely going to
change one piece of that default, maybe
even the the primary piece of that
default, for the vast majority of humans
out there who do not join AI podcasts,
if it works, it works. Don't mess with
it. Right? And if the button is there
when I grab my phone off the shelf, I'm
going to click that button until that
button stops working for me. So I think
defaults are extremely important in this
case. Um I think uh you know this ruling
is clearly a win for Google in some ways
because they can still throw their
financial muscle around and remain in
that default position. I do think one of
the things that the the article we all
uh prepped with uh didn't call
was that there is a pretty big win here
for the enthusiast consumer who's
willing to change a default. Um and that
I think becomes around the fact that
there's been a pretty clear line between
generative AI and search for the most
part. I think the place where that blurs
the most is when you start getting the
AI generated results from Google. But
technologically, they're not actually
that different. In fact, one of the
very best uses of generative AI is good
search. And I think up until now, to
your point, Brian, ChatGPT has
I don't know how many millions of users
at this point. And um it probably if
people have organization on their
phones, lives in a different bucket than
the Google icon, right? Like it
literally lives in one labeled AI or
something like that. For it to truly
make its way into that default position
where it takes over the crown of the
thing that people click on the most on
their phones, it's got to start blurring
and being that just utility that people
go to when they want to get something
done, like learn some knowledge, find
something on the internet. Um, so I do
think this will start to allow that sort
of user experience blurring to happen in
a way that it hasn't up until now
because of these exclusivity agreements.
So being allowed to frame an AI tool as
a search option on a mobile device, I
think can potentially change
uh the landscape from being a one-horse
race to uh you know a many horse race
which could be interesting. I do think
there's the idea that these
AI tools are challenging for that role
as the default. It hasn't happened
yet though. Like what I've
seen a lot of is that on platforms like
Twitter, everybody says, "Oh, I don't
use Google anymore. I only use these AI
tools." And Twitter is obviously a
pretty unique sort of echo chamber. And
so perhaps not broadly reflective of the
entire consumer market on planet Earth
at this point. But what I don't see as
much of is chatter about the browsers,
devices, them flipping over from like
Google and search as like the primary
sort of interface into the web and
making that instead these AI tools. And
I'm curious like what you think the
barrier um is there like what has to
happen for you know companies like uh
you know Mozilla or Apple or whoever
else to look at that and say like okay
we actually want to have kind of more AI
generated um be kind of more the default
versus the traditional um web because I
haven't totally seen the appetite for
that yet and I'm curious whether you
think that's a consumer behavior you
know sort of obstacle for them to
overcome, if it's a technological
limitation, maybe a little of both, but you
know they're they're not coming to the
defaults yet and um I'm curious what you
think the barriers are there.
>> Yeah, I think that's a great question
and there is a twist here I think
especially with AI. So of course you
know I totally agree with Gabe that uh
defaults are so important you know for
decades of research you know show us
that most users stick with defaults so
they don't switch settings and this is
why Google was willing to pay you know
Apple more than 20 billion
annually to secure you know Safari's
defaults. So there is you know really
strength there and with AI there's a
twist especially in the way that the
search is evolving right now with these
AI assistants. So the importance of
defaults I believe intensifies because
the assistant is not just routing you
to links, it's also shaping the answer
especially now with agentic you know
there will be routing to so many
different tools underneath that you're
not even aware of and maybe the
defaults there will also be you know very
important so I think platform
partnerships are important here you know
I think Apple here or maybe other
platform uh makers they're becoming king
makers here so if Siri defaults you to
OpenAI or Anthropic instead of Google,
the balance of power in AI search here
shifts overnight. And so I think here
it's really important to really
understand how these partnerships are
are you know shaping because for the
users, you know, initially
they're going to be presented with
products you know which one is better.
It's like that initial experience
sometimes is sticky. So the platforms
that present these things and underneath
the hood especially with agents right
now where are they going to go? Where
are they going to default? That's going
to be really important. So, how do we
break that? How do we get into that
partnerships? That's I think is going to
be very important for also, you know,
all of these new, you know, startups or
companies that want to get into the
space and maybe the monopoly is
still, you know, having um a big role
here. So, who's going to maybe pay the
most in these uh big AI partnerships?
Who still has, you know, that power? So
I think the ruling that happens it's
kind of I feel it's a mixed uh bag here.
So it is definitely a short win for
Google because it avoids the breakup and
it protects its core business model. Uh
but I think for competitors
including the AI startups and the AI
makers the ruling is a mixed bag. So on
the one hand the judge did not break up
Google which many rivals you know were
hoping for but on the other hand you
know the data sharing mandate is I think
it's seen as a lifeline here. So AI
companies like OpenAI, Perplexity, and
so forth, they stand to benefit
the most because they can now access
also this trove of information to
improve their own answer engines and
also try to compete more directly with
with Google. But I feel having you know
also a way to penetrate that
partnership uh with the uh platform
makers like Apple is going to also be
key. Yeah, I'm thinking here that really
Google wins because they get to keep
Chrome, Apple wins because they get to
keep the 20 billion from uh Google and
now open up for additional revenue from
OpenAI, from Anthropic, from all these
other businesses and AI new search
engines. I guess some of the folks that
are not going to win from this uh from
this um ruling is you know folks
like Mozilla and Firefox and some of the
smaller browsers and you know they still
get revenue from even Google cuz they
get paid as well to make Google the
default search engine on their platforms
or Bing or some other search engine. Uh
but they're not really they don't really
have an avenue to penetrate this market
and I think that's something to look at
as well. We're consolidating all of the
search and all of the AI capabilities in
the hands of maybe three or four large
organizations. You've got your Google,
your OpenAI, your Anthropic, the
frontier uh models, maybe Perplexity.
They're going to have a way to provide
that level of funding necessary to get
if not in the default at least in the
second or third option when you use the
search, but some of the smaller
providers are not going to have a
mechanism to get in there. The second
thing I'm thinking about here is
what kind of things are you really
searching for when you're using your
phone versus when you're using your
tablet or when you're using your
computer? Because usually when I'm using
my phone, I'm out of the house, right?
I'm searching for, you know, a nearby
restaurant that is open. So, there's
still going to be a necessity for the
traditional search engine. They do all
the leg work. They still have all the
data. They have data from maps. They
have data from businesses, from reviews.
Even if you apply AI on top, it's still
going to be Google's data or Bing's
search data. And I don't really see
OpenAI or some of the other AI providers
having that current data availability.
They can build agents, they can build on
top of that data. And I think for what
you get on your phone, it's not as
relevant as what you would use on your
desktop or on other applications cuz
nobody's going on their phone and
saying, "How do I create a Python
program to do this?" And you get the
answer. It's going to be better than the
first three hits on Stack Overflow and
you're going to pick that answer. So I
think watch this space. It's going to
take a couple more years before we see a
major impact on these top search
engines. And the search engines
themselves are now prioritizing the AI
answer. So when you use Google search or
Bing, you're getting the AI summary to
kind of compete back with these kind of
uh AI vendors as well.
>> Just one more thing, to a question
that you threw out earlier that really
struck me is is the question of why we
aren't seeing these um AI apps replace
or start to encroach on basic browser
apps.
uh you did ask the question of whether
there's a technology element to that and
I do think there is and I think it's
around the visual portion of the user
experience. The web is a very visual
place, right? You go to a given web
page, if it were just a pile of text,
you would probably immediately turn your
brain off. Almost every website out
there has a banner, has something that
is visually appealing to to draw you in.
And right now, the UX of AI assistants
is primarily text-based. Um there's
novelty to that text because it's text
framed directly to you. And that in and
of itself, you know, has brought about
the AI revolution we're in today. But I
think um part of the reason we haven't
seen these things come closer together
is that UX layer. There's nothing
fundamental about the technology of
generative AI that prevents that. It's
just a level of generated output that we
haven't really seen incorporated into
the UX for these AI platforms. So, I
think uh you know to the point I was
trying to make, that I don't know if I
articulated well, I think um this ruling
actually sets the grounds for those
technologies to start coming together.
And I think those exclusivity agreements
have probably been a big barrier to uh
actually trying to blur the lines
between what you get if you're just
looking for a purely informational answer.
AI is probably the best way to get that
right now versus an experiential result
of something that's got visual elements
to it, something that's got interactive
elements to it beyond just text and
multi-turn chat. So, uh I'm really
curious to see now where this goes from
a UX perspective. One of the things that
I always think about with the browser in
particular, like people look at that as
this like critical point of distribution
for like very obvious reasons. Um,
right? But the thing the browser is is
like a portal to the internet. And so
like an AI is only kind of that. Um,
right whereas search is explicitly that
um, right now. And one of the things
that I kind of go back and forth on is
like is the browser even the right way
to think about how like the main kind of
distribution model for these things um
going forward because even when I was
prepping for this podcast, I'm a heavy
user of both traditional search and um
and AI tools and I have like slightly
different ways I use both of them and I
move back and forth between them all the
time. And to me, I'm actually like to me
they feel like portals into two
different worlds where one I want to go
interact with the web and the other one
I want to talk to an assistant. And
those are closer like to me they feel a
little bit less like two areas that
could converge into one thing versus two
distinct portals that have some sort of
overlapping Venn diagram. And I'm curious
whether y'all see that as the case. like
do you ultimately imagine kind of like
the web and AI and browsers
converging into like one giant blob of a
thing and maybe the UX will feel
better than a giant blob um hopefully at
that point or do you imagine these
things kind of living in parallel with
like obvious points of intersection but
kind of two fundamentally different
things at the end of the day.
>> Uh maybe I can jump in here. I think the
user experience is already experiencing
a lot of changes right now with all of
these AI native assistants and chatbots
and agents. So I expect that at some
point things will converge. But that's
my point of view. Um and especially that
you know I think maybe we'll see new
user interfaces that are not like
browser based where you scroll down and
pick and you know move from one page to
another. Maybe there'll be neuromorphic
you know interfaces you know brain
interfaces uh voice assistants so things
that are surrounding us. So I think
we're in the time where we see the
biggest kind of, you know, revolution
in these user uh interfaces that we
haven't seen in decades. I think the
biggest changes were maybe with the
Apple iPhone um you know and you know
before that was the graphical interfaces
but now I think we're in for another big
change in the user experience especially
with these chat bots so at that time
it's not going to be switching back and
forth probably it's going to be like a
more converged experience and the user
interfaces we see today will be you know
back interfaces or maybe you know
backend systems and the front end
systems
will be completely new ways of
interacting with the users and the
backends will kind of route you know to
maybe more traditional APIs or agentic
or AI. So it's going to be I think a mix
and I'm hoping to see the convergence in
that world you know what does it mean
you know to all of these uh competitors
what search will look like and uh so I
think it's going to evolve and
hopefully will converge into uh
different interfaces uh that you know
all are you know having a mix of
different user interactions whether it's
voice or touching or brain based
interfaces.
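The picture sketched above, new front ends sitting on backends that route a request either to traditional APIs or to AI, can be illustrated with a minimal dispatcher. Everything in this snippet is a hypothetical sketch: the handler names and the routing heuristic are assumptions for the example, not anything specified in the episode:

```python
# Minimal sketch of a backend that routes one user request either to a
# traditional search API or to an LLM-style assistant. Both handlers and
# the routing heuristic are hypothetical placeholders.

def search_backend(query: str) -> str:
    # Stand-in for a call to a traditional search API.
    return f"search results for: {query}"

def assistant_backend(query: str) -> str:
    # Stand-in for a call to an LLM assistant.
    return f"assistant answer for: {query}"

def route(query: str) -> str:
    """Crude intent routing: navigational/local lookups go to search,
    open-ended questions go to the assistant."""
    looks_like_question = query.rstrip().endswith("?") or query.lower().startswith(
        ("how ", "why ", "what ", "explain ")
    )
    return assistant_backend(query) if looks_like_question else search_backend(query)

print(route("nearby restaurants open now"))
print(route("how do I create a Python program?"))
```

In a real converged experience the routing decision would itself likely be model-driven, but the structure, one front end over several interchangeable backends, is the same.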
>> Yeah, I think Mark Zuckerberg is now
thinking metaverse. Marina,
>> yes,
>> it's finally time.
>> Finally.
>> Yeah. So, I I have one, you know,
thought on this,
um, and Mihai, I was curious if you were
going to jump in on this as well, but
um, I actually think to your
point, Brian, right now our mental model
of them being two separate channels is a
technological thing, like we're just not quite
there yet. Uh, and to me, it's around
the interface. Uh so this is why I was
curious if Mihai, our MCP uh expert was
going to jump on this, but right now um
I actually had an interesting
conversation with a colleague recently
who is is deeply in this space. Um and
uh I was explaining that models don't
call tools and that was shocking at the
moment that in fact a model has
absolutely no ability to do anything.
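To make that separation concrete, here is a toy sketch: the "model" only ever emits text, and the wrapping system is what parses that text and actually invokes a tool. Every name and the JSON schema here are invented for illustration; this is not any real framework's API.

```python
import json

# Toy "model": all it can do is emit text (tokens). It executes nothing.
def model_generate(prompt: str) -> str:
    if "weather" in prompt:
        # The model "asks" for a tool by emitting a JSON-shaped string.
        return json.dumps({"tool": "get_weather", "args": {"city": "Boston"}})
    return "Hello!"

# The wrapping system owns the real capabilities.
TOOLS = {"get_weather": lambda city: f"Sunny in {city}"}

def agent_step(prompt: str) -> str:
    output = model_generate(prompt)
    try:
        call = json.loads(output)   # did the model propose a tool call?
    except json.JSONDecodeError:
        return output               # plain text: nothing to execute
    # The system, not the model, performs the action.
    return TOOLS[call["tool"]](**call["args"])
```

Everything the model "does" here is mediated by `agent_step`; swap in a real LLM call and a real tool registry and the shape stays the same.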
All it can do is produce a token. That's it. A system wrapped around a model can do a whole lot more, including invoking arbitrary code, grabbing things off the internet, and framing things as tools, so that the model can then generate tokens that say: hey, that seems like the right step I should take; please go do this, system that is wrapped around me. To that end, we are still in the infancy of expanding beyond a textual user experience for the actual set of tokens that go into the model and the set of tokens that come out of the
model. We keep throwing around the word "agentic," but I think what that fundamentally means is the evolution of the models in conjunction with the expanding conventions we're coming up with for input tokens and output tokens. In particular, right now we have no convention whatsoever for output tokens that imply visual interactive experiences, or, to your point, Kowar, not even necessarily visual but non-textual user experiences. You can throw in an arbitrary tool, and if that tool happens to know how to render as a dial in some dashboard, cool, but the models themselves have no inherent knowledge that when I want to express a quantity, the right output is not a number followed by a percent sign; it is in fact a visual dial or a bar chart or something to that effect. So I think there's going to be continued evolution. I've seen a few articles flash by about MCP extending into the GUI space, and if that happens it will give model authors a chance to actually train conventions around how to generate visual components, which will bring the actual UX of an AI app a whole lot closer to what we expect when we go to an arbitrary web page and have an interactive UI right there in the browser.
>> Yeah,
this is very exciting, because I've been looking at MCP UI and what Block is doing in this space as well. We're really doing the same thing in the project we're building: we're calling a lot of AI agents, and these agents render UI components based on the result you get back. This is a document we want to show and preview; this is a link; this is an image; but this is a URL to an image that we don't want expanded. So I think that visual
representation hasn't yet been standardized. Maybe MCP UI is one potential way to implement it, but we're far away from a standard RFC that every browser, every agentic AI application, and every phone implements. If you look at why the web succeeded, it was open standards and everybody adopting those standards: my web page renders in your browser, and it doesn't matter what browser you're using. We're a long way from that when it comes to AI applications and the way these AI agents interact, and none of them can display those visual elements. There's also the security risk: you don't want everybody to be able to render UI components that make something look like your bank, so that you log into that "bank." So watch this space; there's going to be a lot of evolution, especially in the UX and the experience. A lot of what we've built in the last 60 years of computing is still based on the same kinds of interfaces developed for the early mainframes. The teletype: if you look at the terminal you have on Unix systems, it's an evolution of the same thing. Even your keyboard: QWERTY is an evolution of typing machines and typewriters. All of these things evolve from previous systems, and I'm eager to see something developed from scratch in this space. I think AI is going to enable a lot of those innovations, to come in and say: what if the user interface wasn't a keyboard and mouse? What if we had a different way of engaging with these systems? I think AI, especially with the progress made in visual and voice recognition and generation, is going to give you those options in the future.
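The per-result-type rendering described above (a document to preview, a link, an image URL not to expand) could be sketched as a simple client-side dispatch. The `kind` field and the widget strings below are invented for illustration; they are not MCP-UI's actual schema.

```python
# Hypothetical agent-result schema: a "kind" tag tells the client which
# widget to render. All field names are illustrative assumptions.
def render(result: dict) -> str:
    kind = result.get("kind", "text")
    if kind == "document":
        return f"[document preview: {result['title']}]"
    if kind == "link":
        return f'<a href="{result["url"]}">{result["url"]}</a>'
    if kind == "image":
        # A URL the user asked not to expand stays a plain URL string.
        if result.get("expand", True):
            return f'<img src="{result["url"]}">'
        return result["url"]
    return str(result.get("text", ""))
```

The point of the sketch is that today each application invents its own `kind` vocabulary; a standard would let any client render any agent's results.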
>> I'm going to move us along to our second topic today: Anthropic just announced another raise, their Series F. This is becoming something of a trend in the industry; I think Databricks just did their Series K. These are letters of the alphabet they did not teach in my kindergarten VC school. So we have a lot going on in the private markets right now, where very large, fast-growing companies are staying private much longer than they used to, and at far higher valuations, but it is increasingly the norm these days. There are a couple of things I want to talk about regarding where Anthropic sits in the market right now, because the valuation they came in at is an impressive number, and obviously they're one of the big leaders in the space. Mihi, I'll throw this question to you to start. There are almost two ways I think about Anthropic. One: among the big AI players, they're known as being more focused than everyone else. They're really going after the code use case, where OpenAI or Google or some of the others are kind of doing everything, going after the full stack of what a model can potentially do. So one way to look at where Anthropic is in the market is that it has carved out a very lucrative niche, where they've had a reasonably durable advantage for a quote-unquote long time in AI years, even though that's like dog years at this point. A different way to look at it: if you look at the commercial success of AI so far, there are really just two killer use cases, chat and code. That's where all the money is so far. Obviously a bunch of enterprise use cases are spinning up, but the thing a lot of these big valuations are riding on presently is those two things. So I'm curious: when you look at Anthropic, do you see a focused player, or do you see two markets, with Anthropic holding the leadership position in one of them?
>> I think it's also a question of cost-effectiveness. Right now, Anthropic, in my view, has the best bang-for-buck models for code, and their Opus 4.1 model is, I would say, the best planner for AI agents. It's really good at this focused use case, but it's not very cost-effective. If I were to use the same model for general chat, I'm sure I would get great results, but it would be ten times more expensive than what I can do with a smaller, cheaper model. So that's where you see Anthropic really shine: the more expensive, more complex use cases where you get a lot of value, as opposed to general chat. They're great at everything, but they're too expensive to use for everything. And I think this is where we're going to see a lot of niche players carve out their markets: the small, tiny model that's really good at one thing, does it well, and happens to be extremely cost-effective at it.
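A toy version of that trade-off is a cost-aware router: send planning-heavy requests to the expensive model and everything else to the cheap one. The model names, prices, and keyword heuristic below are all made up for illustration.

```python
# Invented model catalog: USD per million tokens (not real prices).
MODELS = {
    "big-planner": 15.00,   # strong at code and planning, expensive
    "small-chat":   0.05,   # fine for general chat, ~300x cheaper
}

# Crude heuristic standing in for a real task classifier.
PLANNING_HINTS = ("plan", "refactor", "debug", "write code", "test")

def route(task: str) -> str:
    """Pick the cheapest model that is plausibly good enough for the task."""
    needs_planner = any(hint in task.lower() for hint in PLANNING_HINTS)
    return "big-planner" if needs_planner else "small-chat"
```

So "refactor this module" goes to the expensive planner, while a capital-of-France question goes to the cheap model; in a real system the classifier would itself likely be a small model.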
>> That makes sense. Maybe throwing it over to you, Gabe. GPT-5 came out, and the initial reaction to it (we're going to talk more about GPT-5 in the next segment) was kind of bearish. People were expecting more and said it didn't deliver as much. But over time, if you've monitored the sentiment online, it's turned more positive: people say this is a pretty good model. At least on this podcast it has not become the default coding platform, but there are definitely people out there who have switched to GPT-5. So I'm curious, starting with Gabe but open to the whole crew: how durable do we think Anthropic's advantage is in the code space? There's been a lot of speculation about what their secret sauce is, and nobody seems to totally agree on that. What do you think about that market, how sticky it is, and how durable their position in it is?
>> Yeah. I want to answer that in two rounds: first, at the meta level, how durable is the position of AI in the development market at large, Anthropic or not; and second, where does Anthropic sit relative to the overall use of AI in the development market. On the first topic, and maybe this is foreshadowing the next segment we're going to cover, it's one of those once-you-start-you-don't-go-back situations. Sometimes as a developer it's really hard to get your brain plugged into how AI fits into your workflow. I've been writing code for decades; I know how to do it; I don't need something else to help me. But once you start, once you find even a small piece of your workflow that AI naturally slots into, it's really hard to go back. So I think AI as a whole is durable in the development space, because it really does help remove blockers and friction points. As far as Anthropic's presence in the developer space specifically, I go back to my previous answer: a model can't actually do anything. A model fundamentally has to have the patterns trained into it to be able to perform in a given system and take certain actions in that system. And I think Anthropic is doing a great job of building models that work extremely well with the systems they are building around them. I think their durability is going to be pinned entirely on those systems that wrap the model. So it's again back to UX: Claude Code is an excellent user experience that has found resonance with developers. There are many other similar options out there, but back to the earlier conversation about defaults, Claude Code is sitting in that default position for a terminal AI assistant. They have really managed to figure out the right sets of tools, and the right prompts for those tools, that work well with their models, and the whole puzzle comes together in a package that provides a fuzzy but tangible advantage. I think they will have to stay on the ball about that ecosystem they've built if they want to stay in position. The models themselves, the benchmark scores, the fundamental capabilities for isolated experiences: that playing field is going to level out. And especially if you're focusing on specific use cases you care about, to your point, Mihi, we're going to get smaller specialized models that are way cheaper and can accomplish that specific task just as easily. However, the overall tooling and user experience is the place where I think they're going to hold their advantage.
>> One other angle to this, and Kowar, I'll throw this one over to you; it'll be our token vibe-coding discussion of the week. To your point, Gabe, about these things needing to be wrapped in systems: Claude Code is obviously one of the primary ways people consume Anthropic's models, but there are all these new AI-powered development tools, like Cursor and Windsurf. Replit is maybe another example of this, and I think it has been very successful and grown really fast on the back of vibe coding, particularly in the prosumer space: not the person like Gabe who's been developing for decades, but the person who's getting into coding, where these new AI-powered tools are actually helping get people over the hump of the initial periods of unproductivity. Even just getting your environment set up is a major barrier for people, never mind writing your first line of code. How big do you think the opportunity is, and what have you seen as these models, and the systems that wrap around them, improve in capability? How large an opportunity is it to expand the market around code, period? I've seen two kinds of takes on this, and I'm curious for yours. One: some folks think we're heading to a place where anyone will be able to build, or at least prototype, anything, and that will become standard practice. Forget the traditional development process; just get 80% of the way there and throw it to somebody who's a professional. The other take is that the speed-ups you're getting there are kind of fake, and not as big as you think, because of the amount of refactoring, the work of figuring out what the code is actually doing, and the security implications. The path to productivity for vibe coding maybe is not as straightforward as people think. But if it did work, obviously the TAM of this whole market would go up way more. So, maybe starting with Kowar: how do you think about vibe coding, and the accessibility of programming, as a major component of what this market will look like long-term?
>> I think
this is a huge opportunity, of course, for the players that get it right. Like you said, there is vibe coding in a setting where you're doing proof points and small prototypes, but there's also the serious coding that you want integrated in a real enterprise setting, in your real workflows, with safety, a lot of compliance, making sure you get the speed-ups and that things are bug-free. So: product versus experimentation, proof of concepts, research, and so on. And of course, with Claude they have a lot of advancements in their Claude family of models for code. Their coding LLMs have been quietly gaining traction, especially in enterprise workflows, particularly because they're marketed as safer and more reliable for professional environments like legal, healthcare, and finance. So coding LLMs are a major part of their product edge, but Anthropic's position is broader: it's also safe, enterprise-ready, aligned AI. There's also the hype we're seeing, and I think the article we were discussing showed that hype. Right now we're past the peak of the big hype and trying to enter the pragmatic phase: can this make money for us? If I use these code LLMs, what's the ROI on my investment? Is this safe for me to use? How do I integrate it? How do I maintain it? So there is a lot more to unpack. It's not just about vibe coding but how we do it efficiently, safely, and in a manner that brings a return on the investment and is enterprise-ready. So I think the space is going to keep growing, and of course it's not just about one model; like Gabe keeps saying, it's how you use it, how you wrap it, how you maintain it. There's a whole lifecycle behind it: how do you make sure the code that's generated is pragmatic and efficient, how does it integrate into the whole stack, how does it run efficiently on your hardware, how do you manage all the compliance and safety? It's a much more complex thing than just automatically generating a few lines of code, or even a lot of code. Integrating that into a serious workflow is going to take a lot more work than quick vibe coding. Whoever gets that right, of course, is going to have a huge opportunity. And I don't think we're going to completely replace programmers. You're going to get them over humps and boost their productivity, but we also need developers who understand the output of these code LLMs, how to integrate them, how to orchestrate them, how to utilize them as part of a big design. So maybe other skills need to be acquired at that level, but also platforms that allow you to do these things, to test and debug, and maybe other LLMs will do the reasoning and the debugging. It's going to be a whole workflow that needs to be done right, efficiently, and cost-effectively, and I think Anthropic is making headway in that space, especially with their safety motto and enterprise focus, but maybe we'll see other players.
>> Yeah,
I think all we need to see is one failed release and developers are going to switch over. Last week, for three days, I switched from using Claude Code to Codex. The reason is that when I was trying to use Opus 4.1 to write some test cases, it would look at my code and go: all these test cases are failing, I will just remove the test cases. No, please fix the code. It's easier to fix the test cases, apparently, and it kept doing that again and again. And when I look at the incident report now, I see one that says that six days ago, for three days, Opus 4.1 was seeing degraded quality: users were seeing lower intelligence or issues in tool calling with Claude Code. So I was directly experiencing that, and my immediate reaction was: well, this isn't working, I'm going to switch to the next model or the next tool. There's a lot of choice. If your Google Chrome browser isn't working, you're going to switch to Edge, or to Firefox; you're going to find another browser. You're not going to say, "Well, for three days I'm just not going to use the internet."
>> So, I think that's actually a great transition to our final segment today. We were discussing ROI, and just getting to it through the hype cycle: you go through the hype cycle and then you wind up on the plateau of productivity. And while I still think we're in the hype cycle a little bit, there are also parts of the market transitioning, I think, to that real productivity question: how do we get value out of this? But at the same time, I look at the internet every day, I look at the discussion online every day, and there's another segment of the community that is in a totally different place. A thing I've observed is that there's actually a real market out there for AI cynicism. And I don't think it's sober analysis. It's not "this is too hyped; I'm still a fan, but the thing you're saying it can do, it's not quite there yet." It's much further than that, where people are kind of rooting for failure: talking endlessly about bubbles, the collapse of the economy, destroying the planet. I'm curious what y'all think is motivating that. A simple answer is that it's purely fear, a normal fear of change, but I think it's beyond traditional technology skepticism. To me, and I mentioned this earlier, GPT-5 was kind of a flash point for this, where people looked at it and said, "Oh, it's so over; progress is stalling." But when you look at the progress curves over time, it seems like it's still progressing pretty normally. Why are people freaking out about this? There are lots of use cases that people are starting to find really valuable. So I'm curious how you all read the situation, and whether any of you see a plausible universe where we wind up with something like the fourth AI winter in this cycle, or are we in permanent productivity from here? Gabe, it looked like you were winding up on that one.
>> I've got a couple thoughts on this one.
>> Fire away.
>> Yeah. I teased this in the last segment, but to your question of why, it really feels like an expectations game. I think there are certain members of the AI community who are very fond of, or perhaps built their brand on, projecting into the future, for better and for worse. There are those who project a utopian future where AI plays the roles humans don't like to play, and those who see that exact same future and get extremely pessimistic and worried about it. And while I don't want to rule out either of those possible futures (I love a good sci-fi book as much as the next person), the place we are today is a technology phase change. In that sense, there's nothing novel about this. Mihi, you said it great the other day: if for some reason you download an update to Chrome and your browser literally segfaults when you launch it, are you going to not use the internet today? No, of course not; you're going to find a different browser. The same thing, I think, is rapidly becoming true about AI in our daily lives, even in ways we probably aren't realizing. If today all of us lost access to all AI models, every one of us would feel that pain. Even if we're AI skeptics, there's probably somewhere in our lives where it has started to creep into our daily workflows. So from a technology-phase-change perspective, no. The article we read spoke to the previous AI winters, and all of those had the commonality that the funding and usage of AI were isolated to a very small number of specialized users who had been oversold on the potential capability and got disillusioned. I think the technology is now ubiquitous enough that generative AI is not going to go away. Now, the funding for some elements of generative AI may go away, especially the ones looking at those further-reaching futures. And to wrap back to that far-reaching-future question, Brian, I think the reason we see so much appetite for the skeptical view is human nature in the most evolutionarily basic, competitive, survival-of-the-fittest sense. I don't have an encyclopedic knowledge of the sci-fi genre, but if you look back pre-internet, you didn't have a deep literature imagining a future in which humans could communicate instantly. That played its way in, but it didn't approach replacing the humans in those narratives. AI has always occupied this place in the human psyche: how much of me is reproducible in a machine? We've been writing books and telling stories about that since time immemorial. And that, I think, is really what the skepticism keys in on: the dystopian view of what happens if we actually achieve that. But from a technology perspective, I think it's here to stay, and I think steady investment is not going away anytime soon.
>> Yeah, I totally agree with
Gabe, and I think we're not really heading for a full-blown AI winter. The industry is experiencing a bit of a slowdown, but more of a reality check. The debate over GPT-5 highlights a critical gap between the optimistic promises (the bulls, as the article put it) and the practical concerns, and I think this period of slowdown is also an opportunity for the industry to focus on real-world problems and sustainable business models. Historically, past AI winters have occurred when overhyped promises failed to materialize, which led to sharp drops in funding and interest: the first AI winter was caused by failures to achieve human-level intelligence, and the second by the limitations of expert systems. The lesson is that unrealistic expectations are the biggest threat to AI progress. But what we see now is that tons of investment has already been made, and today's AI is embedded in billion-dollar products: GPT, Claude, Copilot, and others. Past AI winters really had no market traction. There's also the huge infrastructure that was built, like GPUs, cloud, and data at scale, which means innovation doesn't vanish; it compounds. Of course, there are some similarities with past winters: the hype versus the reality. Investors expect AGI-like leaps, but I think progress is more incremental. Especially with GPT-5, it is slower compared to GPT-4, but to me it's incremental; it's going to take time. I also think there's a bit of over-reliance on benchmarks: just like expert systems failed when taken out of the lab, current LLMs are showing some brittleness in the wild, so there will be friction when you take these things and apply them to solve real problems. Certain parts will fail, certain parts will work; there's going to be a lot of refinement in this reality check. But today's ecosystem, I think, has sustained revenue streams, and that financial base didn't exist in past winters. So, like Gabe said, it's here to stay, and we just have to keep making progress. Of course there will be ups and downs, like with any big technology shift. We experienced this with the industrial revolution, and with the print revolution: when storytellers were replaced by print, many people were not happy, but you see what publishing did for the industry and for our lives. So, as we all see here, it's just a shift and a reality check for AI, not really a winter.
>> I'll give a different take on this, which is that I think the progress has actually been tremendous but invisible, because we've seen a shift in the cost of these models, toward intelligence too cheap to meter. We're seeing these models get integrated more and more, to the point where they become transparent: you just happen to be using some kind of AI as part of your application. It behaves better, but it's so fast, so cheap, and so effective at what it does that you don't see it. You don't interface with it like you would with ChatGPT; it's just part of the application workflow. I'm also seeing a shift in focus from building models that give you raw performance to models like GPT-5, where they have a router that routes you to different models. I don't think they've quite perfected that yet, but the direction is very positive. And the price changes are amazing, especially if you look at Nano, which is, I don't know, five cents per million tokens or something insignificant like that. If we continue in this direction, we're going to see AI embedded more and more into application workflows and real-world systems, rather than just a consumer thing where you type into a box and get an answer from the AI.
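Taking the quoted rough figure of five cents per million tokens at face value, a quick back-of-the-envelope shows why that feels "too cheap to meter." The request size and volume below are assumptions, not figures from the discussion.

```python
price_per_mtok = 0.05        # USD per 1M tokens: the rough figure quoted above
tokens_per_request = 2_000   # assumed small prompt plus response
requests = 1_000_000         # assumed monthly volume for an embedded feature

total_tokens = tokens_per_request * requests      # 2 billion tokens
cost = total_tokens / 1_000_000 * price_per_mtok  # about $100 at this price
```

At that price point, a million AI-assisted requests a month costs roughly what a single server does, which is why the AI can disappear into the application workflow.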
>> I think that's a great place to end. Mihi, Kowar, Gabe, thank you for joining us this week. For the listeners out there: like and subscribe if you're a fan of the pod, and we will see you back next time on Mixture of Experts.
[Music]