Beyond the AI Bubble Hype
Key Points
- A growing “AI bubble” narrative has emerged, fueled by the disappointment around the botched GPT‑5 rollout, high‑profile layoffs in Meta’s AI division, Sam Altman’s own admission of a bubble, and an MIT study highlighting the high failure rate of enterprise AI projects.
- The hype‑to‑doom swing is partly driven by a collective need for a dramatic story, as the initial excitement over GPT‑5 quickly turned into a counter‑reaction seeking a new narrative.
- The MIT research underscores that successful AI adoption requires strong leadership, cultural change, and clear high‑value use cases—factors many organizations are still lacking.
- The chatbot use case is reaching saturation, meaning incremental gains from larger models are becoming less perceptible to end users.
- A more balanced view recognizes that while certain AI applications face diminishing returns, the broader AI landscape still holds realistic opportunities beyond the hype cycle.
Sections
- Debunking the AI Bubble Narrative - The speaker identifies four drivers—storytelling cycles, Meta layoffs, a botched GPT‑5 rollout, and an MIT study—behind the current “AI bubble” panic and argues for a more grounded perspective on AI’s state.
- Exponential AI Progress & Chip Scarcity - The speaker asserts that AI performance is improving exponentially on unsaturated benchmarks like METR's task-horizon measure, while a shortage of compute chips limits model upgrades, indicating massive demand that is often misunderstood in studies such as MIT's.
- AI's Exponential Power-Law Returns - The speaker argues that AI delivers outsized, exponential gains—following a power‑law distribution—making it an existential bet for firms and prompting massive, rational investment despite the high risk of failure.
- Optimistic View on AI Future - The speaker expresses curiosity, dismisses fears of an “AI winter,” and hopes to shift the prevailing narrative about artificial intelligence.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=Sno3eqzgmtA](https://www.youtube.com/watch?v=Sno3eqzgmtA) **Duration:** 00:11:57

- [00:00:00](https://www.youtube.com/watch?v=Sno3eqzgmtA&t=0s) **Debunking the AI Bubble Narrative**
- [00:04:08](https://www.youtube.com/watch?v=Sno3eqzgmtA&t=248s) **Exponential AI Progress & Chip Scarcity**
- [00:07:31](https://www.youtube.com/watch?v=Sno3eqzgmtA&t=451s) **AI's Exponential Power-Law Returns**
- [00:11:50](https://www.youtube.com/watch?v=Sno3eqzgmtA&t=710s) **Optimistic View on AI Future**
A number of things are coming together
to drive a narrative that we're in an AI
bubble today. I saw a conversation just
on X last night basically saying the
death of AI is near, right? Like we have
the prophets of doom out. I want to lay
out for you why I think that's
happening. And then I want to lay out
for you the pieces we're not looking at
as a community as we talk about AI. And
last but not least, I want to put
together a story that I think is more
accurate about where we're actually at
in AI at the moment. Less hype, more
reality. That's what I do. So here are
the four things that I think are driving
this are we in a bubble death of AI
narrative. Number one, people need a
story. There was a massive story and
swing around GPT5 hype. It was kind of a
botched rollout, and people need a
counter reaction. People need to come
back and have a different take now. And
so I think the need for narrative swing
and narrative drama is part of the
challenge here. Number two, reports of
layoffs at Meta. So the AI division at
Meta was widely reported to be
restructuring. There have been cutbacks
and challenges. Well, that's a part
of the layoff narrative, right? That's a
part of the AI in trouble narrative.
Number three, Sam Altman himself admitted
the GPT5 rollout was botched and
infamously said, "Yes, we're in an AI
bubble or there's some elements of an AI
bubble in what we're doing." And then
number four, around the same time all
this was happening, an MIT study came
out saying that most enterprise AI
projects fail, which is not new. It's
yet another study showing that this
is a high-risk, high-reward kind of
activity and that organizations really
struggle to get it right at the team and
above layer even as we see individual
productivity gains. Ironically, a lot of
the things that I've been emphasizing
are things the MIT study called out
like, hey, you need to have the right
leadership, you need to have a culture
change moment, you need to define a high
value use case. I could go on, etc.,
etc. But all these four things came
together, right? People saw layoffs.
They saw that they needed a narrative
after GPT5 was disappointing. They saw
Sam saying the word bubble. They saw
this thing on enterprise AI studies
failing and it was like, you know what?
That's it. That's it. We're done. We're
in a bubble. It's over. And so the
pendulum just kind of swung back
and the narrative has exploded from
there. So I want to suggest to you that
a more correct take includes the
following five elements or following
five facts that we're not really paying
attention to. Number one, the chatbot
use case is indeed getting saturated.
This was reported by Sam in like an
interview right after the one where he
talked about the bubble. In other words,
if you're in the chatbot, you're not
necessarily going to see tons of
tremendous gains anymore, no matter how
smart the model gets because people
don't necessarily perceive the progress
in the chatbot because the AI is about
as good in the chatbot as it's going to
get. So, famously, what Sam said in that
conversation was that GPT6 is coming
and memory is going to get better, but
really the chat use case is kind of
saturated. I think he's right. I don't
think we have a lot more to gain from
the chat use case. Number two, I think
we're forgetting that progress is moving
to agentic and complicated use cases,
which is sort of a corollary to the
chatbot, right? And those use cases are
hard for people to understand. I'll give
you an example. There was a big
conversation on X over the last couple
days around whether GPT5 Pro did new
mathematics when it was assigned a new
theorem and did a new proof for
something that a human hadn't done. And
the consensus seems to be it was new. It
was correct. It is a milestone but it is
a different kind of innovation than we
get from a human. Humans are good at
creativity, intuition and the models
that we have today are good at brute
forcing innovation forward. And so it
was in a position where it could brute
force a series of calculations around a
defined problem space and get to a new
proof that hadn't been done before. And
it did it. And that lines up with what
we see in other innovation stories where
we see that these models are very very
good at certain kinds of innovation that
really do push the field forward but
they aren't doing the same work as
humans and that nuance often gets lost
and that's a great example of how
complex agentic use-case analysis and
assessment are getting. I
don't know the math either. It's hard
for people to understand or experience
where the progress is. Fact number three
that I think is getting forgotten.
Progress is demonstrably continuing at
exponential rates. Any benchmark
that is not saturated is showing
continued strong gains. I think my
favorite is METR right now because it
just doesn't have a top. All it does is
it measures how long a task takes a good
human and then it says can an AI do it
50% of the time. Now I'm the first one
to say 50% is a low bar, but at least
it's a consistent bar. And we keep
showing exponential gains on that as
use cases get stronger, and we're not
bottoming out. That's not slowing down.
We keep doubling every few months.
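The doubling claim above is simple compound growth. Here is a minimal sketch of the arithmetic, assuming an illustrative starting horizon of 60 minutes and a 7-month doubling period (both numbers are placeholders, not figures from the video or from METR's actual data):

```python
# Toy illustration of an exponentially growing task horizon.
# Assumed numbers: 60-minute starting horizon, doubling every 7 months.

def task_horizon(start_minutes: float, months: float, doubling_months: float = 7.0) -> float:
    """Exponential growth: the horizon doubles every `doubling_months`."""
    return start_minutes * 2 ** (months / doubling_months)

# Project the horizon forward over two years:
for m in (0, 7, 14, 21, 28):
    print(f"month {m:2d}: ~{task_horizon(60, m):.0f} min")
# Horizons: 60, 120, 240, 480, 960 minutes
```

The point of the sketch is that under constant doubling, the horizon grows by 16x over four doubling periods, which is why even a "low bar" of 50% success still traces a steep curve.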
Number four, we are still
underallocated on chips. In the same
interviews that got blown up around the
world around AI and bubbles, Sam
admitted he could release a smarter
model, but he lacks the chips to do it.
Anthropic is also famously
underallocated on chips. Everyone's
using them for coding, and they just
can't get enough chips. They can't do
it. So, in that world, if they're
underallocated on chips, it means they
sense tremendous demand, which is backed
up by what we see from the MIT study. If
95% of orgs are failing at AI, that's
95% of 100% who are desperately trying
to get into AI, that's the demand.
Ironically, the MIT study was read as
reinforcing uselessness, when it
should have been read as reinforcing
the insane cost-benefit calculation
that organizations are
running to get AI correct. Like they are
doing absolutely anything they can to
force their way in the door. Fact number
five, teams are refocusing now that the
path to the next leg of gains is mapped
out. I have been in a lot of corporate
restructurings. It's very typical once
you bring in fancy new talent like Meta
has to restructure and that is exactly
what they did. And the path to the next
leg of gains is around inference, and
Meta has grabbed a bunch of people who
are good at the next leg of AI computing
and they're just refocusing to do that
well. I don't think that's that big a
surprise frankly, but it got fed into
the story. So if you put this all
together, you get a story of continued
progress on high-value use cases.
Continued demand for chips. Ironically,
continued demand for intelligence backed
up by everybody saying they don't have
enough chips to serve models, backed up
by the MIT study, ironically. So is Sam
right? Are we in a bubble? I would
actually argue that if he means are
there elements of unfounded hype in AI?
Yes, there are. Absolutely. Is there
froth? Yes. As a wonderful example,
again just from this week (I could pick
any of a dozen examples), look
at the number of Lovable copycats out
there. How many companies do you know
who have put up a little box saying,
"What do you want to build today?" The
latest one is Airtable. I would not
have thought that Airtable should be
doing that, but they've decided to. With
any gold rush, you get people rushing in
to stake a claim where they think
there's gold. And Lovable has
demonstrated there's gold in vibe
coding, and so now there's a rush there,
right? And there's going to be a lot of
me-too players. Anytime you have value,
you have me-too players. That doesn't
mean it's inherently a bubble no matter
what. And I think that people sort of
overindexed on that comment and they
thought there's hype players that means
it's a bubble. Let me tell you, I have
lived through a bubble. That is not the
only element you need for a bubble. I
think that one of the things that we
should balance out with as we look
across like how people got to this
narrative, the things that we've
forgotten, what Sam might have meant by
bubble and what elements are indeed
bubbly in the AI narrative. We need to
also pay attention to what else is going
on. And I think there are two
trends that better explain the full
story I've been telling than just a bubble.
One is AI is demonstrating real value
in real use cases. And that is
ironically why businesses are leaning in
so hard. The story of the 5% isn't
getting told, but I've seen it. When
organizations get it right, AI is
delivering step change gains. It's
delivering 10x gains across the
business. That is existential. It is
worth betting a lot on. It is why the
organizations that are failing are going
to come back and most of them are going
to try again. They can't afford to miss
this one. The second one is related to
that. We are in a power law game and
power law costs and returns show up
across AI. That means AI is increasing
according to a power law. So it's
increasing exponentially. I talked about
that. It also means you get power law
returns from gambling on AI as a
business. And gambling is probably the
wrong word. Betting on AI as a business.
Essentially, if you invest in something
and there's a power law return, it's
rational to invest more than you usually
would. And we see that pattern play out
across companies investing in AI, but
also across model makers. Model makers
investing a billion dollars in AI talent
or whatever it is, as Zuck did, model
makers investing a huge amount in chips.
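The "rational to invest more under power-law returns" argument can be sketched as a toy expected-value calculation. All the numbers below are illustrative assumptions (the 95% failure rate echoes the MIT figure mentioned earlier, but the payoff multiples are invented for the sketch):

```python
# Toy expected-value sketch for a power-law-style payoff:
# most bets return nothing, a few return a huge multiple.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff_multiple) pairs."""
    return sum(p * x for p, x in outcomes)

# Assumed payoff profile: 95% of projects return 0x,
# 4% return a modest 2x, and 1% return 100x.
power_law_bet = [(0.95, 0.0), (0.04, 2.0), (0.01, 100.0)]
print(expected_value(power_law_bet))  # ~1.08x per dollar, positive despite 95% failure
```

Under these assumed numbers, the bet pays off in expectation even though the overwhelming majority of individual attempts fail, which is the intuition behind organizations continuing to allocate capital heavily.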
All of that is a way of saying we think
there's disproportionate returns on AI
and we are going to keep investing very
very heavily in order to harvest those
returns. Now, I do think one of the
things that's shifted in this game is
that it's harder and harder to catch up.
One of the things that I noticed is that
Apple is trying to figure out how to
recast their narrative in the last week
or two. They need to be seen to be
playing in AI. And so, there was a big
piece that leaked. I'm sure it
was a leak that was kind of intentional,
guys. But it is harder and harder to
catch up as we move forward on the AI
frontier. And there are fewer and fewer
labs that are really seriously playing
on the edges of AI. There's OpenAI,
there's Anthropic, there's Google, and
Meta is trying, and xAI is trying. And
other than those, like Amazon has fallen
by the wayside. Microsoft has arguably
just decided to be in the cloud business
and serving AI models business and
that's gone very well, but they're not
really doing something separate from
OpenAI right now. And part of the
reason for that is that as you get a
power law world with AI, you get
incredible pressure to specialize and
pick your niche because otherwise you're
spending a lot of money for nothing. And
so, ironically, I would argue the fact
that we've seen a winnowing out and a
narrowing of AI model makers in the last
year is an argument that people are
actually starting to think about how
they're allocating capital, which is not
something you do in a bubble, and
they're trying to be
smart about where they play in this AI
world. Microsoft wants to sell the picks
and shovels. They want to sell the cloud
piece. Google wants to sell the cloud
piece. I think AWS does as well,
although less successfully so far. In a
power law world, it pays to invest
heavily if you know your niche, which is
sort of a large strategic insight that
scales all the way out to businesses.
Like you have to know your niche to sort
of be able to invest carefully,
cleverly, and well if you're going to
invest that much. But if you know your
niche, it is rational to allocate
capital heavily. And that's what we see
businesses doing. And so when you add
this all together, you have some froth, you have
demonstrated real value on use cases,
you have a power law dynamic going on. I
think the way I would put it is that we
are in a world where model makers are
showing exponential gains in model
performance and we are very very early
in seeing how that lands with the
business and that's part of the irony
and the challenge right now in terms of
where this sets us up for the rest of
the year. Listen, I've lived through
multiple bubbles. The one thing you
never see in a true bubble is people
complaining about it being a bubble. If
it really was a bubble, we wouldn't all
be complaining about it. Instead, we
would all be hyping it up. And I think
it's really healthy that we're having
this conversation. It's healthy that
we're asking the question, but when you
look at the narrative overall, I don't
think it adds up to
bubble. I think it adds up to a frothy
high capital market where some people
don't know where their niche is
and they're overallocating in the wrong
spaces. You see some people who are
desperately trying to AI wash their
products and you see real value and the
real value is so disproportionately
helpful to business that people are
doing anything to get it. That's a
complex story. It's going to become more
complex over time. I don't think that we
are going to get into a world again
where we have immediately obvious
chatbot use cases. There are going to be
some immediately obvious AI use cases
for consumers coming. I don't think it
will be in the chatbot. It will be
somewhere else. But we're going to
increasingly see incredibly valuable
business tools come out and I think
we're just at the front end edge of that
piece of the AI revolution. I'm excited.
I'm curious. I am not worried about an
AI winter. And I hope that this has
helped you recast some of the overall
narrative we're seeing.