# AI Eats the World: Strategic Takeaways

## Key Points
- Benedict Evans, a two‑decade tech strategist at a16z, framed AI’s rise within the broader “platform cycle” that historically reshapes industries—from mainframes to PCs, the web, smartphones, and now AI—while emphasizing that new layers typically augment rather than replace existing ones.
- He highlighted AI’s “moving‑target” nature: technologies once labeled AI (databases, search, classic ML) shed the label once they become routine, meaning today’s hype around LLMs and generative models obscures deeper, longer‑standing technical progress.
- The surge of AI investment follows a predictable wave pattern that creates new winners and losers, yet it rarely eliminates prior toolsets, resulting in a fractal ecosystem where ChatGPT coexists with emerging 3D‑modeling and vision tools.
- Massive capex from big tech—hundreds of billions (potentially trillions) in data‑center and GPU spending—signals that AI is transitioning from a speculative bubble to a fundamental infrastructure layer driving future profit margins.
- For AI leaders, the takeaway is to treat AI as a strategic platform shift: balance hype with sustainable P&L impact, integrate new capabilities alongside legacy systems, and position teams to capitalize on the long‑term, layered growth of the AI economy.
## Sections
- [00:00:00](https://www.youtube.com/watch?v=iGvJpBWWGOU&t=0s) **AI Eats the World – Strategic Takeaways** - A briefing that introduces veteran analyst Benedict Evans’ “AI Eats the World” talk, outlines his credentials and the macro‑level focus of his presentation, and previews key strategic implications for AI team leaders and executives.
- [00:04:46](https://www.youtube.com/watch?v=iGvJpBWWGOU&t=286s) **Shifting AI Leaders and Adoption Gap** - The speaker explains that Anthropic, Google, and OpenAI now dominate model production, while most companies lag in daily AI use, with adoption hampered by motivation, integration, governance, and the need to envision LLMs as high‑fidelity “alien” intelligences.
- [00:08:17](https://www.youtube.com/watch?v=iGvJpBWWGOU&t=497s) **AI as Inevitable Infrastructure** - The speaker warns that firms must treat AI like spreadsheets—an essential, transformative infrastructure—not an optional R&D experiment, emphasizing that adoption is lumpy, path‑dependent, and reshapes value chains and workflows.
- [00:11:33](https://www.youtube.com/watch?v=iGvJpBWWGOU&t=693s) **Multi-Model Strategy Over Lock‑In** - The speaker advises treating AI models as interchangeable components—routing workloads by cost, latency, data sensitivity, and jurisdiction—while emphasizing that AI, like cloud before it, will reshape organizational structures and power dynamics rather than merely replacing jobs.
## Full Transcript

**Source:** [https://www.youtube.com/watch?v=iGvJpBWWGOU](https://www.youtube.com/watch?v=iGvJpBWWGOU)
**Duration:** 00:15:48
This week, Benedict Evans, a 20-year
veteran of A16Z, gave a memorable
presentation in Singapore called AI Eats
the World. My executive briefing this
week is going to be focused on what he's
talking about, why we need to pay
attention, and what the implications are
for all of us who are building and
leading AI teams. Let's get right to it.
So, first, who's Benedict? He has
been a tech strategist for the last 20
years specifically focused on platform
shifts which makes him perfectly
positioned to think about what AI means
strategically. So he's been involved in
PCs, the web, smartphones, social, and
of course now AI. His job is to think
about how these shifts change power
margins and industry structures. So he's
not selling you an AI product in this
presentation. He's trying to be a macro
translator between the hype and the P&L
statement. So he's useful as a sanity
anchor in a world that loves hype. The
setting is Super AI Singapore 2025 and
he is talking to senior leaders, right?
CTOs and investors who are asking
themselves, is AI a bubble? Is AI just
the next software cycle? Is this the
moment when everything we know about
software economics breaks? So what did
he talk about? This is 90 slides. I'm
going to get it into just a few minutes
for you and then we're going to start to
look at strategic takeaways that I
pulled out and what I think it means for
all of us. First, Ben talked about AI as
a moving target. AI used to mean
databases. Then it meant search. Then it
meant classical machine learning. Once
it works, we stop calling it AI. I love
that insight. So today, large language
models and generative models are wearing
that label. But other stuff, people are
forgetting that it's AI because it
works. When you think about it that way,
you start to realize how deep the roots
of this technical transition are and how
much of our adoption curve is driven by
novelty. Ben also talked about the
platform cycle frame. The idea that we
are moving through predictable wave
patterns even as AI is a novel
technology. But these novel technologies
have predictable patterns that AI is
following. So, we've moved from
mainframes to PCs to web to smartphones
and now to AI. Every wave attracts
massive investment at first. It reshapes
who are the winners and who are the
losers. But this is the critical point.
It rarely deletes previous layers. I
loved that takeaway because it's
fractal. That takeaway works both for
the larger insight that I have a
smartphone now and also a laptop but
also in the world of AI the newest tools
that are coming out in 2025 are rarely
deleting the base tools. We are getting
new tools for 3D models. We are getting
new tools for vision. We are not
deleting ChatGPT. So the idea that
you can have massive investment and
reshape winners and not delete previous
layers seems very powerful to me. On the
capex side, Ben pointed out that yes,
big tech is spending hundreds of
billions, if not trillions, on data
centers and GPUs. And at the same time,
more and more labs are grabbing on to
proliferating AI technology so that they
can train good enough models. The net
effect is that the model itself is
looking like a commodity input. And we
have talked about that a fair bit on
this newsletter. You should not be
surprised to hear that the model is not
a moat. I will add a caveat that Ben
didn't talk about a ton. One of the
other papers that came out this week was
a deep study on Chinese open-source
models. And one of the things it
concluded is that the flexible
intelligence of these models taken in
aggregate across Qwen and many others is
less clear, less effective, less
generally flexible than the intelligence
of American-made models. And that may be
because it's not quantized effectively
or distilled down effectively. But the
general conclusion of the paper is that
Chinese models are heavily reliant on US
frontier models and distilling those
down to get to open-source models that
they can release to the world. And in a
sense, what the paper suggested is that
the pace of innovation is still being
driven by private models developed by
frontier labs in the United States, and
the rest of the world is following suit
in pulling distillations out of those
models that may be good for some use
cases but are not as generally
intelligent and are not appropriate for
cutting-edge uses. Within that context,
Ben's statement needs some nuance,
because I would argue that the
methodologies used by the cutting-edge
labs are defensible, and certainly their
edge is defensible, and so no one new is
going to join the table of top model
makers, which, frankly, has even lost
folks in the last year.
Meta is not a top model maker anymore.
Grok is trying to be but isn't leading
anything right now. The top model makers
are Anthropic, Google and OpenAI. That's
it. And so in a sense, the model may
become a commodity. Intelligence may be
in everything and yet we still may have
cutting edge moats. Let's move on to
what else Ben talked about. One of the
things he called out that we'll talk a
fair bit about here is the adoption gap.
Lots of people and companies have tried
AI, but Ben made the point that far
fewer use it daily in core workflows. I
keep pounding this drum. The difference
between casual chat GPT users and
passionate professionals is night and
day: 10x. And this is critical for teams
because one person on your team, two
people on your team who are an eight or
a nine or a 10 in terms of their AI
skill sets out of 10, they are going to
run circles around everyone else. And so
the blockers to adoption, the blockers
to moving people that way are really
around motivation, the ability to
understand what these models can do, and
then on the corporate side, how do you
get them integrated? How do you handle
governance and risk? And how do you roll
them out? One of the things that Andrej
Karpathy talked about this week on X
that Ben didn't mention because it
hadn't happened yet is he talked about
this idea that we need to be able to
imagine LLMs as non-animal alien
intelligences at a high degree of
fidelity so that we can understand how
to work with them. Effectively, what
he's saying is we are as a species
having our first contact with a new
intelligence. And the better we can
build a mental model of what that
intelligence looks like and how it
works, the more effectively we can
partner together. This is not a
scary doomsday first contact movie. It's
more about imagining how the
intelligence works helps us to prompt
better, work better, collaborate better,
all the boring stuff that's really
important. And this is something that
Ben didn't get into, but I think is
really important. Having that
imagination, that aha moment in your
teams is critical to enabling outsized
leverage, outsized impact for the team.
So that was the heart of his message.
That's what he talked about. That's 90
slides in just a few minutes. What are
the deeper takeaways here? Number one, I
think we've quietly crossed from miracle
to inevitable utility. So this is much
more subtle than a commoditization
argument. I think Evans' talk marks a
tipping point. AI is no longer being
framed in most settings as will this
work, will we get there? Instead, it's
being framed as obviously this works.
Where does the margin end up? Where do
the winners end up? That's especially
true and top of mind this week when we
saw visual reasoning solved with Nano
Banana Pro, and when we saw Meta's SAM 3 model
drop and handle semantic search for
video. We have these previously
difficult spaces where we're seeing AI
just works. And then we have
confirmation from Google that Gemini 3
didn't have special tricks up its
sleeve. It was classical pre-training
and post-training of LLMs. There is no wall
on training. You can just get bigger and
better and train the same way you always
have and get a smarter model. That may
sound like a banal observation, but
knowing that that's true and seeing the
breakthroughs that we've had, we now are
just living in a world where this is
inevitable. AI is going to be
everywhere. AI has already solved enough
problems to let us know that the scaling
laws hold. And if we assume it's
everywhere, we need to ask a different
set of questions. Where do we matter?
Where do our companies matter? How do we
set up ourselves as competitive players
in this space? Those are becoming the
relevant questions. And so the strategic
risk isn't sort of missing the AI
moment. It's really continuing to act as
if this is a tunable or optional
research and development play instead of
this is inevitable infrastructure and if
you don't go after it with every tool
you've got, you're just not going to
make it. A smarter question to ask in
that world is if AI is as inevitable as
spreadsheets have become, what parts of
our value chain become just a feature in
that world and are no longer
competitive? That's a tight, interesting
question to play with. Deep takeaway
number two, adoption isn't just slow, it
is path dependent and it can trap you.
So adoption is lumpy. Evans pointed that
out. Lots of pilots, not a lot of deep
usage. Some people use it a lot. Whether
and where you choose to adopt shapes
what becomes possible later. And he
didn't talk about that. But think about
spreadsheets. The first teams that
adopted them weren't just more
efficient.
They reorganized how information flowed
through the business. They could model
scenarios. They owned the numbers. They
could self-serve. LLMs and agents are
poised to do the same. So the pattern is
going to be you drop AI into one or two
workflows. Those workflows shift how
information is produced. They shift how
it's consumed. And that in turn shifts
which other workflows are now possible.
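One way to see why that compounds is a toy sketch (mine, not the speaker's; all workflow names are hypothetical): treat workflows as a dependency graph in which adopting AI in one workflow unlocks the workflows that consume its output, then score each candidate beachhead by its total downstream reach.

```python
from collections import deque

# Hypothetical workflow graph (illustrative names, not from the talk):
# adopting AI in a workflow unlocks the workflows that consume its output.
UNLOCKS = {
    "doc_summarization": ["meeting_prep"],
    "ticket_triage": ["customer_onboarding", "support_escalation"],
    "customer_onboarding": ["renewal_forecasting"],
    "support_escalation": [],
    "meeting_prep": [],
    "renewal_forecasting": [],
}

def downstream_unlocked(start: str) -> set:
    """Breadth-first walk: every workflow made reachable by adopting `start`."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in UNLOCKS.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Rank candidate beachheads by compounding downstream reach,
# not by first-order convenience.
ranked = sorted(UNLOCKS, key=lambda w: len(downstream_unlocked(w)), reverse=True)
print(ranked[0])  # -> ticket_triage
```

The toy numbers don't matter; the point is that beachhead choice is a graph question: a workflow that feeds many others compounds downstream, while a leaf workflow like "summarize this doc" does not.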
So the non-obvious leadership problem
for you is if adoption is path
dependent, are we choosing the right
beachheads? I talk a lot about problem
framing, about picking the right places
to jump in with AI, and that's really
the question in front of us as we
confront an adoption challenge in our
teams. Recent model evolution makes
this an even sharper problem.
Agent-native models, Gemini-class and
so on, aren't just better autocomplete;
they're suited to many kinds of
meaningful knowledge work: triage,
coordination, follow-up, repetitive
decision loops with clear constraints.
If your first experiments are all
"summarize this doc," you're never
going to discover the
compounding benefit of agent assisted
customer onboarding or agent assisted
engineering support. Essentially, the
beachhead you picked constrains some of
your paths forward. So, where should we
try AI is not a random sandbox question
for a Friday afternoon. It is a path
design question. In other words, you
will get compounding benefits or
compounding costs depending on which
workflows you choose. So look where
there are important junctions in your
organization's information flow patterns
and jump in there because when you can
create a change in that flow, you unlock
a lot of downstream benefits. You unlock
a lot of opportunity to use AI agents
elsewhere. Non-obvious takeaway number
three, AI is going to turn you into a
buyer with additional leverage if you
design for it. So Evans commoditization
story has a second order effect that
most people aren't talking about. As
models get closer to par and quality, as
you get more model options, your power
is going to increase as a purchaser of
models, as long as you structure for
that effectively. Enterprise AI
conversations still turn too often on
vendor lock in. I have screamed about
this a lot. I'm going to say it again.
Don't say "we're an X-model shop." Just be
multi-model from the get-go. If you take
Evans seriously, if you take me
seriously, the long-term equilibrium is
going to look like treating models as
components and routing your workloads to
different models based on the cost, the
latency, the data sensitivity, the
jurisdiction, etc. That's not the
reality in most of our orgs today. It is
something we need to get to. So, the
non-obvious implication is don't think
about picking a winner model or even a
winner lab. Instead, think about
building an architecture that lets you
be in the driver's seat in buyer
conversations and lets you arbitrage
models the way you want over time. Don't
settle for lock-in. Deeper takeaway
number four, AI is eating the org chart,
not just the tech stack. And it's not
about layoffs. So Evans focuses on tech
cycles, but if you extend his logic,
spreadsheets didn't just change
software, they changed who needed to
talk to whom, what roles became
bottlenecks, which functions gained
political power like finance and
operations. Cloud didn't just
move servers off premises. It shifted
power from central IT to product and
engineering. It accelerated the pace at
which teams could experiment. AI will do
the same for roles that are around
coordination, for roles that are around
synthesis versus roles that are mostly
judgment and constraint setting. So
recent agent style capabilities make
this more concrete. A model that can
read your emails, Slack, tickets,
dashboards, you name it, and
propose actions is effectively an
informal chief of staff for every
knowledge worker. And we should expect
that by 2026. That doesn't just increase
individual productivity. It changes who
needs an assistant, who needs a team,
where the bottlenecks in decision-making
live. And so the non-obvious implication
for you as a leader is if you only think
of AI as a tool roll out, you will miss
that you are doing an org design change
at the same time. Some roles will shift
from doing work to specifying to
checking to escalating that work. Other
roles will shrink because the
coordination overhead they manage gets
automated away. So your span of control
assumptions, your management layers,
your hiring plans are all going to need
to adapt much faster than in
previous cycles. So Evans is giving you
the technical story here, but I think we
need to extend that out to the org
story. So where does this leave us? I
want to suggest to you especially at the
end of one of the most jaw-dropping
weeks I can remember in AI that we need
to be taking a step back regularly as
leaders, and we need to be asking
ourselves, when we have weeks like this,
where I can't even count the number of
significant developments we had (I've
attempted to; it's half a dozen or so
over the course of the week): does any
of this change the strategic operating
reality of the business that I am
building? I think Evans' talk, AI Eats
the World, gives us a good framework
for that, because it enables us to ask:
is there something shifting the tech
adoption cycle here? Is there something
shifting my org chart? Is there
something about how information flows
in my business that is changing? Is
there something about my vendor
relationships and my power with vendors
that is shifting because of this
unlock? If we ask, the answer is often
yes. But having the right questions to
ask helps put us in the driver's seat
during times when the news cycle feels
relentless on AI, and I've got to say,
that's not going to stop. And so my
encouragement to you if you're feeling
overwhelmed and you're trying to think
about how to sort all of this out is
make a regular practice of stepping back
and looking at the world like Evans
does. Take a day, step back, get a
whiteboard out, maybe you get your
senior team together or just go for a
walk in the woods and figure out what
this means for your business. Distill it
down. Take your time because that time
to reflect is what is going to enable
you to digest, synthesize, and form core
conviction that you need to push your
teams forward. A lot of what I'm talking
about here is really the meat of where
leadership and understanding of AI meets
the road, where you need to be with
your teams to drive them forward. And
you can't do that if you don't have
energy and conviction. And that comes
from having the ability to reset,
digest, and synthesize all of these
updates effectively and then come back
with fresh energy. So, take that into
the week, and I'll see you next week.