Deep Dive into Mary Meeker's AI Trends Deck
Key Points
- The video is a detailed, hour‑long walkthrough of Mary Meeker’s 340‑page “AI Trends” deck, which she released after years of focusing on VC investments rather than public trend reports.
- Meeker’s deck aims to synthesize disparate data points into a cohesive narrative about AI, structuring the material around rapid AI adoption, compute demand, usage, cost, monetization, robotics, and the broader global competitive landscape.
- A key insight highlighted is that AI adoption is accelerating faster than the internet rollout ever did, driven heavily by developers within the Nvidia ecosystem and the proliferation of tools like ChatGPT.
- While praising the deck’s organization, the presenter signals disagreement with Meeker’s framing of global competition and promises to explore why that perspective may be flawed.
Sections
- Deep Dive into Mary Meeker's AI Deck - A presenter announces an hour‑long, detailed walkthrough of Mary Meeker’s newly released 340‑page AI trends deck, explaining its background, purpose, and scope.
- Rising Compute Costs vs Global AI Adoption - The speaker contrasts decreasing AI expenses with soaring compute costs, noting a revenue‑loss mismatch for a company while emphasizing rapid worldwide ChatGPT adoption driven by broadband‑enabled mobile access.
- Incumbents Aggressively Shaping AI Landscape - The speaker asserts that major tech firms’ swift adoption of AI trends curtails disruption, creates a mixed‑bag of opportunity and uncertainty, and diminishes the immediacy of geopolitical AI‑leadership risks in favor of a future where many AIs proliferate rather than a single super‑intelligent monopoly.
- AI Scaling Limits and Future Impact - The speaker likens the AI era to an intensified internet, discusses the difficulty of measuring its cross‑industry utility, questions the sustainability of ever‑larger training data sets, and anticipates rapid hardware advances that could dramatically reshape AI development.
- ChatGPT's Growth Challenging Google - The speaker highlights ChatGPT’s soaring user base and revenue, predicts it will reach a billion users and rival Google’s search dominance, while tracing AI’s evolution from the printing press to Turing and Kasparov.
- AI Incremental Improvement and Alignment Risks - The speaker likens AI advancement to filmmakers’ evolving quality, questions how an AI released into the wild would autonomously get better and stay aligned, and notes that many capabilities projected for 2035—such as scientific hypothesis generation and immersive world‑building—are already emerging today.
- AI Revolution Evident in Patents - The speaker emphasizes a sharp rise in AI‑driven computing patents, rapid model advances such as GPT‑4.5, high‑quality generated images and audio, and the transition toward more powerful yet costly AI systems marking the start of a new technological era.
- Explosive AI User Adoption - The speaker highlights how ChatGPT’s rapid climb to 100 million users—driven by cheap tokens, ubiquitous bandwidth, and a surge of AI tools and startups—outpaces the adoption timelines of past foundational technologies.
- AI Expansion Across Sectors - A rapid‑fire overview of how AI—from restaurant optimization tools to government‑tailored models and accelerating FDA‑approved medical devices—is proliferating across industry, education, and research.
- Assessing AI Agent Deployments - The speaker critiques superficial AI agent implementations, argues that true utility comes from scalable, custom agents that could trigger a phase‑shift toward AGI, and explores how this shift might transform work and institutional decision‑making.
- AI Compute Efficiency Explosion - The speaker highlights the rapid, post‑2019 improvements in AI hardware—showing exponential GPU performance, massive energy‑efficiency gains, and soaring Nvidia and cloud capex—that have turned AI compute into a vastly more powerful and cost‑effective resource.
- AI Compute Costs: Scaling vs Efficiency - The speaker contrasts exponential growth in AI training expenses with dramatic declines in inference costs, crediting Nvidia’s Volta GPU breakthrough for reshaping AI unit economics.
Full Transcript
# Deep Dive into Mary Meeker's AI Trends Deck

**Source:** [https://www.youtube.com/watch?v=_g1LFxC31EY](https://www.youtube.com/watch?v=_g1LFxC31EY)
**Duration:** 00:41:13

## Sections

- [00:00:00](https://www.youtube.com/watch?v=_g1LFxC31EY&t=0s) **Deep Dive into Mary Meeker's AI Deck**
- [00:03:33](https://www.youtube.com/watch?v=_g1LFxC31EY&t=213s) **Rising Compute Costs vs Global AI Adoption**
- [00:07:29](https://www.youtube.com/watch?v=_g1LFxC31EY&t=449s) **Incumbents Aggressively Shaping AI Landscape**
- [00:10:47](https://www.youtube.com/watch?v=_g1LFxC31EY&t=647s) **AI Scaling Limits and Future Impact**
- [00:14:26](https://www.youtube.com/watch?v=_g1LFxC31EY&t=866s) **ChatGPT's Growth Challenging Google**
- [00:17:32](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1052s) **AI Incremental Improvement and Alignment Risks**
- [00:20:42](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1242s) **AI Revolution Evident in Patents**
- [00:23:56](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1436s) **Explosive AI User Adoption**
- [00:28:20](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1700s) **AI Expansion Across Sectors**
- [00:32:51](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1971s) **Assessing AI Agent Deployments**
- [00:35:55](https://www.youtube.com/watch?v=_g1LFxC31EY&t=2155s) **AI Compute Efficiency Explosion**
- [00:39:02](https://www.youtube.com/watch?v=_g1LFxC31EY&t=2342s) **AI Compute Costs: Scaling vs Efficiency**

## Full Transcript
Okay, you guys asked for it. I'm going
to record a full walkthrough showing
Mary Meeker's Trends in Artificial Intelligence deck, which she released
four days ago and we're just going to go
piece by piece. We're going to talk
about AI and I hope you enjoy the deep
dive. This is going to be a much longer
video than I usually do. I would
estimate it over an hour. So dig in,
grab that
coffee. All right, first up, this is Mary Meeker's VC firm. So Mary Meeker is a general partner, and she is an investor now, and that is really the core reason why she has not been doing these Trends in Artificial Intelligence decks earlier. She did not do one in 2023. She did not do one in 2024, and before that it was Internet Trends, and that last one was back in 2019. And so her
investment activities have been front
and center for her. And it's notable
that she's taking time aside from that
to effectively brief the industry with
an incredible piece of AI intelligence.
And so this is her setting out the
context. I generally don't read these 10-point fonts, but it's Mary, so I'll give you the TL;DR here. I think the key thing is that Mary is looking at this as a collective effort. So, 340 pages. She's trying to connect, as she says, several disparate data points, and she didn't expect it to get this big either,
right? It turned into a
beast and she wants to find a way to
make sense of it. And that is this is
her attempt to set the narrative for AI
as a whole. I think this is one of the
most interesting things we'll get to
later in the hour. I don't know that I
agree with Mary on the global
competition frame, and we'll get into
why. All right. Here's the outline. We
are literally going to go through the
whole thing, so just sit tight. Um, I
actually really like the way she
structured this. If anyone is wondering,
how do you structure a gigantic deck so
it feels understandable? Well, this is
one way to do it, right? The overall takeaway seems to be that change is happening faster than ever: AI user and capex growth, the headlines we tend to see; what is driving that, compute; how does this start to translate into the market. We get into usage, cost, and growth. We get into monetization. We get into physical and robotics. And then finally, we
start to get into global and work
evolution at the end. That's a pretty
good organization. All right, let's jump
right in. This is probably the slide
that she expects people to stop at and
go, "Wow." So we have developers, which is a theme I didn't
expect Mary to get into. She comes back
to developers a fair bit. Uh, developers
in Nvidia's ecosystem. You can just read
this as
Nvidia. This is ChatGPT. I don't know why she's confusing people, because she says it in exactly those words later on here. AI user usage and
capex growth. The key here and this is
something that shows up over and over
again through these graphs and others
I've seen. We are seeing faster uptake
on AI than we ever saw on internet
across so many different metrics. And
that's one of the reasons the space is
so exciting right now. All right. AI
usage and capex growth. Companies are
building a lot in AI. This is my
surprise
face. Okay. So when you think about
that, one of the things I want to call
out is that basically the story of AI is
the story of charts that are up and to the right, and charts that are down and to the right. The up-and-to-the-right ones
we know about, right? The user gain is
the popular one, the capex growth, etc.
The down-and-to-the-right is how cheap AI is becoming; that's just a straight vertical cliff. And how compute expenses are skyrocketing. So in this
case down is worse. Down is spending
more money on
compute. And one of the things that Mary
is essentially calling out is that ChatGPT's revenue is scaling, but not as fast
as their losses are scaling from serving
all the compute. There is a disconnect
and they're going to have to figure out
how to close that and that's something
we'll get into later on.
And so in a sense you can see this as
like the story of those two trend lines,
right? The things that are down and the
things that are up. And we'll sort of
follow that through the
deck. One of the things that I want to
call out is that
we we have a
tremendous global uptake here that is
powered off of the back of the internet.
And so part of why we are adopting so
much faster with AI is because broadband
internet is available all over the
planet and because the form factor of
the primary app, ChatGPT, is text-heavy first, then images. It operates on a cell phone. In much of the world, like India, sub-Saharan Africa, South Asia, the Middle
East and North Africa, Latin America and
the Caribbean. These are places where if
you have a cell phone, you're online and
you can get ChatGPT. And so it's actually relatively easy to stack up adoption into a very, very large overall base. And I will say I did not realize that ChatGPT app users
in North America, that blue section
there, I didn't realize that was such a
small proportion of overall users. If
you had tapped me on the shoulder and
you had said, Nate, out of 800 million ChatGPT users, how many are in the US?
You know what I would have
guessed? 150 million. Maybe half the
country. Nope. Apparently, it's a lot
smaller than that. And they're stacking users up in Europe. It's truly a global product. All right. This is one where, and we'll get into this, I think this is just poor framing. I
do not think IT jobs is the right frame
for tech. And that's a categorization
issue from the Department of Labor that
I just don't think is super helpful. If
you look at the trends more broadly, and
this kind of shows it, down 9% is a
little bit
disingenuous. And the reason why is
because it's basically flat versus 2018.
We had the massive ZIRP-era run-up in all tech jobs, and now you have a run-up in AI. And I think the interest rate story kind of gets left behind.
Okay, this is her "the world is changing very fast" section. She goes back and
calls out some of the companies she was
early on like Google. Um, and one of the
things that she says here that I think
is really interesting is
that we have a change in how work gets done. She defines it as a change in how work gets done, how capital is deployed, and how leadership is defined. The
leadership one feels a little squishy,
but fundamentally I think Mary is
correct. I think she describes it as a meta technology. This is a meta technology that enables us to do a
lot of other things more effectively
including use the internet, including uh
make it easy to do business everywhere,
including organize the world's
information. Like you can see the
efforts of these major companies as
essentially efforts
to
optimize against a meta technology that
makes it easier to fulfill their
mission.
And
so one of the things that I think pops
to me as I start to get into this
deck is that that overall trend line
that we see in that initial slide is
part of what is driving incumbents in
this wave to be so aggressive about
keeping up with the trends. I think
there are arguably fewer opportunities
for disruption
because these companies like Google,
like Facebook, like Microsoft have been
so aggressive about keeping up with the
trends. I think Mary calls this out
later, but Microsoft's investment in
OpenAI was a significant moment in this
overall trend in this overall AI
revolution. Okay. So if we move forward,
basically it's the best of times, it's
the worst of times. Uh it's very
uncertain. And I think that one of the
things that's really interesting is that
she specifically calls out the geopolitical risk factors right here at the top, when I would say that's not the most interesting or compelling risk right away. I think a
geopolitical leadership race for AI or
an AI leadership race that begets
geopolitical risk only transpires if we
buy the idea that just scaling more
intelligence, which is what we see
evidence of today, is enough to get us
to a sort of super intelligent scenario
where one company in one country has
found super intelligence and quickly
evolves to the point where nobody else
can catch
up. Even Sam Altman no longer thinks
that's the most likely outcome. We live
in a proliferating world. We're going to
get all the
AIs. And so in a sense I think it's not so much global AI leadership as it is the global crowd of AIs that we're getting. We're going to get lots of AIs from everywhere, very fast, because proliferation of this technology is so easy. All right.
So now she's gone through her size-10 introduction. We get into sort of the numbers behind the momentum. All right. She starts off with a very investor-y slide here, right? Global GDP on a log scale,
right? So I think it's a log scale; it's got to be 500 billion to 100 trillion. So 1 trillion to 10 trillion to 100 trillion, each gridline is a 10x step. At
the end of the day, basically what you
see is this gigantic pop up. And I think
what she's trying to say is that maybe
we get the next leg of this from AI. We
will see. Certainly very investor way to
start. I kind of roll my eyes until we
see actual productivity growth, which is
one of the things that has been elusive
for the internet. We see GDP growth
alongside the internet, but it's hard to
prove positively how the internet
affected GDP growth and productivity
growth in particular. You can see internet companies generating revenue, but proving that productivity increases drove that GDP growth? That's a little bit sketchier.
It's one of the notorious loopholes um
in technologies. It's very difficult for
a generally available technology
to demonstrate its utility enough across
enough different industries in a very
measurable way for us to see the impact
very clearly. So basically it may be
tremendously impactful and it may be
hard to see like the internet. And
that's essentially what she's suggesting
is that the AI era is like internet on
steroids. So she called out mobile
internet. Now she's calling out the AI
era with tens of billions of units.
She's talking about GPUs here, but I'm actually looking at Jony Ive, and I'm looking at where they're going with devices. Feels like that's coming pretty fast. So I would not be surprised if this is an entirely different-looking slide a year from now, when the devices that OpenAI is working on are released. All right, we keep
going. Training data set size. This is
super controversial. Yes, it's been
scaling really fast since 2010. I think
the question is can we keep scaling from
here? Is there a limit? We are at about 1e13 on number of words in the data set. Does it make sense to keep going
up? Is there more to it?
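As a rough sanity check on that question, here is a back-of-the-envelope sketch in Python. The ~1e13-word starting point is from the slide; the annual doubling rate and the ceilings for total usable public text are hypothetical assumptions, not figures from the deck.

```python
# Rough sanity check on pre-training data scaling (illustrative only).
# Assumptions: current datasets ~1e13 words (per the slide), dataset size
# roughly doubling each year, and a hypothetical ceiling of usable public
# text somewhere in the 1e14-1e15 word range. Only the 1e13 starting
# point comes from the deck; the rest is assumed.

import math

def years_until_ceiling(current: float, ceiling: float, growth: float) -> float:
    """Years until current * growth**t reaches ceiling."""
    return math.log(ceiling / current) / math.log(growth)

for ceiling in (1e14, 1e15):
    t = years_until_ceiling(current=1e13, ceiling=ceiling, growth=2.0)
    print(f"ceiling {ceiling:.0e}: ~{t:.1f} years of doubling left")
```

Under these made-up assumptions, the runway is only a handful of years, which is why the "is there a limit?" question matters so much here.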
Ilya Sutskever gave a talk at NeurIPS last year basically saying data is the new oil. We don't have an infinite amount of data left. We're going to run out of data for pre-training. We'll need to find other scaling laws. And then he went off and founded, I think, Safe Superintelligence, and has done nothing public since
then. But this is a question like I
think in a sense it's a little
disingenuous to portray this as up and
to the right without mentioning that
there's a huge question mark up here
about how we handle this. Next: training compute, in FLOPs. How do you keep increasing training compute? There's definitely an
inflection point. I think uh whether you
can continue to sort of scale on a log
scale like Mary is talking about is an
interesting question. We've inflected up
and Jensen Huang has done incredible things with Nvidia and
chips. He's selling lots and lots of
them. Is there an upper limit here or
not is another question mark.
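Before the growth figures that come next go by, it's worth feeling how violently they compound. The sketch below reads the quoted rates literally: "200% annual growth" is taken to mean each year is 3x the last, and "150% annual growth" as 2.5x per year, which may or may not match exactly how the slides compute their curves.

```python
# Compounding the headline growth rates quoted in the talk:
# "200% annual growth over nine years" (algorithmic effective-compute gains)
# and "150% annual growth over six years" (performance gains).
# A growth rate of 200%/yr is interpreted here as a 3x multiple each year.

def cumulative_gain(annual_growth_pct: float, years: int) -> float:
    """Total multiplier after compounding a percentage growth rate."""
    return (1 + annual_growth_pct / 100) ** years

algo = cumulative_gain(200, 9)   # 3x per year for nine years
perf = cumulative_gain(150, 6)   # 2.5x per year for six years

print(f"200%/yr over 9 years -> ~{algo:,.0f}x")   # ~19,683x
print(f"150%/yr over 6 years -> ~{perf:,.0f}x")   # ~244x
```

Under that literal reading, nine years of 200% annual growth is a nearly 20,000x cumulative gain, which is why these log-scale charts look deceptively tame.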
200% annual growth over nine years of
compute gains from better algorithms has
led to
uh sort of tremendous gains in AI
intelligence. Uh it's also led to gains
in effective compute. Basically, you can
scale compute, you can scale algorithmic
progress. The thing I take away here is
that this is one of the clearest graphs
I've ever seen of the difference between
scaling algorithms and scaling just raw
compute. And I think one of the big
sources of gain over the next 5 to 10
years in AI is not necessarily going to
be just compute scaling or pre-training
data. It'll be algorithms. It'll be how
we use what's on the chip to deliver
better
answers. All
right. 150% annual growth over six years
of performance gains. Looks very
impressive. Uh and it is very
impressive. And this is all
supercomputers. I'm not going to spend a
lot of time here. These graphs are all
up and to the right. We have 340 of these
to get through. Number of new large
scale models. This is absolutely
explosive. This is illustrating
proliferation to me. Like she can't even
fit this on the
slide. This is one of the money slides.
I think I included this in my Substack.
User growth, up and to the right. Subscribers, revenue, up and to the right.
This is the bull case for ChatGPT right here. They are hitting absolute vertical on 800 million users.
I would expect them to hit a billion by
the end of the
year. Thanks to the power of an internet that's already available, they are very, very rapidly hitting the same search mark that Google took 11 years to hit, right over here. Does that mean they're going to actually eat Google? Sam Altman said he didn't think
so. He thinks that it's going to be
different. He doesn't see a case where
Google really disappears. Uh, I think
it's going to be really interesting to
see are these high intent searches. How
does Fidji Simo build ads at OpenAI? Those are
really open
questions. Okay, now we have knowledge
distribution over six centuries. This is
deep in the weeds. We're going to go
fast here because again, 340 slides. So,
basically, printing press, we jump
several hundred years to the start of
the internet, which really did look like
this for you young people out there. Uh
then we jump straight to ChatGPT in 2022, and it was not that smart, guys. I
don't know if you've forgotten but it
was kind of dumb to start with. I
remember when I was like oh this is just
going to be used for marketing
copy.
Um and then we get into the story of AI.
So all the way back to Alan Turing in
1950. If you want to pause this screen
and look at it it's fantastic. I
remember the Kasparov moment. That was a
moment for me when Deep Blue won. Um,
and now like everything compresses.
These are all 2023 to 2024 to early
2025. And there's been more happening since; Claude hit a $3 billion run rate after she produced this deck.
Okay, we keep moving. We're 10% of the
way through the deck, guys. Uh, things
that ChatGPT can do. I think this is
going to be pretty obvious to most of my
audience. Do we know that it does PDFs?
Does it write code? Does it prep for
interviews? Nobody who's been listening
to this channel is surprised by this.
Maybe the investors are when they look
at this deck. AI circa 2030: top 10 things AI will likely do. It claims generating human-level text. It claims creating full-length films. It
claims understanding and speaking like a
human. Here's the thing I want to call out: there's a massive gulf between
these two slides. It's a gulf not just
in terms of assembling new tokens, but
in terms of several breakthroughs we
haven't seen yet that I don't think Mary
does a good enough job teasing out. And
maybe that is just not in the scope for
this deck because it's a phenomenal deck
overall. One of them is how these
systems adaptively learn in the wild,
how these systems handle context, how
these systems handle memory, how these
systems handle intent over time. I could
go on and on. There's a lot of things
that are native to humans, especially
adult humans who have been educated for
information work or even creative work.
You know, she calls out full-length
films. They're not intuitive to an AI.
So, for example, Christopher Nolan gets
better every time he makes a full-length
film. Steven Spielberg arguably gets
better and has been getting better for
decades every time he makes a
full-length film. But it is not clear
how an AI in the wild would adaptively
get better after being
released and if it did what that would
mean uh in terms of how quickly it would
improve. Those are all very unanswered
questions and they do raise alignment
questions like would the AI stay aligned
if that were the case. So I think in a
sense this
slide it illustrates possibilities. It
illustrates the dreams of a major model
maker. I think it's different from
saying it's a step function and you just
run up it. Um,
ironically, some of the stuff that's
listed here by 2035, we already have examples of today. So, AlphaEvolve conducts scientific research. It
generates hypotheses. Um, it's on the
verge. Uh, I think some of them run
simulations. So, this is not really
new. Um, and so in a sense, this does
feel a little bit mixed up like building
immersive virtual worlds. I think we're
actually quite close there and that
feels much much easier than trying to do
a full length film. So I think this is
an area where
again, it may be per ChatGPT, but it would be helpful to get a little bit of a spot check there. All right,
now we're back to the graphs where I
think this is a stronger
deck. So machine learning models
are absolutely trending to no one's
surprise. And I think she has an interest in academia anyway. She wrote one of her rare decks
that was not as widely circulated on AI
in educational institutions. And so when
she you know references Stanford and
talks about the academia era um I think
that's something that has been an
interest for her for a long time. It's
something where she wants you to know
that she
is thinking through how she can
present the findings from academia in a
way that shows scale and also thinking
through how AI needs to circulate back
into the academic space. And she covers
that more. This is a little bit of a
preview. So that one I don't know if
that slide landed. This one I think
does. So developer growth in the Nvidia
ecosystem, 6x, just an absolutely stunning gain. And you know what's interesting? It gapped up around the time when we started to talk about crypto, in the late ZIRP era, with some AI but not the ChatGPT release. And so it's a little
disingenuous to see AI as driving this
trend, when ChatGPT came out here. It's
more accurate to say that the developer ecosystem has been booming for Nvidia
for a long time and arguably set up AI
to be a super boom, which I think is an
interesting insight that doesn't get
reported a ton. All right, global
developers growing at Google. Yeah, US.
This is one of the eye-openers for me.
This is
absolutely driven by
AI 100%.
Um, and I think that when you see that
pop 1995 to 2003, and this is like an
even sharper
line, at the end of the
day, this is indicating we're at the
beginning of a new revolution. Like,
people are doing new things and the
innovation is really popping. And I had
no idea that the pop for computing-related patents was this high. This is
super
exciting. System performance on benchmark tests. Honestly, she doesn't get
into this, but this is like it's not
really to me about surpassing the human
baseline. I mean, it is. It's also about
the models overfitting and saturating,
and she doesn't get into
that. AI performance: GPT-4.5.
The irony is they're rolling this back,
even though it is as good as she's
describing here, I think implicitly
because the model is so expensive to
serve, they're trying to figure out how
to serve it uh, in the right way with
GPT-5.
All right. AI performance, realistic
conversation, Turing tests. We know it passes the Turing test. Maybe the
investors don't. Uh, the images are very
good. Now, this should not be to
anybody's
surprise. The images are very, very
good. I love this real image, AI
generated image. That's a lovely little
reverse. Catches people by surprise. Uh,
realistic audio. If you've ever tried
one of the audio samples from ElevenLabs, it's
incredible. Um, and they absolutely are
being used in production settings. And
you can see that in the scale of how ElevenLabs is getting used.
And Spotify accepting audiobooks AI-translated into 29 languages, which
is just like dramatically scaling up the
impact of AI and AI voice. What she doesn't talk about is Spotify accepting AI-powered music, which is a different conversation. Now,
emerging applications are
accelerating. Cancer detection,
robotics, she doesn't talk about drug
pipelines, but that's another big one.
Um protein folding kind of gets at that
a little bit. But really, drug targets
are a new one where you search through
past academic papers and search through
the chemical structure of the of the
drug to sort of identify novel targets
and novel use cases for existing drugs,
which can be a simpler path to
profitability off of existing drug
testing. Uh AI benefits and risks. And
of course like it could actually help
people. Imagine that. Anyway, that aside, she wants to talk about the risks. She has Demis Hassabis, CEO of Google DeepMind and Nobel Prize winner. This is basically the bull
thesis for AI, right? First we solve AI
then AI solves everything. Okay. He's
still very bullish on that. We will see
how that goes. Um, and I've highlighted
some of those concerns that I have
around context and learning. And we'll
just follow this deck through and and
continue to have the chat. Uh, Stephen
Hawking, what good deck is complete
without a Stephen Hawking quote? AI user
usage and capex growth. Great. Yes,
everything is up and to the right, guys.
There's ChatGPT again. There it is beating the internet again. Yay. User adoption: super, super fast from year of launch for ChatGPT, right? To get to 100 million users. Super, super fast to get to 100 million users. You can see all these other
major companies up the side. AI user adoption, with ChatGPT as a proxy, is materially faster and cheaper than other foundational tech products. I mean, honestly: days to reach a million users, and purchase price. So some of this is
that it's cheap now. It's cheap to serve
tech and I think that we'll get to that
when we we get sort of later to uh into
the deck. One of the ways that we are
accelerating adoption is by making
tokens very very cheap across a widely
saturated internet bandwidth
environment. Years to 50% adoption of
household tech, AI era: they're guessing
three years, or we're not there yet,
right? So it is a guess, but it's on
track for something very fast, which is
for the record much faster than
desktop internet, which was at 12
years, and the PC, which was at 20 years, which
is just crazy. Uh technology ecosystem:
number of developers, number of AI
startups. This is real, like this is
absolutely explosive. I think the number
I heard was like over 70,000 tools, which
is bigger than 27,000 startups, but you
get the idea. Tech incumbent AI adoption:
mentions of AI in earnings calls, which
is like the dumbest metric, but also
real. Um, look at Meta. Mark just can't
get enough. Uh, tech incumbent AI
focused. I've talked about Sundar and
Andy before and how they're calling this
out. I think Andy is the one that said
this could be bigger than the internet.
So, the hype is the hype, right?
Duolingo, Elon Musk, Roblox, Nvidia, you
get the idea. Lots of hype. uh
traditional enterprise AI adoption
increasing priority. This is the whole
S&P 500 and what they value enterprise
AI focus. Um and what's interesting here
is this is where generative AI is
targeted at large companies. And I think
it's interesting that uh production
output, customer success, and sales are
the top
three. And I'm a little bit surprised
that marketing is so low. I would have
expected more interest in
marketing. And I think that this is one
of the slides that surprised me the most
because anecdotally the the line I have
heard from a lot of leaders is not
revenue focused. It is cost focused. And
so if she is correct and she actually is
highlighting a trend that they're not
talking about, but they're acting out, I
think that's interesting. I do also
partly disagree with this slide. I think
customer service is a cost center at
most businesses. So that's probably a
little bit disingenuous. Margins are
definitely a cost center. Um, marketing
spend effectiveness is typically a cost
center, and production output, I don't
know that that's revenue either. So I
can also beef about the graph. The point
is, I think whether it's topline or
bottom line is a really relevant
question to talk about more, and we
probably don't do that enough.
Enterprise AI focus, global CMOs. Here
we are, jumping to marketing, right?
What are they doing? They're fully
implemented? Not so much. Running initial
tests? Yes. This is changing so fast.
Like, I think that you actually have changes
in this even since the end of 2024 that
are quite substantial. Like, I would have
expected this graph to be like up here.
Uh, and then since we have a bunch of
case studies, we're going to run these
super fast. Bank of America, Erica
virtual assistant; JP Morgan, end-to-end
AI modernization. Uh you can just see them
sort of scaling that
up. And you can pause on these slides if
you want to. Uh Kaiser Permanente,
multimodal ambient AI, basically a
notetaker. Let's just be honest. Scaling
up uh at their scale. Uh Yum Brands uh
enterprise AI adoption. She's basically
calling out how fast this is, right?
Restaurants using Byte by Yum. Um,
it gives franchises the option
to optimize their kitchen, right? I
don't even know what it does, but
apparently it's an AI thing, which is
something you can say
about a lot of things right now. All
right, we can keep moving. Uh,
education, government research. We'll
skip through this one pretty quick as
well. Uh, again, education is one of her
special interests and so she's going to
talk about it. Um, so they have a
tailored ChatGPT model for US
federal agencies. Um she talks about
different
universities. She talks about uh
government sovereign AI partners. Uh
there's definitely work between Nvidia
and ChatGPT in the Middle East as of
the last couple weeks. But this is
one of the reasons by the way why we are
going to have a proliferated AI future.
Look at how many different data centers
we have already.
Um, we're going to have more. ChatGPT
will be all over the place, and it is to
be expected that we will have multiple
AIs that are very highly capable. As a
result, FDA-approved AI medical devices
are scaling very rapidly. I would expect
this to scale even faster in 2025. Um,
and we will have to see how that sort of
nets out. But I think that the approval
pipeline has gone very very fast and
it's getting faster at least by FDA
standards. Research uh 30 to 80%
reduction in medical R&D timelines which
is wild. This one doesn't get publicized
enough. We're going to see a lot more
like this. We'll probably see more gains
here. More of the push to 80%. Okay. Uh
rising rapidly across age groups. What
she doesn't get at, by the way, yes,
usage is
rising, but the difference in estimates
versus what Pew surveys say is notable.
And Pew is also the survey company
that has been
reporting the most pessimism about AI
usage among American adults. And
so I'm a little bit
shocked that we see this big a
disparity. I guess people lie on surveys
all the time. Um, and I think she
doesn't do us a service by not calling
out how pessimistic Pew survey
respondents are about AI. And I think
that's definitely a
factor. Again, quantitatively, people
are using AI a ton, whatever they say.
Like, you can actually see this. Um, and
it is really hard to get minutes out of
someone's day. So, being able to carve
out 20 minutes from someone's day on
average across millions of users is
huge. And it's inflecting up. This is
inflecting up with better models. It's
inflecting up with images,
etc. Engagement, growth in sessions.
It's getting better, right? Like to no
one's surprise. Retention,
uh, it's definitely improving. I think
that they're offering more utility, so
it's getting better. Uh, Google search
retention is the gold standard here,
right? Like, and that's stable as a
table. But the fact that they're scaling
up and you can expect them to start to
catch Google in the next year or two if
they keep scaling is interesting. And I
just want to call out it's stable as a
table, but it might be a little bit
bigger here than it is here. And that
may be that erosion that Google gets so
stressed about due to AI
potentially. AI chatbots at work. Uh
what are people saying? It improves
their quality and allows them to do more
things more quickly. I don't think this
is super informative. AI chatbots at
school. I mean, I don't know about you,
but my kid has ChatGPT on the school
laptop, and I have some mixed feelings
about that choice. Uh, and we have to
talk about whether you can actually
think critically or whether you're just
using it to do your homework. Uh in this
case uh she's calling out adults uh
which is different um than children but
still, the "are we actually going to
learn anything in school" question has
never been more prominent, and we don't
have a good answer to that. They're
going back to like blue books and
pencils. Usage expansion. Um, so deep
research, deep search, blah
blah
blah. She calls it automating
specialized knowledge work. I think it's
narrower than that. I think it's
basically I need to do a lot of
reasoning across the internet. Can you
help me? And there are a lot of tools
for that
now. Chat responses to doing work.
Basically, she's trying to get investors
to realize there's actual utility here.
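The "reasoning across the internet" pattern described above can be sketched as a simple plan-search-summarize loop. This is purely illustrative: `web_search` and `llm` are hypothetical stand-ins, stubbed out here so the control flow is visible, not any real deep-research API.

```python
# Minimal sketch of a "deep research" loop: gather results, ask the model
# for a follow-up query, repeat, then reason over everything collected.
# web_search() and llm() are hypothetical stubs, not real APIs.

def web_search(query):
    # Stand-in for a real search API call; returns snippet strings.
    return [f"snippet about {query}"]

def llm(prompt):
    # Stand-in for a real model call; echoes a truncated prompt.
    return f"answer based on: {prompt[:60]}..."

def deep_research(question, rounds=2):
    notes = []
    query = question
    for _ in range(rounds):
        notes.extend(web_search(query))
        # Ask the "model" what to search next, given notes so far.
        query = llm(f"Next query for '{question}' given notes: {notes}")
    # Final reasoning pass over everything gathered.
    return llm(f"Answer '{question}' using notes: {notes}")

print(deep_research("novel targets for existing drugs"))
```

The point of the sketch is the shape, not the stubs: the value of these tools is the iteration between searching and reasoning, which is exactly why "a lot of reasoning across the internet" is a narrower claim than "automating specialized knowledge work."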
And so, she calls out agent growth. But this
is interest. I think it's really
telling that it's interest, because
right now, interest way outweighs the
ability of someone to execute on the
agent side of things. It's really
fascinating. AI agent deployments. Uh
you do see some, right? Like, these are
agent deployments in ops software. I would
say Operator got much better in the last
week and a half when o3 was released, but
by and large these are very much sketchy
implementations,
um, and they do not work as well as
advertised. And I think it's a little bit
disingenuous to call these the flagship
agents when really, you know, scaled-up AI
companies that are using ElevenLabs agents
or using a custom Llama agent would be a
more interesting way to illustrate the
workflows you can do, like dispatch or
inventory or customer
success. AGI. What is AGI? Um, you know,
Sam says that we can build AGI. We'll
see if he's
right.
And the thing that she calls out is
like this is a phase shift in capability
and how does it reshape things? And I
think that's actually a fairly mature
way to look at it.
We will see how far we get. But if it's
even a part of the way toward AGI, we
are still going to see some of that
phase shift in capabilities, and it will
reshape how, you know, we think about
work and decision-making in institutions. And
so she gets into that a little bit. Um,
and then she jumps right into how we
build it, right? Capex, big tech
companies. Um, it has been coming for a
while. You see that same inflection
point in 2019 that you do on the Nvidia
graph with developers, which I think is
super
interesting. You see just a linear graph
up in terms of cloud revenue. Uh, people
just keep making a lot of money
on cloud. Uh, and I think what's
interesting is yes, you have AWS, you
have Microsoft, uh but you have some new
players coming in too. Uh IBM cloud is
nothing to sneeze at now. It was quite
small in 2016, but it's starting to come
in. Alibaba Cloud was becoming relevant
here, but it's had
trouble scaling up past that, and we have
newer players like Oracle Cloud, which is
becoming more serious since 2020. And so
it's really becoming a multi-cloud
world. One of the things that I think
is a good anecdote is that CEOs
anecdotally are saying that they are
willing to jump to cloud when they
weren't before because of the power of
AI, and so that's something to keep in
mind when you look at the cloud-AI
relationship. All right, capex spend, big
companies. It's up and to the right.
Right. Model training data
set size is going up and to the right,
which you can see, and which we've talked
about before. Capex spend is going up
and to the right. Basically, I think her
implication is that as training data
sets explode, you have to spend more on
stuff, which is kind of true, but also
serving models is different from
training them, and that doesn't
necessarily get caught
here. Capex spend continues to grow. I
would expect it to continue to grow.
This is the put money in Jensen's pocket
fund. Like this is how this works. Um
cloud versus AI patterns, the initial
cloud infra buildout versus the AI infra
buildout. I think this is one of the
more useful slides. I think she's
roughly correct here. But what's
interesting is again that inflection
point is 2019. It was long before
ChatGPT. It was when we called things
machine learning. And I think it's
fascinating that you can actually see a
lot of these trend lines rooted there,
even though the public paid attention
here. Tech capex
spend, material improvements in GPU
performance. Basically everything's
gotten vastly better. Like, data center
power use is down 43% over 8 years,
leading to 50,000x greater per-unit
energy efficiency. You do not hear that
a ton in the headlines about power
usage. And I'm not saying power usage
isn't relevant, but the fact that we've
gotten 50,000 times more efficient
is a big deal. Uh, we've scaled from
1.3 billion tokens per megawatt-year to
65 trillion tokens per megawatt-year. That's
insane. Nvidia computing power is this
like one of the most gorgeous
exponential curves I've ever seen. It's
just
pretty. Uh, Nvidia global data center
capex is also scaling really, really fast.
R&D is rising. Really, like, how do they
use stuff, right? R&D is a fairly loose
category at these big companies, with
product and engineering salaries going in
here, but it's scaling fast. Um, and
of course, they're loaded with cash.
To no one's surprise, Apple, Nvidia,
Microsoft, Google, Amazon, and Meta make
a ton of money. Um, and they make lots
of money that they then recycle into
investments. So free cash flow just
continues to grow.
Uh, and as they continue to grow,
they're using it for AI compute spend
to train and run AI
models. So, what are they spending it
on? This is one of her larger points,
right? They're spending it on data
centers, right? You can see the data
center scaling up here. That is what
they're doing. Uh, data center growth,
existing capacity, new capacity, just
going super
fast. Um, how big is a data center? I
love this little illustration. It's
very, very large. It fits 418 US homes.
Um, compute is scaling super super fast.
She uses xAI here. Data centers are
electricity guzzlers. She actually
doesn't hide this, which I think is
good. Um, and so she talks about overall
energy consumption. Even though we're
getting vastly more efficient, which is
great, still consuming a lot of energy,
we're still going to need to address
that. Um, one of the things that I think
is interesting is that this has
catalyzed a downstream revolution in
nuclear power in the US where we're now
greenlighting a lot of nuclear power.
The other thing to call out is this is
not just the US. There are other players
here that are scaling up their data
center usage as well. Um, that being
said, like the US is driving a huge
amount of it because we're the center of
the AI revolution right
now. Okay, let's go real quick. I only
have like five more minutes on this cut.
So, uh, we'll get through close to half
the deck. AI model compute costs, uh, AI
model training. We are scaling up
2400x growth over eight years, which is
just insane. Um, so it just gets more
and more expensive, like we're past a
hundred million dollars to train a
model. Are we going to be willing to
keep spending more and more and more at
factors of 10? That's really the
question. Inference costs are falling
through the floor. This is down and to the
right. Um, I love this. If you don't
know what a token is, I think this is a
great definition: about four characters in
English. But fundamentally, the energy
required to generate a token is just
falling through the floor. And this is
what's interesting. If you look for the
true roots of AI, Volta may be it,
because so much of the gain in energy
required per token came at the Volta
generation for Nvidia. When it became
that cheap in 2018, well, a year later,
you suddenly see big machine learning
investments. In a sense, hardware GPU
innovation can take years to unfold.
Here we are, 7 or 8 years
later, starting to see the impact
of Volta across the globe, and no one
uses Volta anymore. It's just that this
innovation was enough to change the unit
economics for AI. One of the untold
stories of
AI. Uh, AI inference costs are 99.7%
lower. Cost efficiencies: this is the
light bulb, which took 75 years; this is
ChatGPT, which took two years to go
down 99%. It's
insane. Um, declining cost and improving
performance. It's like getting a Porsche
for like a 100x cheaper every year. It's
crazy. Um, and so the teraflop
investments, and then the relative IT
cost coming down. Performance is
converging across models as all of these
servers get cheaper, as the
data centers proliferate, as the
techniques to make these models
proliferate. Everything is converging.
This is what I mean about a multi-model
world. Um, I am going to stop there. I
have to run. Uh, so we'll call this part
one. We got through 143 slides in about
an hour.
Well done
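One footnote on the numbers from this half of the deck: the headline figures quoted above are easy to sanity-check with back-of-envelope arithmetic. The per-year multiplier is my own derivation from the cited 2,400x-over-eight-years figure, not a number from the deck itself.

```python
# Back-of-envelope checks on figures cited from the deck.

# Token efficiency: 1.3 billion -> 65 trillion tokens per megawatt-year.
gain = 65e12 / 1.3e9
print(f"energy efficiency gain: {gain:,.0f}x")  # ~50,000x, matching the slide

# Training compute: 2,400x growth over eight years implies roughly
# a 2.6x multiplier per year, compounded.
annual = 2400 ** (1 / 8)
print(f"annual training-compute multiplier: {annual:.2f}x")

# Inference cost down 99.7% means paying 0.3% of the old price,
# i.e. about a 333x reduction.
reduction = 1 / (1 - 0.997)
print(f"inference cost reduction: ~{reduction:.0f}x")
```

So the "50,000x more efficient" and "99.7% cheaper" claims are internally consistent with the raw numbers she cites, which is worth knowing before quoting either one.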