
Deep Dive into Mary Meeker's AI Trends Deck

Key Points

  • The video is a detailed 41‑minute walkthrough of Mary Meeker’s 340‑page “AI Trends” deck, which she released after years of focusing on VC investments rather than public trend reports.
  • Meeker’s deck aims to synthesize disparate data points into a cohesive narrative about AI, structuring the material around rapid AI adoption, compute demand, usage, cost, monetization, robotics, and the broader global competitive landscape.
  • A key insight highlighted is that AI adoption is accelerating faster than the internet rollout ever did, driven heavily by developers within the Nvidia ecosystem and the proliferation of tools like ChatGPT.
  • While praising the deck’s organization, the presenter signals disagreement with Meeker’s framing of global competition and promises to explore why that perspective may be flawed.


# Deep Dive into Mary Meeker's AI Trends Deck

**Source:** [https://www.youtube.com/watch?v=_g1LFxC31EY](https://www.youtube.com/watch?v=_g1LFxC31EY)
**Duration:** 00:41:13

## Sections

- [00:00:00](https://www.youtube.com/watch?v=_g1LFxC31EY&t=0s) **Deep Dive into Mary Meeker's AI Deck** - The presenter announces an hour-long, detailed walkthrough of Mary Meeker's newly released 340-page AI trends deck, explaining its background, purpose, and scope.
- [00:03:33](https://www.youtube.com/watch?v=_g1LFxC31EY&t=213s) **Rising Compute Costs vs Global AI Adoption** - The speaker contrasts falling AI prices with soaring compute costs, noting that OpenAI's revenue is not keeping pace with its serving losses, while emphasizing rapid worldwide ChatGPT adoption driven by broadband-enabled mobile access.
- [00:07:29](https://www.youtube.com/watch?v=_g1LFxC31EY&t=449s) **Incumbents Aggressively Shaping AI Landscape** - The speaker argues that major tech firms' swift adoption of AI curtails disruption, creating a mixed bag of opportunity and uncertainty, and that the near-term risk is less a geopolitical AI-leadership race than a future where many AIs proliferate rather than a single superintelligent monopoly.
- [00:10:47](https://www.youtube.com/watch?v=_g1LFxC31EY&t=647s) **AI Scaling Limits and Future Impact** - The speaker likens the AI era to an intensified internet, discusses the difficulty of measuring its cross-industry utility, questions the sustainability of ever-larger training data sets, and anticipates rapid hardware advances that could dramatically reshape AI development.
- [00:14:26](https://www.youtube.com/watch?v=_g1LFxC31EY&t=866s) **ChatGPT's Growth Challenging Google** - The speaker highlights ChatGPT's soaring user base and revenue, predicts it will reach a billion users and rival Google's search dominance, and traces AI's evolution from the printing press to Turing and Kasparov.
- [00:17:32](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1052s) **AI Incremental Improvement and Alignment Risks** - The speaker likens AI advancement to filmmakers' evolving craft, questions how an AI released into the wild would autonomously improve and stay aligned, and notes that many capabilities projected for 2035, such as scientific hypothesis generation and immersive world-building, are already emerging today.
- [00:20:42](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1242s) **AI Revolution Evident in Patents** - The speaker emphasizes a sharp rise in AI-driven computing patents, rapid model advances such as GPT-4.5, high-quality generated images and audio, and the transition toward more powerful yet costly AI systems marking the start of a new technological era.
- [00:23:56](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1436s) **Explosive AI User Adoption** - The speaker highlights how ChatGPT's rapid climb to 100 million users, driven by cheap tokens, ubiquitous bandwidth, and a surge of AI tools and startups, outpaces the adoption timelines of past foundational technologies.
- [00:28:20](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1700s) **AI Expansion Across Sectors** - A rapid-fire overview of how AI, from restaurant-optimization tools to government-tailored models and accelerating FDA-approved medical devices, is proliferating across industry, education, and research.
- [00:32:51](https://www.youtube.com/watch?v=_g1LFxC31EY&t=1971s) **Assessing AI Agent Deployments** - The speaker critiques superficial AI-agent implementations, argues that true utility comes from scalable, custom agents that could trigger a phase shift toward AGI, and explores how this shift might transform work and institutional decision-making.
- [00:35:55](https://www.youtube.com/watch?v=_g1LFxC31EY&t=2155s) **AI Compute Efficiency Explosion** - The speaker highlights the rapid post-2019 improvements in AI hardware, showing exponential GPU performance, massive energy-efficiency gains, and soaring Nvidia and cloud capex that have turned AI compute into a vastly more powerful and cost-effective resource.
- [00:39:02](https://www.youtube.com/watch?v=_g1LFxC31EY&t=2342s) **AI Compute Costs: Scaling vs Efficiency** - The speaker contrasts exponential growth in AI training expenses with dramatic declines in inference costs, crediting Nvidia's Volta GPU breakthrough with reshaping AI unit economics.

## Full Transcript
Okay, you guys asked for it. I'm going to record a full walkthrough of Mary Meeker's "Trends in Artificial Intelligence" deck, which she released four days ago, and we're just going to go piece by piece. We're going to talk about AI, and I hope you enjoy the deep dive. This is going to be a much longer video than I usually do; I would estimate it at over an hour. So dig in, grab that coffee.

All right, first up, this is Mary Meeker's VC firm. Mary Meeker is a general partner, and she is an investor now, and that is really the core reason why she has not been doing these trends-in-artificial-intelligence decks earlier. She did not do one in 2023. She did not do one in 2024. Before that it was "Internet Trends," and the last one of those was back in 2019. Her investment activities have been front and center for her, and it's notable that she's taking time aside from that to effectively brief the industry with an incredible piece of AI intelligence.

And so this is her setting out the context. I generally don't read these 10-point fonts, but it's Mary, so I'll give you the TL;DR here. I think the key thing is that Mary is looking at this as a collective effort. So, 340 pages: she's trying to connect, as she says, several disparate data points, and she didn't expect it to get this big either, right? It turned into a beast, and she wants to find a way to make sense of it. This is her attempt to set the narrative for AI as a whole. I think this is one of the most interesting things, and we'll get to it later in the hour: I don't know that I agree with Mary on the global competition frame, and we'll get into why.

All right. Here's the outline. We are literally going to go through the whole thing, so just sit tight. I actually really like the way she structured this.
If anyone is wondering how you structure a gigantic deck so it feels understandable, well, this is one way to do it, right? The overall takeaway is that change is happening faster than ever. Then AI user and capex growth, the headlines we tend to see; what is driving that, compute; how this starts to translate into the market, where we get into usage, cost, and growth; we get into monetization; we get into physical AI and robotics; and then finally we get into global trends and work evolution at the end. That's a pretty good organization.

All right, let's jump right in. This is probably the slide that she expects people to stop at and go, "Wow." So we have developers, which is a theme I didn't expect Mary to get into; she comes back to developers a fair bit. Developers in Nvidia's ecosystem: you can just read this as Nvidia. And this is ChatGPT. I don't know why she's being coy about it, because she says it in exactly those words later on.

AI user, usage, and capex growth. The key here, and this is something that shows up over and over again in these graphs and others I've seen, is that we are seeing faster uptake of AI than we ever saw of the internet, across so many different metrics. And that's one of the reasons the space is so exciting right now.

All right. AI usage and capex growth: companies are building a lot in AI. This is my surprised face. Okay, so when you think about that, one of the things I want to call out is that basically the story of AI is the story of charts that are up and to the right, and charts that are down and to the right. The up-and-to-the-right ones we know about, right? User growth is the popular one, capex growth, etc. The down-and-to-the-right one is how cheap AI is becoming.
Like, that's just a straight vertical cliff. Meanwhile, compute expenses are skyrocketing, so in this case down is worse: down is spending more money on compute. And one of the things that Mary is essentially calling out is that ChatGPT's revenue is scaling, but not as fast as its losses are scaling from serving all that compute. There is a disconnect, and they're going to have to figure out how to close it; that's something we'll get into later on.

And so in a sense you can see this as the story of those two trend lines, right? The things that are down and the things that are up. We'll follow that through the deck.

One of the things that I want to call out is that we have a tremendous global uptake here that is powered off the back of the internet. Part of why we are adopting AI so much faster is that broadband internet is available all over the planet, and the form factor of the primary app, ChatGPT, is text-heavy first, then images, and it runs on a cell phone. In much of the world (India, sub-Saharan Africa, South Asia, the Middle East and North Africa, Latin America and the Caribbean), if you have a cell phone, you're online and you can get ChatGPT. So it's actually relatively easy to stack up adoption into a very, very large overall total.

And I will say, I did not realize that ChatGPT app users in North America, that blue section there, were such a small proportion of overall users. If you had tapped me on the shoulder and said, "Nate, out of 800 million ChatGPT users, how many are in the US?" you know what I would have guessed? 150 million. Maybe half the country. Nope. Apparently it's a lot smaller than that. And they're stacking stuff up in Europe.
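That revenue-versus-serving-cost disconnect is easy to sketch as a toy model. Every number below is hypothetical (the starting figures and growth rates are invented for illustration, not taken from the deck); the point is only the shape of the problem: when costs compound faster than revenue, the gap widens even as revenue climbs.

```python
# Toy model of the revenue-vs-serving-cost disconnect (all numbers hypothetical).
# Revenue and compute costs both grow, but costs compound faster, so the
# absolute loss widens even while revenue climbs.

def project(revenue, cost, rev_growth, cost_growth, years):
    """Yield (year, revenue, cost, gap) for each year of compounding growth."""
    for year in range(years + 1):
        yield year, revenue, cost, revenue - cost
        revenue *= 1 + rev_growth
        cost *= 1 + cost_growth

# Hypothetical starting point: $4B revenue, $6B compute spend,
# revenue growing 80%/yr, compute costs growing 120%/yr.
for year, rev, cost, gap in project(4.0, 6.0, 0.80, 1.20, 3):
    print(f"year {year}: revenue ${rev:.1f}B, compute ${cost:.1f}B, gap ${gap:.1f}B")
```

Under these made-up rates the gap widens every year, which is the dynamic the chart is gesturing at: up and to the right on revenue, down and to the right on margin.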
It's truly a global product.

All right. This is one where, and we'll get into this, I think the framing is just poor. I do not think "IT jobs" is the right frame for tech. That's a categorization issue from the Department of Labor that I just don't think is super helpful. If you look at the trends more broadly, and this chart kind of shows it, "down 9%" is a little bit disingenuous, because it's basically flat versus 2018. We had the massive ZIRP-era run-up in all tech jobs, and now you have a run-up in AI. The interest-rate story kind of gets left behind.

Okay, this is her "the world is changing very fast" section. She goes back and calls out some of the companies she was early on, like Google. And one of the things she says here that I think is really interesting is that we have a change in how work gets done, how capital is deployed, and how leadership is defined. The leadership one feels a little squishy, but fundamentally I think Mary is correct. I think she describes AI as a meta-technology: a technology that enables us to do a lot of other things more effectively, including use the internet, including make it easy to do business everywhere, including organize the world's information. You can see the efforts of these major companies as essentially efforts to optimize against a meta-technology that makes it easier to fulfill their missions.

And so one of the things that pops out to me as I start to get into this deck is that the overall trend line we saw in that initial slide is part of what is driving incumbents in this wave to be so aggressive about keeping up with the trends.
I think there are arguably fewer opportunities for disruption, because these companies, like Google, like Facebook, like Microsoft, have been so aggressive about keeping up with the trends. I think Mary calls this out later, but Microsoft's investment in OpenAI was a significant moment in this overall AI revolution.

Okay. So if we move forward: basically, it's the best of times, it's the worst of times. It's very uncertain. And one of the things that's really interesting is that she specifically calls out the geopolitical risk factors right at the top, when I would say that's not the most interesting or compelling risk right away. I think a geopolitical leadership race for AI, or an AI leadership race that begets geopolitical risk, only transpires if we buy the idea that just scaling more intelligence, which is what we see evidence of today, is enough to get us to a superintelligence scenario where one company in one country has found superintelligence and quickly evolves to the point where nobody else can catch up. Even Sam Altman no longer thinks that's the most likely outcome. We live in a proliferating world. We're going to get all the AIs. So in a sense I think it's not so much global AI leadership as it is a global crowd of AIs. We're going to get lots of AIs from everywhere, very fast, because proliferation of this technology is so easy.

All right. So now, with her size-10-font introduction behind us, we get into the numbers behind the momentum. She starts off with a very investor-y slide here, right? Global GDP on a log scale. So you can actually... I think it's a log scale.
It runs from 500 billion to 100 trillion, so it's roughly a log scale: 1 trillion to 10 trillion to 100 trillion. At the end of the day, basically what you see is this gigantic pop up, and I think what she's trying to say is that maybe we get the next leg of this from AI. We will see. Certainly a very investor way to start. I kind of roll my eyes until we see actual productivity growth, which is one of the things that has been elusive for the internet. We see GDP growth alongside the internet, but it's hard to prove positively how the internet affected GDP growth, and productivity growth in particular. You can see internet companies generating revenue, but proving that productivity increases drove that GDP growth? That's a little sketchier. It's one of the notorious loopholes in technology: it's very difficult for a generally available technology to demonstrate its utility across enough different industries, in a measurable enough way, for us to see the impact clearly. So basically, it may be tremendously impactful and it may be hard to see, like the internet. And that's essentially what she's suggesting: the AI era is like the internet on steroids. She called out mobile internet before; now she's calling out the AI era, with tens of billions of units. She's talking about GPUs here, but I'm looking at Jony Ive and where OpenAI is going with devices, and it feels like that's coming pretty fast. So I would not be surprised if this is an entirely different-looking slide a year from now, when the devices that OpenAI is working on are released.

All right, we keep going. Training data set size. This is super controversial.
Yes, it's been scaling really fast since 2010. I think the question is: can we keep scaling from here? Is there a limit? We are at about 1e13 on the number of words in the data set. Does it make sense to keep going up? Is there more to it? Ilya Sutskever gave a talk at NeurIPS last year basically saying data is the new oil: we don't have an infinite amount of data left, we're going to run out of data for pre-training, and we'll need to find other scaling laws. And then he went off and founded, I think, Safe Superintelligence, and has done nothing public since then. But this is a real question, and I think it's a little disingenuous to portray this as up and to the right without mentioning that there's a huge question mark up here about how we handle it.

Then training compute, in FLOPs. How do you keep increasing training compute? There's definitely an inflection point, and whether you can continue to scale on a log scale like Mary is talking about is an interesting question. We've inflected up, and Jensen Huang has done incredible things with Nvidia and chips; he's selling lots and lots of them. Is there an upper limit here or not? That's another question mark.

Next: 200% annual growth, over nine years, of compute gains from better algorithms has led to tremendous gains in AI intelligence. It's also led to gains in effective compute. Basically, you can scale compute, and you can scale algorithmic progress. The thing I take away here is that this is one of the clearest graphs I've ever seen of the difference between scaling algorithms and scaling just raw compute.
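That compute-versus-algorithms distinction falls out of simple compounding arithmetic. Treating the deck's headline figures as annual growth rates (150% growth in raw performance is 2.5x per year, 200% growth from better algorithms is 3x per year), and taking those rates as illustrative rather than precise, "effective compute" is the product of the two factors:

```python
# Effective compute = raw compute x algorithmic efficiency.
# Growth rates below are taken loosely from the deck's headline figures and
# treated as illustrative: 150%/yr raw performance growth (2.5x per year)
# and 200%/yr gains from better algorithms (3x per year).

def effective_compute_multiple(raw_growth, algo_growth, years):
    """Total effective-compute multiple after `years` of compounding both factors."""
    return ((1 + raw_growth) * (1 + algo_growth)) ** years

years = 5
raw = effective_compute_multiple(1.5, 0.0, years)    # compute scaling alone
algo = effective_compute_multiple(0.0, 2.0, years)   # algorithmic progress alone
both = effective_compute_multiple(1.5, 2.0, years)   # compounded together

print(f"over {years} years: raw {raw:.0f}x, algorithms {algo:.0f}x, combined {both:,.0f}x")
```

Either factor alone gives a large multiple over five years, but the combined multiple is their product, which is why algorithmic progress matters so much even in a world of booming capex.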
And I think one of the big sources of gain over the next five to ten years in AI is not necessarily going to be just compute scaling or pre-training data. It'll be algorithms. It'll be how we use what's on the chip to deliver better answers.

All right. 150% annual growth over six years of performance gains looks very impressive, and it is very impressive. This is all supercomputers. I'm not going to spend a lot of time here; these graphs are all up and to the right, and we have 340 of these to get through. Number of new large-scale models: this is absolutely explosive. This is illustrating proliferation to me. Like, she can't even fit it on the slide.

This is one of the money slides; I think I included it in my Substack. User growth up and to the right. Subscribers and revenue up and to the right. This is the bull case for ChatGPT right here. They are hitting absolute vertical on 800 million users, and I would expect them to hit a billion by the end of the year. Thanks to the power of an internet that's already available, they are very, very rapidly hitting the same search mark that Google took 11 years to hit. Does that mean they're going to actually eat Google? Sam Altman said he didn't think so. He thinks it's going to be different; he doesn't see a case where Google really disappears. I think it's going to be really interesting to see: are these high-intent searches? How does Fidji build ads at OpenAI? Those are really open questions.

Okay, now we have knowledge distribution over six centuries. This is deep in the weeds. We're going to go fast here because, again, 340 slides.
So, basically: the printing press, then we jump several hundred years to the start of the internet, which really did look like this, for you young people out there, and then we jump straight to ChatGPT in 2022. And it was not that smart, guys. I don't know if you've forgotten, but it was kind of dumb to start with. I remember thinking, oh, this is just going to be used for marketing copy.

And then we get into the story of AI, all the way back to Alan Turing in 1950. If you want to pause on this screen and look at it, it's fantastic. I remember the Kasparov moment; that was a moment for me, when Deep Blue won. And now everything compresses. These are all 2023, 2024, early 2025. And there's been more happening since she produced this deck: Claude hit a $3 billion run rate.

Okay, we keep moving. We're 10% of the way through the deck, guys. Things that ChatGPT can do. I think this is going to be pretty obvious to most of my audience. Do we know that it does PDFs? Does it write code? Does it prep for interviews? Nobody who's been listening to this channel is surprised by this. Maybe the investors are, when they look at this deck.

AI circa 2030: the top 10 things AI will likely do. It claims generating human-level text. It claims creating full-length films. It claims understanding and speaking like a human. Here's the thing I want to call out: there's a massive gulf between these two slides. It's a gulf not just in terms of assembling new tokens, but in terms of several breakthroughs we haven't seen yet that I don't think Mary does a good enough job teasing out. And maybe that is just not in scope for this deck, because it's a phenomenal deck overall.
One of them is how these systems adaptively learn in the wild; how they handle context; how they handle memory; how they handle intent over time. I could go on and on. There are a lot of things that are native to humans, especially adult humans who have been educated for information work or even creative work, that are not intuitive to an AI. She calls out full-length films. So, for example, Christopher Nolan gets better every time he makes a full-length film. Steven Spielberg arguably gets better, and has been getting better for decades, every time he makes a full-length film. But it is not clear how an AI in the wild would adaptively get better after being released, and if it did, what that would mean in terms of how quickly it would improve. Those are all very unanswered questions, and they raise alignment questions too: would the AI stay aligned if that were the case? So I think in a sense this slide illustrates possibilities. It illustrates the dreams of a major model maker. That's different from saying it's a step function and you just run up it.

Ironically, some of the stuff listed here for 2035 we already have examples of today. AlphaEvolve conducts scientific research; it generates hypotheses. Some of these systems run simulations. So this is not really new. And in a sense this list feels a little mixed up: building immersive virtual worlds, for instance, I think we're actually quite close on, and that feels much, much easier than a full-length film. So I think this is an area where it would be helpful to get a little bit of a spot check.
All right, now we're back to the graphs, where I think this is a stronger deck. So, machine learning models are absolutely trending up, to no one's surprise. And she has an interest in academia anyway; she wrote one of her rarer, less widely circulated decks on AI in educational institutions. So when she references Stanford and talks about the academia era, I think that's something that has been an interest of hers for a long time. She wants you to know that she is thinking through how to present the findings from academia in a way that shows scale, and also thinking through how AI needs to circulate back into the academic space. She covers that more later; this is a little bit of a preview. So that slide, I don't know if it landed. This one I think does: developer growth in the Nvidia ecosystem, 6x, just an absolutely stunning gauge. And you know what's interesting? It gapped up around the time we started talking about crypto, in the late ZIRP era, when there was some AI but no ChatGPT release yet. So it's a little disingenuous to present AI as driving this trend, when ChatGPT came out here. It's more accurate to say that the developer ecosystem had been booming for Nvidia for a long time, and arguably set AI up to be a super boom, which I think is an interesting insight that doesn't get reported a ton.

All right, global developers growing; yeah, the US. This is one of the eye-openers for me. This is absolutely driven by AI, 100%.
And when you see that pop, 1995 to 2003, and then this even sharper line, at the end of the day it's indicating we're at the beginning of a new revolution. People are doing new things, and the innovation is really popping. I had no idea that the pop in computing-related patents was this high. This is super exciting.

System performance on benchmark tests. Honestly, she doesn't get into this, but to me it's not really about surpassing the human baseline. I mean, it is; but it's also about the models overfitting and saturating the benchmarks, and she doesn't get into that.

AI performance: GPT-4.5. The irony is that they're rolling this model back. Even though it is as good as she's describing here, I think implicitly because it is so expensive to serve, they're trying to figure out how to serve it in the right way with GPT-5.

All right. AI performance: realistic conversation, Turing tests. We know it passes the Turing test; maybe the investors don't. The images are very good. This should not be a surprise to anybody: the images are very, very good. I love this "real image / AI-generated image" comparison; that's a lovely little reversal that catches people by surprise. Realistic audio: if you've ever tried one of the voices from ElevenLabs, it's incredible, and they absolutely are being used in production settings; you can see that in the scale of how ElevenLabs is getting used. And Spotify is accepting audiobooks AI-translated into 29 languages, which is just dramatically scaling up the impact of AI voice. What she doesn't talk about is Spotify accepting AI-powered music, which is a different conversation.
Emerging applications are accelerating: cancer detection, robotics. She doesn't talk about drug pipelines, but that's another big one; protein folding kind of gets at it a little. Really, drug targets are a new one, where you search through past academic papers and through the chemical structure of a drug to identify novel targets and novel use cases for existing drugs, which can be a simpler path to profitability off of existing drug testing.

AI benefits and risks. And of course it could actually help people; imagine that. Anyway, that aside, she wants to talk about the risks. She has Demis Hassabis, CEO of Google DeepMind and Nobel Prize winner. This is basically the bull thesis for AI, right? First we solve AI, then AI solves everything. He's still very bullish on that; we will see how it goes. I've highlighted some of the concerns I have around context and learning, and we'll follow this deck through and continue to have that chat. Stephen Hawking: what good deck is complete without a Stephen Hawking quote?

AI user, usage, and capex growth. Great. Yes, everything is up and to the right, guys. There's ChatGPT again; there it is beating the internet again. User adoption, by year launched: ChatGPT took roughly 0.2 years to get to 100 million users. Super, super fast. You can see all these other major companies up the side. AI user adoption, with ChatGPT as a proxy, is materially faster and cheaper than other foundational tech products. I mean, honestly, look at the days to reach a million users and the purchase price. So some of this is that it's cheap now.
It's cheap to serve tech, and I think we'll get to that later in the deck. One of the ways we are accelerating adoption is by making tokens very, very cheap across a widely saturated internet-bandwidth environment.

Years to 50% adoption of household tech, AI era: they're guessing three years, because we're not there yet, right? So it is a guess, but it's on track for something very fast, which is, for the record, much faster than desktop internet at 12 years, and the PC at 20 years. Which is just crazy.

Technology ecosystem: number of developers, number of AI startups. This is real; it's absolutely explosive. I think the number I heard was over 70,000 tools, which is bigger than the 27,000 startups, but you get the idea.

Tech incumbent AI adoption: mentions of AI on earnings calls, which is like the dumbest metric, but also real. Look at Meta; Mark just can't get enough. Tech incumbents are AI-focused. I've talked about Sundar and Andy before and how they're calling this out; I think Andy is the one who said this could be bigger than the internet. So the hype is the hype, right? Duolingo, Elon Musk, Roblox, Nvidia: you get the idea. Lots of hype.

Traditional enterprise AI adoption, increasing priority. This is the whole S&P 500 and what they value. Enterprise AI focus: what's interesting here is where generative AI is targeted at large companies, and I think it's interesting that production output, customer success, and sales are the top three. I'm a little surprised that marketing is so low. I would have expected more interest in marketing.
And this is one of the slides that surprised me the most, because anecdotally the line I've heard from a lot of leaders is not revenue-focused, it's cost-focused. So if she's correct, and she's highlighting a trend that leaders aren't talking about but are acting out, that's interesting. I do also partly disagree with this slide. Customer service is a cost center at most businesses, so that framing is probably a little disingenuous. Margins are definitely a cost story, marketing-spend effectiveness is typically a cost story, and production output, I don't know that that's revenue either. So I can also beef with the graph. The point is, whether this is top line or bottom line is a really relevant question, and we probably don't talk about it enough.

Enterprise AI focus, global CMOs: here we are, jumping to marketing. What are they doing? "Fully implemented," not so much "running initial tests." This is changing so fast; I think you've had quite substantial changes even since the end of 2024, and I would have expected this graph to be higher already.

And then, since we have a bunch of case studies, we're going to run through these super fast. Bank of America: the Erica virtual assistant. JPMorgan: end-to-end AI modernization. You can see them scaling that up, and you can pause on these slides if you want to. Kaiser Permanente: multimodal ambient AI, basically a notetaker, let's be honest, scaling up at their scale. Yum Brands: enterprise AI adoption. She's basically calling out how fast this is. Restaurants are using Byte by Yum, which gives franchises the option to optimize their kitchen, right?
I don't even know what it does, but apparently it's an AI thing, which is something you can say about a lot of products right now. All right, we can keep moving.

Education, government, research. We'll skip through this one pretty quickly as well. Again, education is one of her special interests, so she's going to talk about it. There's a tailored ChatGPT model for US federal agencies. She talks about different universities, and about government sovereign-AI partners. There's definitely been work between NVIDIA and OpenAI in the Middle East as of the last couple of weeks. This is one of the reasons, by the way, why we're going to have a proliferated AI future: look at how many different data centers we have already. We're going to have more; ChatGPT will be all over the place, and it's to be expected that we'll have multiple highly capable AIs.

As a result, FDA-approved AI medical devices are scaling very rapidly, and I would expect this to scale even faster in 2025. We'll have to see how that nets out, but the approval pipeline has moved very fast, and it's getting faster, at least by FDA standards. Research: a 30 to 80% reduction in medical R&D timelines, which is wild. This one doesn't get publicized enough. We're going to see a lot more like it, and probably more of the push toward the 80% end.

Okay, usage rising rapidly across age groups. What she doesn't get at, by the way: yes, usage is rising, but the difference between these estimates and what Pew surveys say is notable.
And Pew, by the way, is the survey organization that has been reporting the most pessimism about AI usage among American adults. So I'm a little shocked that we see this big a disparity; I guess people misreport on surveys all the time. I also think she doesn't do us a service by not calling out how pessimistic Pew's respondents are about AI, and that's definitely a factor. Again, quantitatively, people are using AI a ton, whatever they say; you can actually see it in the data. And it is really hard to get minutes out of someone's day, so carving out 20 minutes a day on average across millions of users is huge. And it's inflecting up: with better models, with images, and so on.

Engagement, growth in sessions: getting better, to no one's surprise. Retention: definitely improving. They're offering more utility, so it gets better. Google Search retention is the gold standard here, stable as a table. But the fact that ChatGPT is scaling up, and could start to catch Google in the next year or two if it keeps scaling, is interesting. And I'll just call out that Google is stable as a table, but it might be a little bit bigger at one point on the chart than another, and that small gap may be the erosion Google gets so stressed about, potentially due to AI.

AI chatbots at work: what are people saying? That it improves the quality of their work and lets them do more things more quickly. I don't think this is super informative. AI chatbots at school: I don't know about you, but my kid has ChatGPT on the school laptop, and I have some mixed feelings about that choice.
And we need to have a conversation about whether students can actually think critically, or whether they're just using it to do their homework. In this case she's calling out adults, which is different from children, but still, the "are we actually going to learn anything in school?" question has never been more prominent, and we don't have a good answer. Schools are going back to blue books and pencils.

Usage expansion: deep research, deep search, and so on. She calls it "automating specialized knowledge work." I think it's narrower than that. It's basically "I need to do a lot of reasoning across the internet; can you help me?", and there are a lot of tools for that now.

From chat responses to doing work: basically, she's trying to get investors to realize there's actual utility here, so she calls out agent growth. Notably, the metric is interest, and I think that's telling, because interest currently far outweighs most people's ability to actually execute on agents. It's really fascinating. AI agent deployments: you do see some, like the agent deployments in ops software. I would say Operator got much better in the last week and a half, when o3 was released, but by and large these are sketchy implementations that do not work as well as advertised. And I think it's a little disingenuous to call these the flagship agents, when scaled-up AI companies using ElevenLabs agents or a custom Llama agent would be a more interesting way to illustrate the workflows you can actually run, like dispatch, inventory, or customer success.

AGI: what is AGI? Sam says we can build it. We'll see if he's right.
The thing she calls out is that this is a phase shift in capability, and the question is how it reshapes things. I think that's actually a fairly mature way to look at it. We'll see how far we get, but even if we only make it part of the way toward AGI, we're still going to see some of that phase shift in capabilities, and it will reshape how we think about work and decision-making in institutions. She gets into that a little.

Then she jumps right into how we build it: capex at the big tech companies. It has been coming for a while. You see that same inflection point in 2019 that you see on the Nvidia developer graph, which I think is super interesting. Cloud revenue is just a straight line up; people keep making a lot of money on cloud. What's interesting is that yes, you have AWS and Microsoft, but there are new players coming in too. IBM Cloud is nothing to sneeze at now; it was quite small in 2016, but it's coming on. Alibaba Cloud was becoming relevant but has had trouble scaling past a point, and newer players like Oracle Cloud have become more serious since 2020. So it's really becoming a multi-cloud world. One good anecdote: CEOs are saying they're now willing to jump to the cloud when they weren't before, because of the power of AI. Keep that in mind when you look at the cloud-AI relationship.

All right, capex spend at big companies: up and to the right. Model training dataset sizes are going up and to the right, which we've talked about before.
Capex spend is going up and to the right. I think her implication is that as training datasets explode, you have to spend more on infrastructure, which is kind of true, but serving models is different from training on data, and that distinction doesn't necessarily get captured here. Capex spend continues to grow, and I'd expect it to keep growing. This is the put-money-in-Jensen's-pocket fund; that's how this works.

Cloud versus AI patterns: the initial cloud infra buildout versus the AI infra buildout. I think this is one of the more useful slides, and she's roughly correct here. What's interesting is, again, that the inflection point is 2019. That was long before ChatGPT, back when we called things machine learning, and it's fascinating that so many of these trend lines are rooted there even though the public only started paying attention later.

Tech capex spend, material improvements in GPU performance: basically everything has gotten vastly better. Data center power use per unit of compute is down 43% over eight years, contributing to roughly 50,000x greater per-unit energy efficiency. You do not hear that much in the headlines about power usage, and I'm not saying power usage isn't relevant, but getting 50,000 times more efficient is a big deal. We've scaled from 1.3 billion tokens per megawatt-year to 65 trillion tokens per megawatt-year. That's insane. The Nvidia computing-power chart is one of the most gorgeous exponential curves I've ever seen; it's just pretty. Nvidia and global data center capex are also scaling really, really fast. And R&D is rising. How do they actually use the money? R&D is a fairly loose category at these big companies (product and engineering salaries land there), but it's scaling fast.
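Those two token-throughput numbers line up with the headline efficiency claim. A minimal sanity check, using only the figures quoted from the deck (the "then"/"now" labels are mine; the deck simply contrasts two points eight years apart):

```python
# Cross-check the deck's efficiency figures: tokens served per
# megawatt-year of data center energy, before and after eight years
# of GPU improvements.
tokens_per_mw_year_then = 1.3e9   # 1.3 billion tokens per MW-year
tokens_per_mw_year_now = 65e12    # 65 trillion tokens per MW-year

improvement = tokens_per_mw_year_now / tokens_per_mw_year_then
print(f"{improvement:,.0f}x more tokens per unit of energy")
# prints: 50,000x more tokens per unit of energy
```

So the "50,000x greater per-unit energy efficiency" figure is exactly the ratio of those two throughput numbers.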
And of course they're loaded with cash. To no one's surprise, Apple, Nvidia, Microsoft, Google, Amazon, and Meta make a ton of money, money they then recycle into investments. So free cash flow just continues to grow, and as it grows, they're putting it into AI compute to train and run AI models.

So what are they spending it on? This is one of her larger points: data centers. You can see data center spend scaling up; that is what they're doing. Data center growth, existing capacity plus new capacity, moving super fast. How big is a data center? I love this little illustration: it's very, very large; it fits 418 US homes. Compute is scaling super fast; she uses xAI as the example here. Data centers are electricity guzzlers, and she doesn't hide that, which I think is good. So she talks about overall energy consumption: even though we're getting vastly more efficient, which is great, we're still consuming a lot of energy, and we're still going to need to address that. One interesting knock-on effect is that this has catalyzed a downstream revolution in nuclear power in the US, where we're now greenlighting a lot of nuclear projects. The other thing to call out: this is not just the US; other players are scaling up their data center usage too. That said, the US is driving a huge amount of it, because we're the center of the AI revolution right now.

Okay, let's go real quick; I only have about five more minutes on this cut, so we'll get through close to half the deck. AI model compute costs; AI model training.
We are scaling up: 2,400x growth in training compute over eight years, which is just insane. It just gets more and more expensive; we're past a hundred million dollars to train a frontier model. Are we going to be willing to keep spending more and more, at factors of ten? That's really the question. Inference costs, meanwhile, are falling through the floor: down and to the right. I love this. If you don't know what a token is, this slide has a great definition: roughly four characters of English text. Fundamentally, the energy required to generate a token is collapsing.

And this is what's interesting. If you look for the true roots of AI, Volta may be it, because so much of the gain in energy per token came with Nvidia's Volta generation. When tokens became that cheap in 2018, well, a year later you suddenly see big machine-learning investments. In a sense, hardware GPU innovation can take years to unfold. Here we are seven or eight years later, seeing the impact of Volta across the globe, and nobody even uses Volta anymore. That one innovation was enough to change the unit economics of AI. It's one of the untold stories of AI.

AI inference costs are 99.7% lower. Cost efficiencies: the light bulb took 75 years to get that kind of cost decline; ChatGPT took about two years to fall 99%. It's insane. Declining cost and improving performance: it's like getting a Porsche 100x cheaper every year. Teraflop cost and relative IT cost keep coming down, and performance is converging across models. As servers get cheaper, as data centers proliferate, as the techniques behind these models proliferate, everything is converging.
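Two of the cost figures above can be cross-checked with simple arithmetic, using only the numbers quoted from the deck: 2,400x training-compute growth over eight years implies roughly 2.6x growth per year compounded, and a 99.7% drop in inference cost means each token costs roughly 1/333rd of what it used to.

```python
# Training compute: 2,400x total growth over eight years,
# converted to an equivalent compounded annual growth factor.
annual_factor = 2400 ** (1 / 8)
print(f"training compute grows ~{annual_factor:.1f}x per year")  # ~2.6x

# Inference: a 99.7% cost reduction expressed as a price multiple.
cheaper = 1 / (1 - 0.997)
print(f"each token is ~{cheaper:.0f}x cheaper")  # ~333x
```

That ~2.6x-per-year compounding is why "are we willing to keep spending at factors of ten?" is the right question: at that rate, training spend crosses another order of magnitude roughly every two and a half years.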
This is what I mean about a multi-model world. I'm going to stop there; I have to run. So we'll call this part one. We got through 143 slides in about an hour. Well done.