Learning Library


Nvidia Licenses Groq in Acqui-Hire

Key Points

  • Groq (with a Q) announced a non-exclusive licensing deal with Nvidia for its inference-on-chip technology while keeping the company independent under new CEO Simon Edwards.
  • As part of the agreement, Groq's founder Jonathan Ross, president Sunny Madra, and several key engineers will move to Nvidia in an "acqui-hire," effectively transferring the team's expertise without a formal change of control.
  • This hybrid structure blends a technology license with a talent acquisition, resembling a "brain transplant" rather than a traditional outright acquisition.
  • Analysts note the deal reflects a growing industry trend where large AI firms use licensing and acqui-hire arrangements to absorb startup capabilities while preserving the startup's corporate shell.
  • The new model reshapes the concept of an "exit" for startups and employees, as equity triggers and conventional acquisition payouts may no longer apply.


# Nvidia Licenses Groq in Acqui-Hire

**Source:** [https://www.youtube.com/watch?v=BRXGDCBSARY](https://www.youtube.com/watch?v=BRXGDCBSARY)
**Duration:** 00:25:39

## Sections

- [00:00:00](https://www.youtube.com/watch?v=BRXGDCBSARY&t=0s) **Nvidia's Unconventional Groq Deal** - Nvidia secures Groq's inference-on-chip technology via a non-exclusive licensing pact and talent hire, a complex arrangement, not a straight acquisition, that could reshape the AI hardware landscape heading into 2026.
- [00:04:15](https://www.youtube.com/watch?v=BRXGDCBSARY&t=255s) **Memory Bandwidth Limits AI Speed** - The speaker argues that beyond compute power and scarce talent, the primary bottleneck for advancing large language models is memory bandwidth: the ability of chips to rapidly move data for matrix multiplications, weight accesses, and KV-cache handling, which directly dictates inference performance.
- [00:07:43](https://www.youtube.com/watch?v=BRXGDCBSARY&t=463s) **HBM and CoWoS Packaging Explained** - The speaker explains why high-bandwidth memory and TSMC's chip-on-wafer-on-substrate (CoWoS) packaging are essential to overcome the memory wall and deliver fast, efficient AI accelerator performance.
- [00:10:53](https://www.youtube.com/watch?v=BRXGDCBSARY&t=653s) **HBM Bottleneck and SRAM Overview** - The speaker outlines how the limited supply of high-bandwidth memory (HBM) constrains AI hardware development, noting the few manufacturers and recent executive fallout, then shifts to describe SRAM's faster, on-chip characteristics despite its lower density and higher cost.
- [00:14:09](https://www.youtube.com/watch?v=BRXGDCBSARY&t=849s) **On-Chip SRAM vs HBM** - The speaker explains that using on-chip SRAM as the primary weight store can cut inference latency by keeping the working set on the die, yet its limited capacity (hundreds of megabytes) means it cannot replace the much larger gigabyte-scale HBM needed for modern AI models.
- [00:17:18](https://www.youtube.com/watch?v=BRXGDCBSARY&t=1038s) **Inference Economics and SPV Financing** - The speaker explains that AI inference incurs continuous operating expenses while training requires heavy upfront capital, notes Nvidia's push for low-cost inference with Groq, and outlines xAI's financing plan using a special-purpose vehicle that combines equity, debt, and Nvidia's potential investment to lease GPUs.
- [00:20:27](https://www.youtube.com/watch?v=BRXGDCBSARY&t=1227s) **Big Tech License-Hire Strategy** - The speaker explains that firms like Google, Microsoft, and Amazon are increasingly acquiring startup technology, talent, and rights through licensing and "acqui-hire" deals rather than full purchases, reshaping investor returns and employee incentives.
- [00:23:34](https://www.youtube.com/watch?v=BRXGDCBSARY&t=1414s) **Nvidia's Defensive Chip Strategy** - The speaker explains that Nvidia's acquisition of Groq talent is a strategic defensive move to safeguard its specialized AI chip leadership, distinct from fears about Google's TPU dominance, by retaining expertise for LPU applications while navigating complex financing and market dynamics.

## Full Transcript
0:00 There's only one news story that mattered this week, and it was the story of Groq with a Q. Not Grok with a K, not the AI model company. Groq with a Q, the inference-on-a-chip memory company. Groq with a Q was quote-unquote bought by Nvidia. And I use scare quotes for that because the story is much more complicated. This is one of the defining plays of 2026, happening right at the end of 2025. I know a lot of us are focused on the holidays and time away. I want to make sure that we don't miss this story, because it's going to shape the world that we all live in, AI-wise, for the next few months. First, what did Groq actually announce? Number one, they announced a non-exclusive licensing agreement with Nvidia for Groq's inference technology. And in the same announcement, they said that Jonathan Ross, their founder, and Sunny Madra, their president, plus some other team members, are moving to Nvidia as part of the deal. That's the acqui-hire part of the deal. Groq also said it remains independent, named Simon Edwards as CEO, and said that GroqCloud, one of their products, is continuing. This is not a straight acquisition. It's something else. It's a transfer of capability. It's like a brain transplant, and it doesn't have a clean change-of-control event. So let's slow down and define what's actually going on, because the mechanics of this deal are the point. A license obviously means one company pays for the right to use another company's tech. Non-exclusive means the seller can license the same tech to other parties. It's not, on paper, a takeover, right? And then there's the other piece of it that makes it feel like a takeover: the acqui-hire. An acqui-hire is when the real asset being acquired is the team, right?
1:40 Key leaders and key engineers, and they somehow matter more than the company's revenue, more than the company's product. Historically, that kind of buy has only happened via full acquisition. So this is what would happen when, say, Meta bought WhatsApp, right? It was a full acquisition deal. What's new here is that "hire the team" is becoming part of the play that frontier AI companies run when they want to snap up smaller startups in the AI space. They hire the team, they license the tech, while someone else's job is to keep the startup's corporate shell alive. One of the things Reuters called out that's correct is that this is part of a broader trend where big tech is using licensing and hiring structures like this instead of straightforward acquisition. And that matters because it changes the meaning of the word "exit" for startups and for employees. Before, the startup story we got was really simple, right? If you have an exit event, whether you go to the public markets or get acquired by a private company, the company changes hands. There's a change-of-control event, all of the equity triggers associated with that occur, and that means employees will get some kind of reward for the time they spent in the company if they had equity at the time. But now all of that is different. It is unclear what the remaining employees at Groq get, if anything. And this is not the only time this has happened. It's happened a couple of other times.
3:04 It's becoming a way that larger companies are able to grab key people and pull them over into their corporate entity without triggering regulatory review, which is handy. I understand the strategy, but it leaves things really awkward from an exit and culture perspective in Silicon Valley, and it also tells us something non-obvious about where the race is headed. We have known for a while that a few valuable people are worth more than entire companies. That's what the market told us about Mira Murati coming out of OpenAI, founding her own business, and getting a monster seed round off of her name alone. It's also what the market said about Ilya Sutskever, right? Founding Safe Superintelligence, getting a monster round. Very similarly, there are people who are worth more than any corporate shell can contain. And one of the things that the frontier AI companies are figuring out is that they would rather have the people on board than the tech or the assets that come with the company, the cap table, anything. They just want the people. And this is a really clean way to get that. Ironically, when we say we're compute-bound, I sometimes think that we're people-bound: we have a few people who can drive AI forward, and they are worth anything that they care to say they're worth. And that is really the barrier at this point to moving forward. It's just an interesting thought. The other thing that we're bound by, though, besides compute, is memory. Our deeper constraint, and I've been emphasizing this for a while, is memory bandwidth. And that's an important part of the story here, because Groq was working on memory.
4:42 And memory is really about how fast the chip can move data in and out of working memory while it's doing the matrix multiplications that are at the heart of large language model inference. Modern AI models don't just do mathematics. They constantly fetch and move enormous amounts of data, right? They do the matrix multiplication, but they also work with model weights and with activations. And in generation workloads, the KV cache that stores context so the model can keep going is critical, and it needs to be filled and read all the time. So the result is that fast AI is as much about feeding the chip as it is about the chip's raw compute. This can feel really theoretical, but we see it already in our local machines. As an example, if you upgrade from an M2 Apple silicon chip to an M5 Apple silicon chip on their new laptops, you will feel the speedup in all of your cloud LLMs. You'll be talking to Claude or ChatGPT or Gemini or any other AI, and it will feel faster, because the tokenization to feed the chip happens on the local machine. I didn't know this either, but it happens on the local machine, and so you need a local chip that handles tokenization efficiently. In a sense, our perception of speed is governed by this whole ecosystem of memory management that happens around the chip. And that's what Groq was all about. The thing that shows us this is true is that the components that make the memory system work are now being pre-sold years and years ahead. SK Hynix, a Korean company, has repeatedly said that its high-bandwidth memory is effectively allocated out over multiple years, with Reuters reporting sold-out conditions all the way through 2025, and later reporting that HBM was sold out in 2026.
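The feed-the-chip argument above can be made concrete with a back-of-envelope bound. This is a sketch with illustrative numbers (the parameter counts and bandwidth figures are my assumptions, not figures from the episode): in single-stream generation, every new token requires streaming roughly all of the model's weights from memory once, so memory bandwidth caps tokens per second regardless of raw compute.

```python
def decode_tokens_per_sec(n_params: float, bytes_per_param: float,
                          mem_bandwidth_gbps: float) -> float:
    """Upper bound on single-stream decode speed when generation is
    memory-bound: each new token must stream all weights once."""
    weight_bytes = n_params * bytes_per_param
    return mem_bandwidth_gbps * 1e9 / weight_bytes

# Hypothetical: a 70B-parameter model stored in 8-bit weights on an
# accelerator with 3,350 GB/s of HBM bandwidth.
speed = decode_tokens_per_sec(70e9, 1.0, 3350)
print(round(speed, 1))  # ~47.9 tokens/s, ignoring KV-cache traffic
```

Notice that compute never appears in the formula: at these sizes the bandwidth term alone dictates the ceiling, which is the speaker's point about memory being the deeper constraint.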
6:34 Volumes were being finalized now. Wild how competitive it is. We are at a point where Google execs are getting fired because they were unable to come up with pre-allocated high-bandwidth memory to support Google's TPU goals heading into next year. That is how important memory is. So what is high-bandwidth memory? It is not a different kind of memory the way people imagine. It's DRAM, right? Dynamic random-access memory. It's stacked vertically and packaged right next to the processor, physically connected with very, very wide interfaces, so that the chip can pull data far, far faster than it could from conventional memory solutions. SK Hynix itself defines HBM as a memory that vertically interconnects multiple DRAM chips to dramatically increase processing speed compared to earlier DRAM. Essentially, you're stacking these DRAMs on a very high-bandwidth connection with the core processing chip itself, so that you reduce memory read/write bottlenecks at the time of inference. And why does HBM keep showing up in Nvidia conversations? Because it's one of the things that makes modern AI accelerators practical. A GPU can do a staggering number of operations per second, but it cannot pull the model weights and the working set quickly enough. And if it can't do that, it stalls, right? It must have the ability to reference and pull from memory very rapidly for things like model weights, or it's going to stall out. And that's what people mean by a memory wall, physically, on the chip. The firm SemiAnalysis describes HBM as combining stacked DRAM with ultra-wide data paths, and notes that essentially all leading AI accelerators deployed for generative AI training and inference must use HBM.
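The memory-wall idea has a standard quantitative form, the roofline model: attainable throughput is the lesser of the compute peak and bandwidth times arithmetic intensity (FLOPs per byte moved). A minimal sketch, with illustrative hardware numbers of my own choosing rather than any quoted in the episode:

```python
def roofline_tflops(peak_tflops: float, bandwidth_tbps: float,
                    flops_per_byte: float) -> float:
    """Attainable throughput under the roofline model, in TFLOP/s.
    flops_per_byte is the kernel's arithmetic intensity."""
    return min(peak_tflops, bandwidth_tbps * flops_per_byte)

# Hypothetical accelerator: 1000 TFLOP/s peak, 3.35 TB/s of HBM.
# Single-stream decode does on the order of 2 FLOPs per weight byte,
# so it sits far below the compute roof: the chip stalls on memory.
print(roofline_tflops(1000, 3.35, 2))    # memory-bound: ~6.7 TFLOP/s
print(roofline_tflops(1000, 3.35, 500))  # compute-bound: 1000 TFLOP/s
```

The first case is the "feeding the chip" failure mode: the GPU could do 1000 TFLOP/s, but low arithmetic intensity leaves most of that idle.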
8:23 It is a requirement to have high-bandwidth memory, so that you can access that memory very, very quickly to deliver the kind of high-quality inference that we want out of our AI models. But HBM has a second constraint that non-hardware people usually miss. It's not just about manufacturing the memory dies. It's also about the packaging. So if you've heard the term CoWoS, that's what this is about. I'm going to define it; if you haven't, don't worry. CoWoS means chip-on-wafer-on-substrate. This is TSMC's advanced packaging technology that lets you place logic dies and HBM stacks together on one large silicon interposer with dense interconnects in between. Again, that dense bandwidth matters, right? TSMC explicitly describes CoWoS as packaging that accommodates logic chiplets alongside HBM cubes stacked over a silicon interposer for AI and HPC workloads. If that sounds like a lot, think of it as: we know we need to collocate these components so that we can get hold of them fast. Imagine collocating a giant apartment building and a giant downtown office building on the same block. Now people can move back and forth between home and work efficiently. It's a very similar concept; we're just operating at, you know, billionth-of-a-meter scale here. The Financial Times has described advanced packaging as increasingly central as miniaturization slows, and points out that techniques like HBM stacking and CoWoS-style integration have become essentially required to get the kind of generative AI performance we're looking for. This is in line with the ongoing thesis, and this is something Ethan Mollick first called out. I really like it.
10:10 He did his thesis on Moore's Law, which is something we're in some senses past and in some senses still living through. What he pointed out is that Moore's Law was not a single law, right? It is actually a reflection of a trend line captured by the allocation of capital and people to a singular problem over a very long period of time. That's exactly what we're seeing here with GPU technologies. CoWoS and other technologies are basically ways that we are addressing the ongoing challenge of increasing AI performance even as we start to hit physical miniaturization limits. So now you have the chip ecosystem context. AI demand doesn't just pull on GPU supply. It inherently tugs at HBM supply; that's why those Google execs got fired this week. It pulls on packaging capacity. It pulls on the ability of a few specialized manufacturers to ramp very quickly. Again, I want to remind you: almost everywhere you look in the AI stack, one company is sitting there supplying a crucial component. It's amazing this whole ecosystem works together. And crucially, HBM is one of those bottlenecks. The major makers are SK Hynix, Samsung, and Micron. That's it. That's all you've got. Now we come back to the Groq story. Let me introduce you to SRAM, because SRAM is where the Groq discourse gets really interesting and really, really weird. SRAM is a different kind of memory. It's called static random-access memory, and it's typically used for caches and on-chip storage. The static part means it doesn't need constant refreshing the way DRAM does, and that means it can be faster to access than DRAM.
11:48 You know how we spent a lot of time talking about DRAM as this cube stack that sits next to the chip with a really wide highway, etc.? SRAM is faster because it literally exists on the chip. It's like having a live-work solution right in the same building, right? That's a terrible analogy, but you get the idea. It literally exists on the chip, so it can be faster. It's also much less dense, and it's more expensive per bit. So the definition we typically have is that SRAM is faster than DRAM but more expensive in silicon area and cost, because it's less dense. SRAM is typically used for caches and internal registers, where you need the speed but won't have as much capacity, while DRAM is used for main memory. It's not like there's a perfect solution; we are swapping back and forth between the two. That's how most chip architectures work. Here's the key thing that people misunderstand, though: you don't order SRAM from a supplier the way you order those HBM stacks that I described. You're not going to SK Hynix and saying, "I want some SRAM." SRAM is generally built into your chip design, because it literally is in the chip. So more SRAM usually means more physical die space; the die for the chip gets bigger, which usually means higher cost and more yield complexity. And SRAM scaling has been increasingly difficult in advanced chip design. Semiconductor Engineering has been really explicit and said that SRAM's inability to scale has challenged our ability to hit power and performance goals, because we need better SRAM as chips continue to get better, and SRAM remains the workhorse memory for AI. There has not been a solution there.
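The swapping-between-tiers point can be sketched as a crude placement rule. Everything here is illustrative: the function and the sizes are my own assumptions, not a real toolchain's logic; real compilers and runtimes make far subtler decisions.

```python
def pick_weight_store(working_set_mb: float, sram_budget_mb: float) -> str:
    """Crude placement rule: a working set that fits within the chip's
    on-die SRAM budget stays there; anything larger must live in
    off-chip DRAM/HBM and be streamed in on demand."""
    return "on-die SRAM" if working_set_mb <= sram_budget_mb else "off-chip HBM"

# Hypothetical chip with a 230 MB on-die SRAM budget:
print(pick_weight_store(180, 230))     # on-die SRAM
print(pick_weight_store(70_000, 230))  # off-chip HBM
```

The asymmetry is the whole design tension: small working sets get the fast tier for free, while anything model-sized pays the off-chip bandwidth cost on every access.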
13:29 This brings us to the reason Nvidia bought Groq. This is Groq's real technical wedge. Groq makes inference-focused accelerators, often described as LPUs, or language processing units. Groq's own product brief for the Groq chip lists 230 megabytes of SRAM per chip and claims up to 80 terabytes a second of on-die memory bandwidth. Groq's public blog framing makes the contrast really explicit, right? On-chip SRAM bandwidth upwards of 80 terabytes a second, versus off-chip high-bandwidth memory, which they describe as roughly an order of magnitude lower in their comparison, so on the order of 8 terabytes a second. And in a later post, Groq also described its approach as integrating hundreds of megabytes of on-chip SRAM as the primary weight storage for the model, not merely as a cache. And this is why chip people talk about Groq as low latency. If the working set you need can live on the die, on the chip, you can avoid an entire class of stalls and variability that come with model performance when models have to go off the chip. That can matter enormously for workloads where you feel that latency: voice systems, interactive copilots, real-time agents, any workflow where a slow response breaks the user experience. But now here comes the nuance. SRAM speed is real, but SRAM capacity remains a constraint. The same Groq product brief that makes SRAM sound really magical forces you to face the scaling math. 230 megabytes was a lot of memory back in the 1990s. I remember when we had 230 megs. It's not a lot of memory in modern AI terms. A single HBM stack is measured in tens of gigabytes. Micron, for example, described its HBM3E as 24 gigabytes in an 8-high stack and 36 gigabytes in a 12-high stack. Samsung described its HBM3E 12-high as 36 gigabytes as well.
So, we're 15:28getting to the point where we have tens 15:29of gigs available through wide 15:30interconnects on the chip. That's orders 15:33of magnitude more than 256 meg. So, 15:35SRAMM cannot and does not replace HBM. 15:39You can't get away from that. What SRAMM 15:42can do is win narrow slices of inference 15:44where the advantage of on die processing 15:47dominates and the workload can be shaped 15:49to fit that memory constraint. And 15:51that's why you see people describe SRAMM 15:53heavy designs as compelling for very 15:56deterministic inference but very weak at 15:59scale. They're playing with the physics 16:00and basically saying for simpler jobs 16:02where we need more deterministic 16:04performance for from our LLMs, Grock's 16:06solution can be useful. And it's not 16:08just the capacity issue, right? It's 16:10also the scaling issue. The industry has 16:12been wrestling with SRAMM bit cell 16:14scaling for a long time. Tom's Hardware 16:16is another newspaper that reported on 16:18this and and has reported that SRAMM 16:20density improvements have been hard at 16:23certain node transitions and in 16:25particular calls out that TSMC claimed 16:28meaningful SRAMM bit cell shrink at its 16:322nde 16:34after limited gains at 3 nanometer. In 16:36other words, the underlying point is 16:38that SRAMM is increasingly a first class 16:41chip design problem and even first class 16:44chip design firms like TSMC are 16:47struggling with continuing to shrink 16:50every generation to fit more on the 16:53chip. So now we can finally answer why 16:55would Nvidia do this? The most 16:57straightforward read is that Nvidia is 17:00treating inference as strategic. So 17:03let's define inference really plain. 17:05Training is when you take a model and 17:07you run enormous amounts of data through 17:09it to update its weights. Right? So this 17:11is the very expensive the one-time 17:13process of creating the model done. 
17:15Inference is when you use the trained 17:18model to generate outputs. So every chat 17:22response you get from chat GPT every 17:24image response you get from nano banana 17:26every search rank every agent step. 17:28Training is episodic. It happens when 17:30you make a new model. Inference is 17:33continuous. So training is very capex 17:36heavy. You pay a lot and you just plonk 17:38it down and you train your model. 17:40Inference becomes operating expenses. If 17:43AI becomes embedded in products, most of 17:45the tokens on the planet will be served 17:48in inference, not burned in training. 17:52And that's why Grock's announcement is 17:54phrased the way it is. They're looking 17:56to expand access to quote high 17:58performance, lowcost inference. And 18:00that's why Reuters frames Nvidia's move 18:03as part of a competitive push as the 18:05market shifts toward inference versus 18:08training with Grock positioned as an 18:10inference specialist. Now, let's bring 18:12the financing story in because it's the 18:15same reality. It's just expressed 18:17through the capital markets instead of 18:19chip architecture. So, in October, 18:21Reuters reported reported that Elon 18:23Musk's XAI was nearing a $20 billion 18:26financing package tied to buying Nvidia 18:29processors for the Colossus 2 data 18:31center. The key detail is in the 18:33structure. An SPV or a special purpose 18:35vehicle that would raise a mix of equity 18:37and debt would buy GPUs and then 18:40effectively lease or rent that compute 18:42back to XAI. Reuters also relayed 18:45Bloomberg's reporting that Nvidia might 18:48invest up to $2 billion in the equity 18:50portion. So an SPV is basically a legal 18:54and financial wrapper built for one 18:56specific job. 
In this case, the job is 18:59turning GPUs into a financable asset 19:02class, a pool of hardware that can back 19:05debt, generate contracted cash flows, 19:07and be scaled without requiring the 19:09operating company to fund everything 19:11directly in a traditional manner. You 19:13can think of it as project financing for 19:15compute. In a world where GPUs are 19:17scarce, HBM is constrained, and power 19:20and data center capacity become really 19:22binding constraints, the ability to 19:24structure financing that locks in supply 19:26actually becomes part of the entire 19:28competitive AI game. Not because you 19:30want to create fancy vehicles for no 19:32purpose, but because it's how you 19:34guarantee you can run systems over time. 19:37And so this is where it all comes 19:39together. Gro NVIDIA is about pulling a 19:42specific inference capability, low 19:44latency, SRAMMheavy, deterministic 19:47serving of models closer into Nvidia's 19:50platform without forcing Nvidia to 19:52purchase the whole company. Just as 19:55XAI's SPV story is about locking in the 19:58physical substrate of scaling the GPUs 20:00by turning them into a finance supply 20:02chain, both of these stories are about 20:04the same thing. who controls the path 20:07from model capability to product 20:09capability at scale. Nvidia needs to 20:12play in that game and so Nvidia needed 20:14to buy Grock. And the Grock structure is 20:17really not unique anymore. It's part of 20:18a larger pattern as I called out of 20:20license and aqua hire deals that have 20:22become really common in Frontier AI. 20:25Reuters reported that Google did a $2.4 20:27billion licensing deal for Windsorf. I 20:30talked about it a few months ago. I 20:31flagged it as an issue then. Google 20:34hired key leaders and researchers and 20:36left windsurf independent and paying its 20:38investors via the license fee. 
This was 20:40also something we saw with Microsoft 20:42where Microsoft agreed to pay inflection 20:44about $650 million in a licensing deal 20:46while hiring key staff and that was not 20:49actually an acquisition, right? Amazon 20:51did the same thing with Adept. Amazon 20:53did it again with coariant on their 20:55robotics. Google did it with Character. 20:57Once you see these together, the whole 20:59story is not big tech is buying 21:01startups. The story is big tech is 21:03increasingly buying capabilities, people 21:05and rights without buying the companies 21:08outright. And that suggests to me again 21:11that what we value on the cap table is 21:14starting to change. And that changes 21:16employee outcomes and incentives in ways 21:18that are easy to miss unless you've 21:20lived through a few acquisitions. In a 21:22traditional acquisition, there's 21:23obviously a change of control and that 21:25triggers the contractual mechanisms like 21:27option plans that may have acceleration 21:29clauses, etc. And what it all adds up to 21:31is proceeds going to investors to 21:34preferred stock to common stock and in 21:36some cases to the employees if they have 21:38exercised their options etc. In a 21:40license and aqua hireer deal like what 21:42happened with Grock none of that 21:43occurred and to be fair you can see 21:45hints of how some of these deals are 21:47sometimes structured to address that 21:49concern. So the Wall Street Journal 21:51reported that Character.ai's AI's Google 21:53licensing fee was around $2.7 billion 21:56and some of that was used to buy out 21:58early investors, suggesting that there 21:59was an explicit attempt to create at 22:01least some liquidity without an 22:03acquisition event. But the larger point 22:05remains that these tend to be bespoke 22:07arrangements and they don't guarantee 22:09that everybody wins together, which is 22:11what many people have implicitly believe 22:13the Silicon Valley story to be about. 
If 22:15you sign up as one of the first 10 or 22:17the first 50 in a company, you think 22:19you're going to win with the founders. 22:21Not as much as the founder, but a little 22:23bit. So, if you're trying to take one 22:25deep lesson from this week, don't think 22:27of it as Nvidia is scared. I've seen 22:29that. And don't think of it as SRAM is 22:32the future. I've seen that, too. It's 22:34really that the AI race is forcing a 22:36vertical integration of realities that 22:38used to be separate. Hardware is not 22:40just hardware anymore. It's memory. It's 22:43packaging. Inference is not just a 22:45detail. Inference is becoming the whole 22:47game. Financing is not just fundraising 22:50anymore. It's a way to lock in supply. 22:52And acquisitions are not just 22:54acquisitions anymore. They're 22:55increasingly structured as a capability 22:58transfer to deliver the people and the 23:02license fees needed to secure a 23:04strategic advantage in the AI race. 23:06Nvidia needs to be in the inference 23:08game. Nvidia needs to have products that 23:13are strong on fast inference to continue 23:16to evolve and maintain their lead. 23:18Nvidia does not want the designers of 23:22the TPU chip, which by the way, that is 23:25exactly who they got in the Grock deal. 23:28The Grock deal included the founder of 23:30Grock, who designed Google's TPU chip. 23:33They don't want that person loose on the 23:34market. They'd rather bring them in as 23:36insurance. And people do paint this as 23:38Nvidia is worried about Google. Nvidia 23:41is worried about the TPU domination. I 23:44don't think that's the correct 23:45interpretation because Google's 23:47advantage is predicated on Google's TPU 23:51chip remaining mostly inside the house. 
23:54Google does license their TPU chip, 23:57don't get me wrong, but they would 23:59rather you didn't buy it and they're 24:01okay with it being priced in such a way 24:03that it remains a nice to have for a lot 24:06of companies, not a must-have. And part 24:08of why is that if Google's TPU chips 24:11become commoditized, Google loses the 24:13competitive advantage they have with 24:14TPUs. Nvidia is in a different game. 24:16Nvidia is not in a hyperscaler 24:18modelmaker game. It's in the chip 24:20business. Nvidia needs to have the 24:23talent on side to make sure that they 24:26can tackle these specialized LPU 24:30applications without jeopardizing the 24:33core business. They need to make sure 24:36that they can bring in the technology 24:38and knowhow from Grock and use that as 24:42part of their continuing wedge that make 24:45them the only game in town at scale for 24:48model makers. And so this was a little 24:50bit of a defensive play by Jensen, but 24:54it's absolutely a play that makes sense 24:56when you think about who was involved 24:58and why it was worth getting that those 25:00people out of Grock. I wanted to take 25:03time to go through all of the details 25:04because one, I don't think the chip 25:06story is well understood. I don't think 25:08the nuances of the financing are well 25:10understood and why these multi-year 25:12deals are well understood. And I also 25:14don't think the talent story is well 25:16understood. So I hope this has given you 25:18a picture into how business is actually 25:21getting done at the cutting edge of AI 25:23and how we are able to continue to 25:26advance on the physical constraints that 25:29drive the model experiences we all use 25:32and love every day. Yeah, I guess thanks 25:34for coming to Professor Nate's class. 25:36This is a bit of a long one, but I hope 25:37you enjoyed