
Oracle, AMD, OpenAI Strike Massive AI Deals

Key Points

  • Oracle announced a massive cloud partnership to install 50,000 AMD AI chips by late 2026, a move echoing earlier OpenAI deals with AMD (≈6 GW of processors) and a potential $300 billion, five‑year agreement with Oracle.
  • The surge in AI chip demand is being driven by a rapid expansion of data centers, prompting concerns about inflated hype around AMD and Nvidia products while investors pull back on earlier AI bets.
  • Recent AI‑related news highlights industry efforts to curb misuse: Visa launched a framework to differentiate legitimate AI shopping assistants from malicious bots, and Salesforce unveiled AI‑generated voices for its customer‑support agents.
  • Oracle and IBM are collaborating on new enterprise AI agents to automate routine tasks such as reviewing intercompany agreements, aiming to free human workers for higher‑value activities.
  • Controversial developments include a VC fund that reportedly dismissed all its analysts, the launch of Reflection AI, and Sam Altman’s announcement that ChatGPT will soon allow verified adult users to access erotic content.


**Source:** [https://www.youtube.com/watch?v=JQ0ZObgOoGQ](https://www.youtube.com/watch?v=JQ0ZObgOoGQ)
**Duration:** 00:49:38

## Sections

- [00:00:00](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=0s) **Untitled Section**
- [00:03:24](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=204s) **AI Chip Market Interconnectedness** - The speaker outlines how Nvidia, AMD, OpenAI, and Oracle are cyclically investing in each other and racing to supply AI chips for expanding data centers, raising questions about the sustainability and real-economy impact of this tightly linked ecosystem.
- [00:06:50](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=410s) **AI's Rapid ROI vs. the Internet** - The speaker argues that AI is triggering a much larger and faster transformation than the past internet bubble, with substantial investment yielding near-instant productivity gains because the necessary infrastructure already exists, suggesting that current deals will only become wilder.
- [00:11:10](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=670s) **CUDA Dominance and AMD Catch-Up** - The speakers compare Nvidia's mature CUDA ecosystem, including libraries like cuDNN, to AMD's newer, open-source ROCm stack, noting that while AMD has closed the hardware gap, it still trails in software support as vendors build compatibility layers and OpenAI's Triton abstracts away CUDA, intensifying the competition.
- [00:14:40](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=880s) **Shifting Bottlenecks in AI Hardware** - The speakers compare AMD's CUDA-compatible GPUs to Nvidia's broader industry push, noting that while inference dominates demand, future constraints may move from infrastructure to energy consumption.
- [00:17:43](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=1063s) **AMD's Cost Efficiency Fuels AI Competition** - The speaker extols AMD's lower price-per-teraflop GPUs and IBM's ultra-low-energy chip, noting how competition with Nvidia spurs innovation, then pivots to a government AI standards report assessing DeepSeek on 19 benchmarks.
- [00:23:22](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=1402s) **Evaluating Model Performance Beyond Benchmarks** - The speakers argue that assessing AI models such as DeepSeek requires looking past open benchmark scores to consider safety, cost-effectiveness, and real-world suitability, emphasizing independent testing and continual outperformance of prior and competitor models.
- [00:26:34](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=1594s) **Benchmark vs. Standard Evaluation Conflict** - The speaker contrasts DeepSeek V2's marketing-driven benchmark claims with NIST's emphasis on broader, safety-and-security-oriented standard evaluations, highlighting the resulting contradictions between the two assessment approaches.
- [00:29:58](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=1798s) **Reflection AI's Open-Source Frontier Claim** - The speakers examine Reflection AI's $2 billion raise and its ambition to become the U.S. leader in frontier open-source AI, questioning its differentiation and competitiveness against established firms.
- [00:36:30](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=2190s) **Execution Risks and Global LLM Competition** - The speakers discuss the high resource and expertise demands, limited transparency, and small teams behind frontier open-source LLMs, while contrasting U.S. players like Reflection AI with international rivals such as Chinese labs like DeepSeek.
- [00:40:51](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=2451s) **VC Firm Replaces Analysts with AI** - A venture capital fund claims it has eliminated human analysts in favor of AI agents, positioning itself as a low-cost, turnkey provider of compute clusters, training data, and methodology for emerging frontier AI companies.
- [00:45:23](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=2723s) **Automation, Coordination, and VC Insight** - The speaker likens assisted self-checkout to faster service through human-technology coordination, then draws a parallel to venture-capital work, arguing that while quantitative analysis can be streamlined, the relational "soft" elements remain essential.
- [00:49:15](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=2955s) **Podcast Wrap-Up and Thanks** - The host closes the episode by thanking the guests, urging listeners to follow the show on major podcast platforms, and teasing the next episode of "Mixture of Experts."

## Full Transcript
>> There is a lot of drawdown on the investments which have been made into OpenAI, and there may be some inflated demand generation on the AMD and the Nvidia side. Now, that's on one side. On the other side, if you're seeing the amount of data centers which are getting built, it actually tracks the amount of AI chips which are demanded.

>> All that and more on today's Mixture of Experts.

[Music]

I'm Tim Hwang, and welcome to Mixture of Experts. Each week, MoE brings together a panel of brilliant, funny, thoughtful panelists to debate, discuss, and think through the latest news in artificial intelligence. Joining us today are three incredible panelists. So a very warm welcome to Volkmar Uhlig, who is VP of AI infrastructure; Ambi Ganison, who's a partner for AI and analytics; and Aaron Baughman, who is an IBM Fellow and Master Inventor. There's lots to talk about today. We're going to talk about a huge deal between Oracle and AMD. We're going to talk about a new evaluation out of the Department of Commerce's CAISI. We're going to talk about Reflection AI and a VC fund that apparently has fired all of its analysts. But first, the news.

>> Hey everyone, I'm Aili McConnon, a tech news writer for IBM Think. I'm here with a few AI headlines you might have missed this week. Visa introduced a new framework to help merchants distinguish between legitimate AI shopping assistants and malicious bots. The goal: cutting down on fraudulent purchases by AI agents. Oracle and IBM are teaming up to release several new enterprise agents that will automate tasks such as reviewing intercompany agreements. This means more time for tasks that actually require a human touch. Is the customer service agent you're speaking to a human or a bot?
At Dreamforce, Salesforce's banner event, the tech giant announced it is introducing AI-generated voices to its customer support agents. Sam Altman announced on X that come December, ChatGPT will share erotica with verified users over 18. What could possibly go wrong? Want to dive deeper into some of these topics? Subscribe to the Think newsletter linked in the show notes. And now back to the episode.

[Music]

>> Today I actually want to open with what I think was one of the big stories of the week, which is that Oracle Cloud came out and made an announcement that they will be deploying 50,000 AMD chips in the second half of 2026. This itself would be a big deal just in terms of dollar amounts, but it follows on two pretty interesting bits of news. One of them is OpenAI earlier in the month announcing a deal with AMD, where they were going to deploy about 6 gigawatts' worth of processors, and then separately OpenAI announcing a deal with Oracle that could be a five-year deal worth as much as $300 billion. And so, Volkmar, with you on the line, I want to throw the ball to you first. It kind of feels like, when we've talked about this lately, that everybody's coming for Nvidia, and there's this interesting cluster emerging between OpenAI, AMD, and Oracle. How threatening is this ultimately to Nvidia, or is this mostly a case of wait and see?

>> I think that the market is opening up. I think the competitors are going after Nvidia. I think that's why Jensen has this extremely aggressive new-chip-every-year strategy. I think the more interesting part of that announcement is also that OpenAI is investing into AMD and takes a stake in AMD.
So if you look at this ecosystem right now, there is Nvidia investing into OpenAI, then OpenAI cutting a deal with Oracle and buying a ton of Nvidia cards, and then also investing into AMD. So there's a lot of money going around, and it goes in circles, which always raises a question when it starts going in circles: is it a real economy or not? Somewhere the money needs to come from. So I think there is a lot of drawdown on the investments which have been made into OpenAI. Now, that's on one side. On the other side, if you're seeing the amount of data centers which are getting built, it actually tracks the amount of AI chips which are demanded. And so from an OpenAI perspective, it probably makes a lot of sense to bifurcate, to not have a single source, and AMD needs to flush out their first real data center chips. So if you look at the overall deal, it is AMD chips with an investment of OpenAI into AMD, and then they need to house it somewhere. And Oracle is a company which is putting AI chips into data centers by the truckload. So that's a natural thing. And we saw that with the almost 40% pop in Oracle shares when they announced their multi-year deal with Nvidia. So that troika is a known troika, and I'm not surprised that OpenAI is diversifying, including taking a 10% stake in AMD.

>> Yeah, absolutely. And Ambi, maybe we'll turn over to you. I don't know if you all saw this visualization of where all the money has flowed in the AI economy, and it's just, to Volkmar's point, a big circle. It's like everybody's giving money to each other.
And I guess, Ambi, I'm curious about your thoughts: is that a sign of a bubble? Because to Volkmar's point, on a certain level, everybody's just sending the money back and forth. But on the other hand, these buildouts are consistent with forecasted demand, so on a certain level it's maybe not so crazy that we're seeing this kind of thing happen. Curious how you weigh this. Obviously the underlying question is whether it's a bubble, but I think there are more interesting things to talk about there. So, curious about your thoughts.

>> Yeah, you know, I saw Emad posted some things calling this the AI bubble. But I saw some comparison in terms of the investments that have been made as a percentage of GDP. If you compare the level of investments that were made in the dot-com era versus what's happening in the AI era now, there's still some ways to go before you even hit those limits. So I think it's still early stages; that's number one. Yes, there are deals happening, but the amount of investment that's going on is still not at as big a scale as you would imagine for the amount of benefit that the economy can accrue from something like this in the future. So I think there is a much bigger, wider transformation expected out of this. So yes, there is probably a perception that there is a bubble, but I also think the amount of investment going in here is probably not commensurate to the level of unlock that can be garnered over the longer time horizon.
>> Yeah, you're almost saying: you might think these deals are crazy, but just wait, they're going to get crazier, essentially.

>> That's one way to put it. Uh-huh.

>> Also, if you look at the internet bubble, there was a massive amount of fiber-optic investment just to put wires into the ground. People actually dug trenches. Now we are building data centers. If you look at the return on investment in the internet phase, that was investment over 20 years. Right now, since the introduction of ChatGPT, we see a 43% increase in productivity, and that's within a year or two. So in the internet bubble the productivity increases came much, much later than the productivity increases we see with AI; it's pretty much instantaneous, because the infrastructure is already present. Distribution is there, data centers are there, people have computers. Do you remember the AOL CDs which you had to buy when the internet happened? All that infrastructure is present, so the adoption is just so much faster.

>> And I feel like it's almost to be expected that this would happen, right? I was actually surprised that it took this long for AMD to breach this moat. I would have expected this to happen sooner rather than later. We saw that the model moat collapsed first. The app-layer moat was already fairly open. So there's heavy competition going on in the model and the app layer, but the infrastructure moat was the one that held really steady, and I think we are seeing cracks open in that moat. I think it's inevitable. This is just pure capitalism
This is just pure capitalism 8:49at play, right? This is always expected 8:51to happen and, you know, it's just 8:53surprised that it took uh this longer 8:55and you're just going to see a lot more 8:58of this at the infrastructure level 9:00happen, you know, going forward. 9:02>> Yeah, for sure. And Aaron, maybe I want 9:03to bring you in because I think on the 9:05question of the moat, right, the thing 9:06that people always bring up in the 9:08Nvidia AMD competition is well well 9:10CUDA, right? is like that's that's 9:13really that's that's going to be the 9:14thing that is kind of like ultimately 9:16Nvidia's great defense here. Do we think 9:18that even that part of the moat is 9:20getting chipped away in some sense, 9:21right? Like because I think part of it 9:22is like who's going to pour money where, 9:24but also part of it is just like kind of 9:26the optimization at this at this level. 9:28And so curious if you have forecasts 9:30there how you think about that. 9:31>> Well, so we followed the money so far, 9:33but I say let's follow the energy, 9:35right? Because um Oracle, they're going 9:38to deploy 50,000 AMD, you know, 450s. 9:41That's a lot. That's going to be So, so 9:43I did the math beforehand, but that's 9:45about, you know, 50 million kilowatt 9:46hours right now. I drive a Tesla and for 9:50that amount of power, I could drive 336 9:52million miles per month on that single 9:54charge, right? Do we really need to 9:57build all these additional in 9:59infrastructure places where we could 10:02share an infrastructure? Could this 10:04energy be used elsewhere? It could power 10:06a small city, right? 
It costs $5 million a month just for the electricity bill alone. So I think rather than follow the money, let's follow the power and the energy, because that, in turn, is where the money is going. But anyway, to answer the question about CUDA: OpenAI, because they're also deploying these AMD MI450 chips, the hardware itself is very comparable. We can compare these Instinct chips from AMD to A100s, to H100s from Nvidia, and they're close to par. There are some different technologies, like the 2-nanometer technology that's used to create the chips. But on the flip side, what you mentioned is CUDA versus what's called ROCm. So CUDA is Nvidia's proprietary parallel computing platform. That's above and beyond, years ahead, I think, of what AMD can provide with ROCm, which is AMD's GPU computing stack. It is now open source; the adoption is smaller, and it's less mature than CUDA. But on the other side, I am impressed that AMD is working a lot with, for example, OpenAI to make it better. But again, going back to Nvidia: they have other libraries, like cuDNN, which can help you performance-tune deep learning algorithms, which are the fundamental building blocks of gen AI. So I do think AMD has some catch-up to do in that area, even though they've caught up on the hardware side.

>> So I want to jump in here.
Right now, if you look at the CUDA wall, effectively all vendors have decided to build compatibility layers, so you get the equivalent of a CUDA implementation from AMD, and Intel is on the same path. So the world is standardizing against CUDA, and that requires a re-implementation of what Nvidia had 10 or 15 years to build up. Then on the flip side, on top of it, you have OpenAI really aggressively driving Triton, which is a programming abstraction over GPUs that effectively hides the fact that there's CUDA under the covers. So you have attacks from both sides, which will come out somewhere in the middle: OpenAI started Triton to become GPU-independent, and everybody else says, oh well, I need to support the ecosystem, let me just be CUDA-compliant. And ROCm, if you look at its compute layer: AMD cards have been in high-performance computing forever. If you look at the big Cray machines, they are all based on AMD GPUs. So those libraries are extremely hardened. ATI, which is AMD now, right? The whole series of AMD chips comes out of the ATI fold, and ATI and Nvidia were always going head-to-head, and both of them had OpenGL implementations, OpenCL implementations. And then at some point AMD said, okay, we are going a different route. We are making a new stack.
We're making everything open source, and we call it ROCm. And AMD went all the way and said, okay, we even release the details of the chip internals, so you get the assembly language, everything. So that's AMD's strategy. And Nvidia said, no, we do everything closed source, and we actually have a hardware abstraction layer where the card under the covers can be changed; we are compiling an intermediate binary to that underlying card, so they can actually hide some card peculiarities in that compile layer. Very different strategies. But in the end, if you go not even to 30,000 feet but maybe 3,000 feet, an AMD card and an Nvidia card are effectively the same fundamental computer architecture. And so I don't think that, because of that, it's so hard to see that compatibility layer being built.

>> Yeah, you're almost saying that even though it has been much talked about as a massive moat, it may in practice be much more shallow than we think.

>> We tested AMD cards over a year ago in CUDA compatibility mode, and we can run large language models. Now, you don't support everything; you don't have the fluid dynamics and all the other stuff which Nvidia has. If you look at Jensen's last keynote, he's saying, hey, we have these 15 industries we are working in and we can do everything. But on the flip side, if you look at where the highest demand is, it's: run vLLM, run an inferencing engine. And if 90% is "run an inferencing engine" and 10% is all the other stuff, I think the moat for the other stuff is not where we have trillions of dollars spent in industrial capacity.

>> Yeah.
You're almost saying Jensen has these interesting incentives to push the other stuff, right? If those grow really quickly, then he's got more of a moat than the core use case. Which is pretty interesting.

>> And in both of what Aaron and Volkmar said, I feel like: okay, we always thought infrastructure was the bottleneck. But now we may be reaching a spot, as this grows between AMD and Nvidia and the moat starts cracking, where energy may start becoming the bottleneck. So that's going to be interesting.

>> It's just infrastructure. It's all infrastructure.

>> Volkmar, go on.

>> Yes, now we are going one level down. Now Schneider is the bottleneck; before, HBM was the bottleneck. Yeah.

>> Well, I think this will be the interesting vertical integration that you potentially see happening, right? We've got OpenAI taking stakes in the chip companies now. It feels like it's just a matter of time before this blob starts taking stakes in energy, and then you just keep going down the stack. You can imagine that ultimately this will be vertically integrated in a certain sense.

>> I think the interesting part here is how fast the semiconductor industry was actually able to adjust, if you think about it. There's lots of pricing flexibility in microprocessors, versus how slowly all the other industries adjust. Try to buy a transformer.
We are going into a world where the actual physical infrastructure, which does not go through these extreme growth phases, is suddenly forced to deliver something like 100x the capacity of power.

>> Like, tomorrow.

>> Tomorrow, exactly. But that will also lead to a much higher replacement cycle: if I cannot buy data centers, I can get a card in much faster, one that is twice as powerful, than I can get twice the power in. So we will probably see, for the next couple of years, very aggressive silicon replacement, simply because the rest of the infrastructure cannot sustain this. It's cheaper to swap out the Nvidia card than to build another power plant.

>> That's right. Yeah. I think that will be one of the funny legacies of this: obviously, building power is good for lots of different things, but essentially the accelerant of AI is going to force a bunch of buildout that will have all these really interesting collateral effects outside of AI. It's this weird sense in which the tail has wagged the dog in the era that we're in.

>> Yeah, I do like where the AMD chip is going. It's more cost-efficient, because it has a lower price per teraflop. And whenever you scale it up, if you do the extreme testing of 50,000 GPUs, they're going to be saving millions per month, which more than likely translates to lower energy costs. And we at IBM have this TrueNorth chip, which has very, very low energy requirements. It can run on a mobile device.
And it runs on neural nets. So I think this competition between AMD and Nvidia is good, right? It increases innovation, such that we can solve some of these big problems that are here and are still coming.

>> All right, I'm going to move us on to our next topic, though this is an amazing discussion getting into the guts of this story. The next bit I wanted to cover was this interesting report that came out of the Center for AI Standards and Innovation, which is a unit within the Department of Commerce that is kind of the government's eval shop for AI models. This was one of the first real public reports that they have done, where they basically said: look, we, the government, the Department of Commerce, took a look at DeepSeek, and we evaluated DeepSeek against 19 benchmarks. And their conclusion is that, in comparison to US models, DeepSeek lags in performance, it lags in cost, it lags in security, and it also lags in adoption. So in some ways the headline was "don't worry about DeepSeek." And Ambi, maybe I'll throw it to you. Thoughts on this? When DeepSeek first hit, we had a lot of discussion: oh man, this is going to force everybody in the industry to adapt; how do you compete with free; huge danger. And I guess the question for you is, looking at this analysis: maybe DeepSeek is not as big a threat to American AI businesses as we thought. Is that the right way of thinking about it?

>> Yeah, I think there are a couple of dimensions to this, right?
So I think we've always stressed that, hey, when you're building models, yes, you should have the right level of guardrails, you should have the right level of security, especially in enterprise settings. When I'm talking to my clients and we talk about models in an enterprise setting, those things become really paramount. So it's interesting to see that, amongst the American models versus the Chinese models, outside of all political connotations, the market always chooses: for my enterprise requirements, I'm going to go with models that meet the appropriate security guardrails, the appropriate requirements, and so on. The other angle that I saw from all of this: DeepSeek opened up a lot of mindshare for open models and open-source models. Embedded in this report was also a statistic which we may not have captured, which is that there's something like a thousand-percent increase in downloads of DeepSeek models. And consumer behavior also drives enterprise behavior, because consumers ultimately sit in enterprises as well. So from that perspective, I see the mindshare opening up to open-source, open-weight models, not just working with proprietary models. So it's an interesting dynamic: yes, I want models that are reliable, trustworthy, safe, but I also want a wide variety of models, a choice of models, and I want there to be an open element to it, so I can go and inspect it and so on, right?
DeepSeek, by nature of having open weights, is what enabled us to go and pressure-test and stress-test all of this, and then figure out how it's performing, where the security implications are, and so on and so forth. The third element in all of this: yes, there are a bunch of metrics that we're talking about, but at the end of the day, outside of the security guardrails, from a performance perspective this is still falling into the trap of benchmark-maxing. Yes, you do need some baseline benchmarks, but at the end of the day, when you are doing this in an enterprise setting, or for whatever use case, you still have to work with how it measures for you. So there has to be that dimension. So yes, I think the safety element, the trustworthy element, absolutely; that gives us a good indication of what should and should not be used in an enterprise setting. But there are these other angles that I think we should look at, in terms of whether it's truly performant or not, and whether it's truly performant at the cost that it's supposed to perform at.

>> Yeah. Everybody has this desperate need; they want to know who's ahead, who's winning. But I guess you're kind of saying that maybe this analysis doesn't ultimately reveal a whole lot about who is ahead, because you think the benchmarks are a little bit artificial as a way of evaluating this. Is that the right way of thinking about what you just said?

>> Yeah.
>> So yes, on the safety angle, there's a clear dimension: if the model is getting hijacked, or falling through some of the security elements, that's a clear signal. But from a true performance and cost-optimization perspective, we can't just take it at face value and say DeepSeek is or isn't performing. You really have to take a model like that and see whether it works for your use case.

>> Because all these benchmarks are open, right? Everybody who trains a model is constantly evaluating against these benchmarks, and that's how you decide when to stop training: once you hit certain levels on the benchmark, or you stop improving. So we don't really have blackbox testing. And in the market, when you come out with a new model, every new model had better beat your last model and the top of the market. So it's interesting to see an independent, closed evaluation, because if you look at the benchmarks NIST ran, things like cyber and coding skills, some of these benchmarks are open, but others are proprietary, and suddenly you see areas the model wasn't optimized for pop out. I think what we're seeing is that DeepSeek probably overfit on the public, common benchmarks and really tried to optimize for those. From a making-a-splash perspective, that totally makes sense.
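The overfitting concern described here can be illustrated with a deliberately simple toy simulation. Everything below is hypothetical: a "model" that has memorized a public multiple-choice benchmark versus a held-out private set it has never seen.

```python
import random

def make_model(memorized):
    """A toy 'model': answers correctly if the question was memorized
    (i.e., the public test set leaked into training), otherwise guesses
    among four choices."""
    def answer(question, correct):
        if question in memorized:
            return correct
        return random.choice(["A", "B", "C", "D"])
    return answer

def score(model, benchmark):
    """Fraction of (question, answer) pairs the model gets right."""
    hits = sum(model(q, a) == a for q, a in benchmark)
    return hits / len(benchmark)

random.seed(0)
public = [(f"pub-{i}", "A") for i in range(200)]    # open benchmark
private = [(f"priv-{i}", "A") for i in range(200)]  # held-out, blackbox

benchmark_maxed = make_model(memorized={q for q, _ in public})
print(score(benchmark_maxed, public))   # perfect on the leaked set
print(score(benchmark_maxed, private))  # near chance on unseen tasks
```

The point is only directional: once test items may have leaked into training, a public score tells you little, which is why closed, blackbox evaluations like the ones discussed here preserve more signal.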
So I think it shows a little more of the depth of the US companies in building more generic models: the technology hasn't been single-mindedly focused on getting onto that leaderboard, but they made it onto the leaderboard because there's deep tech behind it. Now, does that mean a team cannot evaluate its models with new benchmarks? Absolutely they can; this is how you build a model. If you're trying to build a generic reasoning model, you build very large benchmarks and then figure out where your model has weaknesses. The methodology for finding that is the IP of the company: during training you learn where your model is weak and where it's strong, and those internal benchmarks will never be released. I'm sure the DeepSeek people are fully aware of whatever came out in the NIST report; they just won't go to market with it.

>> Yeah. And Aaron, Vulmar is pointing us in the direction I want to push you, which is that the dynamics here are very interesting, particularly if you believe that some of the most valuable evaluations and benchmarks are going to be blackbox, all secret. And the question is whether, in the future, the eval ecosystem becomes more and more opaque, because that's the only way of preserving some genuine signal from these evals. Because I agree with Vulmar.
One thing I took from this CAISI report was that DeepSeek kind of trained to the test, and so maybe it was actually less impressive now that we look at it more closely. But it suggests a lot about how we should do evals.

>> Yeah. On a surface level, this is a story about contradictions, right? DeepSeek V2 was released in June 2024, and they made these big claims. They said, hey, DeepSeek R1 is 96% cheaper than o1, and it performs better, if not equal, on benchmarks such as the MATH-500 test and MMLU, massive multitask language understanding. But if you wrap all that together and then look at what NIST did, you ask yourself: how did this happen? How did NIST all of a sudden come up with something different? I think about it like driving a car. What is a benchmark? A benchmark on a car is top speed, 0 to 60 miles per hour, braking distance. A standards evaluation, on the other hand, is crash safety, emissions, cybersecurity for autonomous systems. And what's happened, I believe, is that NIST focused on standards evaluations, whereas DeepSeek focused on benchmarks. Whenever you start peeling it away, you can see that NIST is looking at things like whether the model withstands adversarial prompts, whether it has political neutrality.
They're looking at those other types of areas, some of which do overlap with the benchmarks at the intersection of a Venn diagram, but they were measured in very different ways. And the way NIST did it is very important, because it rolls into operational cost and the risk of using foreign models. So that story of contradictions comes down to how you measure this and how you want to tell the story. The saying goes: I can have stats tell me whatever I want to hear. I think that's what happened here: you can cherry-pick different stats and measure them in different ways. But ultimately this is great news for the AI community, because it gives us more choice, begins to elucidate what's happening under the covers, and gives us a more independent measure.

>> Yeah, on the cost angle I want to jump in. One of the other pieces in that report was about the advertised per-token cost. We always think about token consumption cost; I was talking to my clients earlier this week, and when thinking about models, per-token consumption cost is always front and center if you're going to build use cases. But the interesting comparative analysis here was: what if you're trying to use two models for the same set of tasks?
Then it's not just about your per-token consumption cost; it's about the per-task cost. Even though your unit cost may be lower, your actual effective unit, the task-completion cost, can be way higher. So there are nuances to be had when we think about the economics of all of this as well.

>> I'm going to bring us on, and I think we'll carry a lot of these themes into the next story, which shifts us away from the world of government evaluations of foreign open-source models to a related business story. A startup by the name of Reflection AI just announced a raise: $2 billion at an $8 billion valuation, which in this day and age seems oddly small in a certain sense. It's run by former DeepMind alumni. What I wanted to flag about the story is their big pitch on this round. They had started with "we're going to do agents" and a bunch of other AI buzzwords; their new pitch is: we are going to be the leading frontier AI open-source company for the United States. Maybe, Vulmar, I'll kick it over to you. I'm really interested in whether the leading open-source player in frontier AI won't simply be one of the existing frontier AI companies. I'm curious whether there's room for this kind of pure open-source upstart in the US market right now, and how you size that up.

>> Yeah. When I read the article, it said: we are going to be open weights, but not open training.
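Ambi's distinction above between per-token and per-task cost can be made concrete with a small back-of-the-envelope calculation. The prices and token counts below are invented purely for illustration; they are not any vendor's actual figures.

```python
def per_task_cost(price_per_million_tokens, tokens_per_task):
    """Effective cost of completing one task, not one token."""
    return price_per_million_tokens * tokens_per_task / 1_000_000

# Hypothetical models: B is 4x cheaper per token but, as a long-chain
# reasoning model, emits 10x the tokens before a task is actually done.
model_a = per_task_cost(price_per_million_tokens=8.0, tokens_per_task=2_000)
model_b = per_task_cost(price_per_million_tokens=2.0, tokens_per_task=20_000)

print(f"A: ${model_a:.3f}/task  B: ${model_b:.3f}/task")
# The nominally 'cheaper' model B costs more per unit of work delivered.
```

The comparison only makes sense if the two models are run against the same set of tasks, which is exactly the framing in the discussion above.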
So it's not that they're releasing the training sets, and from my perspective I'm not 100% sure where the differentiation will come from. They'll have to fight against Meta. And look, there are other companies making frontier models. IBM is making frontier models; everything is in the open. We release where our training sets come from; we're extremely open about anything that goes into the model, including indemnification. So now they come along and say: we'll give you another open-source model. They had better find an angle, because just being open source, I think, is insufficient. Three years ago this would have made a huge splash, but now a lot of the large players are already in the market. They have to find an angle. If you look at Anthropic, they really found the developer community, and Reflection may come with a similar angle: find a specific market where you have a model that's better than everybody else, make it open source, and let people fine-tune it to whatever they have. And then the next question: OpenAI was also called "open," and now they're the most closed company on the planet. So we'll see how that plays out, and the more the merrier: more choices bring pressure. But as you mentioned, the investment size is much smaller, relatively speaking. We may be at the tail end of the ability to create foundation model companies.
So it may just be a tail company. It's a really strong team, they have experience and a proven track record, so I'm sure they can find funding. And if you look at the players, Nvidia's in it, along with a bunch of others. That circular money thing again.

>> Exactly. So they may take the billion dollars they get from Nvidia, and another billion dollars, and give it back to Nvidia.

>> Yeah, back to Nvidia. But overall I think we are probably, from an investment perspective, at the tail end of yet another large-language foundation model company. That's kind of an indicator, because investment has slowed down, there are enough choices on the market, and every VC has put their chips on the table; they cannot invest in competing companies. Except for Nvidia, because they get money from everybody. So we are probably at the tail end, which is good to see, because it means the money can now go somewhere else.
From an investment perspective, if you look solely through a VC lens, I think we're moving on from "do we need another model company?" to "what are the applications?" We got the fiber optics in the ground: that's Nvidia and AMD, and we're building data centers. We got TCP/IP: that's the foundation model companies. Now the next question is who's going to be the Google and the Amazon. That's really where I'm looking: what are the applications that are going to really shift industries around? Right now we're still at the plumbing layer: electricity, GPUs, and so on. Those are all the ingredients to build the businesses that are going to be transformational, but you need that capacity and infrastructure in place.

>> Aaron, do you buy Vulmar's assessment? It's not necessarily a pessimistic view, but the idea that the era is already moving on, and that open source is no longer as cool, or at least as distinctive, as it used to be. I'm curious how you size up the prospects of creating a genuine independent open-source competitor in the space.
>> I mean, Reflection AI definitely has all the ingredients to be a successful player, but success is not guaranteed here. I always think of the line attributed to Winston Churchill: Americans will always do the right thing, once they've exhausted all other possibilities. Reflection AI doesn't have time to exhaust all possibilities to be successful, so they're going to need to be very, very focused to make a difference in this tail-end market, which, yes, I agree is becoming saturated quickly. Some of their strengths: the track record of the founding members, who came from DeepMind, AlphaGo and so on; the funding and valuation seem decent; the investor roster looks good, with Nvidia and Sequoia; and they have a strong talent-recruitment market position. But they have a very ambitious roadmap, and it's all speculation: they haven't released a single model yet. What are they actually going to do? That gives you execution risk. Frontier LLMs are extremely resource-intensive, but also expertise-intensive, and both of those combined is hard; there's a lot of competition. We've already mentioned the openness limitations: only the weights are going to be open. It's not like how we do it, where we provide not just open weights but clearly say what is in our data pile.
They only have a team of about 60 people, so the pros and cons on that pendulum swing both ways. We'll see if this is going to work out, and they're going to need to fail fast if they're going to be successful.

>> Ambi, I want to zoom out a little bit. We've been talking very much about the US market: can a company like Reflection go toe-to-toe with Meta, or even OpenAI's open-source development over time? Against the backdrop of this CAISI report, it's interesting to think about the international competition for open models, and to consider that the peer competitor for Reflection AI is maybe not Meta but actually a smaller lab like DeepSeek. A lot of the natural competitors seem to be these Chinese open-source upstarts. I'm curious how you size up that competition: not the small guys versus the big guys, but small guys against small guys is the interesting comparison I want to get into.

>> I think it's a good branding exercise they've got going: frontier open model for the US. But whether you put it in the frame of the US market or the international market with China, at the end of the day, I sort of agree with what Vulmar and Aaron mentioned, and I sort of don't agree that it's saturated, in the sense that if you're thinking about generic LLMs, then yes, you're reaching the tail end.
But again, it's all speculative at this point; we don't know what they're releasing or what the details are. Maybe they'll pivot into other modalities, go into world models, who knows. Maybe that happens, maybe it doesn't. And even within pure text-based large language models, the space where I don't see saturation is verticalization. If you're thinking about generic LLMs, yes, we're seeing some saturation. If you think about LLMs applied to coding and coding agents, you're starting to see, I wouldn't say full saturation, but fairly good maturity. But extrapolate to enterprise domains and verticalization in the enterprise context, and there's still a wide swath open to be conquered. So it really depends on exactly what they're trying to build and where they're going to go; it's all pure speculation. And whether you put it in the context of the Chinese models or the frontier models in the US, at the end of the day everyone is playing for the global market; no one's going to be restricted to playing in only one market, so you just have to look at the market as a whole.

>> Ambi, what you just said is an interesting one: verticalization. If you look at companies, they have proprietary data sets. What I haven't seen yet is the GlobalFoundries-style business model, where you say: I'm a manufacturer of your company's model. I'm giving you the core technologies.
I help you actually build it, and you build your own model; we've already done all the pre-training. I think that's where these open-source or semi-open-source companies could go; there are new opportunities there. Right now it's "take this model or go home," and the fine-tuning doesn't really work. So I think that whole industry will go through a couple of iterations until companies can actually build their own proprietary models at a reasonable cost. That industry isn't even created yet, so I think there's still...

>> No, it's an open swath there.

>> Yeah, that would be super interesting. You end up with the Foxconn of models, basically.

>> We just have this huge compute cluster; everybody can be a frontier AI company now. That's a play I haven't really heard about. It's not only the hardware: it's also, okay, I give you 90% of the training set and I give you the training methodology, I'm just the assembler, and you pay me $5 million.

>> Mhm. Yeah, that's right.

>> Very cool.

>> All right, I'm going to move us to a final, kind of fun story. It's a little bit of a throwaway, but the joke I have is that every few months there's a headline: can you believe they automated and fired all the people in this particular job? And that story has finally, apparently, come to VC. There's a story out of Business Insider about a VC collective called David A's Venture Collective, or DVC.
It's a relatively small fund given the numbers we've been talking about today: $75 million. And what they've done, in terms of promoting the fund, is to say: we have fired all the analysts and replaced them with agents. This is maybe a somewhat more interesting story than it looks at first glance, because the model they're really running is: we're going to have a lot of LPs, various people at various companies, and we want to use them to help us source deals for the fund, while a lot of the sourcing, diligence, and analytics, the analyst work, essentially, is done by AI. So Aaron, I'll kick it over to you. I was interested in this idea that what AI does is not necessarily replace, well, it certainly replaces the jobs, but what it's really doing is shifting the labor. Typically LPs would never be the ones sourcing, but the idea is that with technology you might lower the cost enough that they become the ones who source. It's almost like a fancy version of how you now have to check out all the groceries yourself, just on the finance side. So I was curious, Aaron, what you thought about this business model. Do you think it's viable? Do you think it's mostly marketing? How do you think about it?

>> Yeah, just to bring the story back down to earth: whenever we say DVC has eliminated all of their analysts, it was five people. Five people, right? And they use AI agents to assist with deal sourcing, portfolio monitoring, due diligence, and so on.
What I think should have happened, instead of job replacement, because AI is not a story of mass job replacement, or even small job replacement; it's about job transformation. Rather than firing anybody, I think we should become AI translators: our value as humans and as workers becomes amplified by these types of tools. We have new roles emerging: ethics engineers, synthetic-data engineers, behavior engineers, auditors. We could even see ourselves becoming agent orchestrators, moderators, even AI psychologists. That said, I think the replacement of people is a misnomer; it's the transformation of people. And there's also this relativistic piece of how each person interprets AI. Some people might think these AI agents are sentient, that they can perceive, feel, and experience, or that they're sapient, that they can think and do deep reasoning. Because I build a lot of these systems, I think they're neither. But a casual user who comes in, does a flavor-of-the-month search, and sees the results might say: wow, it must be sentient or sapient. All of that is important, so that everybody has a fundamental level of understanding of AI and can become a translator.
So I would challenge DVC to maybe change the narrative a bit, and talk about how people are going to be amplified rather than replaced.

>> Yeah, that transformation narrative is an interesting one. Ambi, doesn't all this beg the question of why you even have a VC fund in the end? You have a bunch of LPs who are now going to do all the work of sourcing and diligencing companies, with agents, I suppose, but it sort of begs the question of what this business even is once you've done that.

>> Yeah, you talked about self-checkouts. I don't know about you, but when I go to my local Costco, I go to the self-checkout lane and a cashier still helps me check out; we just do it at a much faster pace. To Aaron's point, it's not a complete replacement; it's the coordination that makes it faster and more effective. So I think there's a little bit of spin going on in this DVC news. Once you separate the wheat from the chaff, yes, that's a very pertinent question to ask: what exactly is the core analysis, the real hypothesis testing and stress testing, that warrants deep expertise versus just rudimentary analysis? It exposes a little of what the in-depth work actually is. But the flip side is that when you think about VC funds, it's not just pure analysis; there are also the relationship angles.
So you have to look at it from the angle of both the hard and the soft elements. You're trying to chip away a little at the hard elements and make them easier, but the core soft elements still exist; you're just leveraging them as much as possible.

>> Vulmar, I'll give you the last word here, if you have any thoughts.

>> Yeah, I was an investor for a year and a half, so I've seen the belly of the beast. The first thing is that the fund is very small: $75 million is almost nothing. The way these funds usually work is you take a bunch of high-net-worth individuals, everybody throws in a million, two million, five million on top, and you build a fund that runs over ten years and then tries to return the money. At such a small scale, everything you do is investing in people.
There's nothing else, because you're effectively pre-product, pre-anything; how much money can they put in, half a million, a million? So everything is effectively relationships, and the biggest problem in the VC world is deal flow. The natural thing to do is find people you can incentivize to bring you deal flow. And if you're not tier A, everybody wants tier A; nobody wants to be invested in by tier B, so you need to say: hey, I have something to offer. What they do is say: I take all these people who chip in money, they're my limited partners, and I give them an exceptional return, but in exchange you give me access to your network. The LPs are probably high-net-worth individuals in the tech industry, so suddenly you get signal, and early signal, because you have someone who is your spokesperson. That's how you get deal flow in. Now, if you look at analysis: if you're putting half a million dollars in, the analysis is basically, can you write code, are you a good human being, and is the idea anywhere near reasonable? If you look at deep analysis, you have a labor-pool analysis problem: are people good or bad, how do you find them, how do you help these companies off the ground? But that is much more like doing market intelligence: is there a market or not, what's the product, and which people should you hire?
And that analysis is something you can really apply AI to, because the other kind of analysis, usually at later stages, is much deeper financial analysis: what's your revenue, what's your forecast, what are your costs. But that happens post-product, when you're already in market. Because this is such an early stage, doing analysis is not really analysis but a crapshoot, so it really comes down to having relationships with people.

>> Well, that's a great note to end on. That's all the time we have for today, and thanks to Ambi, Aaron, and Vulmar for joining us. We'll hopefully have you on again very soon, and thanks to all you listeners. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.

[Music]