Oracle, AMD, OpenAI Strike Massive AI Deals
Key Points
- Oracle announced a massive cloud partnership to install 50,000 AMD AI chips by late 2026, a move echoing earlier OpenAI deals with AMD (≈6 GW of processors) and a potential $300 billion, five‑year agreement with Oracle.
- The surge in AI chip demand is being driven by a rapid expansion of data centers, prompting concerns about inflated hype around AMD and Nvidia products while investors pull back on earlier AI bets.
- Recent AI‑related news highlights industry efforts to curb misuse: Visa launched a framework to differentiate legitimate AI shopping assistants from malicious bots, and Salesforce unveiled AI‑generated voices for its customer‑support agents.
- Oracle and IBM are collaborating on new enterprise AI agents to automate routine tasks such as reviewing intercompany agreements, aiming to free human workers for higher‑value activities.
- Controversial developments include a VC fund that reportedly dismissed all its analysts, the launch of Reflection AI, and Sam Altman’s announcement that ChatGPT will soon allow verified adult users to access erotic content.
Sections
- Untitled Section
- AI Chip Market Interconnectedness - The speaker outlines how Nvidia, AMD, OpenAI, and Oracle are cyclically investing in each other and racing to supply AI chips for expanding data centers, raising questions about the sustainability and real‑economy impact of this tightly linked ecosystem.
- AI's Rapid ROI vs. Internet - The speaker argues that AI is triggering a much larger and faster transformation than the past internet bubble, with substantial investment yielding near‑instant productivity gains because the necessary infrastructure already exists, suggesting that current deals will only become wilder.
- CUDA Dominance and AMD Catch‑Up - The speakers compare Nvidia's mature CUDA ecosystem—including libraries like cuDNN—to AMD's newer, open‑source ROCm stack, noting that while AMD has closed the hardware gap, it still trails in software support as vendors build compatibility layers and OpenAI’s Triton abstracts away CUDA, intensifying the competition.
- Shifting Bottlenecks in AI Hardware - The speakers compare AMD’s CUDA‑compatible GPUs to Nvidia’s broader industry push, noting that while inference dominates demand, future constraints may move from infrastructure to energy consumption.
- AMD's Cost Efficiency Fuels AI Competition - The speaker extols AMD's lower price‑per‑teraflop GPUs and IBM’s ultra‑low‑energy chip, noting how competition with Nvidia spurs innovation, then pivots to a government AI standards report assessing DeepSeek on 19 benchmarks.
- Evaluating Model Performance Beyond Benchmarks - The speakers argue that assessing AI models such as DeepSeek requires looking past open benchmark scores to consider safety, cost‑effectiveness, and real‑world suitability, emphasizing independent testing and continual outperformance of prior and competitor models.
- Benchmark vs Standard Evaluation Conflict - The speaker contrasts DeepSeek V2’s marketing‑driven benchmark claims with NIST’s emphasis on broader, safety‑and‑security‑oriented standard evaluations, highlighting the resulting contradictions between the two assessment approaches.
- Reflection AI's Open-Source Frontier Claim - The speakers examine Reflection AI's $2 billion raise and its ambition to become the U.S. leader in frontier AI open‑source, questioning its differentiation and competitiveness against established firms.
- Execution Risks and Global LLM Competition - The speakers discuss the high resource and expertise demands, limited transparency, and small teams behind frontier open‑source LLMs, while contrasting U.S. players like Reflection AI with international rivals such as Chinese labs like DeepSeek.
- VC Firm Replaces Analysts with AI - A venture capital fund claims it has eliminated human analysts in favor of AI agents, positioning itself as a low‑cost, turnkey provider of compute clusters, training data, and methodology for emerging frontier AI companies.
- Automation, Coordination, and VC Insight - The speaker likens assisted self‑checkout to faster service through human‑technology coordination, then draws a parallel to venture‑capital work, arguing that while quantitative analysis can be streamlined, the relational “soft” elements remain essential.
- Podcast Wrap-Up and Thanks - The host closes the episode by thanking the guests, urging listeners to follow the show on major podcast platforms, and teasing the next episode of “Mixture of Experts.”
Full Transcript
# Oracle, AMD, OpenAI Strike Massive AI Deals
**Source:** [https://www.youtube.com/watch?v=JQ0ZObgOoGQ](https://www.youtube.com/watch?v=JQ0ZObgOoGQ)
**Duration:** 00:49:38

## Sections
- [00:00:00](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=0s) **Untitled Section**
- [00:03:24](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=204s) **AI Chip Market Interconnectedness**
- [00:06:50](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=410s) **AI's Rapid ROI vs. Internet**
- [00:11:10](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=670s) **CUDA Dominance and AMD Catch‑Up**
- [00:14:40](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=880s) **Shifting Bottlenecks in AI Hardware**
- [00:17:43](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=1063s) **AMD's Cost Efficiency Fuels AI Competition**
- [00:23:22](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=1402s) **Evaluating Model Performance Beyond Benchmarks**
- [00:26:34](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=1594s) **Benchmark vs Standard Evaluation Conflict**
- [00:29:58](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=1798s) **Reflection AI's Open-Source Frontier Claim**
- [00:36:30](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=2190s) **Execution Risks and Global LLM Competition**
- [00:40:51](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=2451s) **VC Firm Replaces Analysts with AI**
- [00:45:23](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=2723s) **Automation, Coordination, and VC Insight**
- [00:49:15](https://www.youtube.com/watch?v=JQ0ZObgOoGQ&t=2955s) **Podcast Wrap-Up and Thanks**

## Full Transcript
there is a lot of draw down on the
investments which have been made into
OpenAI, uh, and there may be some inflated
demand generation on the AMD, on the
Nvidia side. Now that's on one side. On
the other side, if you're seeing the
amount of data centers which are getting
built, it actually tracks the amount
of AI chips which are demanded.
>> All that and more on today's Mixture of
Experts.
[Music]
I'm Tim Hwang and welcome to Mixture of
Experts. Each week, MoE brings together
a panel of brilliant, funny, thoughtful
panelists to debate, discuss, and think
through the latest news in artificial
intelligence. Joining us today are three
incredible panelists. So a very warm
welcome to Volkmar Uhlig, who is VP, AI
Infrastructure; Ambi Ganison, who's a
partner for AI and analytics; and then
Aaron Baughman, who is an IBM Fellow and
Master Inventor. There's lots and lots
to talk about today. We're going to talk
about a huge deal between Oracle and
AMD. We're going to talk about a new
evaluation out of the Department of
Commerce's CAISI. We're going to talk
about Reflection AI and a VC fund that
apparently has fired all of its
analysts. But first, Aili with the news.
>> Hey everyone, I'm Aili McConnon, a tech
news writer for IBM Think. I'm here with
a few AI headlines you might have missed
this week. Visa introduced a new
framework to help merchants distinguish
between legitimate AI shopping
assistants and malicious bots. The goal,
cutting down on fraudulent purchases by
AI agents.
Oracle and IBM are teaming up to release
several new enterprise agents that will
automate tasks such as reviewing
intercompany agreements. This means more
time for tasks that actually require
human touch. Is a customer service agent
you're speaking to a human or a bot? At
Dreamforce, Salesforce's banner event,
the tech giant announced it is introducing
AI generated voices to its customer
support agents.
Sam Altman announced on X that come
December, ChatGPT will share erotica
with verified users over 18. What could
possibly go wrong? Want to dive deeper
into some of these topics? Subscribe to
the Think newsletter linked in the show
notes. And now back to the episode.
[Music]
Today I actually want to open with I
think what was one of the big stories of
the week uh which is that Oracle cloud
uh came out and made an announcement
that they will be deploying 50,000 AMD
chips in the second half of 2026. Um and
this itself would be a big deal uh just
in terms of dollar amounts. Um but it
follows on two pretty interesting
bits of news. Um, one of them is OpenAI
earlier in the month announcing uh a
deal with AMD where they were going to
deploy about 6 gigawatts worth of
processors and then separately OpenAI
announcing a deal with Oracle um that
could be a 5-year deal that's worth as
much as $300 billion. Um, and so I think,
Volkmar, with you on the line, I kind of
want to throw the ball to you first. Um,
it kind of feels like, you know, we've
talked about this a lot of late, that sort of
everybody's coming for Nvidia. Um, and
there's this kind of interesting cluster
emerging between OpenAI, AMD, and
Oracle. I guess how threatening is this
ultimately to Nvidia, or is this kind of
just mostly like we're going to have to
wait to see?
>> I think that the market is uh opening
up. I think the competitors are going
after Nvidia. I think that's why Jensen
has this extremely aggressive uh new
chip every year uh strategy. Um I think
the more interesting part of that
announcement is also that OpenAI is
investing into AMD and takes a stake in
AMD. So if you look at this ecosystem
right now there is Nvidia investing into
OpenAI then OpenAI cutting a deal with
Oracle and buying a ton of uh Nvidia
cards and then also investing into AMD.
So I think there's um there's a lot of
money going around um and it goes in
circles which is always um a question
when it starts going in circles like if
it's a real economy or not like
somewhere the money needs to come from
um so I think um there is a lot of draw
down on the investments which have been
made into OpenAI now that's on one
side on the other side if you're seeing
the amount of data centers which are
getting built uh it actually tracks the
amount of AI chips which are demanded
and so I think from an OpenAI
perspective, it probably makes a lot of
sense to bifurcate to not have a single
source and AMD needs to kind of flush
their their first real data center
chips. And so I think if you look at the
overall deal um it is AMD chips with an
investment of Open AI into AMD and then
they need to house it somewhere. And
Oracle is is a company which is putting
AI chips into data centers, you know, by
the truckload. And so that's a kind of
natural thing. And we saw that with the,
you know, almost 40% pop in Oracle
shares when they announced their their
multi-year deal with Nvidia. So I think
that troika is kind of a known troika
and so I'm not surprised that um OpenAI
is is diversifying including taking a
10% stake in AMD.
>> Yeah, absolutely. And Ambi, maybe we'll
turn over to you. I mean, I did see I
don't know if you all saw this like
visualization of where all the money has
flowed in the AI economy and it's just
it is to Volkmar's point like a big
circle. It's like everybody's giving
money to each other. Um, and I guess
Ambi, I'm curious about what your
thoughts on like is that is that signs
of a bubble? I mean, because to Volkmar's
point too, like on a certain level,
everybody's just sending the money back
and forth. But on the other hand, it's
also like these buildouts are consistent
with forecasted demand. So on a certain
level, it's kind of like maybe not so
crazy that we're sort of seeing this
kind of thing happen. So curious about
how you weigh this. I mean, obviously
the underlying question is if
it's a bubble, but I think there's more
interesting things to talk about there.
So curious about your thoughts.
>> Yeah, you know, I saw Immad uh he posted
some things calling this the AI
bubble. But if you look at I saw some
comparison in terms of the investments
that have been made as a percentage of
GDP, right? Um if you compare to the
level of investments that were made in
the dot-com era versus what's happening in
the AI era now, right? It's still, you
know, some ways to go before you even
hit those limits. So I think it's still
early stages and um you know so that's
number one. I think that the
amount of investment that's yes there
are deals that's happening but the
amount of investment that's going on is
still not as big a scale as you would
imagine for the amount of um the benefit
that the economy can um acrew from
something like this in the future.
Right? So I think there is a much much
bigger wider transformation being um
expected out of this. So yes there is
probably a perception that there is a
bubble but I also think like the amount
of investment that's going in here is
probably not commensurate to the level of
unlock that can be garnered over the
longer time horizon. Right. So
>> yeah you're almost saying like you you
might think these deals are crazy but
just wait they're going to get crazier
essentially.
>> That's one way to put it. Uhhuh.
>> Also, if you look at uh the internet
bubble, right? So, there was a massive
amount of fiber optic investments just
to, you know, put wires into the ground.
So, people actually dug trenches. Now,
we are building data centers. Um, if you
look at the return on investment in in
the internet phase, that was, you know,
investments over 20 years. Right now if
you see the uh since um the introduction
of ChatGPT we see a 43% increase in
productivity and that's within like a
year or two. So I think the internet
bubble the productivity increases came
just much much later than the
productivity increases we see with AI
it's like pretty much instantaneous
because the the infrastructure is
already present like distribution is
there data centers are there people have
computers right all that I mean do you
remember the AOL you know CDs which you
had to buy when the internet happened
right so the all that infrastructure is
present so the adoption is just so much
faster
>> and I feel like it's almost to be
expected that this would happen Right.
And I think we I was actually surprised
that it took this long for AMD to breach
this moat. Um you know I would have
expected this to happen sooner than
later. Right. Um like we we saw that the
the model moat collapsed first, right?
Um the app layer moat was already you
know that was already fairly open. Um so
there's heavy competition going on in
the model and the app layer but then the
infrastructure moat was the one that
held really really steady right and I
think we are seeing cracks open in that
moat. You know, I think it's it's
inevitable. This is just pure capitalism
at play, right? This is always expected
to happen and, you know, it's just
surprised that it took uh this long
and you're just going to see a lot more
of this at the infrastructure level
happen, you know, going forward.
>> Yeah, for sure. And Aaron, maybe I want
to bring you in because I think on the
question of the moat, right, the thing
that people always bring up in the
Nvidia AMD competition is well well
CUDA, right? is like that's that's
really that's that's going to be the
thing that is kind of like ultimately
Nvidia's great defense here. Do we think
that even that part of the moat is
getting chipped away in some sense,
right? Like because I think part of it
is like who's going to pour money where,
but also part of it is just like kind of
the optimization at this at this level.
And so curious if you have forecasts
there how you think about that.
>> Well, so we followed the money so far,
but I say let's follow the energy,
right? Because um Oracle, they're going
to deploy 50,000 AMD, you know, MI450s.
That's a lot. That's going to be So, so
I did the math beforehand, but that's
about, you know, 50 million kilowatt
hours right now. I drive a Tesla and for
that amount of power, I could drive 336
million miles per month on that single
charge, right? Do we really need to
build all these additional in
infrastructure places where we could
share an infrastructure? Could this
energy be used elsewhere? It could power
a small city, right? it cost 5 million a
month just for electricity bill alone
right so I mean I you know I think
rather than follow the money let's
follow the power and the energy because
that in turn is where the money is going
right um but um but I mean anyways to to
answer the question about CUDA right um
so I think that um OpenAI right uh
because they're also deploying these
AMD you know MI450 chips right the
hardware itself they're very comparable
right So, so we can compare like these
uh uh Instinct chips from uh from AMD to
A100s to H100s, you know, to to Nvidia,
right? And there's close to par. I mean,
there are some
different technologies that are done,
you know, with like the 2-nanometer
technology that's used
to create the chips. But on the but on
the flip side, what you mentioned is
CUDA versus what's called um ROCm,
right? So CUDA, right? That's Nvidia's
proprietary parallel computing platform.
You know, that's above and beyond, you
know, years ahead of I think what AMD
can provide with uh you know, ROCm,
right? Which which is AMD's GPU
computing, right? Now it is
open source. The adoption is smaller and
it's less mature than a CUDA. Uh but on
the other side you know I am impressed
that AMD is working a lot with for
example OpenAI right to to make it
better right uh but but but again going
back to Nvidia right they they have
other libraries like, you know, cuDNN
right where it can help you performance
tuning right um deep learning algorithms
you know which are the fundamental
building blocks of gen AI right um so you
know so so I do think AMD has some
catch-up to do in that area even though
they've caught up on the hardware side.
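Aaron's earlier back-of-the-envelope energy figures can be roughly reproduced. The per-GPU power draw, EV efficiency, and electricity rate below are assumed round numbers chosen for illustration, not published specs:

```python
# Sanity check of the energy figures quoted in the discussion.
# All constants are illustrative assumptions, not vendor or utility data.

GPU_COUNT = 50_000          # Oracle's announced AMD deployment
KW_PER_GPU = 1.4            # assumed draw per accelerator incl. overhead (kW)
HOURS_PER_MONTH = 720       # 30 days of continuous operation

kwh_per_month = GPU_COUNT * KW_PER_GPU * HOURS_PER_MONTH
# ≈ 50.4 million kWh/month, in line with the "~50 million" figure

TESLA_KWH_PER_MILE = 0.15   # assumed EV efficiency
miles = kwh_per_month / TESLA_KWH_PER_MILE
# ≈ 336 million miles/month of driving on the same energy

USD_PER_KWH = 0.10          # assumed industrial electricity rate
cost = kwh_per_month * USD_PER_KWH
# ≈ $5 million/month for electricity alone

print(f"{kwh_per_month/1e6:.1f}M kWh, {miles/1e6:.0f}M miles, ${cost/1e6:.2f}M")
```

With those three assumed constants, the quoted numbers (~50M kWh, ~336M Tesla miles, ~$5M/month) all fall out of the same multiplication.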
>> So I I want to jump in here. Uh so right
now if you look at uh the the CUDA wall
um effectively all vendors have decided
to build compatibility layers and so you
get um the equivalent of a CUDA
implementation from AMD and Intel is on
the same path. So we are like the world
is standardizing against CUDA. Um and
the that requires a re-implementation of
you know where Nvidia had 10 15 years to
actually build up that wall. Uh and then
on the flip side on top of it you have
uh uh OpenAI really aggressively driving
uh Triton which is a
programming abstraction against GPUs
which is effectively hiding the fact
that there's CUDA under the under the
covers. And so you have effectively
attacks from both sides which will kind
of like you know come out somewhere in
the middle where you know OpenAI I mean
they started Triton to become GPU
independent and everybody else says oh
well I need to support the ecosystem let
me just be CUDA compliant and I mean
ROCm if you look at um at their
compute layer I mean AMD cards have been
in high performance computing forever
right I mean just like if you look at
the the big cray machines they are all
based on AMD GPUs
Um and so those libraries are extremely
hardened. I mean ATI which is AMD now,
right? I mean the whole series of AMD uh
chips comes out of the ATI fold and ATI
and Nvidia were always going
head-to-head and both of them had OpenGL
implementation, OpenCL implementations.
And then at some point AMD said okay we
we are going a different route. We are
making a new stack. We're making everything
open source and we call it ROCm and AMD
went all the way and said okay we even
release the details of the chip the
internals so you get the assembly
language everything right so that's AMD's
strategy and Nvidia said no we do
everything closed source and we actually
have a a hardware abstraction layer
where the card under the covers can be
changed and we are compiling an
intermediate binary to that underlying
card and so they can actually hide some
card peculiarities in that compile layer
so very different strategies
But in the end, you know, if you
go, you know, not even
30,000 ft but if you go like maybe 3,000
ft, an AMD card and Nvidia cards are effectively
the same fundamental
computer architecture right and so um I
don't think that it's so hard because
you know of that uh to see that um that
compatibility layer being built.
>> Yeah,
you're almost saying the kind of like
even though it has been much talked
about as a massive moat it may actually be
in practice like much more shallow than
we think.
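The compatibility-layer idea the panel describes, where one caller-facing API dispatches to whichever vendor stack sits underneath, can be sketched as a simple backend registry. The backend names and the `vector_add` op here are hypothetical stand-ins, not the actual CUDA, ROCm, or Triton APIs:

```python
# Illustrative sketch of a hardware-abstraction/compatibility layer:
# callers target one API; a dispatch table picks the vendor backend.
from typing import Callable

_BACKENDS: dict[str, Callable] = {}

def register_backend(name: str):
    """Decorator that records a vendor implementation under a backend name."""
    def wrap(fn):
        _BACKENDS[name] = fn
        return fn
    return wrap

@register_backend("cuda")
def _cuda_add(a, b):
    # On real hardware this would dispatch to CUDA kernels.
    return [x + y for x, y in zip(a, b)]

@register_backend("rocm")
def _rocm_add(a, b):
    # On real hardware this would dispatch to ROCm/HIP kernels.
    return [x + y for x, y in zip(a, b)]

def vector_add(a, b, backend="cuda"):
    """Caller-facing op: identical results regardless of vendor backend."""
    return _BACKENDS[backend](a, b)

assert vector_add([1, 2], [3, 4], "cuda") == vector_add([1, 2], [3, 4], "rocm")
```

This is the shape of the "attack from both sides" in the discussion: AMD re-implements the established API underneath, while Triton-style abstractions hide which backend runs at all.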
>> We tested AMD cards over a year ago in
CUDA compatibility mode and we can run
large language models. So you don't
support everything. You don't have the
fluid dynamics and all that other stuff
which Nvidia has. And if you look at
in in Jensen's last keynote he's like
hey we have these like 15 industries we
are working in and we can do everything
right but then on the flip side if you
look where the highest demand is it's
run vLLM right run an inferencing engine
and so if 90% is run an inferencing
engine and 10% is all the other stuff I
think that um you know the moat for the
other stuff is not where we have you
know trillions of dollars spent in,
you know, industrial capacity.
>> Yeah. Jensen almost has these kind
of interesting incentives to like try to
like push the other stuff, right? If
those grow really quickly, then he's got
more of a moat than kind of the core use
case. Um, which is pretty
>> and I think in both of what Aaron and
Volkmar said, I feel like okay, now we're
seeing yes, we thought always
infrastructure was the the bottleneck.
But I think, you know, now we may be
reaching a spot as this grows between
AMD versus Nvidia and the moat starts
cracking um energy may start becoming
the bottleneck, right? So that's going
to be an interesting
>> it's just infrastructure.
>> It's all infrastructure to Volkmar.
>> I mean go on like it's it's yes
now we are going one down now. Schneider
is the bottleneck before you know HBM
was the bottleneck. Yeah.
>> Well, I think this will be kind of the
interesting like vertical integration
that you potentially see happening,
right? Which is okay, we've got OpenAI
taking stakes in the chip companies
now, right? It feels like it's just a
matter of time before this blob starts
taking stakes in like energy, right? And
then you kind of just like go down like
you imagine like actually ultimately
this will be sort of vertically
integrated in a certain sense.
>> I I think the interesting part here is
that how fast the semiconductor industry
was actually able to adjust if you think
about it, right? I mean there's a lots
of pricing flexibility in in in
microprocessors and how slow all the
other industries adjust. Like, try to
buy a transformer. We are going
into a world where uh the actual
physical infrastructure which you know
does not go through these extreme growth
phases suddenly is forced to you know I
need like 100x the capacity of of power
>> like tomorrow
>> tomorrow. Exactly. So I think but that
will also lead to a like you know if I
cannot buy data centers I will have a
much higher replacement cycle because I
can get a card in much faster which is
twice as powerful than getting twice the
power in right and so we will see
probably for the for like the the next
couple of years very very aggressive
silicon replacement simply because the
rest of the infrastructure cannot
sustain this. It's cheaper to swap out
the Nvidia card than to build another
power plant.
>> That's right. Yeah. I think that will be
one of the funny legacies of this is
obviously like building power is good
for lots of different things, but like
essentially like the accelerant of AI is
going to basically like force a bunch of
build out that will have all these
really interesting collateral effects
outside of AI. Um, and it's just kind of
this weird like the tail has wagged the
dog in a certain sense in the era that
we're in.
>> Yeah, I do like where where the AMD chip
is going. It's it's more cost-efficient,
right? Because it has a lower price per
teraflop, right? Um, and so you know,
you know, with with whenever you scale
it up, you know, if you do the extreme
testing of 50,000, you know, GPUs,
they're going to be saving millions per
month, which translates more than
likely, you know, lower energy costs,
right? And and then, you know, we we at
IBM were, you know, we have this
TrueNorth chip, right? That's very very low
energy requirements, right? Can run on a
on a mobile device, right? And it runs
out of u, you know, neurolets, right? Um
and so yeah and and so so I think this
you know competition between AMD and
Nvidia is good right right um that that
it increases innovation um such that we
can solve some of these you know big
problems that are here and are going to
be still coming.
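The price-per-teraflop argument above reduces to simple division, and fleet scale is what makes small per-card deltas matter. The prices and throughput figures below are made-up placeholders for illustration, not actual list prices or benchmark results:

```python
# Hypothetical price-per-teraflop comparison; all numbers are
# illustrative assumptions, not real vendor pricing or specs.

def price_per_tflop(price_usd: float, tflops: float) -> float:
    """Dollars of capex per teraflop of peak throughput."""
    return price_usd / tflops

gpu_a = price_per_tflop(25_000, 1_000)  # assumed: $25k card, 1,000 TFLOPS
gpu_b = price_per_tflop(20_000, 1_000)  # assumed: $20k card, same TFLOPS

# At the 50,000-GPU scale discussed, the per-card delta compounds:
fleet_delta = (25_000 - 20_000) * 50_000

print(gpu_a, gpu_b, fleet_delta)
```

Even a modest assumed gap per card becomes hundreds of millions of dollars across a deployment of this size, before any energy savings are counted.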
>> All right I'm going to move us on to our
next topic though this is an amazing
discussion kind of getting into the guts
of this story. Um, next bit I wanted to
kind of cover was this interesting
report that came out of the Center for
AI Standards and Innovation. Uh, which
is a unit within the Department of
Commerce that is kind of like the
government's eval shop for AI models. Um
and this was sort of one of the first
kind of real sort of public reports that
they have done where they basically said
look, we, the government, the Department of
Commerce, took a look at DeepSeek and we
evaluated DeepSeek against 19 benchmarks,
and their conclusion is, well, in
comparison to US models, DeepSeek lags
in performance, it lags in cost, it lags
in security, and it also lags in
adoption. Um, and so in some ways it was
kind of like the don't worry about
DeepSeek uh was the headline
in some ways. Um and I guess I mean
maybe I'll throw it to you. Um thoughts
on this? Like I guess when DeepSeek
first hit I think we had a lot of
discussion on that like oh man this is
going to force everybody in the industry
to have to adapt. How do you compete with
free you know huge danger. Um, and I
guess the kind of question for you is
looking at this analysis, maybe DeepSeek
is not that big of a threat to American
AI businesses as we thought. Um, is that
the right way of thinking about it?
>> Yeah, I think there's a couple of
dimensions to this, right? So, I think
we've always stressed that, hey, when
you're building models, yes, you know,
you should have the right level of
guardrails, you should have the right
level of security, especially in
enterprise settings, right? I I'll talk
about you know when I'm um talking to my
clients and we we always talk about
models then you know in an enterprise
setting those things become really
really paramount. So yeah, it's it's um
I think it's it's interesting to see
that you know um amongst the American
models versus the Chinese models, right?
You know, I think it becomes obvious um
outside of all political connotations, you
know, the market always chooses for
okay, for my enterprise requirements,
I'm going to go with models that meet
the appropriate security guardrails,
the appropriate uh requirements and so
on, right? Um the other angle that I saw
from all of this is you know there is I
think this this opened up so DeepSeek
yes you know it it opened up um a lot of
mind share to open models and open
source models right um embedded in this
report was also a statistic which you
know we may not have captured which is
there's like an increase of like a
thousand percent in downloads of um you know
DeepSeek models.
And consumer behavior also drives enterprise behavior, because consumers ultimately sit in enterprises as well. So from that perspective, I see mind share opening up to open-source and open-weight models, not just proprietary models. It's an interesting dynamic: yes, I want models that are reliable, trustworthy, and safe, but I also want a wide variety of models, a choice of models, and an open element so I can go and inspect them. DeepSeek having open weights is what enabled us to go pressure test and stress test all of this, and figure out how it's performing, where the security implications are, and so on. The third element is that, outside of the security guardrails, from a performance perspective this still falls into the trap of benchmark maxing. Yes, you need some baseline benchmarks, but at the end of the day, when you're deploying this in an enterprise setting, or for whatever use case, you still have to ask how it measures for you. So the safety element and the trustworthiness element absolutely give us a good signal about what should and shouldn't be used in an enterprise setting, but there are these other angles we should look at: whether it's truly performant, and whether it's truly performant at the cost it's supposed to perform at.
>> Yeah. I mean, everybody has this desperate need to know who's ahead, who's winning. But I guess you're kind of saying that maybe this analysis doesn't ultimately reveal a whole lot about who's ahead, because you think the benchmarks are a little artificial as a way of evaluating this. Is that the right way of thinking about what you just said?
>> Yeah. So yes, on the safety angle, there's a clear dimension: if you're getting hijacked, or you're falling through some of the security elements, that's a clear signal. But from a true performance and cost optimization perspective, we can't just take it at face value and say, yes, DeepSeek is performing or not performing. You really have to take a model like that and see whether it works for your use case or not.
>> Because all these benchmarks are open, right? Everybody who trains a model is constantly evaluating against these benchmarks, and that's how you stop your training: once you hit certain levels on the benchmark, or you stop improving. So we don't really have black-box testing. And the thing is, in the market, when you come out with a new model, every new model had better beat your last model, and had better beat the top of the market. So it's interesting to see an independent, essentially closed-source evaluation. If you look at the benchmarks NIST ran, like cyber and coding skills, some of these benchmarks are open, but others are proprietary, and suddenly you see things pop out that the model wasn't optimized for. So I think what we're seeing is that DeepSeek probably overfit on the public, common benchmarks and really tried to optimize for that. From a making-a-splash perspective, that totally makes sense. And I think it shows a little more of the depth of the US companies in building more generic models: the technology hasn't necessarily been single-mindedly focused on getting onto that leaderboard; they made it onto the leaderboard because there's deep tech behind it. Now, does that mean a team with new benchmarks cannot evaluate their own models? Absolutely not; that's how you build a model. If you're trying to build a generic reasoning model, you build very large benchmarks and try to figure out where your model has weaknesses. And the methodology for finding that is the IP of the company: when you're training, you need to know where your model is weak and where it's strong, and those internal benchmarks they will not release. I'm sure the DeepSeek people are fully aware of whatever came out in the NIST report. They just will not go to market with that.
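The dynamic described here, picking checkpoints (or stopping training) based on public benchmark scores, can be sketched with a toy example. All of the scores and step numbers below are invented for illustration; the point is only that the checkpoint that maximizes a public score need not be the one that does best on a held-out, private evaluation:

```python
# Toy sketch of benchmark overfitting during checkpoint selection.
# Each tuple is (training_step, public_benchmark_score, private_benchmark_score);
# all numbers are made up for illustration.
checkpoints = [
    (1000, 0.62, 0.60),
    (2000, 0.71, 0.66),
    (3000, 0.78, 0.67),  # public score keeps climbing...
    (4000, 0.83, 0.65),  # ...while held-out performance has stalled
]

# A team selecting on the public benchmark picks the last checkpoint;
# a private, black-box evaluation would pick an earlier one.
best_public = max(checkpoints, key=lambda c: c[1])
best_private = max(checkpoints, key=lambda c: c[2])

print(f"picked by public score:  step {best_public[0]}, private={best_public[2]:.2f}")
print(f"picked by private score: step {best_private[0]}, private={best_private[2]:.2f}")

# The public-vs-private gap at the publicly chosen checkpoint is one
# crude proxy for how much the model was tuned to the open benchmarks.
overfit_gap = best_public[1] - best_public[2]
print(f"public-vs-private gap: {overfit_gap:.2f}")
```

This is why an independent evaluator with proprietary benchmarks, as the speakers note NIST effectively was, can surface results the public leaderboards never show.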
>> Yeah. And Aaron, I think Volkmar is actually pointing us in the direction I want to push you in, which is that the dynamics here are very interesting, particularly if you believe that some of the most valuable evaluations and benchmarks are all going to be black-box, all going to be secret. And I guess the question is whether, in the future, you think the eval ecosystem is going to become more and more opaque, because that's kind of the only way of preserving some genuine signal from these evals. Because I agree with Volkmar: one thing I took from this CAISI report was that DeepSeek kind of trained to the test, and so maybe it was actually less impressive now that we look at it a little more closely. But it suggests a lot about how we should do evals.
>> Yeah. I mean, on a surface level, this is a story about contradictions, right? DeepSeek-V2 was released in 2024, and then they made these big claims. They said, hey, DeepSeek R1 is 96% cheaper than o1, and it performs better than, if not equal to, it on benchmarks such as the MATH-500 test and MMLU, the Massive Multitask Language Understanding benchmark. But if you wrap all that together and then look at what NIST did, you ask yourself: how did this happen? How did NIST all of a sudden come up with something different? I think about it like driving a car. What is a benchmark on a car? Top speed, 0 to 60 miles per hour, braking distance. Now, on the other hand, what is a standards evaluation? Crash safety, emissions, cybersecurity for autonomous systems. What's happened, I believe, is that NIST focused on standards evaluations, whereas DeepSeek focused on benchmarks. Because when you start peeling it away, you can see that the NIST organization is looking at things like resistance to adversarial prompts and political neutrality, those other types of areas. Some of the benchmarks do overlap, like the intersection in a Venn diagram, but they were measured in very different ways. And I think the way NIST did it is very important, because this rolls into the operational cost and the risk of using foreign models. So that story of contradictions really comes down to how you measure this, and how you want to tell the story. The saying goes, I can have stats tell me what I want to hear, right? And I think that's also what happened here: I can cherry-pick different stats and measure them in different ways. But ultimately I think this is great news for the AI community, giving us more choice, really beginning to elucidate what's happening under the covers, and giving us a more independent measure.
>> Yeah, on the cost angle, I want to jump in. One of the other pieces in that report was about the advertised per-token cost. We always think about token consumption cost; I was talking to my clients earlier this week, and when thinking about models, that's always front and center: what's my per-token consumption cost if I'm going to build use cases? But I think the interesting comparative analysis here was that if you're trying to use two models for the same set of tasks, it's not just about your per-token consumption cost, it's about the per-task cost. Even though your unit cost may be lower, your actual effective unit, which is the task completion cost, can be way higher. So there are nuances like that to work through when we think about the economics of all of this as well.
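The per-token versus per-task distinction can be made concrete with some back-of-the-envelope arithmetic. The prices and token counts below are hypothetical, not figures from the report; the point is that a model with cheaper tokens can still cost more per completed task if it consumes more tokens per task (long reasoning chains, retries, and so on):

```python
# Illustrative sketch: per-token price vs effective per-task cost.
# All prices and token counts are invented for illustration.

def cost_per_task(price_per_million_tokens: float, tokens_per_task: float) -> float:
    """Effective cost of completing one task."""
    return price_per_million_tokens * tokens_per_task / 1_000_000

# Hypothetical model A: pricier tokens, efficient completions.
a = cost_per_task(price_per_million_tokens=10.0, tokens_per_task=20_000)
# Hypothetical model B: 70% cheaper tokens, but 5x the tokens per task.
b = cost_per_task(price_per_million_tokens=3.0, tokens_per_task=100_000)

print(f"model A: ${a:.2f}/task")  # higher unit price, lower task cost
print(f"model B: ${b:.2f}/task")  # cheaper tokens, costlier tasks
```

Under these assumed numbers, model A completes a task for $0.20 against model B's $0.30, despite charging more than three times as much per token.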
>> I'm going to move us on, and I think we'll actually carry a lot of these themes into the next story. It shifts us away from the world of government evaluations of foreign open-source models to, I think, a related business story. A startup by the name of Reflection AI just announced a raise: $2 billion at an $8 billion valuation, which in this day and age seems oddly small in a certain sense. It's run by former DeepMind alumni. What I wanted to flag about the story, though, is their big pitch on this round. They had started as, we're going to do agents and a bunch of other AI buzzwords. Their new pitch is: we are going to be the leading frontier open-source AI company for the United States. And Volkmar, I'll kick it over to you. I'm really interested in whether the leading open-source player in frontier AI won't simply be the existing frontier AI companies. I'm really curious whether there's room for this kind of pure open-source upstart in the US market right now, and how you size that up.
>> Yeah. When I read the article, it said: we are going to be open weights, but not open training. So it's not that they're releasing the training sets, and from my perspective, I'm not 100% sure where the differentiation will come from. They will have to fight against Meta, and look, there are other companies making frontier models. IBM is making frontier models, and everything is in the open: we release where our training sets come from, and we're extremely open about anything that goes into the model, including indemnification. So now they're coming and saying, well, we give you another open-source model. They had better find an angle, because just being open source, I think, is insufficient. Three years ago this would have made a huge splash, but now a lot of the large players are already in the market, so they have to find an angle. If you look at Anthropic, they really found the developer community, and Reflection may come with a similar angle: we find a specific market where we have a model that's better than everybody else's, we make it open source, and you can fine-tune it to whatever you have. And then the next question is, well, OpenAI was also called "open" AI, and now they're the most closed company on the planet. So we'll see how that plays out, and the more the merrier: more choices bring pressure. But as you just mentioned, the investment size is much smaller, relatively speaking, so we may be at the tail end of the ability to create foundation model companies. It may just be a tail company. It's a really strong team, they have experience and a proven track record, so I'm sure they can find funding. And if you look at the players, Nvidia is in it, along with a bunch of others. That circular money thing again.
>> Exactly. So they may just take the billion dollars they get from Nvidia and another billion dollars and give it back to Nvidia.
>> Yeah, right back to Nvidia.
>> But overall, from an investment perspective, I think we are probably at the tail end of yet another large language foundation model company. That's kind of an indicator, because it has slowed down, there are enough choices on the market, and every VC has put their chips on the table; they can't invest in competing companies. So we're probably at the tail end, except for Nvidia, because they get money from everybody.
>> Because Nvidia...
>> Yeah. So we're probably at the tail end, which is good to see, right? That means the money can now go somewhere else. From an investment perspective, if you look solely through a VC lens, I think we're moving on from "do we need another model company?" to "what are the applications?" We got the fiber optics in the ground, so that's done; that's Nvidia and AMD, and we're building data centers. Then we got TCP/IP; that's the foundation model guys. And now the next question is: who's going to be the Google and the Amazon? That's really where I'm looking: what are the applications that are going to really shift industries around? Right now we're still at the plumbing layer. We're looking at electricity, we're looking at GPUs, and so on. Those are all the ingredients to actually build the businesses that are going to be transformational, but you need that capacity and that infrastructure in place.
>> Aaron, do you buy Volkmar's assessment? It's not necessarily a pessimistic view, but it's the idea that the era is already moving on, and that in some sense open source is no longer as cool, or at the very least as distinctive, as it used to be. I'm curious how you size up the prospects of creating a genuine, independent open-source competitor in the space.
>> I mean, Reflection AI definitely has all the ingredients to be a successful player, but success is not guaranteed here. I always think of the line attributed to Winston Churchill, that Americans will always do the right thing once they've exhausted all other possibilities. Reflection AI doesn't have time to exhaust all the possibilities here, so they're going to need to be very, very focused to make a difference. And on this tail end, yes, I agree this is a tail-end market that's becoming saturated quickly. But some of their strengths are the track record of the founding members: they're known, they came from DeepMind, AlphaGo and so on. The funding and valuation seem somewhat decent, and the investor roster looks good: they have Nvidia, Sequoia. There's talent recruitment and market position. But they have a very ambitious roadmap, and it's all speculation; they haven't released a single model yet. What are they actually going to do? That gives you execution risk. Frontier LLMs are extremely resource intensive, but also expertise intensive, and both of those combined is hard. There's a lot of competition. We've already mentioned the openness limitations: only the weights are going to be open. It's not like how we provide not just open weights but clearly say what's in our data pile. And they only have a team of about 60 people. So the pros and cons on that pendulum swing up and down. We'll see if this is going to work out, and they're going to need to fail fast if they're going to be successful.
>> Ambi, I want to zoom out a little bit. We've been talking very much about the US market: can a company like Reflection go toe-to-toe with Meta, or even, I don't know, OpenAI's open-source development over time? Against the backdrop of this CAISI report, it's interesting to think about the international competition in open source, and about how the peer competitor for Reflection AI is maybe not Meta; it's actually maybe a smaller lab like DeepSeek. A lot of the natural competitors seem to be these Chinese open-source upstarts. So I'm curious how you size up that competition: not the small guys versus the big guys, but small guys against small guys is the interesting comparison I want to get into.
>> I think it's a good branding exercise they've got going there: frontier open model, and so on. But whether you put it in the frame of the US market or the international market with China, at the end of the day, I sort of agree with what Volkmar and Aaron said, and I sort of don't agree that it's saturated. If you're thinking about generic LLMs, then yes, you're sort of reaching the tail end. But again, it's all speculative at this point; we don't know what they're releasing or what the details are. Maybe they'll pivot into other modalities, go into world models, who knows. Maybe that happens, maybe it doesn't. And even within pure text-based large language models, the space where I don't see saturation is verticalization. For generic LLMs, yes, we're seeing some saturation. For LLMs applied in the context of coding and coding agents, I think you're starting to see, I wouldn't say full saturation, but fairly good maturity. But extrapolate to enterprise domains and verticalization in the enterprise context, and there's still a fairly wide swath that's open to be conquered. So it really depends on exactly what they're trying to build and where they're going to go; it's all pure speculation. And whether you put it in the context of the Chinese models or the frontier models in the US, at the end of the day everyone's playing for the global market. No one's going to be restricted to, hey, I'm only going to play in this market. So you just have to look at the market as a whole.
>> Ambi, what you just said is an interesting one. On verticalization: if you look at companies, they have proprietary data sets. What I haven't seen yet is a foundry-like business model, where you say: okay, I'm a manufacturer of your company's model. I give you the core technologies, I help you actually build it, and you build your own model, and we already did all the pre-training. I think that's where these open-source or semi-open-source companies could go; there are new opportunities there. Right now it's "take this model or go home," and the fine-tuning kind of doesn't really work. So I think that whole industry will go through a couple of iterations until companies can actually build their own proprietary models at a reasonable cost, and I don't think that industry has even been created yet. So I think there's still...
>> No, it's an open swath there.
>> Yeah. That would be super interesting. You end up with kind of like the Foxconn of models, basically.
>> We just have this huge compute cluster; everybody can be a frontier AI company now. That's a play I haven't really heard about. It's not only the hardware, right? It's also: okay, I give you 90% of the training set and I give you the training methodology, and I'm just the assembler, and you pay me, say, $5 million.
>> Mhm. Yeah, that's right.
>> Very cool.
>> All right, I'm going to move us to a final, kind of fun story. It's a little bit of a throwaway, but the joke I have is that every few months there's a headline: can you believe they automated and fired all these people from this particular job? And that story has finally, apparently, come to VC. There's a story out of Business Insider about a VC collective called Davidovs Venture Collective, or DVC. It's a relatively small fund given the numbers we've been talking about today: $75 million. And what they've done in terms of promotion for their fund is to say: we have fired all the analysts and replaced them with agents. This is maybe a somewhat more interesting story than it looks at first glance, because the model they're really running is: look, we're going to have a lot of LPs who are various people at various companies, and we want to use them to help us source deals for the fund, but a lot of the sourcing, diligence, and analytics, essentially the analyst work, is going to be done by AI. So Aaron, I'll kick it over to you. I was interested in this idea that what AI does is not just replace jobs, though it certainly replaces these jobs, but shift the labor. Typically LPs would never be the ones sourcing, but the idea is that with technology you might lower the cost enough that they become the ones who source. It's almost like a fancy version of how you now have to check out all the groceries yourself, just on the finance side. So I was curious, Aaron, what you thought about this business model. Do you think it's viable? Do you think it's mostly marketing? How do you think about it?
>> Yeah. Just to bring the story back down to earth: when we say DVC has eliminated all of their analysts, it was five people. Five people, right? And they use AI agents to assist with deal sourcing, portfolio monitoring, due diligence, and so on. What I think should have happened, instead of job replacement, is this: AI is not a story of mass job replacement, or even small job replacement; it's a story of job transformation. Rather than firing anybody, we should become AI translators. Our value as humans and as workers becomes amplified by these types of tools. We have new roles emerging: ethics engineers, synthetic data engineers, behavior engineers, auditors, those types. We could even see ourselves becoming agent orchestrators, moderators, even AI psychologists. That said, I think "the replacement of people" is a misnomer; it's the transformation of people. And there's also this relativistic piece of how each person interprets AI. Some people might think these AI agents are sentient, that they can perceive, feel, and experience, or that they're sapient, that they can think and have deep reasoning. Because I build a lot of these systems, I think they're neither. But somebody who just comes in as a user, does a flavor-of-the-month search, and sees the results might say: wow, it must be sentient, or sapient. All of that matters, so that everybody has a fundamental level of understanding of AI and can become a translator. So I would challenge DVC to maybe change the narrative a bit: let's talk about how people are going to be amplified rather than replaced.
>> Yeah, I mean, that transformation narrative is an interesting one. Ambi, doesn't all this beg the question of why you even have a VC fund in the end? You have a bunch of LPs who are now going to do all the work of sourcing and diligencing companies, with agents, I suppose, but it sort of begs the question of what this business even is once you've done that.
>> Yeah, you talked about self-checkouts, right? I don't know about you guys, but when I go to my local Costco, I go to the self-checkout lane, and I still have a cashier help me check out; we just do it at a much faster pace. To Aaron's point, it's not a complete replacement; the coordination is what makes it faster and more effective. So I think there's a little bit of spin going on in this DVC story. Once you separate the wheat from the chaff, yes, I think that's a very pertinent question to ask: what exactly is the core analysis, the real hypothesis testing and stress testing, that warrants deep expertise, versus just rudimentary analysis? It also exposes a little of what the in-depth work actually is. But the flip side is that when you think about VC funds, it's not just pure analysis; there's also the relationship angle. So you have to look at it from the angle of both the hard and the soft elements. You're trying to chip away a little at the hard elements and make them easier, but the core soft elements still exist, and you're just leveraging them as much as possible.
>> Volkmar, I'll give you the last word here, if you have any thoughts.
>> Yeah. I mean, I was an investor for a year and a half, so I've seen the belly of the beast. The first thing is that the fund is very small: $75 million is almost nothing. The way these funds usually work is that you take a bunch of high-net-worth individuals, everybody throws in a million, two million, five million at the top, you build a fund that runs over ten years, and then you try to return the money. At such a small scale, everything you do is investing in people, because the companies are effectively pre-product, pre-anything. How much money can they put in, half a million, a million? So everything is effectively relationships. The biggest problem in the VC world is deal flow, so the natural thing you do is find people you can incentivize to bring you deal flow. And if you're not tier A, everybody wants a tier A; nobody wants to be invested in by tier B. So you need to say, hey, I have something to offer. What they're doing is saying: okay, I take all these people who chip in money, they're my limited partners, and I give them an exceptional return, but in exchange you give me access to your network. The LPs are probably high-net-worth individuals in the tech industry, so suddenly you get signal, and you get early signal, because you have someone who is your spokesperson. That's how you get deal flow in. Now, if you look at the analysis: when you're putting in half a million dollars, the analysis is, can you write code, are you a good human being, and is the idea anywhere near reasonable. That's pretty much it. If you look at deeper analysis, you have a labor-pool analysis problem: are these people good or bad, how do you find them, how do you help these companies off the ground? That's much more like doing market intelligence: is there a market or not, what's the product, which people should you hire? And that kind of analysis is something you really can apply AI to. The other kind of analysis, usually at later stages, is much deeper financial analysis: what's your revenue, what's your forecast, what's your cost, and so on. But that happens post-product, when you're already in market. Because this is such an early stage, doing analysis is not really analysis; it's a crapshoot. So it's really about having relationships with people.
>> Well, that's a great note to end on. That's all the time we have for today. Thanks to Ambi, Aaron, and Volkmar for joining us; we'll hopefully have you on again very soon. And thanks to all you listeners. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.