AI Week: Platform Consolidation, Claude Skills, Cancer Breakthrough
Key Points
- The AI industry is consolidating around a few dominant labs (Anthropic, OpenAI, Microsoft, Google) that are racing to own the full “agent layer,” threatening middleware firms with commoditization as platforms embed these capabilities natively.
- Simpler, language‑driven workflows outperform heavyweight scaffolding; natural‑language iteration and minimal‑overhead approaches consistently deliver stronger results than elaborate prompt‑engineering or RAG pipelines.
- Vertical integration is becoming a competitive priority, with companies building end‑to‑end compute stacks and investing heavily in custom silicon to secure supply‑chain sovereignty for AI workloads.
- A paradigm shift from keyword search to conversational “discovery” is reshaping commerce and marketing, as users move toward intent‑driven interactions rather than explicit queries.
- AI is emerging as a scientific discovery engine, exemplified by Anthropic’s Claude Skills platform for reusable agent workflows and Google’s cancer‑focused models that generate and experimentally validate novel drug hypotheses.
Sections
- Key AI Strategic Trends This Week - The speaker outlines five pivotal AI developments—platform consolidation, the superiority of simple workflows, vertical integration with custom silicon, the shift from search to conversational discovery, and AI’s emergence as a scientific discovery engine—before diving into recent stories and actionable takeaways.
- OpenAI Broadcom Deal, Walmart Integration - OpenAI announced a multi‑year partnership with Broadcom for custom AI chips to meet soaring compute demand while also launching a chat‑based instant checkout with Walmart, enabling seamless product recommendations and purchases directly within the conversational interface.
- Advocating Accessible AI Agent Building - The speaker warns that command‑line tools may restrict AI model education to developers, then recommends Peter Steinberger’s “Just Talk to It” for its compelling case that simple, iterative agent design can replace complex infrastructure, while acknowledging that larger, back‑office agent systems may still require more robust frameworks.
Source: https://www.youtube.com/watch?v=jv8sJHVmySg
Duration: 00:10:01
Timestamps: 00:00:00 Key AI Strategic Trends This Week; 00:03:43 OpenAI Broadcom Deal, Walmart Integration; 00:08:15 Advocating Accessible AI Agent Building
Full Transcript
I spent more than 20 hours following AI
stories this week and this is what you
need to know. We're going to go into
strategic principles first. We're going
to get to those stories second and then
I'm going to give you takeaways third.
So, the strategic principles that came out this week. First, the platform consolidation thesis is intact. The major AI labs (Anthropic, OpenAI, Microsoft, Google) are racing to own the complete agent layer directly. Middleware and thin-wrapper companies face commoditization risk as platforms embed agent capabilities natively. Second, simplicity continues to beat infrastructure. The most effective AI workflows avoid elaborate scaffolding; natural-language iteration outperforms complicated prompt engineering and RAG systems, and minimal-overhead approaches take you really, really far. Third, vertical integration is a wave. Companies are controlling full compute stacks and continuing to invest aggressively in supply sovereignty; custom silicon is becoming the way to go with AI. Fourth, discovery versus search is a big paradigm shift, and we're seeing beat after beat on it week over week. Commerce and workflows are moving from explicit search queries to conversational intent discovery, with massive implications for marketers. Finally, AI is a scientific discovery engine. We're moving past hypothetical talk about AI-driven analysis to cancer models that demonstrate novel, externally and experimentally validated scientific insights. Let's get into the stories
that delivered those beats. Story number
one, Anthropic launched Claude Skills. It's a reusable agent-customization platform that packages instructions, scripts, and resources together. I did a whole video on it; I think it's one of the biggest releases of the year. It takes manual orchestration of context and prompts out of the equation, so the model can compose what it needs by assembling context on the fly. Simon Willison, who I read often and really respect, called this a bigger deal than the Model Context Protocol (MCP), which we all know is all over the ecosystem. I agree. I think it's a huge release.
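For context on what a skill actually is: per Anthropic's announcement, a skill is just a folder containing a SKILL.md file (metadata plus instructions) alongside optional scripts and resources that Claude loads on demand. A minimal sketch, with the folder name and body content purely illustrative:

```markdown
<!-- Illustrative layout:
     brand-report/
       SKILL.md
       scripts/build_chart.py
       resources/style_guide.pdf -->

<!-- SKILL.md -->
---
name: brand-report
description: Generates weekly brand reports in our house style. Use when asked for a brand or campaign report.
---
When asked for a brand report:
1. Read resources/style_guide.pdf for tone and layout.
2. Run scripts/build_chart.py on the metrics the user provides.
3. Assemble the report following the style guide.
```

The model reads only the short description up front and pulls in the full instructions and files when the task matches, which is what "assembling the context on the fly" means in practice.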
Watch for how enterprises handle permissions with this. Watch for OpenAI's and Google's competitive responses. There will be one. Watch for whether skills become a standard abstraction for agentic workflows. My bet is yes. Story
number two, Google cancer AI breakthroughs. Two models demonstrated computational scientific discovery this week. DeepSomatic demonstrated competency in analyzing cancer sequences; it works across all major DNA sequencing platforms and is specifically good at analyzing mutations in cancers. Cell2Sentence-Scale is a 27-billion-parameter cell model designed to generate novel drug hypotheses, and it successfully generated a specific drug hypothesis, validated in vitro in a petri dish, showing that it could turn cold tumors hot, that is, make them visible to the immune system. Look, I'm not a scientist, and I'm not going to comment on the scientific impact of each particular hypothesis here. What I am going to say is that we live in a world where AI has gone from "does it even have hypothesis capabilities?" to two models in 48 hours with two novel scientific breakthroughs that are validated externally. We are speeding up, and that is the big takeaway I have on the science side. This is going to get faster and faster from here on out. We are going to see a speed-up in drug pipelines. It is not about this particular drug. It is not about this particular discovery. It is about the wave of AI innovation pushing into medical and drug discovery pipelines.
It's a big deal. Story number three, OpenAI and Broadcom. OpenAI signed a multi-year collaboration for 10 gigawatts of custom AI accelerators. This is OpenAI saying that they cannot just depend on Nvidia, that they are buying as many Nvidia chips as they can get and still don't have enough, and that they are going to have to buy more. They're going to Broadcom because that's the only way they can get their compute demand met. This is not an "I don't want Nvidia" story. It's a story about demand for OpenAI scaling so fast that they need to go to every chip supplier on the planet. And that's why we talk about custom silicon. Story number
four, Walmart and OpenAI. ChatGPT's instant checkout is going to enable full transactional shopping within the chat interface, and Walmart is on board with one-tap checkout via Stripe. This means you can say something like "meal ideas for a family of four" in ChatGPT, get Walmart meal delivery options with specific ingredients, and buy from Walmart inside ChatGPT. This is a situation marketers will watch very closely as we head into the holiday quarter. They're going to watch conversion rates versus Walmart's existing channels. They're going to look at how we handle retailer exclusivity policies.
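To make the conversion-rate comparison concrete, here is a toy sketch of the measurement marketers would run; the channel names and numbers are entirely hypothetical:

```python
def conversion_rate(sessions: int, purchases: int) -> float:
    """Share of sessions that end in a purchase; 0.0 for an empty channel."""
    return purchases / sessions if sessions else 0.0

# Hypothetical traffic for the new chat channel vs. an existing one.
channels = {
    "chatgpt_instant_checkout": {"sessions": 1200, "purchases": 84},
    "walmart_site": {"sessions": 50000, "purchases": 1500},
}

rates = {name: conversion_rate(c["sessions"], c["purchases"])
         for name, c in channels.items()}
# With these made-up numbers: 7% for the chat channel vs. 3% on site.
```

The interesting question is not the arithmetic but what counts as a "session" in a conversational channel, which is exactly the behavioral-metrics debate the speaker raises next.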
What is Amazon's response? How do you handle privacy concerns? How do you measure intent? What are the key behavioral metrics? This is new territory for marketers. We have an entire brand-new channel that 10% of the world's population uses, and it is getting unlocked for commerce. Now,
story number five, Microsoft's Windows 11 agentic operating system. Microsoft just keeps shipping on agents and Copilot. In this case, they're shipping "Hey Copilot" always-on activation, and they're shipping what they call extended context, which got a lot of pushback because it was read as a privacy violation: in a sense, they're saying Microsoft Copilot can see your whole workstation all the time and remember everything, and employees have felt like that was a violation of their privacy. That debate is going to go on. I expect Microsoft is going to win it, because enterprises have an interest in using agents to drive hardware and software productivity on laptops, and they will push employees, whether we like it or not frankly, to go for it. Now, obviously some folks with leverage are going to walk away and go to places that don't insist on the Microsoft ecosystem. We can have the conversation about Copilot and why Copilot hasn't felt like a cutting-edge LLM in a long time. But the reality is Microsoft has customers at the enterprise level with whom it wants to cut cloud deals, and everything pivots around that. The Windows deals, the Teams deals, all of the productivity deals pivot around cloud. That is the moneymaker for the company, and they think in terms of the moneymakers, the buyers, the enterprise customers. So that's story number five. Story number
six, Nvidia DGX Spark. It is a data-center-class AI development desktop positioned at just under $4,000. I can get into the specs: 144 Arm Grace CPU cores, and it runs 100 tokens a second on 7-billion-parameter models. It is essentially data-center AI at a consumer price point. So if you ever wanted to run a data-center-grade LLM, you could do so from your desktop. This is going to democratize privacy-preserving local inference for developers who want to do edge-deployment testing. It's going to give us a whole new compute category once it's established. It's not really a laptop, it's not a desktop; it is a local LLM compute point. I'm really curious to see where this goes next year, because this could open up a whole new place on the desk for compute for people who want local LLMs. And we are going to have to see the software catch up, because right now this is for developers. Story number seven, Andrej Karpathy's nanochat. He built a $100 do-it-yourself ChatGPT-style pipeline that's trainable in four hours. The point here is not this particular model. The point is that Karpathy is a brilliant innovator and a phenomenal educator, ex-OpenAI and Tesla, and what he is interested in is showing transparency around how models are trained. So this becomes a phenomenal way for students to get exposure to AI training and to understand how AI models are built. It's something that should be in university curricula. If you want to learn how to make models from scratch, it becomes a way to start getting into emerging techniques with models very, very easily.
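Karpathy's point is pedagogy: the whole train-and-sample loop fits in readable code. As a drastically smaller illustration of the same idea (this is not nanochat's code, just the concept shrunk to bigram scale), here is a character-level "language model" in plain Python:

```python
import random
from collections import defaultdict

def train_bigram(text: str):
    """Count character-bigram frequencies: the simplest possible language model."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start: str, n: int, seed: int = 0) -> str:
    """Sample n characters, each conditioned only on the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # no continuation ever observed for this character
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = train_bigram("to be or not to be")
sample = generate(model, "t", 10)
```

Real pipelines like nanochat replace the count table with a trained transformer and the characters with tokens, but the train-then-sample shape is the same, which is exactly what makes it teachable.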
I hope this is adopted by people interested in helping others learn how AI models work. But my fear is that because it is command-line and requires technical knowledge, we are going to see this once again limited to the developer community. Okay, last but not least, I
want to talk about my favorite read of the week: "Just Talk To It" by Peter Steinberger. Why should you read it? Because Steinberger makes a compelling case against agentic vendors. There are a lot of agent vendors out there selling a lot of very fancy infrastructure, and Steinberger argues, based on his own experience building with agents, that you don't need as much infrastructure as you think you do. You should lean into iterative building with agents, and you should think of agent use as mirroring people management: use scope judgment when you talk to them, think about how you time interventions, think about how you do course correction with people. Same with agents. Now, my one critique of this article is that while it's an excellent take from an engineering perspective on individual productivity and managing agents, I have more questions about larger agentic frameworks that need to do big back-office operations; those tend to need more robust frameworks. So I read this as a refreshing take from a very senior engineering figure on how he builds with agents. Absolutely worth a read. I think there are takeaways for how we all talk to our LLMs, even if you're not an engineer. So dig into it. And of course, if you want to see how all of this customizes for you, I've got a prompt for that. We have the week's custom prompt to help you dig into the news and, of course, into Peter's article as well. Cheers.