Beyond the AI Cold War
Key Points
- The U.S.–China AI “cold war” – with export bans and zero‑sum thinking – is making the world less safe and is based on outdated assumptions that don’t fit today’s internet‑driven technology.
- The belief that only one super‑intelligent AI will emerge (a “singleton”) is increasingly rejected; multiple powerful AIs will proliferate because the software can be copied and spread instantly online.
- Restricting AI exports paradoxically speeds up innovation, as shown by breakthroughs like DeepSeek achieving GPT‑4‑level performance with dramatically less compute and the explosion of open‑source models on platforms such as Hugging Face.
- Cross‑border collaboration and open research are already eroding the performance gap between Chinese and American models, demonstrating that knowledge flows like water and outpaces any containment strategy.
- A strategic shift toward cooperative governance rather than competitive rivalry is needed to harness this rapid diffusion for global safety.
Sections
- Rethinking AI Superpower Competition - The speaker argues that the US‑China AI arms race, driven by Cold‑War‑style zero‑sum thinking, is unsafe and calls for a new cooperative strategy that acknowledges multiple AIs will proliferate thanks to the internet’s near‑zero cost of collaboration.
- AI Race Outpaces Cold War Paradigms - The speaker argues that the unprecedented speed of global AI adoption makes traditional, decades‑long strategic frameworks obsolete, creating new security and economic risks as the US and China cling to competitive, containment‑style policies.
- Cooperative Frameworks for AI Risk - The speaker proposes practical bilateral steps—joint risk assessments, technical hotlines, and aligned safety standards—to manage shared AI threats while acknowledging competitive domains.
- Cooperating on AI’s Birth - The speaker argues that, despite inevitable great‑power competition, humanity must coordinate the early development of AI—treating it as a shared, fast‑moving risk akin to nuclear weapons—to secure long‑term flourishing and choose “smart rivalry” over destructive conflict.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=xoGei3nXPH8](https://www.youtube.com/watch?v=xoGei3nXPH8) **Duration:** 00:11:37

- [00:00:00](https://www.youtube.com/watch?v=xoGei3nXPH8&t=0s) **Rethinking AI Superpower Competition**
- [00:03:16](https://www.youtube.com/watch?v=xoGei3nXPH8&t=196s) **AI Race Outpaces Cold War Paradigms**
- [00:06:30](https://www.youtube.com/watch?v=xoGei3nXPH8&t=390s) **Cooperative Frameworks for AI Risk**
- [00:10:06](https://www.youtube.com/watch?v=xoGei3nXPH8&t=606s) **Cooperating on AI’s Birth**
The world's two AI superpowers are
locked in a competition that's making
everybody less safe. And today on July
4th, America's birthday, I want to talk
about the strategy shift that we could
choose to make that would keep everybody
safer. The current AI race is not
helping anybody, but I want to propose an alternative solution that could actually
work. Let's start with how we got here.
Every transformative technology has
triggered a similar response to what
we're seeing right now. So, in some
ways, it's very understandable. Both
Washington and Beijing are reaching for
Cold War-era playbooks: export controls, technology denial, zero-sum thinking. There's an assumption that the other side's AI dominance would mean its own defeat. There is a narrative that
artificial super intelligence is right
around the corner, that it will be what
you would term a singleton world, which
means only one super intelligence will
develop. And if that's the case and it's
truly super intelligent, suddenly all of
this Cold War thinking starts to make
sense. The problem is this. We don't
live in a singleton world. Even Sam Altman has admitted he no longer thinks we're going to have only
one super intelligent AI or only one
generally intelligent AI. We're going to
have multiple. Do you know why? Because
this technology is extremely easy to
proliferate because it's built on the
back of the internet. And what did the
internet do? It took the cost of
cooperation between people to zero. In
the nuclear age, which was what the Cold
War was built on, we had physical
materials and clear boundaries. You had
to move physical materials around in
order to construct any kind of nuclear
weapon. In the space age, we had massive
infrastructure. You could track progress
through rocket launches. In the AI age,
everything spreads at internet speed.
There are no borders. Yesterday's
strategies fail at dealing with
tomorrow's technology. And that is what
we're looking at with AI. And that is
why I think the Cold War frame is
incorrect empirically with the
technology that we have today. There is
a paradox with containment. When we put
export restrictions on another country,
we intend to slow progress. But instead,
because necessity is the mother of
invention, we trigger efficiency
breakthroughs. DeepSeek achieved GPT-4-level performance with 90% less compute.
Innovation consistently thrives under pressure. We have 450,000-plus open models on Hugging Face, open AI models that anybody can grab.
Researchers from both nations routinely
publish together. That is, by the way, a
fantastic thing. That is a great thing.
Knowledge flows like water across
national borders. It flows like the
internet and performance gaps over time
are narrowing, not widening. Mary Meeker made that point brilliantly in
her large deck that I summarized where
she talked about the fact that
effectively over the last two years, the
competitive difference between Chinese
models and American models has
disappeared. There's like a one or two
percentage point difference in
performance. It's not that big.
Meanwhile, the world continues to adopt
AI at a terrifyingly fast speed. ChatGPT famously hit 100 million users in 60 days, but that's old news now.
They're on track for a billion, 10 times
that number this year. What we talked
about in the Cold War was changes that
took decades. Things that took a long
time to adjust. It took decades for
nuclear weapons to proliferate. It took
decades for great power relationships to
change. With instant global
transmission, with half a million open
models, with the speed of intelligence
growth that we're seeing, none of those
old ways of thinking work. They just
don't. And I get it. Everybody has legitimate concerns. From an American perspective, AI could be used for authoritarian purposes. It could be used in military applications. There could be technology transfer to countries that are adversaries. Values alignment between AI systems is a real concern. From a Chinese perspective, technology embargoes feel a lot like containment. They feel like exclusion
from global AI standards. Security
vulnerabilities from foreign AI become a
real concern, and economic competitiveness is something they don't feel they can trade away. So
both nations in their own world have
legitimate concerns. The question is
does the current approach address any of
these concerns for anybody or does it
just create new risks? I would argue
that it just creates new risks because
it locks us into a competitive mindset.
Uncontrolled AI, if it emerges, will not recognize borders. Cyber
incidents from a misaligned AI will
cascade globally. And by the way, I am
actually more concerned about things
like large-scale cyber attacks that
cascade globally than I am about
something like Skynet. Bio-risks, if
that were to transpire, would affect the
entire human population. Economic AI
shocks, if that were to transpire, would
ripple worldwide. This is the same way
that Chernobyl didn't stop at borders.
If an accident happens to one of these
technologies, it's up to everybody to
cooperate to solve it. The 2008
financial crisis, it went global
immediately. I remember where I was.
Similarly, in 2020 with COVID, it went
global right away. AI risks will move
faster than biological risks and even
faster than financial market shocks in
certain situations. What I want to see
is a cooperative framework that will
enable both superpowers in AI to work
together to converge around common
standards that contain systemic risk.
And I want to go further than just
saying we should do that and actually
propose some principles that we can talk
about. And I have no illusions: I do not think people in government are watching this video. But it's still worth us talking about as a society, a global society, because everybody shares risk when AI is not well managed. So,
core principle number one, graduated
engagement. Compete where values and
interests diverge, sure, but cooperate
where existential risks converge. And even if we stop short of a Skynet scenario, AI still poses existential risks worth cooperating on. Build trust through small, tangible steps and verify technical cooperation. These are things we can choose to do. Sure, there are areas where
applications, national security systems,
governance models, domestic
implementations. I get it. We don't have
to try and fully align there. But
there are also areas where we can
reasonably cooperate. Preventing
autonomous weapons proliferation, that
seems like something everybody would
have an incentive for. Biodefense and AI safety protocols? That seems reasonable.
Financial system stability. Everyone has
an incentive to keep the financial
system stable. And critical infrastructure protection. We can work
on a common core of risks that we would
want to contain and agree on a framework
for cooperation to address those. We
could choose to do that. So what are
some practical steps that we could
imagine? Can you tell I worked at the
Model United Nations? I was such a nerd
as a kid. Anyway, joint risk assessment.
Both nations' AI scientists could
identify shared risks. They could focus
on technical issues, not politics.
Somewhat similar to the climate science
panels. The focus would be building
common understanding. Incident
communication channels, technical
hotlines for AI anomalies, preventing
misunderstanding during a crisis. We had
hotlines during the Cold War. We don't
have an AI hotline. Why don't we have an
AI hotline? What about parallel safety
standards? They don't have to be
identical. They don't even have to be fully interoperable. They just need to be
interoperable enough that there's some
sense of common safety measures.
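The hotline and parallel-standards ideas both presuppose some shared, machine-readable way to describe an AI incident, so that a report sent across the border is unambiguous during a crisis. As a purely illustrative sketch, not any real standard, every field name and category below is hypothetical, a minimal cross-border incident report might look like:

```python
# Illustrative sketch only: a minimal, machine-readable AI incident report
# for a hypothetical bilateral "technical hotline". All field names and
# categories are invented for this example, not drawn from any real standard.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# A small shared vocabulary both sides would agree on in advance, so a
# report parses the same way across language and institutional barriers.
CATEGORIES = {"cyber_cascade", "bio_misuse", "financial_shock", "infrastructure"}

@dataclass
class IncidentReport:
    reporting_party: str  # e.g. "US" or "CN"
    category: str         # one of CATEGORIES
    severity: int         # 1 (anomaly) .. 5 (active cross-border cascade)
    summary: str          # plain-language description
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        # Reject reports that fall outside the agreed shared vocabulary.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not 1 <= self.severity <= 5:
            raise ValueError("severity must be between 1 and 5")

    def to_wire(self) -> str:
        """Serialize to JSON: the 'interoperable enough' common format."""
        return json.dumps(asdict(self), sort_keys=True)

report = IncidentReport(
    reporting_party="US",
    category="cyber_cascade",
    severity=4,
    summary="Misaligned agent propagating through shared cloud infrastructure",
)
print(report.to_wire())
```

The point is not this particular schema. It's that the two sides only need enough shared structure to parse each other's reports; everything behind the report can stay nation-specific.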
International aviation is a good
example. We have different airlines but
common safety standards. Each nation implements them in its own way. We need
a similar sort of approach with AI. It
would be helpful if we could also agree, and this is probably a bit of a stretch, on research transparency zones: places where everybody could come together to research AI and investigate AI safety. It benefits everybody, it's supposed to threaten nobody, and it turns competitive advantages into something that can be worked on together, defusing some of that great power tension. Third-party verification:
Switzerland, Singapore, someone who's
known for being neutral could act as a
validator. Technical verification could
occur and both nations' secrets could be respected. I get that I'm talking at a
little bit of a high level. I am not
going to the level where I'm talking
about specific systems because one, if I
knew about them and I talked about them,
I'm sure I would get in trouble. I don't
know about them. And two, they're
evolving very quickly. And so it doesn't make sense to go below the 10,000-foot level and talk about specific
technical systems when they're all being
built. It is more important to talk
about operative principles because at
the moment the operative principle seems
to be competition. And in this case, I
think it was more rational to be
competitive when the technology had a
different footprint. Nuclear
proliferation and competitiveness and
mutually assured destruction, that was
all the language of the Cold War, and it
kind of worked. It held the world in
tension, but it held it stable. I do not
think this equilibrium is stable. If we
have competition under a fast-moving
technology footprint, it's not a stable
situation, and that is dangerous for
everybody regardless of where you live.
And so I think it's more productive to
have a more cooperative stance. And so
my ask is that we think less about how
we can maintain a competitive advantage
in a way that's zero sum and more about
how we can start to think about
establishing practical frameworks that
show that we can build trust step by
step. It's essentially an ask that we
return to the idea of America as a place
where we can establish a sense of human
flourishing that survives the AI age.
Not that I'm saying the founders or the
framers anticipated the AI age. Heck,
most of us didn't anticipate the AI age
30 or 40 years ago. There were only a
few that were visionary. But now we're
here and now we need to think about how
these long-term principles apply in this
new world we find ourselves in. And in a
sense, that's all our jobs because as a
species, it's our job to figure out how
we establish human flourishing with AI
for the next 500 years, for the next
thousand years. And if we're going to do
that, it means getting this part right
right now. It means getting the birth of
AI right. And so my thinking on July 4th
is let's be cooperative about the birth
of AI within reason. I know we're going
to be divergent as great powers on
different things, but as much as we can
be cooperative, I think everybody will
benefit because this baby AI is growing
up really, really fast. So that's my
July 4th reflection. Great powers have
competed through history, but even the
nuclear weapons story taught us that
some risks require coordination. AI presents even greater shared
dangers because it's moving faster. And
I do believe that we can compete to some
degree while cooperating to prevent real
disaster. The choice is smart rivalry or
destructive rivalry. We can be rivals
like brothers, right? I have a brother.
I like him a lot. We're rivals in a lot
of fun ways, but we're also friends. We
also have each other's backs. And even
if that's not a perfect analogy, the
idea of a smart rivalry is something
that I think you can take away from
this. Happy July 4th. Cheers.