Preventing LLM-Induced Psychosis at Work
Key Points
- LLM‑induced psychosis is emerging as a high‑profile legal and workplace concern, with lawsuits already alleging AI‑driven violence and expectations that the phenomenon will spread through 2026.
- The most notable recent case involves David Budden, a former Google DeepMind director of engineering, who publicly claimed to have a “Lean proof” of the Navier‑Stokes problem after relying on ChatGPT 5.2, prompting expert mathematicians to conclude he is experiencing LLM‑induced delusion.
- Experts warn that even highly rational professionals can be hijacked by AI, making it essential for organizations to verify that decision‑makers are not operating under AI‑driven hallucinations.
- A key preventive measure is to make LLMs act adversarially—forcing them to challenge rather than confirm user claims—since confirmatory prompting, as seen in Budden’s case, reinforces false confidence.
- Users should avoid assuming that merely having an LLM on a device guarantees reliable output; rigorous, skeptical interaction and independent verification remain crucial.
Sections
- LLM‑Induced Psychosis and Navier‑Stokes Claim - The speaker warns that accusations of AI‑driven psychosis are emerging in lawsuits and highlights a high‑profile case where former DeepMind director of engineering David Budden bet on and announced a ChatGPT‑generated “Lean proof” of the Navier‑Stokes problem, suggesting he is experiencing LLM‑induced delusion.
- Engineers Still Essential with AI - The speaker cautions that despite LLMs' productivity gains, true software quality and scalability still demand deep engineering expertise, so developers remain indispensable.
- Risks of AI-Driven Decision Making - The speaker warns that leaders must rely on common sense and be screened for undue AI influence, citing dubious claims such as rapid Navier‑Stokes solutions and the emerging threat of LLM‑induced psychosis.
Source: [https://www.youtube.com/watch?v=AzOJ9QLgfIk](https://www.youtube.com/watch?v=AzOJ9QLgfIk)
Duration: 00:09:32
Section timestamps:
- [00:00:00](https://www.youtube.com/watch?v=AzOJ9QLgfIk&t=0s) LLM‑Induced Psychosis and Navier‑Stokes Claim
- [00:04:09](https://www.youtube.com/watch?v=AzOJ9QLgfIk&t=249s) Engineers Still Essential with AI
- [00:07:14](https://www.youtube.com/watch?v=AzOJ9QLgfIk&t=434s) Risks of AI-Driven Decision Making
Full Transcript
LLM psychosis is going to be a really
hot topic in 2026. We already see it
coming up in lawsuits that model makers
are facing as loved ones alleged that
people who committed violent acts were
somehow induced to do that by artificial
intelligence. It's going to get into the
workplace next year. And the reason why
is that people who you would think are
very sober, very levelheaded still show
evidence very publicly of LLM induced
psychosis. The most prominent example
recently has been David Budden, a former
director of engineering at Google DeepMind,
now the founder and CEO of the
company Pingu, who publicly bet $10,000
he could solve Navier-Stokes. Navier-Stokes
is a set of fluid dynamics equations. And
the long and the short of it, if you're
not mathematical, is that we cannot
perfectly prove how fluids move using
equations. We tend to approximate them
at very high fidelities because their
movements are so complex. Solving
Navier-Stokes and showing how fluids work
mathematically has been a Millennium
Prize effort, so it carries a prize of a
million dollars. David absolutely went
out and published a bunch of ChatGPT
5.2 Pro-driven
equations and what he called a Lean
proof over the weekend of December 20th,
and then claimed that by December first
he would publish a full proof of
Navier-Stokes. Now mathematicians looked at
this, people much smarter than me looked
at this and everyone is convinced
looking at his initial work and his Lean
proof that David has been suffering from
LLM induced psychosis and is perhaps the
most prominent recent example of that
issue. But I got to tell you, he's not
the only one. I have seen symptoms of
this in people that I know over the
course of 2025. And it's going to become
more and more concerning in the
workplace because you're going to need
to know that the human that is making
the decisions while they may engage with
AI has not had their brain hijacked by
AI. In this case, David seems to have
been convinced by ChatGPT that he's close
to solving Navier-Stokes, when some of
our most prominent mathematicians, namely
Terence Tao, are not even convinced it's
solvable; it may not be subject to a
single smooth equation. So, with that in
mind, here are my tips for you to avoid
LLM psychosis. Number one, please,
please, please ask your LLM to be
adversarial with you regularly. That was
one of the things that people noticed in
the prompts that David Budden shared.
Even though he's asking the AI to check
his work, he's not doing so in an
adversarial way. In fact, he's doing so
in a confirmatory way. That is a classic
symptom of LLM psychosis. When you want
the AI to agree with you, you tell it to
check your work, but you
don't really want it to check your work.
You want it to tell you what you want to
hear. In this case, he wants to hear
that Navier-Stokes has been solved. And
so, he wants the LLM to show that that's
the case. Number two, do not assume that
just because you have an LLM in your
pocket or on your laptop, you are
suddenly a budding cutting edge
scientist or mathematician who can do
things that the brightest minds on the
planet have not been able to do ever. I
know we talk about how smart these
systems are, but you still need to be a
very smart person with deep domain
experience to validate and check
scientific hypotheses, mathematical
theorems, etc. And that actually goes
for the rest of work too. If you
get told by ChatGPT that there's a
better way to invent and install solar
panels, and you don't have the domain
expertise in solar, you cannot know it's
correct. And ChatGPT telling you it's
correct isn't worth a whole lot. And so
one of the things that we need to start
seeing more of is an awareness that even
though we can expand our span of work
dramatically with AI, our domain
expertise matters more and more and more
because we are going to be the ones that
need to check these things for sanity.
We are going to need to be the ones that
say this actually works in the real
world or it does not. And increasingly,
not just with David Budden, with many
others I've met, I can count probably a
dozen that I know of, I see instances
where people are perhaps not in full LLM
induced psychosis. There's no danger to
loved ones, anything like that, but they
are not able to distinguish between
their own expertise and ChatGPT's
expertise, and they have an inflated
sense of what they are capable of that
is not correct. Yes, you can do a lot
more work with an AI, but it comes from
your own expertise and your own ability
to actually get work done and know what
good looks like. And this is why I keep
emphasizing that engineers are not out
of jobs. You can get LLMs to write lots
and lots of terrible code. That's cheap
and easy. It is very hard to get LLMs to
write code in modules that pass evals
within a structure that works at a
scaled production system. That takes
engineering. And that is why I firmly
believe that we will not have Betty in
HR vibe coding a CRM or vibe coding an
HR information system in 2026. We are
going to have traditional software
providers that are extending and
personalizing software like HR
information systems for people like
Betty, but that will be done by
professionals who have deep expertise.
And so, as much as I love vibe coding,
and I think it's a tremendous unlock
for engineers and a tremendous
productivity unlock internally for
companies, that is different from saying
anybody can make anything without having
domain expertise. That's just not true.
You need the domain expertise to
actually be able to successfully
accomplish meaningful work.
So that's the second thing that
I want to call out. We talked
about the fact that you need to ask for
disconfirming information and ask your
AI to get adversarial with you. We
talked about the fact that you need to
not overstate your own domain expertise.
The third thing that I want to call out
is that you need to submit to a jury of your
peers. If your peers as a whole in your
domain think you are out to lunch and
think you are incorrect, a symptom of
psychosis is to say, "No, me and AI are
right. Y'all are wrong. Y'all are the
ones that don't have this figured out.
The AI and I have it figured out."
That's LLM induced psychosis right
there. You may not be a danger to
yourself and others, but you're not
entirely well in the head. Because if
your peers who have deep domain
expertise strongly disagree with you,
and almost every one of them does, then
it is a sign for you that you are
missing something. And if you cannot
hear other humans, you are going to be
in trouble in 2026. One of the signs of
stable leadership in 2026 is going to be
the ability to know when to turn the
laptop off, when to shut ChatGPT down,
turn all the recording devices off, and
have a conversation, talk to a human,
make a business decision, understand
what really needs to be done. Stable
leaders are going to be able to do that,
and people who are unstable are going to
need AI with them all the time in order
to make any kind of decision like that.
And they will be very disagreeable to
work with because they will say, "I'm
right and AI is right and you're wrong
all the time." And that's actually one
of the ways that we know that David
Budden is probably not solving
Navier-Stokes in 3 or 4 days, because all the
mathematicians that looked at the Lean
proof were like, "Hmm, this looks a little
bit shaky." I'm not a mathematician. I'm
not saying that I looked at it, because I
don't believe I have that expertise. I'm
looking to a jury of my peers. I'm
looking to people who know about science
and math more than I do. And when they
all are like, "This looks sketchy." I'm
like, "It's probably sketchy." You need
that degree of common sense. You cannot
substitute for common sense like that.
You need the ability as a leader to know
when AI is not going to be helpful. And
that's true not just as a leader, but
for all of us, whether at work or in our
personal lives. And so, as much as I
think it's likely that we will
eventually have LLM induced psychosis in
the DSM-5 as a recognized psychiatric
disorder, we should not wait for that.
And businesses are going to start to
test leaders probably quarterly to
ensure that leaders are not under undue
influence by AI because if you are,
you're risking your whole business and
it's just not safe. We are not at a
point where it is a safe or good thing
for a human to be unduly influenced by
AI as they make business decisions. And
increasingly I see that LLM induced
psychosis is not limited to people who
are on the edges of society. People like
David Budden, our CEOs, our founders,
our prominent leaders, can still fall victim
to this idea that them plus the AI
equals some sort of incredibly
intelligent being that beats everybody
else. That's just not the way it is. You
plus AI is just you with a tool and you
need your colleagues to work with you to
get meaningful work done. And as cool as
AI is, and as much as it's
transformational, that is going to
remain true in 2026. And businesses are
just now at the beginning of figuring
out what it looks like to actually get
good testing in on LLM psychosis. We
would have to write those tests. I'm
going to start to do some more thinking
in that direction because I think it's
going to be one of the key leadership
traits that we will test for and verify
and think about as we move forward. It
won't just be can you use AI. It will be
can you use AI and not go crazy. So
don't go crazy. Your AI is just a tool.
If your peers all think you're out to
lunch, you're probably out to lunch. And
uh don't try to solve Navier-Stokes.
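
Tip number one above, asking the model to be adversarial rather than confirmatory, can be sketched as a reusable prompt. This is an illustrative sketch, not something shown in the video: the wording and the `adversarial_review_messages` helper name are my own assumptions, and the resulting messages list is in the generic role/content shape used by chat-completion-style APIs.

```python
# Sketch of an adversarial (not confirmatory) review request.
# The prompt text and helper name are illustrative assumptions.

def adversarial_review_messages(claim: str, work: str) -> list[dict]:
    """Build a chat request that asks a model to attack a claim
    instead of confirming it."""
    system = (
        "You are an adversarial reviewer. Your job is to falsify the "
        "user's claim, not to agree with it or encourage the user. "
        "List the strongest objections, missing steps, and "
        "counterexamples you can find. Only if the work survives every "
        "objection may you say so, and only at the end."
    )
    user = (
        f"Claim: {claim}\n\n"
        f"Supporting work:\n{work}\n\n"
        "Find the holes. What would an expert referee reject first?"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

msgs = adversarial_review_messages(
    "My proof resolves Navier-Stokes existence and smoothness.",
    "Step 1: assume a smooth solution exists...",
)
# msgs can now be passed to any chat-completion-style client.
```

The key design choice is that the skeptical framing lives in the system message, so it cannot be quietly dropped the way "check my work" often is in a confirmatory follow-up.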