
Preventing LLM-Induced Psychosis at Work

Key Points

  • LLM‑induced psychosis is emerging as a high‑profile legal and workplace concern, with lawsuits already alleging AI‑driven violence and expectations that the phenomenon will spread through 2026.
  • The most notable recent case involves David Budden, a former Google DeepMind director of engineering, who publicly claimed to have a "Lean proof" of the Navier-Stokes problem after relying on ChatGPT 5.2, prompting expert mathematicians to conclude he is suffering from LLM-induced delusion.
  • Experts warn that even highly rational professionals can be hijacked by AI, making it essential for organizations to verify that decision‑makers are not operating under AI‑driven hallucinations.
  • A key preventive measure is to make LLMs act adversarially—forcing them to challenge rather than confirm user claims—since confirmatory prompting, as seen in Budden's case, reinforces false confidence.
  • Users should avoid assuming that merely having an LLM on a device guarantees reliable output; rigorous, skeptical interaction and independent verification remain crucial.

Full Transcript

# Preventing LLM-Induced Psychosis at Work

**Source:** [https://www.youtube.com/watch?v=AzOJ9QLgfIk](https://www.youtube.com/watch?v=AzOJ9QLgfIk)
**Duration:** 00:09:32

## Sections

- [00:00:00](https://www.youtube.com/watch?v=AzOJ9QLgfIk&t=0s) **LLM-Induced Psychosis and Navier-Stokes Claim** - The speaker warns that accusations of AI-driven psychosis are emerging in lawsuits and highlights a high-profile case in which former DeepMind engineer David Budden bet on and announced a ChatGPT-generated "Lean proof" of the Navier-Stokes problem, suggesting he is experiencing LLM-induced delusion.
- [00:04:09](https://www.youtube.com/watch?v=AzOJ9QLgfIk&t=249s) **Engineers Still Essential with AI** - The speaker cautions that despite LLMs' productivity gains, true software quality and scalability still demand deep engineering expertise, so developers remain indispensable.
- [00:07:14](https://www.youtube.com/watch?v=AzOJ9QLgfIk&t=434s) **Risks of AI-Driven Decision Making** - The speaker warns that leaders must rely on common sense and be screened for undue AI influence, citing dubious claims such as rapid Navier-Stokes solutions and the emerging threat of LLM-induced psychosis.

## Full Transcript
LLM psychosis is going to be a really hot topic in 2026. We already see it coming up in lawsuits that model makers are facing, as loved ones allege that people who committed violent acts were somehow induced to do so by artificial intelligence. It's going to get into the workplace next year, and the reason why is that people who you would think are very sober, very level-headed still show evidence, very publicly, of LLM-induced psychosis.

The most prominent recent example has been David Budden, a former director of engineering at Google DeepMind, now the founder and CEO of the company Pingu, who publicly bet $10,000 that he could solve Navier-Stokes. Navier-Stokes is a set of fluid-dynamics equations, and the long and the short of it, if you're not mathematical, is that we cannot perfectly prove how fluids move using equations. We tend to approximate them at very high fidelity because their movements are so complex. Solving Navier-Stokes and showing mathematically how fluids work has been a Millennium Prize effort, so it carries a prize of a million dollars.

David went out and published a bunch of ChatGPT 5.2 Pro-driven equations and what he called a Lean proof over the weekend of December 20th, and then claimed that by December 1st he would publish a full proof of Navier-Stokes. Now, mathematicians looked at this, people much smarter than me, and everyone is convinced, looking at his initial work and his Lean proof, that David has been suffering from LLM-induced psychosis, and that he is perhaps the most prominent recent example of the issue. But I've got to tell you, he's not the only one. I have seen symptoms of this in people that I know over the course of 2025.
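For reference (not from the video), the problem being described is usually stated in terms of the incompressible Navier-Stokes equations, where $\mathbf{u}$ is the fluid velocity field, $p$ the pressure, $\nu$ the kinematic viscosity, and $\mathbf{f}$ an external force:

```latex
% Incompressible Navier-Stokes: momentum balance and mass conservation
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  &= -\,\nabla p + \nu\,\Delta \mathbf{u} + \mathbf{f}, \\
\nabla \cdot \mathbf{u} &= 0 .
\end{aligned}
```

The Millennium Prize question is not to "solve" the equations in closed form but to prove (or disprove) that smooth, globally defined solutions always exist in three dimensions, which is part of why a weekend proof attempt drew so much skepticism.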
And it's going to become more and more concerning in the workplace, because you're going to need to know that the human making the decisions, while they may engage with AI, has not had their brain hijacked by AI. In this case, David's brain seems to have been convinced by ChatGPT that he's close to solving Navier-Stokes, when some of our most prominent mathematicians, namely Terence Tao, are not even convinced it's solvable; it may not be subject to a single smooth equation.

So, with that in mind, here are my tips for avoiding LLM psychosis. Number one: please, please, please ask your LLM to be adversarial with you regularly. That was one of the things people noticed in the prompts that David Budden shared. Even though he's asking the AI to check his work, he's not doing so in an adversarial way; in fact, he's doing so in a confirmatory way. That is a classic symptom of LLM psychosis. When you want the AI to agree with you, you tell it to check your work, but you don't really want it to check your work. You want it to tell you what you want to hear. In this case, he wants to hear that Navier-Stokes has been solved, and so he wants the LLM to show that that's the case.

Number two: do not assume that just because you have an LLM in your pocket or on your laptop, you are suddenly a budding cutting-edge scientist or mathematician who can do things that the brightest minds on the planet have never been able to do. I know we talk about how smart these systems are, but you still need to be a very smart person with deep domain experience to validate and check scientific hypotheses, mathematical theorems, and so on. And that actually goes for the rest of work too.
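One way to make the adversarial habit concrete is to bake it into the prompt itself. The sketch below is my illustration, not anything shown in the video: the system-message wording and the `build_adversarial_review` helper are assumptions, and no specific model or SDK is implied, only the message-list shape used by common chat APIs.

```python
# Sketch: turn "check my work" into a disconfirming review request.
# The prompt text and helper name are illustrative assumptions.

ADVERSARIAL_SYSTEM = (
    "You are a hostile reviewer. Your job is to find errors, gaps, and "
    "unjustified leaps in the user's claim. Do not praise the work. "
    "If you cannot find a concrete flaw, state exactly which steps you "
    "could not verify and why. Never conclude the claim is correct "
    "unless every step has been independently checked."
)

def build_adversarial_review(claim: str, evidence: str) -> list[dict]:
    """Build a chat-message list that asks for objections, not approval."""
    user_msg = (
        f"Claim under review:\n{claim}\n\n"
        f"Supporting argument:\n{evidence}\n\n"
        "List the three strongest objections to this claim, then state "
        "the single weakest step in the argument."
    )
    return [
        {"role": "system", "content": ADVERSARIAL_SYSTEM},
        {"role": "user", "content": user_msg},
    ]

# Confirmatory prompting would instead ask "confirm that my proof is
# correct" -- the failure mode described above.
messages = build_adversarial_review(
    claim="My Lean development proves global regularity for Navier-Stokes.",
    evidence="ChatGPT agreed with each lemma when I asked it to check them.",
)
```

The design point is simply that the request for disconfirmation lives in the system message, so a hopeful user can't soften it question by question.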
If you get told by ChatGPT that there's a better way to invent and install solar panels, and you don't have domain expertise in solar, you cannot know it's correct, and ChatGPT telling you it's correct isn't worth a whole lot. So one of the things we need to start seeing more of is an awareness that even though we can expand our reach dramatically with AI, our domain expertise matters more and more, because we are going to be the ones who need to sanity-check these things. We are going to be the ones who say this actually works in the real world, or it does not.

And increasingly, not just with David Budden but with many others I've met, I can count probably a dozen that I know of, I see instances where people are perhaps not in full LLM-induced psychosis, there's no danger to loved ones or anything like that, but they are not able to distinguish between their own expertise and ChatGPT's expertise, and they have an inflated sense of what they are capable of that is not correct. Yes, you can do a lot more work with an AI, but it comes from your own expertise and your own ability to actually get work done and know what good looks like.

And this is why I keep emphasizing that engineers are not out of jobs. You can get LLMs to write lots and lots of terrible code; that's cheap and easy. It is very hard to get LLMs to write code in modules that pass evals, within a structure that works as a scaled production system. That takes engineering. And that is why I firmly believe we will not have Betty in HR vibe-coding a CRM or vibe-coding an HR information system in 2026.
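To illustrate the "passes evals" idea, here is a minimal sketch of an eval gate (my own example, not a tool from the video): a candidate function is accepted only if it passes every test case, rather than being trusted because an LLM produced it. The `llm_median` function and its bug are invented for the demonstration.

```python
# Minimal eval gate: accept LLM-generated code only if it passes all cases.
from typing import Any, Callable, Iterable, Tuple

def passes_evals(candidate: Callable[..., Any],
                 cases: Iterable[Tuple[tuple, Any]]) -> bool:
    """Return True only if candidate(*args) == expected for every case."""
    for args, expected in cases:
        try:
            if candidate(*args) != expected:
                return False
        except Exception:            # a crash is also a failure
            return False
    return True

# Hypothetical LLM-suggested "median": plausible-looking, wrong for
# unsorted input because it never sorts.
def llm_median(xs):
    return xs[len(xs) // 2]

cases = [(([1, 2, 3],), 2),          # sorted input: happens to pass
         (([3, 1, 2],), 2)]          # unsorted input: exposes the bug

print(passes_evals(llm_median, cases))   # prints False
```

A confirmatory reviewer would have stopped at the first, passing case; the eval suite is the adversarial reviewer that does not.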
We are going to have traditional software providers extending and personalizing software like HR information systems for people like Betty, but that will be done by professionals who have deep expertise. And as much as I love vibe coding, and I think it's a tremendous unlock for engineers and a tremendous productivity unlock internally for companies, that is different from saying anybody can make anything without domain expertise. That's just not true. You need the domain expertise to actually be able to successfully accomplish meaningful work.

So that's the other point I want to call out. We've talked about the fact that you need to ask for disconfirming information and ask your AI to get adversarial with you, and we've talked about the fact that you need to not overstate your own domain expertise. The third thing I want to call out is that you need to submit to a jury of your peers. If your peers in your domain, as a whole, think you are out to lunch and think you are incorrect, a symptom of psychosis is to say, "No, me and the AI are right. Y'all are wrong. Y'all are the ones that don't have this figured out. The AI and I have it figured out." That's LLM-induced psychosis right there. You may not be a danger to yourself or others, but you're not entirely well in the head. Because if your peers who have deep domain expertise strongly disagree with you, and almost every one of them does, then it is a sign that you are missing something. And if you cannot hear other humans, you are going to be in trouble in 2026.
One of the signs of stable leadership in 2026 is going to be the ability to know when to turn the laptop off, shut ChatGPT down, turn all the recording devices off, and have a conversation: talk to a human, make a business decision, understand what really needs to be done. Stable leaders are going to be able to do that, and people who are unstable are going to need AI with them all the time in order to make any kind of decision like that. And they will be very disagreeable to work with, because they will say, "I'm right and the AI is right and you're wrong," all the time.

That's actually one of the ways we know David Budden is probably not solving Navier-Stokes in three or four days: all the mathematicians who looked at the Lean proof said, "Hmm, this looks a little bit shaky." I'm not a mathematician. I'm not saying I looked at it, because I don't believe I have that expertise. I'm looking to a jury of my peers, to people who know more about science and math than I do. And when they all say, "This looks sketchy," I conclude it's probably sketchy. You need that degree of common sense; you cannot substitute for common sense like that. You need the ability as a leader to know when AI is not going to be helpful. And that's true not just for leaders but for all of us, whether at work or in our personal lives.

And so, as much as I think it's likely we will eventually have LLM-induced psychosis in the DSM-5 as a recognized psychiatric disorder, we should not wait for that. Businesses are going to start to test leaders, probably quarterly, to ensure that they are not under undue influence by AI, because if you are, you're risking your whole business, and it's just not safe.
We are not at a point where it is safe or good for a human to be unduly influenced by AI as they make business decisions. And increasingly, I see that LLM-induced psychosis is not limited to people on the edges of society. People like David Budden, our CEOs, our founders, our prominent leaders, can still fall victim to this idea that they plus the AI equal some sort of incredibly intelligent being that beats everybody else. That's just not the way it is. You plus AI is just you with a tool, and you need your colleagues to work with you to get meaningful work done. As cool as AI is, and as transformational as it is, that is going to remain true in 2026.

Businesses are just now at the beginning of figuring out what it looks like to actually test for LLM psychosis; we would have to write those tests. I'm going to start doing more thinking in that direction, because I think it's going to be one of the key leadership traits that we test for, verify, and think about as we move forward. It won't just be "Can you use AI?" It will be "Can you use AI and not go crazy?" So don't go crazy. Your AI is just a tool. If your peers all think you're out to lunch, you're probably out to lunch. And don't try to solve Navier-Stokes.