The video warns about “LLM psychosis,” where people become convinced of false beliefs due to overreliance on AI like ChatGPT, citing the example of David Budden’s unsubstantiated claim to have solved a major math problem with AI assistance. The speaker cautions that as AI becomes more prevalent in workplaces, such psychological effects could lead to poor decision-making and real-world consequences.
The video discusses the emerging phenomenon of “LLM psychosis,” a term used to describe situations where individuals become convinced of false or exaggerated beliefs due to their interactions with large language models (LLMs) like ChatGPT. The speaker predicts that this issue will become a major topic by 2026, especially as it begins to impact workplaces. Already, there are lawsuits where people allege that AI systems influenced individuals to commit violent acts, suggesting that the psychological effects of AI are starting to have real-world consequences.
A prominent recent example cited is David Budden, a former director of engineering at Google DeepMind and now CEO of Pingu. Budden publicly bet $10,000 that he could solve the Navier-Stokes equations, a set of fluid dynamics equations that have eluded a complete mathematical solution and are considered one of the Millennium Prize Problems, carrying a $1 million reward. Budden claimed to have made significant progress using ChatGPT 5.2 Pro, publishing what he called a “lean proof” and promising a full solution by December 1st.
However, the mathematical community, including experts much more qualified than the speaker, reviewed Budden’s work and found it unconvincing. The consensus is that Budden may be experiencing LLM-induced psychosis, where his belief in the AI-generated solution has overridden the skepticism and rigor typically required in mathematics. The speaker notes that even renowned mathematicians like Terence Tao are not convinced that the Navier-Stokes equations can be solved in the way Budden claims.
The speaker warns that Budden is not an isolated case. Over the course of 2025, the speaker has observed similar symptoms in other individuals, suggesting that LLM psychosis is becoming more widespread. This raises concerns for workplaces, where decision-makers might be unduly influenced or even “hijacked” by AI, leading to poor or irrational decisions based on AI outputs rather than sound human judgment.
In conclusion, the video emphasizes the need for vigilance as AI becomes more integrated into professional environments. It is crucial to ensure that humans remain in control and are not unduly swayed by AI-generated information, especially when the AI’s outputs can be persuasive but ultimately incorrect or misleading. The phenomenon of LLM psychosis, as illustrated by Budden’s case, serves as a warning of the psychological and practical risks associated with overreliance on AI systems.