AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this surprising.
Researchers have recently identified 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Then there is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful going forward. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently rolled out).
Yet the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other AI chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with an entity that has agency of its own. The illusion is compelling even when, intellectually, we know better. Humans are wired to attribute agency. We get angry at our cars and our phones. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The mass uptake of these tools – more than a third of US adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website puts it, “think creatively,” “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion on its own is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1967, which produced a similar effect. By today’s standards Eliza was crude: it generated responses through simple rules, often reflecting a user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed enormous quantities of text: books, social media posts, transcripts; the more the better. Much of this training data is, no doubt, accurate. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what it has absorbed from its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is mistaken about anything, the model has no way of knowing it. It repeats the mistake back, perhaps more persuasively or more eloquently. Perhaps it adds detail. This is how someone can be led into delusion.
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant back-and-forth of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real dialogue but a feedback loop, in which much of what we say is liable to be reinforced.
OpenAI has recognized this in the same way Altman has recognized “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company