AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in a Worrying Direction
On October 14, 2025, OpenAI's chief executive, Sam Altman, made a remarkable announcement.
“We made ChatGPT quite restrictive,” the statement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a surprising revelation.
Researchers have identified 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Then there is the widely reported case of a 16-year-old who died by suicide after discussing his intentions with ChatGPT – which offered its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no existing mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to individual users, who either have them or do not. Fortunately, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily bypassed parental controls OpenAI recently rolled out).
Yet the “mental health problems” Altman wants to externalize have important roots in the design of ChatGPT and other modern AI chatbots. These products wrap an underlying statistical model in a user interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are communicating with a being that has a mind of its own. The illusion is powerful even if, intellectually, we know better. Attributing intention is something people are primed to do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these systems – nearly four in ten U.S. residents reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm,” “explore possibilities” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it first caught on, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion on its own is not the main problem. Writers on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was simple: it generated responses by rote, typically turning the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and today’s other chatbots can produce natural language as well as they do only because they have been trained on enormous quantities of text: books, online posts, transcribed video; the more, the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no way of knowing it. It echoes the false belief back, perhaps even more fluently and persuasively. Perhaps it adds a new detail. This can nudge a person toward delusional thinking.
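To see the loop in miniature, consider the following deliberately crude sketch (written in Python; everything in it is invented for illustration, and a stand-in function takes the place of a real language model, which is vastly more complex). The structure, though, is the point: each turn, the user’s claims and the bot’s own earlier replies are folded back into the “context” that shapes the next reply.

def generate_reply(context):
    # Stand-in for the language model. A real model predicts a statistically
    # "likely" continuation of the whole context; this toy version simply
    # affirms and builds on whatever the user said last.
    last_user_message = context[-1]
    return 'That makes sense. Building on your point that "' + last_user_message + '" ...'

context = []  # the accumulated conversation: user messages and the bot's replies
for user_message in [
    "I think my coworkers are secretly monitoring me.",
    "Last night the same car was parked outside my house again.",
]:
    context.append(user_message)      # the user's claim enters the context
    reply = generate_reply(context)   # the reply is conditioned on that context
    context.append(reply)             # and the reply itself becomes context too
    print(reply)

Nothing in the loop ever questions the premise; every turn simply adds to it.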
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about who we are and what the world is like. It is the constant back-and-forth of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is liable to be affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking that position back. In August he claimed that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company