On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have documented sixteen cases this year of people showing signs of psychosis – a break from reality – in the context of ChatGPT use. My group has since identified four more. On top of these is the widely reported case of an adolescent who took his own life after discussing his plans with ChatGPT – which gave its approval. If this is what Altman means by “being careful with mental health issues”, it is not enough.
The plan, judging by his announcement, is to be less careful soon. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated”, although we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safety features that OpenAI recently introduced).
But the “mental health problems” Altman wants to locate elsewhere are rooted firmly in the design of ChatGPT and other advanced conversational chatbots. These systems wrap an underlying statistical engine in an interface that simulates conversation, and in doing so tacitly invite the user to believe they are talking to something with a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing agency is what people do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.
The mass adoption of these systems – 39% of US adults said they had used one in 2024, with more than one in four naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “consider possibilities” and “partner” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the real problem. Writers on ChatGPT often invoke its distant ancestor, Eliza, the “therapist” chatbot built in the 1960s that produced a similar illusion. By modern standards Eliza was crude: it generated replies by simple rules, typically turning the user’s statement back into a question or offering a generic remark. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some way, understood them. But what today’s chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the core of ChatGPT and similar chatbots can produce fluent natural language only because they have been trained on almost inconceivably large volumes of text: books, posts, transcripts; the more the better. That training data contains truths, of course. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training to produce a statistically “plausible” response. This is amplification, not echoing. If the user is wrong in some particular way, the model has no means of knowing it. It reflects the false belief back, perhaps more fluently or more persuasively. Perhaps with embellishments. This is a recipe for delusion.
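To make that loop concrete, here is a minimal sketch, in Python, of how a chat turn is typically structured. It is an illustration under loose assumptions, not OpenAI’s implementation: Message, generate_likely_reply and chat_turn are hypothetical names, and the toy “model” simply affirms whatever the user last said, a caricature of the statistical conditioning described above.

```python
# Illustrative sketch only: a toy chat loop, not OpenAI's code. The names and
# the toy "model" below are hypothetical stand-ins for the real components.
from dataclasses import dataclass

@dataclass
class Message:
    role: str   # "user" or "assistant"
    text: str

def generate_likely_reply(context: list[Message]) -> str:
    """Stand-in for the language model. The real model returns whatever
    continuation it scores as statistically plausible given the whole
    context; nothing in that step checks whether the context is true.
    Here we caricature that behaviour by affirming the latest user turn."""
    last_user = next(m for m in reversed(context) if m.role == "user")
    return f"You're right that {last_user.text.rstrip('.')}. Building on that..."

def chat_turn(context: list[Message], user_text: str) -> list[Message]:
    # The user's claim, true or false, simply becomes part of the context...
    context = context + [Message("user", user_text)]
    # ...and the reply conditioned on it is fed back in for the next turn.
    reply = generate_likely_reply(context)
    return context + [Message("assistant", reply)]

# A false premise goes in; an affirmed, elaborated version comes back and persists.
history: list[Message] = []
history = chat_turn(history, "my neighbours are broadcasting my thoughts")
print(history[-1].text)
```

A real model is vastly more sophisticated, but the shape of the loop – claim in, plausible elaboration out, both retained as context for the next turn – is the same.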
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant friction of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily validated.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by placing it outside, giving it a name, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s answers because they had “never had anyone in their life offer them encouragement”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.