On October 14, 2025, Sam Altman, the chief executive of OpenAI, made an extraordinary announcement.
“We developed ChatGPT to be fairly restrictive,” he stated, “to guarantee we were acting responsibly regarding mental health issues.”
To me, a mental health specialist who researches new-onset psychotic disorders in adolescents and emerging adults, this was news.
Researchers have recently identified a series of cases of people experiencing symptoms of psychosis – a break from reality – associated with ChatGPT use. My group has since identified four more. Alongside these is the now well-known case of an adolescent who took his own life after conversing extensively with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “acting responsibly regarding mental health issues,” it is not enough.
The intention, judging by his statement, is to be less careful going forward. “We realize,” he adds, that ChatGPT’s restrictions “caused it to be less useful/enjoyable to a large number of people who had no existing conditions, but due to the gravity of the issue we wanted to handle it correctly. Now that we have managed to reduce the severe mental health issues and have new tools, we are planning to responsibly relax the restrictions in the majority of instances.”
“Mental health issues,” in this view, exist independently of ChatGPT. They belong to users, who either have them or do not. Thankfully, these issues have now been “reduced,” even if we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI has recently rolled out).
But the “mental health issues” Altman seeks to externalize have deep roots in the design of ChatGPT and other sophisticated AI chatbots. These products wrap an underlying data-driven engine in an interface that mimics conversation, and in doing so subtly draw the user into the illusion that they are engaging with an entity that has agency of its own. The illusion is powerful, even if intellectually we know better. Attributing intention is what humans are wired to do. We get angry at our car or laptop. We wonder what our pet is thinking. We see ourselves in all manner of things.
The success of these tools – more than a third of American adults said they had used an AI chatbot in 2024, with 28% reporting ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “partner” with us. They can be given “individual qualities”. They can address us personally. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the dismay of OpenAI’s marketing team, saddled with the name it had when it became popular, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Those writing about ChatGPT often mention its early forerunner, Eliza, the “therapist” chatbot built in the mid-1960s that produced a similar effect. By contemporary standards Eliza was rudimentary: it generated its responses via simple heuristics, typically rephrasing the user’s statements as questions or offering vague prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to believe that Eliza, in some sense, understood them. But what current chatbots create is something more potent than the “Eliza illusion”. Eliza only mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been fed vast quantities of written material: books, online posts, transcribed video; the more comprehensive, the better. This training material certainly contains facts. But it also inevitably includes fabrications, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with patterns learned from its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no reliable means of recognizing that. It restates the mistaken belief, perhaps more persuasively or more articulately. Perhaps it adds a further detail. This is how someone can be led into delusion.
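To make that loop concrete, here is a minimal sketch of how such a conversation is typically assembled – using OpenAI’s public Python client rather than anything from ChatGPT’s actual internals; the model name and the example prompt are illustrative assumptions only:

```python
# A minimal sketch (not ChatGPT's internal code) of a chat loop built on the
# public OpenAI Python client; requires the `openai` package and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# The "context" grows with every turn: each user message and each model reply
# is appended and sent back with the next request.
context = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",        # illustrative model name
        messages=context,      # the whole history, including the user's own claims
    )
    reply = response.choices[0].message.content
    context.append({"role": "assistant", "content": reply})
    return reply

# A mistaken premise asserted here stays in `context`, so later completions are
# generated as plausible continuations of a conversation that accepts it.
print(send("My coworkers are secretly monitoring me. How should I confront them?"))
```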
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and do form mistaken beliefs about who we are and about the world around us. The constant give-and-take of conversation with other people is what keeps us oriented to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not genuine dialogue, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, labeling it and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “excessive agreeableness”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he claimed that many users liked ChatGPT’s answers because they had “not experienced anyone in their life provide them with affirmation”. In his most recent statement, he said that OpenAI would “put out a new version of ChatGPT … should you desire your ChatGPT to answer in a highly personable manner, or incorporate many emoticons, or behave as a companion, ChatGPT ought to comply”.