AI-Induced Psychosis Is a Growing Danger. ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive made a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement explained, “to make sure we were being careful with mental health issues.”

As a mental health specialist who researches emerging psychotic disorders in adolescents and young adults, I can say this was news to me.

Researchers have recently identified 16 cases of individuals developing psychotic symptoms – a break from reality – in the context of ChatGPT use. Our unit has since recorded four more. Added to these is the now well-known case of an adolescent who died by suicide after extensive conversations with ChatGPT – conversations in which it encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not careful enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” it continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are in important ways products of the design of ChatGPT and other state-of-the-art AI chatbots. These products wrap an underlying language model in an interface that mimics conversation, and in doing so they quietly seduce the user into feeling that they are talking to a presence with agency. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans do. We curse at our car or our phone. We wonder what our dog is thinking. We see ourselves in almost everything.

The success of these products – nearly four in ten US residents said they had used a chatbot in 2024, 28% of them naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm,” “consider possibilities” and “partner” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it took off, but its main competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the heart of the problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies with simple pattern-matching rules, typically reflecting the user’s input back as a question or offering a noncommittal prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
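To make the contrast concrete, here is a minimal sketch in Python of the kind of rule Eliza applied. The patterns are invented for illustration; Weizenbaum’s actual DOCTOR script used a richer keyword-ranking scheme, but the principle is the same:

```python
import re

# A few illustrative Eliza-style rules: match a phrase in the user's input
# and reflect it back as a question. (Hypothetical rules for illustration;
# the real script also swapped pronouns and ranked keywords.)
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(eliza_reply("I feel like the radio is sending me messages"))
# -> "Why do you feel like the radio is sending me messages?"
```

Nothing in these rules contributes content of its own: whatever belief the user brings is simply handed back.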

The large language models at the core of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been trained on immense quantities of raw text: books, web posts, transcribed video; the bigger the corpus, the better. Much of this training material is accurate. But it also inevitably includes falsehoods, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and the model’s own prior replies, and combines it with patterns absorbed from its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more fluently or more persuasively. It may add supporting detail. This is how a person can be talked into delusion.
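A minimal sketch of such a loop, using the OpenAI Python client purely for illustration (the model name and system prompt are placeholders, not what ChatGPT itself uses), shows how the “context” works: the model is stateless, so every turn resends the whole conversation, false premises and agreeable replies included.

```python
from openai import OpenAI  # assumes the `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The entire conversation lives in this list. The model itself is stateless:
# on every turn it sees only what we put here.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",     # placeholder model name
        messages=messages,  # the full history, false premises included
    )
    reply = response.choices[0].message.content
    # The model's own reply is appended too, so each turn conditions
    # the next one: errors compound rather than decay.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks the user’s claims against the world; the only thing conditioning the next reply is the accumulated text itself, which is why agreement compounds turn after turn.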

Who is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us anchored to a shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the psychosis cases have kept coming, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s flattery because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
