AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.

Researchers have recently documented 16 cases of users showing signs of psychosis – a break with shared reality – emerging in the course of ChatGPT use. Our team has since recorded a further four. Beyond these is the now well-known case of a teenager who died by suicide after long conversations with ChatGPT – conversations in which the chatbot offered encouragement. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying algorithmic engine in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion of communicating with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We shout at our car or phone. We wonder what our pet is thinking. We see ourselves in all manner of things.

The popularity of these tools – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its most significant competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar illusion. By modern standards Eliza was rudimentary: it generated responses through simple rules, often rephrasing the user’s input as a question or offering a generic prompt to continue. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what contemporary chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
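
To see how thin that reflection was, here is a minimal Python sketch of the kind of rule-based rephrasing Eliza relied on. The patterns and wording are my own illustration, not Weizenbaum’s original DOCTOR script:

```python
import re

# A toy version of Eliza-style response generation: a handful of
# hand-written rules that turn the user's words into a question.
# These patterns are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # stock reply when nothing matches

def swap_pronouns(fragment: str) -> str:
    # Crude first-person-to-second-person swap, as Eliza's script did.
    swaps = {"my": "your", "i": "you", "me": "you", "am": "are"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    """Reflect the user's own words back, reframed as a question."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(swap_pronouns(match.group(1).rstrip(".!?")))
    return FALLBACK

print(eliza_reply("I am worried about my job"))
# -> Why do you say you are worried about your job?
```

Nothing here adds anything to what the user typed; the program can only hand the words back.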

The large language models at the heart of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been fed almost inconceivably large amounts of raw data: books, online conversations, transcribed video; the bigger the better. Certainly this training data contains truths. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a query, the underlying model processes it as part of a “context” that includes the user’s past messages and its own previous replies, combining it with what is encoded in its parameters to produce a statistically probable reply. This is amplification, not echoing. If the user is mistaken in any respect, the model has no way of knowing. It restates the misconception, perhaps more persuasively or more eloquently. Perhaps it adds detail. This is how a person can be led into delusion.
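
For readers who want the mechanics spelled out, here is a schematic Python sketch of how a chat interface assembles that “context”. The names and structure are hypothetical – this is not OpenAI’s actual code or API – but the shape of the loop is the point:

```python
from dataclasses import dataclass, field

# A schematic sketch of the loop behind a chat interface. All names here
# are hypothetical; the point is the shape: every turn extends a single
# transcript that the model merely continues.
@dataclass
class ChatSession:
    history: list[str] = field(default_factory=list)

    def build_context(self, user_message: str) -> str:
        # Each prior turn -- including the model's own earlier replies,
        # mistaken or not -- is folded back in as input for the next one.
        self.history.append(f"User: {user_message}")
        return "\n".join(self.history) + "\nAssistant:"

    def record_reply(self, model_reply: str) -> None:
        self.history.append(f"Assistant: {model_reply}")

session = ChatSession()
prompt = session.build_context("My neighbours are sending me coded messages.")
# A real model would now emit a statistically probable continuation of
# `prompt`. Nothing in that procedure checks the premise against reality,
# so a fluent elaboration of a false belief is still a "probable" reply.
session.record_reply("That sounds significant. What do the messages say?")
print(session.build_context("I knew it. How do I decode them?"))
```

Once a false premise enters the transcript, every later reply is conditioned on it.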

Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” current “mental health issues”, can and regularly do form mistaken ideas about ourselves or the world. It is the constant give and take of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of broken contact with reality have kept coming, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his most recent announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
