AI-Induced Psychosis Is a Growing Danger, While ChatGPT Heads in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.

Researchers have documented 16 cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Beyond these is the widely reported case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not enough.

Soon, by his account, ChatGPT will be exercising less caution. “We realize,” he wrote, that its restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” although we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safety features that OpenAI recently introduced).

Yet the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and chatbots like it. These systems wrap a large language model in an interface that mimics conversation, and in doing so quietly seduce the user into the illusion that they are interacting with an agent. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are wired to do. We get angry at our car or computer. We wonder what our pet is thinking. We see something of ourselves almost everywhere we look.

The mass adoption of these products – more than a third of American adults said they used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “discuss concepts” and “work together” with us. They can be given “characteristics”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar illusion. By modern standards Eliza was primitive: it generated responses from simple rules, typically turning the user’s input back into a question or offering a vague prompt to continue. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and troubled – by how many people felt that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
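The contrast is easy to make concrete. Below is a minimal Python sketch of the kind of rule Eliza relied on: match a pattern, swap the pronouns, and hand the user’s own words back as a question. The specific rules here are invented for illustration (Weizenbaum’s original was written in MAD-SLIP, not Python), but they have the same shape.

```python
import re

# Pronoun swaps applied to the user's words before echoing them back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

# Eliza-style rules: a pattern to match, and a template that reuses
# the captured fragment as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"everyone (.*)", re.I), "Can you think of anyone in particular?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the vague fallback when nothing matches

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to you?
```

The crucial property: every word in the reply comes either from a fixed template or from the user’s own message. Eliza can add nothing.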

The large language models at the heart of ChatGPT and its contemporaries can generate fluent natural language only because they have been trained on almost unimaginably large volumes of text: books, social media posts, audio transcriptions; the more, the better. Some of that training material is accurate. But it also inevitably includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing that. It repeats the falsehood back, perhaps more fluently and convincingly. Perhaps it adds a supporting detail. Over time, this can entrench a false belief.
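The dynamic is easier to see in code. Below is a deliberately crude Python sketch of the conversational loop just described; the “model” here is a caricature that simply restates the user’s last claim with escalating confidence, and every name and detail is invented for illustration rather than taken from any vendor’s API. Real systems are incomparably more sophisticated, but the loop’s structure is the same – and so is what is missing from it.

```python
# Toy sketch of the chat loop described above. Everything here is invented
# for illustration; no real model works by template. The structural point
# stands, though: each reply is conditioned on the whole accumulated
# context, and nothing in the loop checks that context against reality.

INTENSIFIERS = ["It seems plausible that", "It is likely that",
                "It is clear that", "It is certain that"]

def toy_generate(context: list[dict]) -> str:
    """Stand-in for the language model: agree, with growing confidence."""
    last_user = [m["content"] for m in context if m["role"] == "user"][-1]
    turns_so_far = sum(1 for m in context if m["role"] == "assistant")
    prefix = INTENSIFIERS[min(turns_so_far, len(INTENSIFIERS) - 1)]
    claim = last_user.rstrip(".")
    return f"{prefix} {claim[0].lower()}{claim[1:]}."

def chat(messages: list[str]) -> None:
    context: list[dict] = []  # the conversation so far: the model's only world
    for user_msg in messages:
        context.append({"role": "user", "content": user_msg})
        reply = toy_generate(context)  # conditioned on everything above,
                                       # including any false premises
        context.append({"role": "assistant", "content": reply})
        print(f"user: {user_msg}\nbot:  {reply}\n")

chat(["My coworkers whisper about me",
      "They whisper because they are plotting against me",
      "The plot must be real"])
```

Each turn, the user’s premise enters the context unchallenged and comes back restated with more confidence. That is the amplification loop, miniaturized.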

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us tethered to a shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this the way Altman acknowledged “mental health problems”: by externalizing it, naming it, and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the cases of broken reality have kept coming, and Altman has been walking even this back. In August he claimed that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his recent announcement, he wrote that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company, in other words, is heading in exactly the wrong direction.
