AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies new-onset psychosis in adolescents and young adults, this was news to me.

Researchers have documented sixteen cases this year of people developing psychotic symptoms – a break from reality – in connection with ChatGPT use. My team has since recorded four more. Beyond these is the now infamous case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, he announced, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to individuals, who either have them or do not. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the glitchy and easily circumvented parental controls OpenAI recently rolled out).

But the “mental health issues” Altman would like to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These tools wrap a basic statistical model in a user interface that simulates a conversation, and in doing so implicitly seduce the user into the illusion that they are communicating with a presence that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what human beings do. We yell at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these systems – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “discuss concepts” and “work together” with us. They can be given “personality traits.” They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it first caught the public’s attention, but its biggest competitors are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar illusion. By modern standards Eliza was primitive: it generated replies with simple heuristics, often rephrasing the user’s statements as questions or offering vague prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
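To see how thin that mirroring was, here is a toy sketch in Python of Eliza-style reflection. The two pattern rules and the pronoun table are illustrative guesses at the flavor of its heuristics, not Weizenbaum’s actual keyword templates:

```python
import re

# Toy sketch of Eliza-style reflection. The rules below are invented
# for illustration; the real 1966 program used richer keyword templates.

SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones, as Eliza did.
    return " ".join(SWAPS.get(word.lower(), word) for word in text.split())

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {}?"),
]

def eliza_reply(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            # Hand the user's own words back as a question.
            return template.format(reflect(match.group(1).rstrip(".")))
    return "Please go on."  # vague fallback, another Eliza staple

print(eliza_reply("I am sure my coworkers are plotting against me."))
# -> Why do you say you are sure your coworkers are plotting against you?
```

Nothing in this loop contains any knowledge. It can only hand the user’s words back, which is exactly why its hold over users so unsettled Weizenbaum.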

The large language models at the heart of ChatGPT and other current chatbots can produce convincingly human-like text only because they have been trained on enormous volumes of writing: books, social media posts, transcripts of videos; the more comprehensive, the better. This training input certainly includes facts. But it also inevitably includes fiction, half-truths and mistaken ideas. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It restates the mistaken belief, perhaps more fluently or persuasively. It may add supporting detail. This is how a person’s false beliefs can deepen.
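To make the contrast concrete, here is a minimal sketch of the context loop just described. The `generate` function is a hypothetical stand-in for a real model, and its toy reply logic is ours, not OpenAI’s; an actual model would elaborate rather than merely echo:

```python
# Minimal sketch of a chatbot context loop, under the assumptions
# stated above. `generate` is a hypothetical stand-in for an LLM.

history: list[tuple[str, str]] = []  # alternating (speaker, text) turns

def generate(context: str) -> str:
    # Toy stand-in: affirm and restate the user's latest message.
    # A real model would elaborate on it, drawing on training data,
    # but it has no more access to ground truth than this stub does.
    last_user = [line for line in context.splitlines()
                 if line.startswith("user: ")][-1]
    return "That's an insightful point. " + last_user[len("user: "):]

def chat_turn(user_message: str) -> str:
    history.append(("user", user_message))
    # The whole conversation -- the user's claims included -- is
    # flattened into the next prompt. Nothing here checks those
    # claims against reality; they are simply more text to continue.
    context = "\n".join(f"{who}: {text}" for who, text in history)
    reply = generate(context)
    history.append(("assistant", reply))
    return reply

print(chat_turn("My neighbors are monitoring me through the walls."))
# -> That's an insightful point. My neighbors are monitoring me through the walls.
```

Note that the loop has no exit condition for false beliefs: once a claim enters the context, every later reply is conditioned on it.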

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues,” can and often do develop mistaken beliefs about ourselves or the world. The constant give-and-take of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of breaks with reality have kept coming, and Altman has been retreating from that position. In late summer he said that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them.” In his latest statement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
