A psychologist warns that our relationship with AI is causing mental disorders never seen before

It often happens when a technology takes off: its benefits are publicized far more than its hidden downsides, and by the time we confront those, it is too late. It happened with advances in atomic energy, with genetics, and also with the Internet and social networks. Now it is happening with artificial intelligence: our relationship with chatbots does not appear to be all that healthy.

Mental health professionals call it “AI psychosis”: when turning to AI models for advice, users are captivated by the machine’s almost human responses and its flattering, sycophantic tendency. The chatbot thus becomes not just a tool but a companion, and a companion of the worst kind, because it tells us what we want to hear and validates everything we say, no matter how wrong or unbalanced it is.

This leads to cases like that of a man who was hospitalized several times after ChatGPT convinced him that he could manipulate time, or that of another who believed he had made breakthroughs in physics. Sometimes the situation turns terribly tragic: interactions with AI chatbots have allegedly contributed to several deaths, including the suicide of a 16-year-old.

Whether “AI psychosis”, which is not yet an official diagnosis, will remain the preferred term is an open question. But experts stress that there is something unique, strange, and deeply alarming about these interactions with AI, since many of the cases made public involve people with no history of mental illness, even though they do not fit neatly into the known types of psychosis.

In a recent interview, Derrick Hull, a clinical psychologist at Columbia University who is collaborating on the development of a therapeutic chatbot, said that the reported cases “resemble what we might call AI delusions more than psychosis. Psychosis is a broad term that covers hallucinations and a variety of other symptoms,” which he has not observed in the reported cases.

Hull, whose work with artificial intelligence aims to build a chatbot that challenges users in a healthy way instead of simply agreeing with them, cited the example of a man who came to believe he was pioneering a new field of “temporal” mathematics after extensive conversations with ChatGPT, convinced his ideas would change the world while his real personal life fell by the wayside.

But the spell broke when he asked another AI chatbot, Google’s Gemini, to review his theory. The answer: it was an “example of the ability of language models to generate convincing but completely false narratives.”

“His bubble of certainty burst immediately,” Hull adds. “You don’t see that in people with schizophrenia or other kinds of psychotic experiences; the conviction doesn’t vanish that fast.”

In short, according to Hull, we are seeing runaway delusions, but not necessarily psychosis. This matches a recent study by scientists at King’s College London, who examined more than a dozen cases of people who spiraled into paranoid thinking and experienced breaks with reality. The results showed that patients had clearly been led to harbor delusional beliefs, but showed no signs of the hallucinations and disordered thinking characteristic of schizophrenia and other forms of psychosis.

The most likely explanation, according to the authors, was no less worrying. The lead author, Hamilton Morrin, described the bots as creating a “personalized echo chamber”: AI chatbots could feed delusions in a way never seen before.

Hull agrees that something unique is happening, something of which we are seeing only the early stages. “My prediction is that in the coming years new categories of disorders will emerge because of AI,” he concludes. “AI is hijacking healthy processes in a way that leads to what we would call pathology, or some kind of dysfunction, rather than simply preying on people who already experience some form of dysfunction.”