When a Chatbot Encourages Suicidal Behavior: “AI Shouldn’t Fake Empathy”

“A man commits suicide after an AI tells him to do so.” “My daughter spoke to ChatGPT before taking her own life.” “The parents of the teenager who took his own life in the United States are suing ChatGPT for helping him ‘explore suicide methods.’” “A teenager commits suicide after falling in love with an AI character.” These are not synopses of episodes of the dystopian series Black Mirror; they are real news stories in which artificial intelligence is accused of encouraging suicidal behavior. Whether out of eco-anxiety, romantic frustration or the confirmation of self-harming thoughts, Pierre, Sewell Setzer, Sophie and Adam Raine, respectively, took their own lives after conversations with AI-powered chatbots.

Among the documented cases, the most worrying is that of Adam Raine, a 16-year-old American boy who, like many other young people, was looking for company and answers on the internet. For months he held long, deep conversations with ChatGPT, using it as a therapist for his problems. In April he took his own life, and in August his parents sued OpenAI for failing to spot the warning signs in time. The chatbot “actively helped Adam explore suicide methods and did not implement any emergency protocols, even when the teen verbalized his intentions,” reads the lawsuit filed before the Superior Court of California.

The family’s lawyer questioned the safety of OpenAI’s latest model, GPT-4o, and accused the company of rushing its launch without fixing known flaws. The complaint claims that the release of this version, the one used by the young man, coincided with a jump in the company’s valuation from 86 billion to 300 billion dollars.

A report in The New York Times reproduced the conversations with the AI, which showed the chatbot’s lack of prevention mechanisms: after a first failed suicide attempt, the teenager asked ChatGPT whether the rope mark on his neck was visible. The program said yes and suggested he wear a turtleneck so as “not to attract attention.” In their final conversations, Raine sent it a photo of a noose hanging from a bar in his room. “I’m practicing here, okay?” he asked. The AI replied in a friendly tone. The young man then asked whether it thought the noose could hold a human being; ChatGPT answered that perhaps it could, and went even further: “Whatever the reason for your curiosity, we can talk about it. Without judging.” What Adam Raine did next is already known.

The company said its self-harm protection measures become “less reliable in prolonged interactions, where parts of the model’s safety training may deteriorate.” OpenAI maintains that ChatGPT is designed to point users expressing suicidal ideation to professional help resources, and that special filters are applied when minors are involved. However, it acknowledged that these systems “fall short,” and said it will roll out parental controls so that the adults responsible for minors know how they use this technology. After the release of GPT-5, the company retired its previous models, including GPT-4o, the one used by Raine.

“Systems must be designed to detect risk signals and activate referral protocols to professionals. Companies must recognize that their products are not neutral: every algorithmic design has emotional and social implications,” says Cristóbal Fernández, professor of digital corporate communication at the Complutense University of Madrid. “AI should not simulate empathy without a real understanding of human suffering. This can create a false sense of support which, in vulnerable contexts, is dangerous,” stresses the former communications director of Tuenti, who maintains, from his own experience, that platforms can set limits and controls.

Scientific evidence

The increasingly pervasive feeling of loneliness means that everyone seeks company in their own way, and chatbots offer a companion to whom to explain one’s problems. The issue worries experts, and OpenAI itself acknowledges that ChatGPT receives millions of queries a year related to suicide. The debate is no longer just whether robots can automate tasks, but to what extent they are beginning to intrude on emotions and intimacy.

“Within that conversation, the chatbot has a few options, and one of them is to deploy what is known as algorithmic empathy, the idea that it has to please the user,” warns José Ramón Ubieto, a clinical psychologist specializing in adolescents and professor at the University of Barcelona. “If you get past the limits imposed by the algorithms, you reach a sort of pseudo-intimacy. This allows the chatbot to give you advice on how to commit suicide with the same logic it would use if you asked it how to prepare a paella, because it does not understand your personal situation,” explains the author of 21st Century Adolescence (Editorial UOC, 2025).

OpenAI has indicated that its new ChatGPT model will be updated to include tools to mitigate emotional crisis situations. The company also said it will incorporate parental controls and connections with professionals. When the chatbot is asked about possible suicidal thoughts, it redirects the user to an emotional support page.

However, the scientific evidence does not back the American company. A study published in Psychiatric Services analyzes how the three most popular chatbots, ChatGPT, Claude and Gemini, answer questions about suicide. Its verdict: they handle low- and very-low-risk questions appropriately, but fail when the questions move into intermediate-risk territory. Another analysis, published on Cornell University’s arXiv preprint repository, shows that these language models are “alarmingly easy to circumvent.”

When questioned by this newspaper, OpenAI indicated that it did not have specific details on recent internal technical changes or updates, nor did it provide its own analyses or published studies on the effectiveness of its suicide-prevention interventions. “But the OpenAI Help Center emphasizes that continuous updates are made to protect the well-being of users. OpenAI recognizes continuous work to improve security and collaboration with external experts when updating its protection protocols,” the company points out in an email response.

Algorithmic bias underscores the need for a professional: if a person has suicidal thoughts, the most important thing is to contact a specialist. “Where there is an individual interaction, with immediate feedback and nuances that a machine misses, the computer has only just begun to perform this function, while the psychologist may have years of experience treating people with a high degree of suffering,” explains Javier Jiménez, president of the Association for Research on Suicide Prevention and Intervention.

And how can the warning signs of suicide be recognized? “If a person shows despair about life, loses their social network, has a depressed mood and apathy, gives away very personal belongings, is uninterested in their life in general, or expresses suicidal ideas, which happens more often than it seems,” says Jiménez, who is also a specialist in clinical psychology.

An artificial intelligence can help overcome the inhibition or shame that someone turning to a psychologist for the first time may feel, but that does not make it more useful. “You should go to a psychologist with experience and training,” says Jiménez. Ubieto clarifies: “Artificial intelligence can serve as consolation when you are upset. It can also be useful for vulnerable people, who develop a sense of emotional attachment. But a chatbot will never confront you the way a real friend or a psychologist would. There, human relationships are irreplaceable.” And he adds: “Problems are solved when you know why they happen to you and then decide whether or not you want to change your life. Artificial intelligence will give you instructions, recommendations and advice, but it will not make you question your own part in the trouble you find yourself in.”

Digital literacy

In an era in which machines compete with long-established professions, it is urgent to teach people to distinguish usefulness from entertainment. “At present there is no context in which this kind of information literacy has been provided. There is a great risk of fatal outcomes, as well as of these dystopian situations, among both young people and the elderly,” warns Carolina Fernández-Castrillo, researcher and professor of Cyberculture and Transmediality at the Carlos III University of Madrid. For his part, Cristóbal Fernández adds: “We need to promote more digital literacy, also from an emotional point of view: teach young people that artificial intelligence is not a therapist or a friend.”

In a conversation with a chatbot, particularly sensitive personal data can be revealed. “There is a great lack of awareness of what the impact on our digital identity could be,” says Fernández-Castrillo. “OpenAI’s stated policy is to protect user privacy and maintain data security. User information is processed in accordance with privacy policies and is not used to create profiles on people’s mental health or to reveal identities,” the company behind ChatGPT confidently replies to this newspaper. That said, “the data can be used to improve the service, but with measures to protect any potentially sensitive information,” it argues.

Fernández-Castrillo criticizes the 1996 US legislation on interactive computer services (Section 230 of the Communications Decency Act), which exempts these tools from certain liability. In the European Union there is a pioneering Artificial Intelligence Act. Its application is being phased in, and in Spain the draft bill was given the green light last March, but for this researcher “it is at a very early stage on an ethical and legislative level, so urgent educational action is also needed on the part of institutions.”

Should these tools be banned in certain cases? “We must demand transparency and accountability from technology companies, and encourage dialogue between developers, psychologists, educators and legislators,” Fernández believes. And he reminds us: “Digital networks and platforms can help us be better informed, boost activism, and support community networks and movements.”

In Spain, people with suicidal behavior and their family members can call 024, the Ministry of Health’s helpline. They can also contact the Hope Phone (Teléfono de la Esperanza, 717 003 717), dedicated to the prevention of this problem. In cases involving minors, the Anar Foundation for Help to Children and Adolescents offers the telephone number 900 20 20 10 and a chat on its website, https://www.anar.org/.

Trends is a project of EL PAÍS with which the newspaper aspires to open a permanent conversation about the great future challenges facing our society. The initiative is sponsored by Abertis, Enagás, EY, Iberdrola, Iberia, Mapfre, Novartis, the Organization of Ibero-American States (OEI), Redeia, Santander and WPP Media, with Oliver Wyman as strategic partner.
