From virtual assistants that can detect sadness in your voice to robots designed to simulate the warmth of an emotional bond, artificial intelligence (AI) is crossing a more intimate frontier. The fervor awakened by AI rests on an increasingly dense bed of questions that no one quite manages to answer. And while they have the potential to reduce bureaucracy or predict disease, large language models (LLMs) trained on data in multiple formats (text, image, and voice) are capable of something more disturbing: they can behave as if they understand human feelings.
Sensing and reading emotions is a slippery slope for artificial intelligence. Several studies indicate that AI-based chatbots can alleviate loneliness, but they can also isolate users and foster addiction. An extreme case is that of Stein-Erik Soelberg, 56, who ended up killing his mother and taking his own life after long months of chatting with ChatGPT. Last night, the company OpenAI acknowledged that more than a million people talk to ChatGPT about suicide every week.
It is no longer just a question of whether machines can automate tasks, but of the extent to which they are beginning to infiltrate critical areas such as emotions, identity, and even freedom of expression, all gradually shaped by algorithms. Daniel Innerarity, professor of Political and Social Philosophy at the University of the Basque Country, believes that humanity is living through a moment of advertising hype; that is, a moment of strong (and perhaps exaggerated) anticipation.
“I call it digital hysteria. There are great expectations and parallel fears. Between these two extremes we oscillate in an accelerated upward curve,” says the expert. Karen Vergara, a researcher in society, technology and gender at the NGO Amaranta (Chile), thinks something similar. “We are in a process of adapting to and recognizing these technological and sociocultural advances,” she emphasizes, but she adds an important nuance: while one part of society incorporates this technology into its daily life, another remains on the sidelines, people for whom artificial intelligence is not a priority, trapped in precarious contexts and crossed by access gaps that remain open.
The big problem is not how sophisticated this technology, developed over the last century, can be at discovering behavioral patterns, but rather the excessive trust placed in it. A recent study by the MIT Media Lab in the United States identified patterns of user interaction ranging from “socially vulnerable” individuals with intense feelings of loneliness, to users dependent on the technology, with a high emotional attachment, to “casual” users who engage with AI in a more balanced way.
For Innerarity, thinking that someone committed suicide because “an algorithm recommended it” takes us back to a prior question: what happens in the mind of a person who decides to trust a machine rather than a human being? “Surely the problem is older,” the philosopher underlines.
The company, says Innerarity, made a big mistake by anthropomorphizing artificial intelligence. “When I wrote A Critical Theory of Artificial Intelligence (Galaxia Gutenberg, 2025), I had to choose a cover, and the only thing I was clear about was that I didn’t want it to be a robot with a human shape,” he recalls. He is firmly against representations of artificial intelligence with hands, feet and heads: “99% of the robots used by humans do not have an anthropomorphic shape.”
A digital oracle that reproduces prejudices
Mercedes Siles, professor of Algebra at the University of Malaga and member of the advisory board of the Hermes Foundation, proposes a simple image, a metaphor. She asks us to imagine artificial intelligence as a small box full of folded papers, a kind of less crunchy version of fortune cookies. Every morning, a person draws a slip of paper containing a phrase that, without their knowing it, will guide their day. “What begins as a simple ritual gradually becomes a daily necessity. Over time, this practice creates an emotional dependence,” she explains.
Then the box, which was initially just an object, transforms into “an oracle. What no one notices is that this box has neither the wisdom nor the power attributed to it,” she explains. According to Siles, the algorithm is still a language. And like any language, it can reproduce sexist or racist prejudices. “When we talk about the ethics of language, we must also talk about the ethics of algorithms,” she adds.
From Latin America, where digital wounds are added to structural ones, Karen Vergara warns that the problem on that side of the map is more accentuated. Another ethical conflict she observes is excessive complacency. These machine learning models try to associate questions, classify them and, based on all that information, provide the closest matching answer.
In doing so, however, they ignore cultural contexts and mix academic information with vague self-help phrases. “If we move away from that, it’s more likely that these types of virtual assistants and chatbots will end up reinforcing just one way of seeing the world and giving that false sense of being the only friend who doesn’t judge you,” Vergara points out.
Siles then returns to the image, comparing human relationships to a forest. “If you look at what happens beneath the surface, underground, there is an interconnection, and we cannot break it; we have to strengthen it. We have to rethink the kind of society we have,” she stresses.
Regulation, a dilemma
In August 2024, Europe crossed a threshold. The European AI Regulation entered into force and became the world’s first comprehensive legal framework for artificial intelligence. It is a reminder to the governments of the European Union that security and fundamental rights are not optional, but also an invitation to develop a process of AI literacy. Its application is being phased in gradually, and in Spain the draft bill was given the green light last March.
But politics does not always keep pace with the speed of technology, and among those observing the panorama with unease is Professor Mercedes Siles, who does not hide her concern. She is alarmed by the lack of training, by institutional neglect, and by the carelessness with which some companies deploy models without fully understanding the consequences.
“How dare we release these systems like this, just to see what happens?” she asks. The expert insists that people need to be trained to understand where the limits lie. To this vision is added that of the philosopher Daniel Innerarity, who asks us to take a step further back: don’t discuss regulations without first asking what we are really talking about when we talk about artificial intelligence.
“What kind of future are our predictive technologies shaping? What do we actually mean by intelligence?” he asks. For Innerarity, until these basic issues are resolved, any regulation runs the risk of being ineffective. Or, worse yet, arbitrary. “Without understanding, the brakes not only don’t work, they don’t even make sense,” he concludes.
