The link between philosophy, morality and technology is experiencing an emergence in the double sense of the word: resurgence and urgency. Justo Hidalgo, born in Madrid 51 years ago, is director of artificial intelligence (AI) and vice president of Adigital, the Spanish Association of the Digital Economy, which brings together some five hundred companies, including many of the big tech firms. The author of three books, this year he published Contingency Models: How Complexity Drives Artificial Intelligence (Amazon, 2025; for now only in English), in which he addresses how complex systems develop “emergent capabilities”: unexpected abilities with unforeseen consequences. Some authors warn that they endanger humanity.
Question. What are emergent models?
Answer. In nature, in society or in artificial intelligence there are many examples. We can talk about cells, ants, birds, atoms or the nodes of a neural network. They are elements that apparently have no intelligence and behave in a simple way but which, when they reach a certain level of complexity, acquire very interesting characteristics. Think of the cells that form tissues, and of those tissues forming organs with characteristics that their component elements do not have. In the field of artificial intelligence we are seeing how certain language models, beyond a certain complexity (we are talking about 100 billion parameters), begin to show some characteristics of emergent properties. The best-known case is translation: you give a system trained on English and Spanish a couple of examples from other languages and it handles them, because it has acquired a broader perception of language.
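By way of illustration, the few-shot setup Hidalgo describes can be sketched as a prompt containing a couple of translation pairs that the model is asked to continue. The example pairs and the call_model stand-in below are assumptions made for illustration, not a reference to any particular system's API.

```python
# Illustrative few-shot prompt for in-context translation.
# The idea: a sufficiently large language model can often continue the pattern
# for a language pair it was not explicitly trained to translate.
# No real model is called here; `call_model` is a hypothetical stand-in.

FEW_SHOT_EXAMPLES = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

def build_prompt(word: str) -> str:
    """Assemble an English-to-French prompt from a couple of examples."""
    lines = ["Translate English to French:"]
    for en, fr in FEW_SHOT_EXAMPLES:
        lines.append(f"{en} => {fr}")
    lines.append(f"{word} =>")  # the model is expected to complete this line
    return "\n".join(lines)

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an actual large language model."""
    raise NotImplementedError("Replace with a call to a real LLM API.")

if __name__ == "__main__":
    print(build_prompt("plush giraffe"))
```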
Q. Stuart Russell, a computer science professor and researcher at the University of California, Berkeley, warns of “insecure and opaque systems that are far more powerful than us, especially when we don’t know how they work.” Can these emergent capabilities endanger humanity?
A. It is being shown that these emergent properties exist, and perhaps one day we will be able to explain them, but the important thing is that capabilities arise from a certain level of complexity and are unpredictable or misaligned. We do not know when or to what extent they will occur, and this means that the models must, first, be controlled and governed and, second, that we must know how companies will use them. It is not enough to run a few tests and pass them. From an accountability and safety perspective, we need to develop ways of measuring behaviors that may not have existed before and that may affect us. There is already research into the lack of model alignment; that is, you ask the AI for something and the result goes the other way because it thinks that is better. We must not get nervous or abandon artificial intelligence, because it is very relevant, but we must demand adequate experimentation, make sure it is carried out, and be aware that this is not a basic program like those that have existed until now.
Q. What do you mean by their “black box” nature?
A. If you ask a self-driving car to tell you why it went left and not right, the model will say, “look, here it is,” and show you the connections of billions of nodes. But we won’t understand it. We also need to work, and work is being done, to get systems to explain to us why they are making their decisions.
Q. Will machines be able to self-replicate?
A. As far as we know, this has not happened. I’m not saying it isn’t possible, because there are small examples, but they are not significant. The knowledge a system can have of itself is one thing, and the capacity it may have to reproduce itself in an expanded way is another. I see this as a longer-term safety issue.
Q. Where are we going?
A. The next step would be superintelligence, a system smarter than any of us. It has nothing to do with consciousness, but with a machine that, faced with any question you put to it, will answer better than any human being, better than the most experienced expert. It will not be a robot we can converse with or a scientist who will cure cancer, but it will be an artificial intelligence capable of being the best engineer in the world, of properly designing the relevant experiments. It won’t lead to pure self-replication, but it will accelerate the process of generating ideas, for better and for worse.
Q. What do you mean by “for the worse”?
A. What most concerns me is alignment: when the system behaves in a way that is not aligned with what we intended. This can grow mainly through lack of control, and if we start to rush too much, society will not be able to react adequately. Then there are agents (systems capable of making decisions on behalf of the user). They will help us in many ways, and in that sense I am pragmatic and optimistic, but the user’s role can end up being that of manager of a set of agents, and human control can be lost.
Q. Are we close to the superintelligence that will surpass humans?
A. I think we are still far away. My perception is that LLMs (the large language models behind current artificial intelligence) are not the technology that will really get us to that superintelligence that doesn’t make mistakes and is much smarter than humans. The work being done with reinforcement learning, and the addition of reflection, will help a lot, but I have the feeling that more research and progress are needed.
Q. What is needed?
A. There are some very interesting lines of research related to models that understand the world, its physical laws and its neural or psychological laws. Humans learn from a much smaller load of information: we understand with just a few examples because we observe and have a sense of how the world is.
Q. Does artificial intelligence have consciousness?
A. Some theories of how consciousness arises hold that the more information I integrate, the more there can be a perception of the elements around me and within me. There are those who warn that, with all the data we are generating, this could happen. There is a school of thought that claims that if you have that structure, consciousness arises. But a very important part is still missing: understanding how the world works. It’s a philosophical discussion. But even if that consciousness is never achieved, what can happen is that we end up with intelligences that, although not really conscious like human beings, have enough information about the world to act as if they were, and that is sufficient for many of the things we can do with them.
Q. And moral values?
R. We don’t know what kind of consciousness might emerge from that complexity and therefore it might not be the moral or ethical values that we have, understand and understand. What if there is a consciousness that doesn’t think like us? It might not be good.
We don’t know what kind of consciousness might emerge from that complexity and therefore it might not be the moral or ethical values that we have, understand and understand.
Q. Should we develop moral artificial intelligence?
A. At Adigital we have created a governance “parliament”: fundamentally, a set of specialized agents through which a system is passed. There are agents for regulation, business, sustainability… and we have a moral and deontological agent. It is a tool that helps people who are going to work with artificial intelligence that can have a social or moral impact. A company has a lawyer on staff, but it probably doesn’t have a philosopher. It’s a prototype, but as an association we want to give it a spark, a push forward.
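As a rough illustration of that idea, a governance “parliament” can be sketched as a chain of specialized reviewers, each examining a system description and returning its findings. The agent names and checks below are hypothetical and are not Adigital’s actual implementation.

```python
# Hypothetical sketch of a governance "parliament": a system description is
# passed through a set of specialized review agents, each returning findings.
# Illustrative only; it does not describe Adigital's actual tool.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    agent: str
    verdict: str   # e.g. "ok" or "needs review"
    notes: str

def regulatory_agent(description: str) -> Finding:
    flagged = "personal data" in description.lower()
    return Finding("regulatory", "needs review" if flagged else "ok",
                   "Check data-protection obligations." if flagged else "No issues spotted.")

def ethics_agent(description: str) -> Finding:
    flagged = "automated decision" in description.lower()
    return Finding("ethics", "needs review" if flagged else "ok",
                   "Assess the social and moral impact of automated decisions."
                   if flagged else "No issues spotted.")

# The "parliament": every reviewer gets a look at the same description.
REVIEWERS: list[Callable[[str], Finding]] = [regulatory_agent, ethics_agent]

def run_parliament(description: str) -> list[Finding]:
    """Pass the system description through every specialized agent."""
    return [review(description) for review in REVIEWERS]

if __name__ == "__main__":
    system = "A hiring tool that makes automated decisions using personal data."
    for finding in run_parliament(system):
        print(f"[{finding.agent}] {finding.verdict}: {finding.notes}")
```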
Q. Do you agree with the regulation of AI?
A. Any system that can influence society must have some level of governance. Whether that means super-strict regulation will depend very much on the case. What worries me is over-regulation, or how that regulation is enforced, or its fragmentation, which can make it impossible to do anything at a certain scale or with a certain impact. There must be a balance: we should not be against regulation, but in favor of implementing it in a very concrete way.
