Why artificial intelligence can’t create new scientific knowledge

In his 1975 book Reflections on Language, Noam Chomsky argues that children learn to speak not merely by imitating what they hear, but by building a theory from sparse and disorderly information. According to the linguist, a child infers knowledge that goes well beyond what they have heard, and this allows them to produce new sentences with no direct relation to previous experience. That is, the child does not just reproduce models, but creates original knowledge.

For researchers Teppo Felin, of Utah State University, and Matthias Holweg, of the University of Oxford, this is the central point that differentiates human learning and reasoning from artificial intelligence (AI). In their academic essay Theory Is All You Need: Artificial Intelligence, Human Cognition, and Causal Reasoning, published at the end of 2024 in the academic journal Strategy Science, the authors describe AI language generation as “data-driven retrospective and imitative prediction,” while human cognition is “prospective and theory-driven.”

“There are researchers who have studied how children process their environment. And it turns out they are not just absorbing data; they are constantly speculating, making hypotheses. If I drop the cup off the table, I learn something about the world around me. And that turns out to be the crucial point: the ability to formulate conjectures, to want to experiment, to advance hypotheses,” explains co-author Felin, 52, in a video call from Utah.

The researcher, who also founded the Institute for Interdisciplinary Studies at Utah State University, says one of his goals is to “dismantle all the hype surrounding artificial intelligence” and to highlight how the human mind is unique in its causal and theoretical reasoning. The essay argues that the mind is not an information processor, and that human beings do not merely predict the world but intervene in it and transform it. This, for the authors, dismantles mind-machine analogies.

Galileo and the Wright Brothers

To illustrate the limitations of language models, Felin and Holweg offer the example of how an artificial intelligence trained on the entire “predominant corpus” up to 1633 would reject Galileo Galilei’s heliocentric model. This is what the authors call “data-belief asymmetry”: while an AI takes something to be true if the majority of texts say so, humans can believe something that contradicts the data.

This asymmetry is what allows human cognition to form beliefs that may initially seem delusional or contrary to existing knowledge, but which can ultimately lead to new discoveries.

Felin argues that large language models are, for now, translators or reformulators that reflect models of the past. “In Galileo’s time, the data indicated that the Earth was not moving. And if you look around, you see that the Earth is not moving and that the sun appears to move from east to west. Therefore, an AI with a training cutoff of 1633 would consider that model correct,” he adds.

By contrast, the authors use the example of the Wright brothers: in the late 19th century, scientists considered flight impossible for objects heavier than air. But while the scientific consensus ruled out human flight, the Wright brothers conducted experiments that solved the problems of lift, propulsion, and steering, proving that flight was possible.

“In uncertain environments, only human theoretical thinking has the advantage, because creativity depends on theories that question the data, not on algorithms. AI extrapolates from past data to say what will happen in the future, but that only works when the environment does not change and there is no uncertainty,” says Felin.

The world is not a database

For Felin, human reasoning, despite its limitations, is the only thing that can accurately reflect a “constantly changing” world. “Humans can process a limited amount of data, we are biased and make bad decisions, but it turns out that we live in a very dynamic environment and artificial intelligence has no way to handle it,” explains the researcher.

Furthermore, the author points out that every day people have to make decisions without data. “In some ways, we’ve given too much importance to data because we don’t always have the right data in front of us. So you have to think about how to get that data and that’s what leads to creativity,” he continues.

Felin also warns against the “panic” that some specialists have stirred up around artificial intelligence. The essay cites, for example, Geoffrey Hinton, often called the “godfather of AI” and winner of the 2018 Turing Award, who has hypothesized that language models could possibly exhibit forms of intelligence or consciousness. The authors reject this view and argue that equating the mind with these computational devices is “conceptually incorrect and philosophically reductive.”

The Finnish academic argues that artificial intelligence is “a technological wave with limitations, especially in areas that require real creativity, problem formulation and far-sighted strategic decision-making.” Felin compares language models to “a sort of dynamic Wikipedia” and emphasizes that artificial intelligence must be seen “as it is: statistics and machine learning in action, without anything mystical behind it.”

Trends is a project of EL PAÍS, with which the newspaper aims to open a permanent dialogue on the great future challenges facing our society. The initiative is sponsored by Abertis, Enagás, EY, Iberdrola, Iberia, Mapfre, Novartis, the Organization of Ibero-American States (OEI), Redeia, Santander and WPP Media, with strategic partner Oliver Wyman.

You can sign up here to receive the weekly EL PAÍS Tendencias newsletter, from journalist Javier Sampedro, every Tuesday.