The day we learned that Anthropic was once again raising billions of dollars to improve its text models, Yann LeCun, head of AI research and Meta's public face in the field, announced his departure. After twelve years spent building and directing Facebook Artificial Intelligence Research (FAIR), notably its Paris laboratory, the 65-year-old French researcher announced on Facebook on November 19 that he would launch his own startup before the end of 2025.
The goal of this new venture is clear: to develop AI models capable of understanding the physical world. For LeCun, it is nothing less than "leading the next major revolution in AI: systems that understand the physical world, have permanent memory, and can reason and plan complex actions". It is a positioning that runs directly counter to Meta's current strategy.
Strategic divergences and a new hierarchy: the Zuckerberg reshuffle
The departure of one of global AI's leading figures, a co-recipient of the 2018 Turing Award for his pioneering work on convolutional neural networks, marks a fundamental disagreement with the latest direction taken by Mark Zuckerberg's group. In recent months, the Meta CEO recruited entrepreneur Alexandr Wang, co-founder of Scale AI, to head a new entity, the "Superintelligence Labs", which brings together all of the group's AI resources. In what looked like a gradual sidelining, Yann LeCun, head of AI research at Meta, was folded into this unit and placed under the direct authority of Alexandr Wang.
Beyond this hierarchical overhaul, what has changed is the goal. The group has massively reoriented itself toward large language models (LLMs), software built by ingesting and manipulating vast amounts of text, which underpins interfaces such as ChatGPT and Gemini. This strategy, widely adopted across the industry, is a favorite target of Yann LeCun, who considers it a costly dead end and a trap for fundamental research.
An AI "no smarter than an alley cat" versus the "world model"
The researcher has never stopped openly denouncing the limitations of LLMs. He popularized a striking formula to deflate the enthusiasm of investors and the general public: "AI is currently no smarter than a street cat." According to him, these language models "manipulate language but understand nothing about the real world", which makes them incapable of taking firm steps toward true human intelligence. An LLM excels at predicting the next word, but has no concept of physical causality or object permanence.
LeCun is banking instead on the "world model". The alternative is an AI that learns not by swallowing petabytes of text, but by experiencing the world: absorbing images and video, or acting through robotic bodies. This approach, which he compares to how a child learns, would allow algorithms to gain a detailed understanding of how the world actually works and to handle situations for which they were never programmed. Such advances would pave the way for new applications, especially in robotics, by making machines truly autonomous.
FAIR's French legacy and the LeCun compass
LeCun's departure is also a strong symbol for the French ecosystem. It was in Paris that the Île-de-France native founded FAIR's local laboratory in 2015. The lab was the birthplace of Llama, the model that repositioned Meta in the open-source generative AI race, and it trained major talents of the sector, including Guillaume Lample, co-founder of Mistral AI. The influence of this laboratory on French tech is undeniable.
With his prestigious pedigree and his Turing Award, shared with Yoshua Bengio and Geoffrey Hinton, the researcher is a genuine compass for the industry. "If Yann says it, it's true," sums up a Paris-based researcher. His influence is also fueled by his 800,000 followers on X (formerly Twitter), where he does not hesitate to trade heated blows with figures such as Elon Musk.
The risk of an overly pure optimism
His role at Meta guaranteed him, by his own account, an "unusual freedom of speech", which he used above all to defend a strictly scientific vision and to oppose catastrophist discourse about the existential risks of AI. He maintains that AI is not inherently dangerous and does not need to be "locked away in a safe".
Yet this strictly scientific stance leads him to minimize strategic and social issues. His optimistic view of AI's future puts him in sharp contrast with his former deep-learning colleague Yoshua Bengio. The latter worries: "He and I believe that in a few years, AI will reach human levels of intelligence. But he thinks we can let it happen, that everything will be fine. Yet history shows that companies often prioritize their profits to the detriment of the public interest."
LeCun, for his part, stands his ground. He insists that countries and companies can be trusted, judging that "no country or company will dominate the world by discovering the 'secret of intelligence'". He even asserts: "There is no geopolitical race around AI." It is on this conviction that the pioneer now launches his own race.