Meta’s Yann LeCun rejects AGI to propose Superhuman Adaptable Intelligence as new North Star
Meta’s Yann LeCun rejects the AGI obsession, proposing specialized superhuman systems designed to solve complex problems biological brains cannot.
March 5, 2026

The debate over the future of artificial intelligence has long been centered on the pursuit of Artificial General Intelligence, or AGI, a hypothetical stage where machines reach or exceed human-level cognitive abilities across all domains.[1][2] However, a provocative new research paper co-authored by Meta’s Chief AI Scientist Yann LeCun, alongside researchers from New York University and Columbia University, argues that the industry’s obsession with AGI is a fundamental mistake. The paper, titled AI Must Embrace Specialization via Superhuman Adaptable Intelligence, suggests that the term AGI is both scientifically flawed and practically misleading.[3] In its place, LeCun and his colleagues propose a new "North Star" for the field: Superhuman Adaptable Intelligence, or SAI.[3][4] This shift in terminology represents more than just an academic rebranding; it is a direct challenge to the dominant narratives held by organizations like OpenAI and Google DeepMind, and it signals a radical pivot in how the world should measure and build advanced machine intelligence.
The core of LeCun’s argument is the dismantling of what the authors call the "myth of human generality."[3] For decades, the AI community has used human intelligence as the ultimate benchmark for "general" intelligence. The authors contend that this is a biological fallacy stemming from a human-centric bias.[3] Evolution did not design the human brain to be a universal problem-solver; instead, it created a highly specialized biological machine optimized for survival, social navigation, and high-dimensional sensorimotor control within a specific physical environment. Humans are remarkably poor at many tasks, such as high-level mathematics, processing billions of lines of code, or folding complex proteins, which are precisely the areas where computers already excel. We feel "general" only because we are naturally blind to the vast universe of cognitive tasks that lie outside our biological niche.[3] By chasing a machine that replicates human intelligence, the authors argue, the industry is inadvertently limiting AI to a specific subset of specialized skills rather than expanding it into truly transformative capabilities.[5][4]
To give this critique a mathematical footing, the paper cites the No Free Lunch theorem, which holds that, averaged over all possible problems, no learning algorithm outperforms any other; an algorithm's edge on one class of tasks is necessarily paid for with losses elsewhere. Make a system broadly general, and its performance on specific, complex tasks tends to suffer.[6] Consequently, the researchers argue that the quest for a single "God-model," a universal system that can write poetry, diagnose diseases, and perform plumbing with equal skill, is likely a dead end. Instead of striving for a jack-of-all-trades that mimics human mediocrity in certain areas, the industry should focus on building systems that are deeply specialized but capable of adapting to new tasks with extreme speed.[7][3][5][4][8] This is where the concept of Superhuman Adaptable Intelligence enters the frame, prioritizing the ability to learn and excel in specialized domains rather than striving for a nebulous form of "general" capability.[1][3][7][4][8]
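The paper invokes the theorem qualitatively; for readers who want the formal version, the optimization form due to Wolpert and Macready (1997) is reproduced below as background (choosing this variant is an assumption on our part, not something the paper specifies). It says that any two algorithms a1 and a2 are indistinguishable once performance is summed over every possible objective function f:

```latex
% No Free Lunch theorem (Wolpert & Macready, 1997), optimization form.
% d_m^y is the sequence of cost values observed after m evaluations of f.
% Summed over all objective functions f, any two algorithms a_1 and a_2
% induce identical performance distributions:
\sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right)
  = \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right)
```

In other words, an algorithm that beats chance on one family of problems must give that advantage back on the complement, which is the formal backbone of the generality-versus-performance trade-off the authors describe.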
The SAI framework is defined by three distinct pillars. The first, "Superhuman," refers to the machine's ability to exceed human performance in specific, economically or scientifically valuable tasks. This includes processing speed, data scale, and precision that no biological entity could match. The second pillar, "Adaptable," is perhaps the most critical. It addresses the primary weakness of today's specialized AI: the fact that current models are often "brittle" and cannot easily pivot to new environments. SAI requires machines that can learn new skills from just a few examples, even in low-data environments, much like a human can, but then scale that learning to a superhuman level of execution. Finally, "Intelligence" is defined as goal-directed behavior: the capacity for reasoning, planning, and achieving specific outcomes in the real world. By combining these traits, SAI moves the conversation away from mimicking the "human vibe" and toward building tools that tackle the intractable problems humanity cannot solve on its own.[3]
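One plausible way to read the "adaptable" pillar in code (a hypothetical sketch, not drawn from the paper; all names and sizes are invented) is a large frozen model supplying the superhuman base competence while a tiny task-specific head is fitted from a handful of labeled examples:

```python
import torch
import torch.nn as nn

# Hypothetical few-shot adaptation: a large frozen backbone supplies the
# base competence, and a tiny head is fitted on a handful of examples.
backbone = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False  # the specialized base skills stay fixed

head = nn.Linear(256, 3)  # brand-new 3-way task, learned from scratch
opt = torch.optim.SGD(head.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Twelve labeled examples: roughly four "shots" per class.
x_few = torch.randn(12, 64)
y_few = torch.randint(0, 3, (12,))

for _ in range(50):  # rapid adaptation loop
    opt.zero_grad()
    loss = loss_fn(head(backbone(x_few)), y_few)
    loss.backward()
    opt.step()
```

The point of the sketch is the asymmetry: the expensive, specialized capability is reused wholesale, while adaptation to the new task touches only a small number of parameters and examples.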
A significant portion of the paper focuses on the technical roadmap required to achieve SAI, which differs sharply from the current trend of scaling Large Language Models. LeCun has been a vocal critic of the idea that simply adding more compute and data to autoregressive LLMs will eventually lead to true intelligence.[6] He argues that these systems lack a "world model"—a fundamental understanding of physical reality, causality, and the ability to predict the consequences of actions.[9] While LLMs are excellent at predicting the next word in a sequence, they are often incapable of the complex planning required for physical tasks or deep scientific discovery. To move toward SAI, the authors advocate for Self-Supervised Learning and architectures like the Joint Embedding Predictive Architecture, or JEPA. These models are designed to learn internal representations of the world by observing it, allowing them to reason and plan in a way that transcends mere pattern recognition.
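For readers who want a concrete picture, here is a minimal PyTorch sketch of the joint-embedding idea; it is not Meta's implementation, and the module sizes, names, and the random tensors standing in for context and target views are invented for illustration. The property that matters is that the loss is computed between predicted and actual embeddings rather than between raw pixels or tokens:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JEPASketch(nn.Module):
    """Minimal joint-embedding predictive setup: predict the embedding of a
    target view from the embedding of a context view. Illustrative only."""

    def __init__(self, dim=128, hidden=256):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # The target encoder is a slow-moving copy of the context encoder,
        # updated by exponential moving average rather than by gradients.
        self.target_encoder = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        self.predictor = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    @torch.no_grad()
    def update_target(self, momentum=0.99):
        for pt, pc in zip(self.target_encoder.parameters(),
                          self.context_encoder.parameters()):
            pt.mul_(momentum).add_(pc, alpha=1 - momentum)

    def loss(self, context_view, target_view):
        pred = self.predictor(self.context_encoder(context_view))
        with torch.no_grad():
            target = self.target_encoder(target_view)
        # The loss lives in embedding space: the model predicts
        # representations, not raw pixels or tokens.
        return F.mse_loss(pred, target)

model = JEPASketch()
opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
# Random tensors stand in for two views of the same input.
ctx, tgt = torch.randn(32, 128), torch.randn(32, 128)
opt.zero_grad()
model.loss(ctx, tgt).backward()
opt.step()
model.update_target()
```

In a real system the two views would be, for example, visible and masked patches of the same image or video clip, and the encoders would be far larger; training in representation space is what lets the model learn abstract structure instead of reconstructing every surface detail.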
This shift has profound implications for the AI industry's competitive landscape. Currently, the "race to AGI" has become a marketing-heavy endeavor that drives massive investment and fuels public anxiety about existential risks. By redefining the goal as SAI, the researchers are attempting to demystify the technology and return the focus to engineering and utility. If the goal is not a singular, sentient-like entity but rather a suite of highly adaptable, superhuman tools, the narrative changes from "building a new life form" to "advancing human capability."[1][3][5][4] This could alter how venture capitalists allocate funds, prioritizing startups that focus on specialized applications like autonomous lab discovery or energy grid optimization rather than those attempting to build the ultimate chatbot. Furthermore, it shifts the focus of enterprise AI from replacing human workers in general roles to augmenting them in highly complex, data-heavy niches.
The safety and regulatory discussions surrounding AI would also be fundamentally reshaped by the adoption of the SAI concept. Much of the current regulatory fear is rooted in the "intelligence explosion" theory, in which a recursively self-improving general intelligence rapidly outstrips human control and turns against humanity. However, if AI is viewed as a collection of specialized, adaptable systems, the safety concerns become much more grounded in technical reliability and alignment. The challenge becomes ensuring that a superhuman system specialized in chemical engineering does not create toxic compounds by mistake, rather than worrying about a general intelligence developing its own "will." The authors argue that SAI allows for more targeted and effective safety guardrails, as these systems would be driven by specific, hard-coded objectives and world models that predict outcomes, making them inherently more controllable than today's black-box language models.
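A toy example makes that control pattern concrete; every function below is a hypothetical stand-in for illustration, not an API from the paper or from Meta. A planner rolls each candidate action through a world model and assigns infinite cost to any predicted state outside the safe region, so a plan that violates the guardrail can never win the optimization:

```python
import numpy as np

def world_model(state, action):
    """Predict the next state; in practice this would be a learned model."""
    return state + 0.1 * action

def task_cost(state):
    """Distance to the goal state the system is optimizing for."""
    return float(np.sum((state - 1.0) ** 2))

def guardrail_cost(state):
    """Infinite cost outside the allowed region, so no plan that is
    predicted to violate the constraint can ever be selected."""
    return np.inf if np.any(np.abs(state) > 2.0) else 0.0

def plan(state, candidate_actions, horizon=10):
    """Pick the action whose predicted rollout minimizes task cost
    while never entering a forbidden state."""
    best_action, best_cost = None, np.inf
    for a in candidate_actions:
        s, cost = state.copy(), 0.0
        for _ in range(horizon):
            s = world_model(s, a)
            cost += task_cost(s) + guardrail_cost(s)
        if cost < best_cost:
            best_action, best_cost = a, cost
    return best_action

state = np.zeros(3)
actions = [np.full(3, v) for v in (-1.0, 0.0, 0.5, 1.0, 3.0)]
# Prints [1. 1. 1.]: the 3.0 action would breach the guardrail mid-rollout
# and is rejected despite moving fastest toward the goal.
print(plan(state, actions))
```

The guardrail here is evaluated on predicted outcomes before any action is taken, which is the sense in which a world-model-driven system can be constrained more directly than a model whose behavior is only observable after the fact.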
In the broader context of the field, LeCun’s proposal highlights a growing rift between different philosophical camps. On one side are the "scaling enthusiasts" who believe that increasing the size of neural networks will eventually produce an emergent general intelligence. On the other side are the "world modelers" like LeCun, who believe that a paradigm shift in architecture is necessary. By introducing Superhuman Adaptable Intelligence, the researchers are providing a vocabulary for those who believe the current path of generative AI is hitting a plateau. They are calling for a future where AI is measured not by how much like a human it acts, but by how effectively it solves the complex, high-stakes problems that have historically remained out of human reach.
Ultimately, the proposal to replace AGI with SAI is a call for scientific maturity in a field often characterized by hype and speculation. It demands that researchers and the public alike move past the science-fiction tropes of all-knowing machines and embrace the reality of AI as a specialized, adaptable, and superhuman extension of human ingenuity.[3][7][8] By focusing on adaptability and specialization rather than an impossible ideal of generality, the AI industry may find a more sustainable and productive path forward. As the authors suggest, the future of intelligence is not a mirror of ourselves, but a toolkit for a world whose complexity has finally outpaced our biological limitations. Whether the industry at large will follow LeCun's lead and abandon the lucrative AGI brand remains to be seen, but the argument for SAI provides a clear, technically rigorous alternative for the next decade of development.