DeepMind Co-founder Predicts 50% Chance of Human-Like AGI by 2028

DeepMind co-founder Shane Legg's unwavering 2028 AGI prediction ignites critical debates on humanity's preparedness for thinking machines.

December 14, 2025

A key architect of the modern artificial intelligence revolution, Google DeepMind co-founder Shane Legg, is holding firm to a prediction he first made more than a decade ago: there is a 50 percent chance the world will develop a "minimal" form of Artificial General Intelligence (AGI) by 2028. This forecast, recently reiterated in public discussions, places one of the industry's most respected pioneers at the optimistic end of the spectrum of expert predictions, signaling a belief that AI systems with human-like cognitive abilities across a wide range of tasks are not a distant sci-fi concept but a near-term possibility. The assertion carries significant weight coming from Legg, who serves as Chief AGI Scientist at Google DeepMind, and it forces the technology sector and society at large to confront the profound implications of machines that can think and learn with the breadth and versatility of a human being.
Legg’s long-standing conviction, first detailed on his blog as early as 2011, is not a recent reaction to the generative AI boom but a conclusion rooted in a fundamental analysis of technological trends.[1] Influenced by the work of futurist Ray Kurzweil, Legg based his forecast on the compounding effects of exponential growth in two key areas: computational power and the sheer volume of global data.[2][1] He reasoned that as these resources became increasingly abundant, the value and development of highly scalable algorithms would accelerate, eventually providing the ingredients needed to unlock AGI.[3][1] From his perspective, the recent explosion in large language models and other advanced AI is not a surprise but the anticipated arrival of the first "unlocking step" he had foreseen.[3][1] This belief in scalable algorithms harnessing ever-growing data and compute remains the bedrock of his 2028 timeline, a forecast he maintains while candidly acknowledging the inherent uncertainty of scientific research.[3][4]
To add nuance to the often-amorphous goal of AGI, Legg and his colleagues at Google DeepMind have proposed a more structured framework for its development.[5] In a co-authored paper, they outline a classification system that moves beyond a single endpoint, detailing a progression through various levels of capability.[6][7] This spectrum begins with "minimal AGI," the focus of the 2028 prediction, which Legg defines as a machine that can perform the sorts of cognitive tasks that a typical person can.[1][8] This level of competence would represent a profound milestone, signifying an AI that has broken free from the "narrow" constraints of current systems, which excel at specific tasks but lack broad applicability. Beyond this lies "full AGI," which would encompass the entire range of human cognition, including peak abilities like inventing novel scientific theories, and eventually "Artificial Superintelligence" (ASI), a level of intellect that would vastly exceed that of any human.[9] This framework seeks to operationalize the path to AGI, providing a clearer vocabulary to measure progress and assess risks.[6]
The prospect of achieving even minimal AGI within the next few years, as Legg suggests, has ignited a fierce debate within the AI community, in which both timelines and methodologies are core points of contention. Other luminaries in the field offer a range of perspectives. Geoffrey Hinton, another "godfather of AI," has dramatically shortened his own AGI timeline, now believing it could arrive within five to twenty years, a shift driven by the unexpectedly rapid progress of current models.[10][4] In contrast, Meta's chief AI scientist, Yann LeCun, remains skeptical that the current approach of scaling up large language models will ever lead to true general intelligence, arguing that they lack crucial components of biological learning such as sensory input and common-sense reasoning.[5][11] Prominent AI researcher and critic Gary Marcus is even more doubtful, consistently arguing that current systems hallucinate and lack a deep understanding of the world, that this makes them a poor foundation for AGI, and that such hurdles are unlikely to be overcome on so short a timeline.[12][3][13] This divergence of opinion among the field's top minds highlights the immense technical and philosophical questions that remain unanswered on the path to a generalized artificial mind.
The potential arrival of AGI carries transformative implications that extend far beyond research labs. Economists and sociologists are grappling with scenarios that range from a golden age of human flourishing to severe societal disruption.[14] Machines capable of performing a wide array of human cognitive labor could trigger an unprecedented economic shift, threatening mass job displacement and a widening of wealth inequality as the value of human labor diminishes.[2][12] Such a scenario could necessitate a fundamental reimagining of the social contract, with concepts like universal basic income moving from theoretical discussion to practical necessity.[12] Beyond the economic impact, the deployment of AGI raises profound questions about safety and control. Legg himself has been deeply involved in AGI safety research, emphasizing the critical need to align these powerful systems with human values to prevent catastrophic outcomes.[4][15] The challenge lies in ensuring that as AI systems become increasingly autonomous and capable, their goals remain aligned with humanity's, a problem that only grows harder as they approach and potentially surpass human intelligence.
In conclusion, Shane Legg's steadfast prediction of a 50 percent chance of minimal AGI by 2028 serves as a crucial focal point for the technology industry and beyond. It is not merely a speculative timeline but a call to action, grounded in decades of research and a clear-eyed view of technological trajectories. While respected peers offer competing timelines and voice valid skepticism about the current path, Legg’s forecast underscores the accelerating pace of AI development. Whether the threshold is crossed in 2028 or a decade later, many AI leaders increasingly treat AGI's arrival as a question of "when," not "if." The immense potential for both unprecedented progress and widespread disruption demands urgent, broad-based conversations about the economic, social, and ethical frameworks our society will need to navigate the advent of machines that can think.
