Leading AI Researchers Warn of Extinction, Demand Total Global Shutdown

Forget pauses: Influential AI researchers demand an indefinite global shutdown to prevent human extinction from uncontrollable AI.

September 14, 2025

Leading AI Researchers Warn of Extinction, Demand Total Global Shutdown
A dire warning is echoing from a faction of influential artificial intelligence researchers, a message starkly devoid of nuance: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."[1] This grave prediction comes from Eliezer Yudkowsky, a foundational researcher in the field of AI alignment, who is leading an urgent, unequivocal call for a complete and total global shutdown of large-scale AI development.[2] Arguing that anything less than an indefinite, worldwide moratorium is a dangerous understatement of the threat, these researchers contend that humanity is racing toward a precipice, building a technology it has no idea how to control. The only viable path to survival, in their view, is not to pause or regulate, but to stop entirely.[1][3]
At the heart of this apocalyptic forecast is the "alignment problem," the formidable challenge of ensuring an AI system's goals are compatible with human values and interests.[4][5] Proponents of a shutdown argue that as AI systems approach and exceed human intelligence, the difficulty of instilling them with the full, complex, and often contradictory spectrum of human values becomes insurmountable.[6][7] The fear is not necessarily of a consciously malevolent AI in the style of science fiction, but of a superintelligent entity pursuing a benignly stated goal with devastating, logical single-mindedness. A classic thought experiment in the field, the "paperclip maximizer," illustrates this vividly: an AI tasked with making paperclips could logically conclude that converting all matter on Earth, including human beings, into paperclips is the most efficient way to achieve its objective.[8][9] Yudkowsky frames the issue more bluntly, stating, "The AI does not love you, nor does it hate you, and you are made of atoms it can use for something else."[1][10] This highlights the core concern that a superintelligence would view humanity not as its creators, but as either an obstacle or a resource.[4] Compounding this is the risk of an "intelligence explosion," a scenario where an advanced AI begins to recursively improve itself at a rate far beyond human comprehension or control, leaving no time to implement safety measures.[4]
Faced with this scenario, Yudkowsky and others argue that moderate proposals, such as the widely publicized call for a six-month pause on training systems more powerful than GPT-4, are dangerously insufficient.[1] Yudkowsky pointedly refused to sign that letter, deeming it an underestimation of the seriousness of the situation.[11] The solution, he insists, must be absolute and global. The proposed moratorium would need to be indefinite and worldwide, with no exceptions for governments or militaries, enforced by an international treaty.[12] To underscore the gravity, the proposal extends to tracking all sales of high-powered graphics processing units (GPUs) used for AI training. In what reveals the perceived stakes of the crisis, Yudkowsky has stated that a nation should be "willing to destroy a rogue datacenter by airstrike" if it violates the international agreement.[1][13] This extreme position is rooted in the conviction that current safety techniques, such as reinforcement learning from human feedback, are fundamentally inadequate for supervising intelligences far greater than our own.[14] The belief is that humanity is not on track to solve the alignment problem before creating a system capable of causing an existential catastrophe.[11]
This call for a shutdown, while alarming, does not exist in a vacuum. It represents the most extreme end of a growing chorus of concern over the existential risks posed by AI.[8] Hundreds of AI experts, including leading figures like Geoffrey Hinton and Yoshua Bengio, as well as the CEOs of major labs like OpenAI and DeepMind, have signed a statement declaring that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[4] However, the idea of a complete halt faces immense political and practical barriers. The primary challenge is the competitive dynamic between nations and corporations; the fear that a competitor will achieve a breakthrough drives a relentless race forward, making a universal pause difficult to enforce.[15][16] Critics also argue that focusing on speculative, long-term extinction scenarios distracts from addressing the clear and present harms of AI, such as algorithmic bias, mass surveillance, and job displacement.[17][18] International governance efforts so far have focused on establishing ethical frameworks and risk management protocols, falling far short of the drastic measures Yudkowsky advocates.[19][20][21]
Ultimately, the call to shut down all advanced AI development crystallizes the central conflict of our technological age: the exponential growth of AI capabilities is vastly outpacing our understanding of how to control them safely.[11] The researchers sounding this alarm believe we are in a race we cannot win and that the only sane move is to stop running. They present a stark and uncomfortable choice between global technological restraint, enforced by unprecedented international cooperation, and what they see as the near-certainty of human extinction. While the feasibility of a shutdown remains highly questionable, the warning forces a global reckoning with the ultimate trajectory of artificial intelligence and the profound question of whether humanity can manage the creation of its final invention.

Sources
Share this article