Influential Coalition Demands Outright Ban on Superhuman AI Development
From AI godfathers to royalty, more than 1,000 luminaries demand a ban on superintelligent AI, fearing human obsolescence and a loss of control.
October 22, 2025

A diverse and influential coalition of more than 1,000 experts, public figures, and scientists has issued a renewed and urgent call to halt the development of superintelligent artificial intelligence. Organized by the Future of Life Institute, the group is advocating for a prohibition on creating AI that surpasses human cognitive abilities until there is a "broad scientific consensus that it will be done safely and controllably" and "strong public buy-in."[1] This new "Statement on Superintelligence" follows a 2023 open letter that called for a temporary six-month pause on advanced AI experiments.[1][2] That earlier effort sparked widespread debate but ultimately failed to slow the rapid pace of innovation in the AI industry.[1] The new statement is more direct in its aim, calling for an outright prohibition on the specific goal of creating AI smarter than humans.[3][4] The signatories warn that the race to build such systems poses profound risks to society, ranging from economic upheaval and the loss of human control to threats against civil liberties and even the potential for human extinction.[1][5]
The list of signatories is notably broad, spanning the political spectrum and various professional fields in an effort to demonstrate a growing mainstream concern over the trajectory of AI development.[6] Prominent names from the world of AI research, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, often referred to as "godfathers of AI," have endorsed the call for a ban.[1][7] They are joined by tech pioneers like Apple co-founder Steve Wozniak and Virgin founder Richard Branson.[7] The coalition also includes an array of public figures, from Prince Harry and Meghan, the Duke and Duchess of Sussex, to former US National Security Advisor Susan Rice and even conservative commentators Steve Bannon and Glenn Beck.[8][9] This unusual alliance underscores the belief that concerns about superintelligence transcend typical partisan divides.[5][10] Historian Yuval Noah Harari warned that superintelligence could dismantle the "operating system of human civilization," while Prince Harry stated, "The future of AI should serve humanity, not replace it."[1][11]
At the heart of the group's concern is the accelerating race among major tech companies to build what they term "superintelligence" or "artificial general intelligence" (AGI), a theoretical form of AI that could outperform humans in virtually all cognitive tasks.[9][12] Companies like OpenAI, Google, and Meta are investing billions in this pursuit, with some executives predicting the arrival of superintelligence within the next decade.[1][13][14] The signatories argue that these decisions, which could lead to "human economic obsolescence and disempowerment," are being made by unelected tech leaders without public consent.[2][15] Polling data released alongside the statement suggests significant public apprehension, with one survey showing that 64% of Americans believe superintelligence should not be developed until it can be proven safe and controllable.[8][11] Anthony Aguirre, the executive director of the Future of Life Institute, stated that the path AI corporations are taking is "wildly out of step with what the public wants."[12]
While the call for a prohibition is stark, some signatories have clarified the nuance in their position. Stuart Russell, a leading AI researcher and co-author of a foundational AI textbook, characterized the initiative not as a blanket ban but as a "proposal to require adequate safety measures."[1][9] The core demand is for a halt until a consensus on safety and public agreement can be reached.[1] However, the movement faces significant headwinds. The previous call for a six-month pause had little tangible impact on the industry's momentum.[1][16] Critics of a pause argue that it would be impractical to enforce globally, could stifle innovation, and might even put Western nations at a disadvantage against rivals such as China, which is unlikely to halt its own AI development.[17][18][19][20] Some in the tech industry believe that the potential benefits of advanced AI in fields like medicine and science are too great to delay.[18][21] Furthermore, some AI researchers oppose a moratorium, arguing that transparency and concurrent safety research are better approaches than a complete halt.[18][22]
The coalition's renewed push intensifies the global debate over how to govern a technology advancing at an exponential rate.[3][23] While the 2023 letter sought only a temporary pause so that safety protocols could be developed, the new statement directly challenges the ultimate goal of creating machines more intelligent than their creators.[2][3] The absence of signatures from the CEOs of leading AI labs such as OpenAI and Microsoft AI is notable, highlighting the divide between those building the technology and those urging extreme caution.[8] The signatories are banking on the diverse and high-profile nature of their coalition to shift the conversation and apply public pressure for meaningful regulation before, as they warn, there is no second chance.[11][16] The central question they pose to industry and policymakers is not whether superintelligence can be built, but whether it should be, and who gets to decide.[2][13]
Sources
[5]
[8]
[9]
[11]
[13]
[14]
[16]
[17]
[18]
[19]
[20]
[21]
[23]