A diverse coalition of over 800 public figures, including Nobel Prize-winning scientists, technology pioneers, and former military leaders, has signed a statement calling for a global prohibition on the development of artificial superintelligence. The group warns that the pursuit of AI systems far exceeding human intellect could pose a significant threat to humanity if not approached with extreme caution.
The statement, organized by the Future of Life Institute, argues that work on superintelligence should be halted until a broad scientific consensus on its safety is reached and there is strong public support for its creation. This call for a moratorium comes as major technology companies invest billions of dollars to accelerate AI advancement.
Key Takeaways
- Over 800 prominent individuals have signed a statement to prohibit superintelligence development.
- The ban would remain until safety is scientifically proven and the public agrees to proceed.
- Signatories include Nobel laureates, tech founders like Steve Wozniak, and former political figures.
- The initiative aims to spark a global conversation about the direction and speed of AI research.
A Broad Coalition Sounds the Alarm
The list of signatories represents a wide spectrum of society, underscoring the growing concern about the trajectory of artificial intelligence. It includes influential figures from science, technology, politics, and the arts.
Among the notable names are AI researcher and Nobel laureate Geoffrey Hinton, Apple co-founder Steve Wozniak, and Virgin Group founder Richard Branson. The list also features former Chairman of the Joint Chiefs of Staff Mike Mullen, former U.S. National Security Advisor Susan Rice, and artist will.i.am, highlighting the cross-disciplinary nature of the concerns.
The involvement of Prince Harry and Meghan, the Duke and Duchess of Sussex, alongside figures from opposite ends of the political spectrum like Steve Bannon and Glenn Beck, demonstrates that worries about advanced AI transcend typical ideological divides. Researchers from the U.S., China, and other nations have also added their names to the call.
What is Superintelligence?
Superintelligence refers to a hypothetical stage of AI development where a system's cognitive abilities would dramatically surpass those of the brightest human minds in virtually every field. This is considered a step beyond Artificial General Intelligence (AGI), where an AI would match human-level intellectual capabilities.
The Push for a Global Pause
The statement is a direct response to the rapid pace of AI development, which organizers feel has outstripped public understanding and debate. Anthony Aguirre, a physicist and the executive director of the Future of Life Institute, emphasized that the public has not been adequately consulted on this technological path.
"We’ve, at some level, had this path chosen for us by the AI companies and founders and the economic system that’s driving them, but no one’s really asked almost anybody else, ‘Is this what we want?’" Aguirre stated in an interview.
The core proposal is a clear prohibition on superintelligence work. This is not a call to stop all AI research, but specifically to halt the race toward creating systems that could become uncontrollable. The organizers suggest that an international treaty, similar to those governing nuclear weapons or biotechnology, may eventually be necessary to manage the risks of advanced AI.
"It’s kind of taken as: Well, this is where it’s going, so buckle up, and we’ll just have to deal with the consequences," Aguirre added. "But I don’t think that’s how it actually is. We have many choices as to how we develop technologies, including this one."
Tech Industry's Pursuit Continues
While the statement has gathered significant support, top executives at the forefront of AI development have not signed on. Companies like OpenAI, Google, and Meta are continuing to pour vast resources into creating more powerful AI models, with some leaders openly stating that superintelligence is a near-term goal.
OpenAI CEO Sam Altman suggested last month that he would be surprised if superintelligence did not arrive by 2030. Similarly, Meta CEO Mark Zuckerberg said in July that the goal was "now in sight." These companies are not only developing the models but also the massive data centers required to power them.
Public Opinion Divided on AI
American sentiment on artificial intelligence is nearly split. According to a recent NBC News poll, 44% of U.S. adults believe AI will improve their lives, while 42% think it will make their futures worse. This division highlights the public uncertainty surrounding the technology's ultimate impact.
The push for a ban comes at a time of tension between AI developers and oversight groups. The Future of Life Institute recently reported that it had received subpoenas from OpenAI, which it characterized as a retaliatory measure for its advocacy work. OpenAI has stated the legal action was related to questions about the funding of nonprofit groups critical of its structure.
A Call for Public Dialogue
The ultimate goal of the statement, according to its organizers, is to shift the conversation about AI from a niche technical issue to a global societal one. By bringing together a diverse group of influential voices, the Future of Life Institute hopes to create "social permission" for a broader public debate.
Aguirre explained the strategy behind the diverse signatory list.
"We want to very much represent that this is not a niche issue of some nerds in Silicon Valley, who are often the only people at the table. This is an issue for all of humanity," he said.
The statement does not target a specific government or company. Instead, it aims to force a conversation that includes policymakers in the United States, China, and Europe, alongside the corporations driving the technology forward. The central question posed by the signatories is whether the relentless pursuit of superintelligence is a future that humanity has consciously chosen or one it is simply accepting without question.