Reports indicate that several prominent technology billionaires are investing heavily in secure, often underground, facilities. These developments, spanning from sprawling Hawaiian compounds to extensive basement complexes in Silicon Valley, suggest a growing trend of what some observers call 'doom prepping.' This activity raises questions about the motivations behind such preparations and whether the public should share these concerns.
Key Takeaways
- Mark Zuckerberg's Hawaii compound includes an underground shelter with independent supplies.
- Other tech leaders are acquiring land and building luxury bunkers, some in New Zealand.
- Concerns about advanced Artificial General Intelligence (AGI) are a stated motivation for some.
- Experts disagree on the timeline and potential impact of AGI.
- Governments are beginning to address AI safety and regulation.
 
Zuckerberg's Extensive Hawaiian Project
Mark Zuckerberg, the co-founder of Facebook, reportedly began construction on his Koolau Ranch in Kauai, Hawaii, in 2014. The 1,400-acre estate is said to feature a substantial underground shelter designed to be self-sufficient, with its own energy and food reserves.
According to a report by Wired magazine, workers on the site, including carpenters and electricians, were required to sign non-disclosure agreements. A six-foot wall was also erected to block views of the project from a nearby road. When asked directly last year if he was building a doomsday bunker, Zuckerberg denied it. He described the 5,000-square-foot underground space as "just like a little shelter, it's like a basement."
Fact: Zuckerberg's Real Estate Investments
Mark Zuckerberg reportedly spent $110 million acquiring nearly a dozen properties in Palo Alto, California, adding a 7,000-square-foot underground space. While permits refer to basements, neighbors have described it as a bunker.
Silicon Valley's 'Apocalypse Insurance' Trend
Speculation extends beyond Zuckerberg. Other Silicon Valley billionaires are reported to be purchasing large tracts of land and developing underground structures, often envisioned as multi-million-dollar luxury bunkers. Reid Hoffman, a co-founder of LinkedIn, has openly discussed what he terms "apocalypse insurance."
Hoffman previously stated that about half of the super-wealthy possess such insurance, with New Zealand being a popular location for these secure properties. This trend suggests a collective concern among some of the world's richest individuals about potential future catastrophic events.
"Saying you're 'buying a house in New Zealand' is kind of a wink, wink, say no more," Reid Hoffman once commented, alluding to these private preparations.
Potential Motivations for Preparedness
The reasons behind these extensive preparations vary. Observers point to potential global conflict, the escalating impacts of climate change, and other unforeseen catastrophes. The rapid advancement of artificial intelligence (AI) has added another significant entry to this list of potential existential threats.
Background: The Rise of AI Concerns
The development of AI, particularly powerful large language models like ChatGPT, has progressed at an unprecedented pace. This speed has led some leading computer scientists, including those actively developing AI, to express deep worries about its potential future capabilities and societal impact.
AI and the Call for Underground Shelters
Ilya Sutskever, chief scientist and co-founder of OpenAI, a leading AI technology company, is among those reportedly concerned about AI's progression. By mid-2023, after the public release of ChatGPT, Sutskever had reportedly become convinced that computer scientists were on the verge of developing Artificial General Intelligence (AGI).
AGI refers to a point where machines achieve human-level intelligence. According to a book by journalist Karen Hao, Sutskever suggested to colleagues that they should construct an underground shelter for the company's top scientists before such a powerful technology was released globally. He is widely quoted as saying, "We're definitely going to build a bunker before we release AGI." It remains unclear who he meant by "we."
Fact: AGI and the 'Singularity'
The concept of "the singularity", the point at which computer intelligence surpasses human understanding, was attributed to the mathematician John von Neumann in a 1958 tribute published after his death. The idea underpins some of the current fears and hopes surrounding advanced AI.
The Debate: When Will AGI Arrive?
The timeline for AGI's arrival is a subject of intense debate among experts. Some tech billionaires believe it is imminent. OpenAI CEO Sam Altman stated in December 2024 that AGI will come "sooner than most people in the world think." Sir Demis Hassabis, co-founder of DeepMind, predicted its arrival within the next five to ten years. Anthropic co-founder Dario Amodei suggested his preferred term, "powerful AI," could emerge as early as 2026.
However, other experts are more skeptical. Dame Wendy Hall, a professor of computer science at Southampton University, notes that "They move the goalposts all the time." She adds, "The scientific community says AI technology is amazing, but it's nowhere near human intelligence."
Babak Hodjat, chief technology officer of Cognizant, agrees that "fundamental breakthroughs" are still needed. He also suggests that AGI is unlikely to arrive as a single event but rather as a continuous progression of rapidly advancing AI technologies developed by various companies worldwide.
- Sam Altman (OpenAI): AGI will arrive "sooner than most people think."
- Sir Demis Hassabis (DeepMind): Predicts AGI within 5-10 years.
- Dario Amodei (Anthropic): Believes "powerful AI" could emerge by 2026.
 
The Promise and Peril of Super-Intelligent AI
The excitement surrounding AGI in Silicon Valley stems partly from its potential as a precursor to Artificial Super Intelligence (ASI), technology that significantly surpasses human intelligence. Proponents envision enormous benefits: new cures for diseases, solutions to climate change, and an inexhaustible supply of clean energy.
Elon Musk, for example, has claimed that super-intelligent AI could lead to an era of "universal high income." He suggested that AI will become so affordable and widespread that everyone will have their "own personal R2-D2 and C-3PO," leading to "the best medical care, food, home transport, and everything else. Sustainable abundance."
These predictions also have a darker side. Critics worry that super-intelligent AI could be hijacked by terrorists, or could begin making autonomous decisions and conclude that humanity itself is the problem. Sir Tim Berners-Lee, the creator of the World Wide Web, warned earlier this month: "If it's smarter than you, then we have to keep it contained. We have to be able to switch it off."
Government Responses and Practical Concerns
Governments have started implementing measures to address the risks posed by advanced AI. In the United States, President Biden issued an executive order in 2023 requiring some AI firms to share safety test results with the federal government. However, parts of this order were later revoked by President Trump, who called them a "barrier" to innovation.
In the United Kingdom, the AI Safety Institute, a government-funded research body, was established two years ago. Its purpose is to enhance understanding of the risks associated with advanced AI technologies. Despite these efforts, some super-rich individuals continue to pursue their private "apocalypse insurance" plans, including bunkers.
Fact: Human Flaws in Security
A former bodyguard of a billionaire with a personal bunker once revealed that, in a genuine catastrophe, his security team's first priority would be to eliminate the boss and take the bunker for themselves, highlighting the human element of betrayal that could undermine even the best-laid preparations.
Is AGI Talk a Distraction?
Not all experts agree with the alarmist predictions. Neil Lawrence, a professor of machine learning at Cambridge University, dismisses the debate about Artificial General Intelligence as "nonsense." He argues that "The notion of Artificial General Intelligence is as absurd as the notion of an 'Artificial General Vehicle.'"
Lawrence explains that the right tool depends on the context. "I used an Airbus A350 to fly to Kenya, I use a car to get to the university each day, I walk to the cafeteria… There's no vehicle that could ever do all of this." For him, the focus on AGI distracts from the immediate and tangible benefits and challenges of current AI technology.
"The technology we have [already] built allows, for the first time, normal people to directly talk to a machine and potentially have it do what they intend. That is absolutely extraordinary… and utterly transformational," says Professor Neil Lawrence.
Lawrence believes the real concern is that the AGI narrative promoted by big technology companies distracts from the critical improvements existing AI applications need and from their impact on people's lives. Current AI tools excel at recognizing patterns in vast datasets, whether spotting signs of a tumor in a scan or predicting the next word in a sequence. But however convincing their responses may appear, they do not possess genuine "feelings."
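To make "predicting the next word" concrete, here is a deliberately tiny sketch of the underlying idea. It simply counts which word tends to follow which in its training text and predicts the most frequent continuation; the corpus and names below are invented for illustration, and real LLMs learn vastly richer statistical patterns with neural networks, but the task is the same.

```python
from collections import Counter, defaultdict

# Toy training text (illustrative only).
corpus = "the cat sat on the mat the cat slept on the sofa".split()

# Count which word follows which: a "bigram" pattern table.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # 'cat' - the most frequent continuation
print(predict_next("sofa"))  # '<unknown>' - pattern never seen
```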
Babak Hodjat notes that while there are "cheaty" methods to make Large Language Models (LLMs) appear to have memory and the ability to learn, these are "unsatisfying and quite inferior to humans." Vince Lynch, CEO of IV.AI, views much of the AGI talk as "great marketing": "If you are the company that's building the smartest thing that's ever existed, people are going to want to give you money." AGI, Lynch believes, is not a "two-years-away thing"; achieving it will require immense computing power, human creativity, and extensive trial and error.
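One common workaround of the kind Hodjat alludes to is for the application, not the model, to do the remembering: the entire conversation is re-sent with every request, creating the appearance of memory. The sketch below illustrates that pattern under stated assumptions; `model_reply` is a hypothetical placeholder, not a real API.

```python
# A stateless model "remembers" nothing between calls, so the chat
# application stuffs the full transcript back into every prompt.

def model_reply(prompt: str) -> str:
    # Hypothetical stand-in: a real system would call an LLM here.
    return f"(reply to a {len(prompt)}-character prompt)"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Re-send the whole conversation each turn; drop this step and
    # the model instantly "forgets" everything said before.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = model_reply(prompt)
    history.append(f"Assistant: {reply}")
    return reply

chat("Life was just found on an exoplanet.")
# This follow-up only "works" because the transcript was re-sent.
print(chat("What did I tell you earlier?"))
```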
Intelligence Without Consciousness
In some respects, AI already surpasses human brains. A generative AI tool can be an expert in medieval history one moment and solve complex mathematical equations the next. Some tech companies admit they do not always understand why their AI products respond in certain ways. Meta has reported signs of its AI systems improving themselves.
Despite these advancements, the human brain retains fundamental advantages. It contains about 86 billion neurons and 600 trillion synapses, far more than artificial equivalents. The brain adapts constantly to new information and does not require pauses between interactions. As Mr. Hodjat explains, "If you tell a human that life has been found on an exoplanet, they will immediately learn that, and it will affect their world view going forward. For an LLM, they will only know that as long as you keep repeating this to them as a fact."
LLMs also lack meta-cognition, meaning they do not truly know what they know. Humans, however, possess an introspective capacity, often referred to as consciousness, which allows them to understand their own knowledge. This fundamental aspect of human intelligence has yet to be replicated in laboratory settings.