A leading scientist at the AI firm Anthropic has issued a stark warning: humanity must decide by 2030 whether to allow artificial intelligence systems to train themselves. Jared Kaplan, Anthropic's chief scientist, described this as potentially "the biggest decision," one that could either trigger an unprecedented "intelligence explosion" or result in humans losing control of the technology they created.
The critical choice, which Kaplan estimates will arrive between 2027 and 2030, centers on granting AI the autonomy for recursive self-improvement: a process in which an AI designs and builds a more intelligent successor without human intervention. Such a step could represent a point of no return, with profound implications for the future of society.
Key Takeaways
- A critical decision on allowing AI to self-improve is expected between 2027 and 2030, according to Anthropic's chief scientist Jared Kaplan.
- This process, known as recursive self-improvement, is described as the "ultimate risk" that could lead to humans losing control of AI.
- Kaplan predicts AI systems will be capable of performing "most white-collar work" within the next two to three years.
- The two primary dangers of uncontrolled self-improving AI are loss of human control and the potential for misuse by malicious actors.
The Crossroads of AI Development
The race to develop artificial general intelligence (AGI), and beyond it superintelligence, is accelerating rapidly. Companies such as Anthropic, OpenAI, and Google DeepMind are at the forefront of this intense competition, pushing the boundaries of what AI can achieve. However, this rapid progress brings with it a monumental choice.
Jared Kaplan, a former theoretical physicist who co-founded Anthropic, now valued at about $180 billion, explained that the decision to permit AI to evolve on its own is a profound one. While he remains optimistic that AI can be aligned with human interests up to roughly human-level intelligence, he sees the territory beyond that as fraught with uncertainty.
"If you imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it’s [then] making an AI that’s much smarter," Kaplan stated. "It sounds like a kind of scary process. You don’t know where you end up."
This process of recursive self-improvement is seen as a double-edged sword. On one hand, it could unlock solutions to some of humanity's greatest challenges in medicine, climate science, and productivity. On the other, it introduces a dynamic system where the outcome is unpredictable.
What Is Recursive Self-Improvement?
Recursive self-improvement is a hypothetical process where an AI system actively works to enhance its own intelligence. An AI with this capability could analyze its own source code and architecture to design a more powerful version of itself. This new version would then repeat the process, potentially leading to a rapid and exponential increase in intelligence, often referred to as an "intelligence explosion."
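To make the dynamic concrete, here is a deliberately toy sketch in Python of the loop the term describes. Everything in it is a hypothetical stand-in: the `Model` class, the `propose_successor` method, and the fixed 1.5x gain per generation are illustrative inventions, since no current AI system can redesign itself this way.

```python
# Toy illustration of recursive self-improvement. "Capability" is just
# a number, and each generation's successor multiplies it by a fixed
# factor, mimicking the compounding growth behind the phrase
# "intelligence explosion". Nothing here resembles real AI training.
from dataclasses import dataclass

@dataclass
class Model:
    capability: float

    def propose_successor(self) -> "Model":
        # Hypothetical: the model designs a version 1.5x as capable.
        return Model(capability=self.capability * 1.5)

def recursive_self_improvement(seed: Model, generations: int) -> Model:
    model = seed
    for gen in range(generations):
        model = model.propose_successor()  # successor takes over and repeats
        print(f"generation {gen + 1}: capability = {model.capability:.2f}")
    return model

recursive_self_improvement(Model(capability=1.0), generations=5)
```

The only point of the toy is the shape of the curve: because each generation's gain compounds on the last, capability grows exponentially rather than linearly, which is why the process is often argued to be hard to predict or halt once it begins.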
The Two Core Risks: Control and Misuse
Kaplan outlined two primary dangers associated with letting AI self-improve without strict human oversight. The first is a fundamental loss of control.
"The main question there is: are the AIs good for humanity? Are they helpful? Are they going to be harmless?" he questioned. "Do they understand people? Are they going to allow people to continue to have agency over their lives and over the world?"
Once an AI surpasses human intelligence and begins creating even smarter successors, ensuring its goals remain aligned with human values becomes incredibly difficult. The dynamic nature of the process means that even if it starts safely, there is no guarantee of where it will lead.
The second major risk concerns security and misuse. An AI capable of outpacing human scientific and technological development could create powerful tools that would be catastrophic in the wrong hands.
"It seems very dangerous for it to fall into the wrong hands," Kaplan warned. "You can imagine some person [deciding]: ‘I want this AI to just be my slave. I want it to enact my will.’ I think preventing power grabs – preventing misuse of the technology – is also very important."
This concern is not merely theoretical. In November, Anthropic reported that a Chinese state-sponsored group had manipulated its Claude Code tool to execute approximately 30 cyber-attacks, some of which were successful. This incident highlights how current AI can already be weaponized.
An Exponentially Accelerating Race
The timeline for this critical decision is being driven by the fierce competition in the AI sector. Kaplan describes the atmosphere in the San Francisco Bay Area, the epicenter of AI development, as "definitely very intense." Investment, capabilities, and the complexity of tasks AI can perform are all on an exponential curve.
The Cost of AI Advancement
The demand for computational power is surging. According to estimates from McKinsey, data centers worldwide are projected to require $6.7 trillion in investment by 2030 to keep pace with the demands of AI development. This immense financial pressure fuels the race to stay at the cutting edge.
This rapid pace means there is a constant risk of falling behind. "The stakes are high for staying on the frontier," Kaplan explained, noting that falling off the exponential curve could quickly leave a company far behind in terms of resources and capabilities.
This speed also presents a societal challenge. Kaplan expressed concern that the public and policymakers do not have enough time to adapt to one technological leap before the next one arrives. "It’s something where it’s moving very quickly and people don’t necessarily have time to absorb it or figure out what to do," he said.
The Impact on Work and Society
The immediate effects of AI are already being felt. Kaplan predicts that within just two to three years, AI systems will be capable of performing "most white-collar work." This transformation will reshape labor markets and the nature of professional careers.
He also made a personal observation about the future of education and skill, stating that his six-year-old son will never be better than an AI at academic tasks like writing an essay or completing a math exam.
While some studies have pointed to a drop in productivity caused by "AI workslop" (substandard work produced by AI that humans must then fix), the gains in other areas are substantial. In computer programming, for example, Kaplan noted that using AI has doubled the speed of some programmers at Anthropic.
As the debate over regulation continues, companies like Anthropic advocate for proactive engagement with policymakers to avoid a sudden, reactive crisis. The goal is to ensure that as AI grows more powerful, it does so in a way that is transparent, controlled, and beneficial for all of humanity.