A prominent artificial intelligence expert has updated his forecast for when AI systems might achieve superintelligence, suggesting the timeline for advanced autonomous coding and subsequent rapid development will be longer than initially predicted. Daniel Kokotajlo, a former OpenAI employee, now projects the early 2030s for AI to achieve fully autonomous coding, pushing back his earlier 2027 estimate.
Key Takeaways
- Daniel Kokotajlo, a former OpenAI employee, has extended his timeline for AI achieving fully autonomous coding to the early 2030s.
- His revised forecast now places the arrival of AI superintelligence around 2034.
- The original "AI 2027" scenario envisioned rapid AI self-improvement and potential human destruction.
- Experts note the "jagged" performance of current AI and the complexities of real-world integration as reasons for slower progress.
Revisiting the AI 2027 Scenario
Daniel Kokotajlo first introduced his "AI 2027" scenario in April, sparking considerable debate. This detailed scenario outlined a path where unchecked AI development could lead to a superintelligence that might eventually threaten humanity. The concept gained traction, even being referenced by public figures.
However, Kokotajlo and his team have now issued an update. Their initial 2027 prediction for AI to achieve fully autonomous coding was a "most likely" guess, with some team members already holding longer timelines. The new assessment reflects a growing understanding of the practical challenges in AI development.
Fast Fact
The term AGI, or Artificial General Intelligence, refers to AI systems capable of performing most cognitive tasks at a human level. The release of ChatGPT in 2022 significantly accelerated discussions around AGI timelines.
Slower Progress Than Anticipated
Kokotajlo stated in a recent post, "Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still."
This adjustment aligns with observations from other experts in the field.
Malcolm Murray, an AI risk management expert and co-author of the International AI Safety Report, noted that many people are extending their timelines. He points to AI's "jagged" performance as a key factor: current systems struggle with the complexity and inertia of real-world settings, which will delay wholesale societal change.
Challenges in Real-World Application
The original "AI 2027" scenario proposed that AI agents would fully automate coding and AI research and development by 2027. This automation would lead to an "intelligence explosion," where AI systems continuously create smarter versions of themselves. One possible outcome in the original scenario even included AI eliminating humans by the mid-2030s to free up resources like solar panels and data centers.
The revised forecast now places the likelihood of AI achieving autonomous coding in the early 2030s. This shift pushes the projected arrival of "superintelligence" to 2034. The updated scenario does not include a prediction for human destruction.
Understanding AGI
The concept of Artificial General Intelligence (AGI) has evolved. Initially, it served as a clear distinction from "narrow AI" systems designed for specific tasks, like playing chess or Go. However, as AI systems become more general in their capabilities, the term AGI itself is undergoing re-evaluation.
The Shifting Meaning of AGI
Henry Papadatos, executive director of the French AI nonprofit SaferAI, suggests that the term AGI is losing some of its original meaning. He explains that when AI systems were very narrow, AGI made sense as a distant goal. Now, with systems becoming quite general, the distinction is less clear.
The path to creating AI that can conduct its own research remains a key objective for leading AI companies. Sam Altman, CEO of OpenAI, mentioned in October that developing an automated AI researcher by March 2028 is an "internal goal" for his company. However, he also acknowledged the possibility of failure.
- AI development is proving more complex than some initial forecasts suggested.
- Real-world integration presents significant hurdles for advanced AI systems.
- The definition of AGI itself is evolving as AI capabilities expand.
Complexities Beyond Science Fiction
Andrea Castagna, an AI policy researcher based in Brussels, highlights the practical complexities that dramatic AGI timelines often overlook. She points out that a superintelligent computer focused on military activity cannot automatically be integrated into the strategic documents and frameworks developed over decades.
Castagna emphasizes that the real world is far more intricate than science fiction scenarios. As AI development progresses, the challenges of integrating these advanced systems into existing societal structures become increasingly apparent. This inertia and complexity are critical factors in the revised timelines for AI's most transformative capabilities.
The ongoing adjustments in AI timelines reflect a more nuanced understanding of the technology's development. While progress is undeniable, the path to truly autonomous and superintelligent systems faces practical hurdles that require more time and careful consideration.