Andrej Karpathy, a co-founder of OpenAI, has said that truly functional artificial intelligence (AI) agents are roughly a decade away. During a recent podcast appearance, the influential AI researcher argued that current agent technologies cannot yet perform complex, autonomous tasks effectively.
Key Takeaways
- Andrej Karpathy, an OpenAI co-founder, believes it will take about 10 years to develop truly functional AI agents.
- He argues that current AI agents lack sufficient intelligence, multimodal capabilities, and the ability for continual learning.
- Karpathy advocates for a future where AI collaborates with humans, rather than replacing them with fully autonomous systems.
- Other experts have highlighted the high error rate in current AI models, which compounds with each step an agent takes.
Karpathy's Assessment of Current AI Agents
During a recent appearance on the Dwarkesh Podcast, Andrej Karpathy provided a candid evaluation of the AI industry's progress on autonomous agents. He stated plainly that the technology, despite significant industry excitement, is not yet meeting expectations.
"They just don't work," Karpathy said, highlighting fundamental deficiencies. He pointed to several key areas where current systems fall short.
What Are AI Agents?
In the field of artificial intelligence, an "agent" refers to a system capable of perceiving its environment and taking autonomous actions to achieve specific goals. Unlike simple chatbots that respond to prompts, an AI agent can independently break down a complex task, create a plan, and execute the necessary steps without continuous human intervention.
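To make that distinction concrete, here is a minimal, hypothetical sketch of the perceive-plan-act loop described above. Every function in it is an illustrative stub rather than part of any real agent framework; a production agent would use a model to plan and external tools to act.

```python
# Hypothetical perceive-plan-act loop; all functions are illustrative stubs,
# not a real agent framework.

def make_plan(goal: str) -> list[str]:
    # A real agent would use an LLM to decompose the goal into steps.
    return [f"step {i} toward '{goal}'" for i in (1, 2, 3)]

def execute(step: str) -> bool:
    # A real agent would call external tools (browser, shell, APIs) here.
    print(f"executing: {step}")
    return True  # report whether the step succeeded

def run_agent(goal: str) -> None:
    # Unlike a chatbot, the agent plans once and then acts step by step,
    # without waiting for a human prompt between actions.
    for step in make_plan(goal):
        if not execute(step):
            break  # a failed step halts the task

run_agent("summarize this week's open pull requests")
```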
Karpathy elaborated on the specific limitations preventing agents from becoming reliable. "They don't have enough intelligence, they're not multimodal enough, they can't do computer use and all this stuff," he explained. This refers to the inability of current models to seamlessly process different types of information (text, images, sound) and interact with digital interfaces as a human would.
Another critical missing piece, according to Karpathy, is continual learning. He noted, "You can't just tell them something and they'll remember it." This lack of persistent memory and on-the-fly learning means agents cannot easily adapt or improve from user interactions.
A Decade-Long Development Timeline
Based on these challenges, Karpathy projects a lengthy development cycle before AI agents become practical tools. He offered a clear timeline for resolving these complex issues.
"It will take about a decade to work through all of those issues," he stated during the podcast.
This timeline contrasts with the more optimistic predictions often found within the tech industry, where some investors have labeled 2025 as "the year of the agent." Karpathy's more measured forecast suggests that the underlying technology requires substantial advancement before it can support the ambitious vision of autonomous AI.
The Compounding Error Problem
The challenge of agent reliability is magnified by compounding errors. According to Quintin Au, a growth lead at Scale AI, large language models (LLMs) have a significant chance of error on any single action, which he estimated at around 20%. Because every step must succeed for the whole task to succeed, the odds multiply: for a task requiring five sequential actions, the probability of the agent completing every step correctly is only about a third (0.8^5 ≈ 32.8%).
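As a back-of-the-envelope illustration, the short sketch below reproduces that arithmetic. The 20% per-action error rate is Au's estimate as reported here, and the assumption that errors are independent across steps is a simplification, not a measured property of any model.

```python
# Compounding-error sketch: assumes an independent ~20% failure chance
# per action (Au's estimate), a simplification for illustration only.

per_step_success = 1 - 0.20  # 80% chance each individual action succeeds

for steps in range(1, 6):
    full_success = per_step_success ** steps  # every step must succeed
    print(f"{steps} step(s): {full_success:.1%} chance of completing the task")

# 5 steps -> 32.8%, the "about a third" figure cited above.
```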
Karpathy's decade-long estimate accounts for the need to solve these fundamental problems of intelligence, interaction, and reliability, which are far more complex than simply improving existing models.
The Case for Human-AI Collaboration
In a follow-up post on the social media platform X, Karpathy clarified his position, emphasizing that his critique is aimed at the industry's tendency to focus on tooling for a future that doesn't yet exist. He expressed concern about the pursuit of fully autonomous systems that could render human input obsolete.
"The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless," he wrote. Karpathy stated that he does not want to live in that future.
Instead, he envisions a collaborative model where AI serves as a powerful assistant to human operators, particularly in fields like software development. He described his ideal scenario:
- AI should pull API documentation and demonstrate correct usage to the human user.
- It should make fewer assumptions and actively collaborate with the user when uncertain.
- The process should enable the human to learn and improve their own skills, not just receive final code.
This approach, he argued, avoids the pitfalls of making humans "useless" and prevents the proliferation of low-quality, AI-generated content, which he referred to as "AI slop." His focus is on augmenting human capability, not replacing it entirely.
A Pessimistic Optimist
Despite his skepticism about the current state of AI agents, Karpathy clarified that he is not an AI denier. He positions his views as a realistic middle ground between unchecked hype and outright disbelief in the technology's potential.
"My AI timelines are about 5-10X pessimistic w.r.t. what you'll find in your neighborhood SF AI house party or on your twitter timeline," he said. This suggests he believes progress will be significantly slower than the most enthusiastic proponents claim.
However, he also noted that his timelines are "still quite optimistic w.r.t. a rising tide of AI deniers and skeptics." This places him firmly in the camp of believers in AI's long-term transformative power, but with a strong dose of engineering realism about the challenges that lie ahead. His perspective serves as a reminder that even in a rapidly advancing field, foundational research and development require time and patience.