Andrej Karpathy: AI Agents Won't Work for Another Decade

Andrej Karpathy, the renowned OpenAI cofounder who now leads Eureka Labs, an AI-native school, has thrown cold water on the AI industry’s enthusiasm for autonomous agents, predicting it will take approximately a decade before they become truly functional.

In a recent appearance on the Dwarkesh Podcast, Karpathy delivered a blunt assessment of current AI agent capabilities: “They just don’t work.” He outlined multiple fundamental limitations plaguing today’s agents, including insufficient intelligence, lack of multimodal capabilities, inadequate computer use functionality, and absence of continual learning. “You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working,” he explained.

This critique comes at a pivotal moment for the AI industry, as many investors have dubbed 2025 “the year of the agent.” AI agents are designed as virtual assistants capable of completing tasks autonomously—breaking down problems, outlining plans, and taking action without constant user prompts. However, Karpathy argues the industry is building tools for a future that doesn’t yet exist.

In a follow-up post on X (formerly Twitter), Karpathy clarified his position, criticizing the industry for “overshooting the tooling w.r.t. present capability.” He expressed concern that the AI sector is rushing toward a future where “fully autonomous entities collaborate in parallel to write all the code and humans are useless.”

Instead, Karpathy advocates for a collaborative human-AI model where artificial intelligence assists rather than replaces human workers. “I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and ask/collaborate with me when not sure about something,” he wrote, emphasizing his desire to learn and improve as a programmer rather than simply receiving “mountains of code.”

Karpathy isn’t alone in his skepticism. Quintin Au, growth lead at ScaleAI, highlighted the mathematical challenges facing AI agents in a LinkedIn post, noting that with a 20% error rate per action, an agent completing five sequential tasks has only a 32% chance of getting every step right—a compounding problem that severely limits reliability.
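Au's point follows from basic probability: independent success rates multiply across sequential steps. A minimal sketch, assuming each action succeeds independently with the same probability (the figures below are the ones from his post, not exact measurements):

```python
# Sketch of the error-compounding argument: if each action succeeds
# independently with probability p, a chain of n actions succeeds only
# if every single step does, i.e. with probability p ** n.

def chain_success_probability(per_step_success: float, steps: int) -> float:
    """Probability that all `steps` independent actions succeed."""
    return per_step_success ** steps

# A 20% error rate per action means 80% success per step.
# Five sequential actions:
p = chain_success_probability(0.8, 5)
print(f"{p:.1%}")  # prints "32.8%"
```

This is why a per-step error rate that sounds tolerable in isolation becomes crippling for longer tasks: at the same 20% error rate, a ten-step task would succeed only about 10.7% of the time.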

Despite his reservations about current agent technology, Karpathy clarified he’s not an AI pessimist overall, stating his timelines are “5-10X pessimistic” compared to Silicon Valley’s AI enthusiasts but “still quite optimistic” compared to AI deniers.

Key Quotes

They just don’t work. They don’t have enough intelligence, they’re not multimodal enough, they can’t do computer use and all this stuff. They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working.

Andrej Karpathy, OpenAI cofounder and founder of Eureka Labs, delivered this stark assessment of current AI agent capabilities on the Dwarkesh Podcast, outlining multiple fundamental limitations that prevent agents from functioning as advertised.

It will take about a decade to work through all of those issues.

Karpathy provided this timeline estimate for when AI agents might actually become functional, directly contradicting industry hype that has labeled 2025 as “the year of the agent.”

My critique of the industry is more in overshooting the tooling w.r.t. present capability. The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless.

In a follow-up post on X, Karpathy clarified his concerns about the AI industry building for a hypothetical future rather than addressing current capabilities and needs.

Currently, every time an AI performs an action, there’s roughly a 20% chance of error. If an agent needs to complete 5 actions to finish a task, there’s only a 32% chance it gets every step right.

Quintin Au, growth lead at ScaleAI, explained the mathematical problem of error compounding in AI agents, demonstrating why reliability remains a major challenge for autonomous systems.

Our Take

Karpathy’s intervention represents a crucial reality check for an industry prone to hype cycles. His technical credibility makes this critique impossible to dismiss as mere skepticism. The gap between current AI agent capabilities and industry marketing reveals a pattern we’ve seen before in tech: premature commercialization of nascent technology.

What’s particularly noteworthy is Karpathy’s emphasis on human-AI collaboration over full automation. This philosophical stance could reshape product development if adopted more broadly. The compounding error problem Au identifies suggests fundamental architectural challenges that can’t be solved simply by scaling up models or adding more training data.

The decade timeline also implies that the current generation of AI agent startups may need to pivot toward more modest, achievable goals—or risk becoming cautionary tales of overpromising. For the broader AI industry, this serves as a reminder that genuine progress requires patience, not just venture capital.

Why This Matters

Karpathy’s assessment carries significant weight in the AI industry given his credentials as an OpenAI cofounder and respected AI researcher. His decade-long timeline for functional AI agents directly contradicts the prevailing narrative from investors and companies betting heavily on near-term agent deployment.

This matters because billions of dollars are being invested in AI agent startups and infrastructure based on assumptions of imminent capability breakthroughs. If Karpathy is correct, many of these investments may be premature, and businesses rushing to implement agent-based solutions could face disappointment and wasted resources.

The debate also highlights a fundamental tension in AI development: should the industry prioritize full automation that replaces human workers, or collaborative tools that augment human capabilities? Karpathy’s preference for the latter approach could influence how AI products are designed and marketed.

For workers and businesses, this news suggests that fears of immediate job displacement by AI agents may be overblown, providing a longer runway to adapt and upskill. However, it also means the productivity gains promised by autonomous agents will take longer to materialize than many have anticipated.


Source: https://www.businessinsider.com/andrej-karpathy-ai-agents-timelines-openai-2025-10