Daniela Amodei, president and cofounder of Anthropic, has challenged one of Silicon Valley’s most fundamental assumptions about artificial intelligence, suggesting that the concept of artificial general intelligence (AGI) may no longer be a useful framework for understanding AI’s trajectory. In a recent interview with Business Insider, Amodei argued that the traditional definition of AGI—the point at which machines reach human-level intelligence—has become increasingly problematic as AI capabilities evolve unevenly across different domains.
Amodei explained that AGI was originally conceived as a benchmark to measure when artificial intelligence would become “as capable as a human.” However, she believes this framing is breaking down in practice. “By some definitions of that, we’ve already surpassed that,” Amodei stated, pointing to specific areas like software development where Anthropic’s Claude model can now write code at a level comparable to many professional engineers, including some working within Anthropic itself. She described this rapid advancement as “crazy,” highlighting how quickly these capabilities have materialized.
Yet Amodei was quick to acknowledge the limitations that persist. “Claude still can’t do a lot of things that humans can do,” she noted, emphasizing that AI systems continue to fall short in many areas that humans handle with ease. This unevenness, with AI simultaneously exceeding and falling short of human capabilities depending on the task, is precisely why Amodei believes the AGI construct may be outdated. “I think maybe the construct itself is now wrong — or maybe not wrong, but just outdated,” she said.
These comments come as Anthropic and its competitors invest tens of billions of dollars into developing increasingly powerful models and the massive data centers required to run them. While some critics argue that large language models won’t achieve true general intelligence without major breakthroughs, Amodei suggested that progress shows no signs of slowing. “We don’t know” what breakthroughs may still be needed, she said, adding, “Nothing slows down until it does.”
Rather than fixating on reaching a single end-state like AGI, Amodei emphasized that the more critical question is how increasingly capable AI systems are integrated into real organizations and how quickly humans and institutions can adapt. She noted that even if models continue improving at a steady pace, adoption can lag due to practical constraints including change management, procurement processes, and determining where AI actually adds value. In Amodei’s view, the future of AI won’t depend on meeting a textbook definition of AGI, but rather on understanding what these systems can do, where they fall short, and how society chooses to deploy them.
Key Quotes
“AGI is such a funny term. Many years ago, it was kind of a useful concept to say, ‘When will artificial intelligence be as capable as a human?’”
Daniela Amodei, Anthropic’s president and cofounder, explained how the concept of AGI has evolved from a useful benchmark to a potentially outdated framework for understanding AI progress.
“By some definitions of that, we’ve already surpassed that. That’s crazy.”
Amodei pointed to software development as an example where Anthropic’s Claude model can write code at a level comparable to professional engineers, demonstrating how AI has already exceeded human capabilities in specific domains.
“I think maybe the construct itself is now wrong — or maybe not wrong, but just outdated.”
Amodei articulated her core argument that the AGI framework may no longer be relevant given the uneven development of AI capabilities across different tasks and domains.
“Nothing slows down until it does.”
When asked whether major breakthroughs are still needed for AI progress, Amodei acknowledged the uncertainty while expressing confidence that current momentum shows no signs of stopping, highlighting the unpredictable nature of AI development.
Our Take
Amodei’s perspective is particularly noteworthy because it comes from someone deeply invested in building advanced AI systems, not from a skeptic. Her willingness to question AGI as a goal suggests the industry is maturing beyond simplistic benchmarks toward a more nuanced understanding of machine intelligence. This shift could be healthy for the field, redirecting focus from an abstract finish line to practical questions about deployment, safety, and value creation. However, it also raises concerns about accountability—without clear milestones like AGI, how do we measure progress or establish guardrails? The emphasis on adoption challenges is refreshing and realistic, acknowledging that technological capability alone doesn’t guarantee impact. This pragmatic view may help temper unrealistic expectations while focusing attention on the real work of integrating AI responsibly into organizations and society.
Why This Matters
This perspective from one of AI’s leading executives represents a significant shift in how the industry thinks about its ultimate goals. For years, AGI has served as the North Star for AI research and development, guiding investment decisions and shaping public discourse about AI’s future. Amodei’s suggestion that this framework may be outdated could influence how companies prioritize research, how investors evaluate progress, and how policymakers approach AI regulation.
The implications extend beyond semantics. If AI capabilities continue to develop unevenly—superhuman in some domains while struggling with basic tasks in others—it challenges the notion of a single “intelligence” benchmark. This reality has profound consequences for businesses attempting to integrate AI, as they must navigate a landscape where AI tools excel at specific tasks but cannot be trusted as general-purpose problem solvers. For workers, this suggests a future where AI augments rather than replaces human capabilities, with the division of labor between humans and machines becoming increasingly nuanced. Amodei’s emphasis on adoption challenges also highlights that technological capability alone won’t determine AI’s impact—organizational readiness, change management, and thoughtful deployment strategies will be equally critical factors in shaping AI’s role in society.
Related Stories
- AI Pioneer Geoffrey Hinton Warns of Superintelligent AI by 2025
- TIME100 Talks: The Transformative Power of AI
- CEOs Express Insecurity About AI Strategy and Implementation
- The Future of Work in an AI World
- Microsoft AI CEO’s Career Advice for Young People in the AI Era
Source: https://www.businessinsider.com/anthropic-president-idea-of-agi-may-already-be-outdated-2026-1