Google DeepMind CEO: AI Consistency Issues Block AGI Progress

Google DeepMind CEO Demis Hassabis has identified a critical barrier preventing artificial intelligence from achieving Artificial General Intelligence (AGI): consistency. In an interview on the “Google for Developers” podcast published Tuesday, Hassabis explained that even the most advanced AI models, including Google’s flagship Gemini system, still stumble on problems that high school students can solve.

Hassabis highlighted a striking paradox in current AI capabilities. Models enhanced with DeepThink, a reasoning-boosting technique, can achieve gold-medal performance at the International Mathematical Olympiad, widely considered the world’s most prestigious mathematics competition. Yet these same systems “still make simple mistakes in high school maths,” according to Hassabis. He characterized them as having “uneven intelligences” or “jagged intelligences,” where performance varies dramatically across different domains.

“It shouldn’t be that easy for the average person to just find a trivial flaw in the system,” Hassabis stated, emphasizing the vulnerability of current AI models. “Some dimensions, they’re really good; other dimensions, their weaknesses can be exposed quite easily,” he added.

This assessment aligns with Google CEO Sundar Pichai’s characterization of the current development stage as “AJI,” or artificial jagged intelligence. Pichai introduced the term during a June appearance on Lex Fridman’s podcast to describe AI systems that excel in specific areas while failing in others.

According to Hassabis, solving AI’s consistency problems will require more than simply scaling up data and computing power. “Some missing capabilities in reasoning and planning in memory” still need to be addressed, he explained. The industry also needs improved testing methodologies and “new, harder benchmarks” to precisely identify where models excel and where they fall short.
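
To make the benchmarking point concrete, here is a minimal sketch of what such per-domain consistency testing could look like: score a model separately on each difficulty tier, then report the spread between its best and worst domain. Everything below, from the `query_model` stub to the toy items and the exact-match grading, is a hypothetical illustration, not a real benchmark or any actual Gemini API.

```python
# Hypothetical sketch: a tiny two-tier benchmark for "jagged" performance.
# query_model() and the sample items are illustrative stand-ins, not a
# real model client or benchmark suite.

def query_model(prompt: str) -> str:
    # Stand-in for a real model call; replace with your client of choice.
    # Wired to mimic the paradox Hassabis describes: it "solves" the
    # olympiad-style item but flubs the simple arithmetic.
    if "IMO-style" in prompt:
        return "valid proof"
    return "54"  # wrong: 7 * 8 + 12 = 68

BENCHMARK = {
    "olympiad_math": [
        ("IMO-style problem: prove the inequality ...", "valid proof"),
    ],
    "high_school_math": [
        ("What is 7 * 8 + 12?", "68"),
    ],
}

def score_domain(items: list[tuple[str, str]]) -> float:
    # Fraction of items where the model's answer matches the reference
    # (naive exact-match grading, purely for illustration).
    correct = sum(query_model(prompt).strip() == ref for prompt, ref in items)
    return correct / len(items)

scores = {domain: score_domain(items) for domain, items in BENCHMARK.items()}
spread = max(scores.values()) - min(scores.values())

print(scores)  # {'olympiad_math': 1.0, 'high_school_math': 0.0}
print(f"jaggedness (best-minus-worst spread): {spread:.2f}")  # 1.00
```

The stub is deliberately wired to reproduce the paradox Hassabis describes, acing the olympiad-style item while missing the basic arithmetic, so the reported spread comes out at its maximum of 1.00; a consistent model would score near zero on this metric.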

The challenge of achieving AGI extends beyond Google. OpenAI CEO Sam Altman expressed similar concerns ahead of GPT-5’s launch last week. While calling his company’s latest model “a significant advancement,” Altman acknowledged it still falls short of true AGI. “This is clearly a model that is generally intelligent, although I think in the way that most of us define AGI, we’re still missing something quite important, or many things quite important,” Altman said during a press call.

Altman specifically identified the inability to learn independently as a critical gap. “One big one is, you know, this is not a model that continuously learns as it’s deployed from the new things it finds, which is something that to me feels like AGI,” he explained.

Despite these challenges, Hassabis previously stated in April that AGI could arrive “in the next five to 10 years,” suggesting optimism about overcoming current limitations despite ongoing issues with hallucinations, misinformation, and basic errors.

Key Quotes

It shouldn’t be that easy for the average person to just find a trivial flaw in the system.

Demis Hassabis, Google DeepMind CEO, expressed frustration about the vulnerability of current AI systems to simple errors that ordinary users can easily identify, highlighting a fundamental weakness in even the most advanced models.

Some dimensions, they’re really good; other dimensions, their weaknesses can be exposed quite easily.

Hassabis described the uneven performance of AI systems, explaining why he characterizes current models as having “jagged intelligences” that excel in some areas while failing in others.

Some missing capabilities in reasoning and planning in memory still need to be cracked.

Hassabis identified specific technical gaps that prevent AI from achieving consistency, suggesting that solving these problems will require more than simply scaling up data and computing power.

This is clearly a model that is generally intelligent, although I think in the way that most of us define AGI, we’re still missing something quite important, or many things quite important.

OpenAI CEO Sam Altman acknowledged ahead of GPT-5’s launch that despite significant advancements, his company’s latest model still falls short of true AGI, echoing Hassabis’s concerns about missing capabilities.

Our Take

The simultaneous admissions from both Google DeepMind and OpenAI leadership reveal an industry at an inflection point. The consistency problem isn’t just a technical bug; it’s a fundamental architectural challenge that calls into question the entire scaling paradigm that has driven AI development over the past decade. What’s particularly striking is how both Hassabis and Altman are publicly acknowledging limitations rather than overhyping capabilities, suggesting a maturation in how AI leaders communicate about their technology. The “jagged intelligence” concept is especially useful for understanding current AI limitations and sets more realistic expectations for businesses considering AI deployment. The five-to-ten-year AGI timeline from Hassabis now seems optimistic given these acknowledged gaps in reasoning, planning, and continuous learning. This honesty may benefit the industry in the long term by focusing research efforts on genuine breakthroughs rather than incremental scaling, though it also suggests that transformative AGI applications may remain further away than recent hype cycles have implied.

Why This Matters

This candid assessment from one of AI’s most influential leaders reveals a fundamental challenge facing the entire artificial intelligence industry. The consistency problem threatens to delay AGI development and has significant implications for businesses and organizations currently deploying AI systems in critical applications.

The “jagged intelligence” phenomenon means that companies cannot fully trust AI systems to perform reliably across all tasks, limiting their utility in high-stakes environments like healthcare, finance, and autonomous systems. This inconsistency also raises questions about the billions of dollars being invested in AI infrastructure and development, as scaling alone won’t solve these fundamental architectural issues.

For the broader technology sector, Hassabis’s comments suggest that achieving AGI will require breakthrough innovations in reasoning, planning, and memory, not just incremental improvements. This could reshape competitive dynamics in the AI race, potentially favoring companies that invest in fundamental research over those focused purely on scaling existing architectures. The acknowledgment from both Google and OpenAI leadership that current models fall short of AGI also helps temper unrealistic expectations while highlighting the genuine technical challenges that remain unsolved in the quest for human-level artificial intelligence.

Source: https://www.businessinsider.com/google-deepmind-ceo-demis-hassabis-agi-consistency-2025-8