Google DeepMind CEO: AI Scaling Must Be Pushed to Maximum for AGI

A fundamental debate is reshaping Silicon Valley’s approach to artificial intelligence development: Can scaling today’s systems alone produce artificial general intelligence (AGI)? Google DeepMind CEO Demis Hassabis has taken a definitive stance on this question, declaring that current AI systems must be scaled to their absolute limits. Speaking at the Axios AI+ Summit in San Francisco, Hassabis emphasized that “the scaling of the current systems, we must push that to the maximum, because at the minimum, it will be a key component of the final AGI system.” He went further, suggesting it “could be the entirety of the AGI system.”

This statement comes as Google DeepMind recently released Gemini 3 to widespread industry acclaim, positioning the company at the forefront of the AGI race. AGI refers to a still-theoretical form of AI capable of human-level reasoning, and it remains the ultimate goal driving massive investment across leading AI companies. AI scaling laws hold that giving models more training data and computational power yields predictable, continued gains in capability.
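To make the scaling-law idea concrete, here is an illustrative form used in empirical scaling studies; the specific formula and symbols are not from the article. Training loss L is modeled as a power law in model size N (parameters) and dataset size D (training tokens):

$$ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

Here E is an irreducible loss floor and A, B, α, β are constants fitted to experimental runs. Loss falls predictably as N and D grow, which underpins the case for pushing scaling further, while the shrinking marginal improvements at very large N and D also hint at why diminishing returns are a concern.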

However, Hassabis acknowledged potential limitations, suggesting that one or two additional breakthroughs beyond pure scaling may be necessary to achieve AGI. The scaling approach faces significant challenges: the supply of publicly available training data is finite, and expanding compute capacity requires building expensive data centers with substantial environmental impacts. There are also growing concerns that AI companies investing heavily in scaling are seeing diminishing returns on their massive infrastructure investments.

Not everyone in the AI community agrees with the scaling-first approach. Yann LeCun, Meta’s chief AI scientist who recently announced his departure to launch his own startup, represents an alternative perspective. Speaking at the National University of Singapore in April, LeCun argued that “most interesting problems scale extremely badly” and cautioned against assuming “more data and more compute means smarter AI.”

LeCun is pursuing world models as an alternative to large language models, focusing on collecting spatial data rather than language-based data. His new startup aims to “bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences,” as he wrote on LinkedIn in November. This fundamental disagreement between scaling advocates and proponents of alternative approaches will likely shape the future trajectory of AI development.

Key Quotes

The scaling of the current systems, we must push that to the maximum, because at the minimum, it will be a key component of the final AGI system. It could be the entirety of the AGI system.

Google DeepMind CEO Demis Hassabis made this statement at the Axios AI+ Summit in San Francisco, clearly positioning himself as a strong advocate for the scaling approach to achieving AGI, even as debates intensify about whether this strategy alone is sufficient.

Most interesting problems scale extremely badly. You cannot just assume that more data and more compute means smarter AI.

Yann LeCun, Meta’s chief AI scientist, delivered this counterargument at the National University of Singapore in April, representing a growing faction of researchers who believe alternative approaches beyond pure scaling are necessary for achieving true artificial general intelligence.

The goal of the startup is to bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.

LeCun wrote this on LinkedIn in November when announcing his departure from Meta to launch his own startup focused on world models, signaling his commitment to pursuing alternatives to the scaling-focused large language model approach that currently dominates the industry.

Our Take

This philosophical and technical divide reveals that the AI industry is at a critical inflection point. While Hassabis’s confidence in scaling reflects Google DeepMind’s recent success with Gemini 3, his acknowledgment that “one or two” additional breakthroughs may be needed suggests even scaling advocates recognize limitations. The timing of LeCun’s departure to pursue world models is particularly significant—it indicates that top-tier researchers are willing to bet their careers on alternative approaches. What’s most intriguing is that both camps may be partially correct: scaling could provide the foundation while novel architectures like world models supply the missing components. The industry may be heading toward a hybrid future where multiple methodologies converge. The real question isn’t which approach wins, but how quickly these different paths can be integrated to accelerate AGI development while managing the substantial resource and environmental costs involved.

Why This Matters

This debate represents a pivotal moment for the AI industry’s future direction and the billions of dollars being invested in AGI development. Whether scaling or an alternative approach prevails will determine how companies allocate resources, which technologies receive funding, and ultimately which path leads to AGI first. For businesses, the outcome affects strategic planning around AI adoption and infrastructure investments. If scaling proves insufficient, companies heavily invested in traditional large language models may need to pivot their strategies.

The environmental and economic implications are substantial—continued scaling requires massive data center construction with significant carbon footprints and energy consumption. If diminishing returns materialize, the industry may face a reckoning about sustainability and efficiency. For workers and society, the approach taken will influence the timeline to AGI and its capabilities, affecting job markets, education systems, and societal preparation for advanced AI. The split between industry leaders like Hassabis and LeCun also signals that multiple paths to AGI may emerge simultaneously, potentially accelerating breakthroughs through diverse methodologies rather than a single dominant approach.

Source: https://www.businessinsider.com/demis-hassabis-ai-scaling-pushed-to-maximum-data-2025-12