The artificial intelligence industry is embroiled in a heated debate over whether the era of AI scaling is coming to an end, with prominent figures taking opposing sides. Geoffrey Hinton, widely known as the “Godfather of AI,” recently told Business Insider that he is not convinced scaling has completely run its course, directly responding to claims made by his former student Ilya Sutskever.
Sutskever, OpenAI’s cofounder who now runs his own AI startup, argued last month on the “Dwarkesh Podcast” that the pendulum of AI development is swinging back toward research and away from simply achieving breakthroughs through scaling—the practice of acquiring more compute power and chips. He questioned whether multiplying scale by 100x would truly transform AI capabilities, suggesting that “it’s back to the age of research again, just with big computers.”
Hinton counters this perspective by emphasizing the ongoing need for more data, addressing one of scaling’s key challenges: the finite amount of high-quality training data available. He predicts that large language models will begin generating their own data, similar to how Google DeepMind’s AlphaGo and AlphaZero programs create training data by playing against themselves to master the board game Go. Hinton envisions AI systems reasoning through their own beliefs, checking for consistency, and generating additional data through this self-reflective process.
The scaling debate sits at the heart of Big Tech’s massive capital spending spree, which is predicated on the belief that acquiring more compute power and training data will continue producing smarter, more advanced AI models. However, uncertainty is growing among AI leaders about whether this approach will keep delivering results.
Alexandr Wang, now head of Meta’s superintelligence division, identified scaling as “the biggest question in the industry” in 2024. Yann LeCun, who collaborated with Hinton on pioneering AI research and recently launched his own startup after serving as Meta’s chief AI scientist, has also challenged the scaling doctrine, stating in April that “you cannot just assume that more data and more compute means smarter AI.”
Meanwhile, Google DeepMind CEO Demis Hassabis remains firmly in the pro-scaling camp, arguing at Axios’ AI+ Summit in December that scaling laws could unlock artificial general intelligence (AGI)—the holy grail of AI development. Hassabis emphasized that pushing current systems to their maximum scale is essential, as it will be “at the minimum, a key component of the final AGI system” and “could be the entirety of the AGI system.”
Key Quotes
“I’m not convinced it’s completely over.”
Geoffrey Hinton, the “Godfather of AI,” expressed skepticism about claims that the era of AI scaling has ended, maintaining that scaling still has potential despite growing doubts from other AI leaders.
“Is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.”
Ilya Sutskever, OpenAI cofounder and Hinton’s former student, argued that simply multiplying compute power won’t produce transformative results, signaling a shift back toward research-driven innovation rather than brute-force scaling.
“You cannot just assume that more data and more compute means smarter AI.”
Yann LeCun, former Meta chief AI scientist who worked with Hinton on pioneering AI research, challenged the fundamental assumption underlying Big Tech’s massive infrastructure investments in April.
“The scaling of the current systems, we must push that to the maximum, because at the minimum, it will be a key component of the final AGI system. It could be the entirety of the AGI system.”
Demis Hassabis, Google DeepMind CEO, defended scaling as potentially the complete path to artificial general intelligence, representing the most optimistic view on scaling’s continued effectiveness among major AI leaders.
Our Take
This debate reveals a critical moment of introspection within AI’s leadership as the industry confronts the limits of its dominant paradigm. What’s particularly striking is that this isn’t a disagreement between outsiders and insiders—these are the architects of modern AI questioning their own creation’s trajectory. The fact that multiple pioneers are launching independent startups suggests they see opportunities in approaches beyond pure scaling. Hinton’s prediction about self-generating data through reasoning represents a potential middle path: using scale to enable qualitatively different capabilities rather than just quantitative improvements.
The financial stakes are enormous—if scaling plateaus without delivering AGI, the current infrastructure arms race may prove to be one of tech history’s most expensive miscalculations. Conversely, if Hassabis is correct, those who maintain faith in scaling will capture the ultimate prize. This uncertainty itself may be the story: the AI field has entered uncharted territory where even its greatest minds cannot confidently predict the path forward.
Why This Matters
This debate represents a fundamental inflection point for the AI industry with massive financial and strategic implications. Big Tech companies have invested hundreds of billions of dollars in compute infrastructure based on scaling assumptions, making this question existential for their AI strategies. If scaling’s effectiveness is diminishing, companies may need to pivot toward more research-intensive approaches, potentially reshaping competitive dynamics in the AI race.
The disagreement among AI’s most influential figures—Hinton, Sutskever, LeCun, and Hassabis—signals genuine uncertainty about the path to advanced AI capabilities and AGI. This uncertainty affects everything from capital allocation decisions to talent strategies and research priorities across the industry. For businesses adopting AI, understanding whether future improvements will come from scaling or algorithmic breakthroughs influences investment timing and technology choices. The outcome of this debate will determine whether AI progress continues at its current pace or requires fundamental innovations in approach, affecting timelines for transformative AI applications across every sector of the economy.
Related Stories
- AI Pioneer Geoffrey Hinton Warns of Superintelligent AI by 2025
- OpenAI Lost Nearly Half of Its AI Safety Team, Ex-Researcher Says
- The Rise of AI Distillation and Its Impact on Big Tech’s AI Dominance
- CEOs Express Insecurity About AI Strategy and Implementation
- Big Tech’s 2025 AI Plans: Meta, Apple, Tesla, Google Unveil Roadmap