Eric Schmidt Dismisses AI Scaling Law Concerns Amid Slowdown Debate

Former Google CEO Eric Schmidt has weighed in on the heated debate about artificial intelligence scaling laws, asserting there’s “no evidence” that the fundamental principles driving AI advancement are reaching their limits. In a recent episode of “The Diary of A CEO” podcast released Thursday, Schmidt pushed back against growing concerns in Silicon Valley about a potential AI development slowdown.

Schmidt’s optimistic outlook centers on large language models (LLMs), which he believes will continue improving dramatically over the next five years. “These large models are scaling with an ability that is unprecedented,” he stated, predicting “two or three more turns of the crank” in model capabilities. He emphasized that while AI scaling laws (the empirical relationships showing that models improve predictably as training data and computing power increase) will eventually plateau, “we’re not there yet.”
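
For readers who want the term made precise: the best-known published form of these laws (Hoffmann et al., 2022, the “Chinchilla” paper) models pre-training loss as a sum of power-law terms in parameter count and training tokens. It is offered here as background, not as a formulation Schmidt cited.

```latex
% Empirical scaling law for pre-training loss (Hoffmann et al., 2022):
% N = parameter count, D = training tokens, E = irreducible loss.
\[
  L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
  \qquad \alpha \approx 0.34,\; \beta \approx 0.28
\]
% The power-law terms shrink as N and D grow but never reach zero, which
% is why a plateau is expected "eventually" rather than at any fixed point.
```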

The debate has intensified following recent reports of struggles at major AI companies. Earlier this month, The Information reported that OpenAI’s upcoming flagship model, Orion, showed only moderate improvements over GPT-4, a smaller leap than previous generational upgrades delivered. The report indicated that OpenAI has turned to additional performance-boosting measures, including post-training improvements based on human feedback, suggesting that gains from pre-training scale alone are becoming harder to extract.
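
The “post-training improvements based on human feedback” mentioned in the report most likely refers to RLHF-style fine-tuning. The objective below is the standard published formulation (Ouyang et al., 2022), included for context; the report does not detail OpenAI’s exact recipe for Orion.

```latex
% Standard RLHF fine-tuning objective (Ouyang et al., 2022): maximize a
% learned reward model r_phi while a KL penalty keeps the tuned policy
% pi_theta close to the pre-trained reference policy pi_ref.
\[
  \max_{\theta}\;
  \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_{\theta}(\cdot \mid x)}
  \Bigl[\, r_{\phi}(x, y)
        \;-\; \beta \log \frac{\pi_{\theta}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} \Bigr]
\]
```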

Similar concerns have emerged at other AI giants. A Bloomberg report revealed that both Google and Anthropic are seeing diminishing returns from their expensive model development efforts. Google’s next-generation Gemini model is reportedly falling short of internal expectations, while the timeline for Anthropic’s next Claude model has slipped.

The AI community remains divided on these developments. NYU professor emeritus Gary Marcus has interpreted these reports as evidence that LLMs have reached a point of diminishing returns. However, others, including OpenAI CEO Sam Altman, have pushed back against this narrative. Altman posted “There is no wall” on Thursday, apparently referencing the ongoing debate about AI development limits.

Representatives from OpenAI, Google, and Anthropic did not immediately respond to requests for comment from Business Insider. The discussion highlights fundamental questions about the future trajectory of AI development and whether current approaches can sustain their rapid pace of improvement.

Key Quotes

These large models are scaling with an ability that is unprecedented

Former Google CEO Eric Schmidt made this statement on “The Diary of A CEO” podcast, emphasizing his belief that AI development continues at an extraordinary pace despite recent concerns about slowdowns at major AI companies.

There’s no evidence that the scaling laws, as they’re called, have begun to stop. They will eventually stop, but we’re not there yet

Schmidt directly addressed the Silicon Valley debate about whether AI scaling laws are breaking down, acknowledging eventual limits while asserting that current development hasn’t reached those boundaries.

There is no wall

OpenAI CEO Sam Altman posted this brief but pointed statement on Thursday, apparently pushing back against reports suggesting his company and others are hitting performance plateaus in AI model development.

Our Take

Schmidt’s intervention carries significant weight given his deep industry experience and technical knowledge, but his optimism may reflect the perspective of someone invested in AI’s continued exponential growth. The reality likely lies somewhere between the extremes—scaling laws may not have completely stopped, but the diminishing returns reported at OpenAI, Google, and Anthropic suggest the easy gains are behind us. What’s particularly telling is that companies are now relying more heavily on post-training techniques and human feedback, indicating that simply throwing more compute and data at models isn’t delivering the same results. This could signal a transition from the “scaling era” to an “optimization era” in AI development, where innovation comes from architectural improvements, training efficiency, and novel approaches rather than pure scale. The industry may need to recalibrate expectations while still acknowledging substantial progress ahead.
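
To make “diminishing returns” concrete, here is a minimal sketch, assuming the published Chinchilla fit (Hoffmann et al., 2022) as a stand-in curve. The constants come from that paper and say nothing about any particular company’s models; the shape is the point: each tenfold jump in scale buys a smaller absolute drop in loss, even though the curve itself never hits a wall.

```python
# Illustrative sketch of diminishing returns under a power-law scaling curve.
# Functional form and constants follow Hoffmann et al. (2022); they are used
# here purely to show the shape of the curve, not to model any real product.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for a model with n_params parameters
    trained on n_tokens tokens, using the Chinchilla fitted constants."""
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fitted coefficients
    alpha, beta = 0.34, 0.28       # fitted power-law exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x jump in scale (parameters and tokens together) buys a smaller
# absolute drop in loss, even though the law never "stops".
prev = None
for scale in [1e9, 1e10, 1e11, 1e12]:           # parameter counts
    loss = chinchilla_loss(scale, 20 * scale)   # ~20 tokens/param (compute-optimal)
    delta = "" if prev is None else f"  (improvement: {prev - loss:.3f})"
    print(f"{scale:.0e} params: loss {loss:.3f}{delta}")
    prev = loss
```

Running this shows the absolute improvement roughly halving with every tenfold increase in scale, which is consistent with both Schmidt’s claim (the curve keeps going) and the skeptics’ point (each turn of the crank costs far more than the last).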

Why This Matters

This debate represents a critical inflection point for the AI industry and its massive investments. Tech companies have poured billions of dollars into AI development based on the assumption that scaling laws would continue delivering exponential improvements. If these laws are breaking down, it could fundamentally reshape AI strategy, investment priorities, and timelines for achieving artificial general intelligence (AGI).

The implications extend far beyond Silicon Valley boardrooms. Businesses worldwide are making strategic decisions based on expectations of continued AI advancement. A slowdown could affect everything from enterprise AI adoption to workforce planning and competitive positioning. Conversely, if Schmidt is correct and scaling continues, we may see even more transformative AI capabilities emerging faster than anticipated.

The controversy also highlights the tension between AI optimists and skeptics, with significant consequences for policy, regulation, and public perception. Understanding whether current AI development approaches have fundamental limitations will shape how society prepares for AI’s impact on jobs, education, and economic structures in the coming years.

Source: https://www.businessinsider.com/eric-schmidt-google-ceo-ai-scaling-laws-openai-slowdown-2024-11