Nvidia CEO Jensen Huang has firmly rejected concerns that artificial intelligence development is hitting a scaling plateau, addressing growing industry worries during the company’s third-quarter earnings call on Wednesday. The debate centers on whether foundation models—the large language models powering generative AI applications—are slowing in their rate of improvement, a concern that could significantly impact Nvidia’s business model.
Recent reports suggested that OpenAI’s progress in improving its models has been slowing, raising questions about whether the traditional approach of scaling AI through more data and computing power has reached its limits. This matters enormously for Nvidia, whose entire value proposition depends on continued demand for increasingly powerful computing infrastructure.
Huang pushed back strongly against these concerns, stating that “foundation model pre-training scaling is intact and it’s continuing.” He argued that the concept of scaling has evolved beyond the narrow definition many industry observers use. While earlier AI development relied primarily on feeding models more data during pre-training, modern approaches have expanded to include multiple improvement strategies.
The Nvidia CEO highlighted several emerging techniques that are driving continued AI advancement. Synthetic data generation allows AI systems to create their own training data, though concerns remain about the industry exhausting original data sources and about how effective synthetic alternatives are for pre-training. Additionally, post-training improvements have evolved from early methods that relied on armies of human reviewers checking AI responses to more sophisticated automated approaches.
Huang specifically praised OpenAI’s o1 model (codenamed Strawberry), which employs advanced strategies like “chain of thought reasoning” and “multi-path planning.” These techniques encourage models to process information more deliberately and systematically. “The longer it thinks, the better and higher quality answer it produces,” Huang explained, suggesting that computational intensity rather than raw model size is becoming the new frontier.
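To make the "longer thinking" idea concrete, here is a minimal toy sketch of one such strategy: sampling several independent reasoning paths and majority-voting their answers, sometimes called self-consistency. It is purely illustrative and not drawn from the earnings call or from OpenAI's actual system; every function name and probability below is an assumption made up for the example.

```python
# Toy illustration of inference-time scaling via multi-path reasoning:
# sample several independent "reasoning paths" and take a majority vote.
# All names and numbers are illustrative, not any vendor's implementation.
import random
from collections import Counter


def sample_reasoning_path(correct_answer: str, per_path_accuracy: float) -> str:
    """Simulate one chain-of-thought attempt that reaches the right answer
    with a fixed probability; otherwise it returns a plausible wrong one."""
    if random.random() < per_path_accuracy:
        return correct_answer
    return random.choice(["wrong_a", "wrong_b", "wrong_c"])


def answer_with_voting(correct_answer: str, num_paths: int,
                       per_path_accuracy: float = 0.6) -> str:
    """Spend more inference-time compute by sampling several paths,
    then return the majority answer (the 'multi-path' part)."""
    votes = Counter(
        sample_reasoning_path(correct_answer, per_path_accuracy)
        for _ in range(num_paths)
    )
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    random.seed(0)
    trials = 2000
    for num_paths in (1, 5, 15, 45):
        hits = sum(answer_with_voting("42", num_paths) == "42"
                   for _ in range(trials))
        # Accuracy climbs as more compute ("thinking") is spent per question.
        print(f"{num_paths:>2} paths -> {hits / trials:.1%} correct")
```

The point of the sketch is simply that accuracy improves as more compute is spent per question, which is why inference-time reasoning translates into demand for more powerful chips rather than just bigger training runs.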
This shift has direct implications for Nvidia’s business. The first generation of foundation models required approximately 100,000 Hopper chips to build, Huang noted, while “the next generation starts at 100,000 Blackwells”—Nvidia’s latest and most powerful chip architecture. The company announced that commercial shipments of Blackwell chips are just beginning, positioning them as essential infrastructure for the next wave of AI development that prioritizes reasoning time over simple scale.
Key Quotes
Foundation model pre-training scaling is intact and it’s continuing.
Jensen Huang, Nvidia’s CEO, made this statement during the company’s third-quarter earnings call in response to concerns about AI development hitting a plateau. This direct assertion aims to reassure investors and the industry that the fundamental drivers of AI progress remain strong.
The longer it thinks, the better and higher quality answer it produces.
Huang explained this principle when discussing modern AI reasoning strategies like those used in OpenAI’s o1 model. This statement highlights the shift from building bigger models to enabling more sophisticated inference-time computation, which requires more powerful chips like Nvidia’s Blackwell.
The next generation starts at 100,000 Blackwells.
Huang revealed this figure when comparing the computational requirements of successive AI model generations. It demonstrates the massive and growing demand for Nvidia's latest chips: each new generation of foundation models requires at least as many of Nvidia's newest chips as the prior generation needed of older hardware.
Our Take
Huang’s comments reveal a carefully crafted narrative that serves Nvidia’s interests while addressing legitimate industry concerns. The CEO is essentially redefining what “scaling” means to maintain the growth story that has driven Nvidia’s valuation to stratospheric heights. While his points about inference-time computing and reasoning strategies are technically valid, they also conveniently support continued chip demand regardless of whether traditional pre-training scaling continues.
The elephant in the room is whether these new approaches actually deliver proportional improvements to justify the exponentially increasing computational costs. If reasoning-time scaling shows diminishing returns similar to pre-training scaling, the industry could face a genuine plateau. However, Huang’s confidence and the immediate demand for Blackwell chips suggest that, at least for now, AI companies believe the investment is worthwhile. This moment represents a critical inflection point where the AI industry’s trajectory—and Nvidia’s dominance—will be determined by whether these new scaling strategies deliver on their promise.
Why This Matters
This statement from Jensen Huang carries enormous weight for the AI industry’s trajectory and investment landscape. If AI scaling has truly plateaued, it could trigger a reassessment of the hundreds of billions being invested in AI infrastructure, potentially deflating the current AI boom. Nvidia’s market valuation—which has soared to make it one of the world’s most valuable companies—depends entirely on continued demand for ever-more-powerful chips.
Huang’s response reveals an important evolution in AI development strategy. The shift from pure pre-training scale to inference-time computing represents a fundamental change in how AI systems improve, moving from “bigger models” to “smarter reasoning.” This transition actually benefits Nvidia, as it means more computational power is needed for each query rather than just during initial training.
For businesses investing in AI, this signals that the arms race isn’t slowing—it’s simply changing form. Companies will need cutting-edge hardware not just to build models, but to run them effectively. The broader implication is that AI development remains on an upward trajectory, just through different mechanisms than initially anticipated, which should reassure investors and enterprises planning long-term AI strategies.
Related Stories
- Jensen Huang: TSMC Helped Fix Design Flaw with Nvidia’s Blackwell AI Chip
- OpenAI CEO Sam Altman Hints at Potential Restructuring in 2024
- OpenAI’s Valuation Soars as AI Race Heats Up
- The AI Hype Cycle: Reality Check and Future Expectations
- Sam Altman’s Bold AI Predictions: AGI, Jobs, and the Future by 2025