Nvidia CEO Jensen Huang: AI Reasoning Models Need 100x More Compute

Nvidia CEO Jensen Huang delivered a decisive response to concerns about AI computing demand during the company’s earnings call Wednesday night, asserting that reasoning models like DeepSeek’s R1 will sharply increase computational requirements rather than shrink them. Even though Nvidia beat the high end of revenue expectations, investors showed muted enthusiasm amid ongoing questions about whether DeepSeek’s efficient open-source models might reduce demand for Nvidia’s AI chips.

Huang directly addressed the elephant in the room: “Reasoning models can consume 100x more compute. Future reasoning can consume much more compute,” he stated emphatically. This declaration came in response to concerns that emerged after Chinese AI firm DeepSeek launched its remarkably efficient open-source models last month, sparking industry-wide questions about whether training efficiency would diminish the need for powerful AI hardware.
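To see how a multiplier like 100x can arise, consider a back-of-envelope sketch: most of a reasoning model’s extra compute comes from the long chain-of-thought trace it generates before answering. The Python sketch below uses the common rule of thumb that a dense transformer spends roughly 2N FLOPs per generated token for an N-parameter model; the model size and token counts are illustrative assumptions, not figures from Nvidia or DeepSeek.

```python
# Back-of-envelope sketch: why reasoning models can multiply inference compute.
# Rule of thumb: a dense transformer spends roughly 2 * N FLOPs to generate
# one token, where N is the parameter count. Every concrete number here is
# an illustrative assumption, not a figure from Nvidia or DeepSeek.

PARAMS = 70e9                  # assumed model size: 70B parameters
FLOPS_PER_TOKEN = 2 * PARAMS   # ~2N FLOPs per generated token

standard_tokens = 300          # assumed length of a direct answer
reasoning_tokens = 30_000      # assumed answer plus a long chain-of-thought trace

standard_flops = standard_tokens * FLOPS_PER_TOKEN
reasoning_flops = reasoning_tokens * FLOPS_PER_TOKEN

print(f"standard query:  {standard_flops:.2e} FLOPs")
print(f"reasoning query: {reasoning_flops:.2e} FLOPs")
print(f"multiplier:      {reasoning_flops / standard_flops:.0f}x")
```

Under these assumptions the multiplier is driven entirely by how many tokens the model emits per query, which is why longer reasoning traces translate directly into more demand for inference hardware.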

The Nvidia CEO praised DeepSeek as an “excellent innovation” and highlighted that the company’s decision to open-source a world-class reasoning AI model has accelerated adoption across the industry. “Nearly every AI developer is applying R1, or chain of thought and reinforcement learning techniques like R1 to scale their model’s performance,” Huang explained during the call.

The shift toward resource-intensive reasoning models represents DeepSeek’s lasting impact, according to Synovus analyst Dan Morgan. These models require large numbers of chips and substantial power for inference, the computing stage in which a trained model generates answers, including the step-by-step reasoning traces these new models produce. Huang noted that inference computing has been steadily rising as AI applications mature, stating: “The vast majority of our compute today is actually inference, and Blackwell takes all of that to a new level,” referring to Nvidia’s newest chip generation.

However, competitive pressures are mounting. Third Bridge analyst Lucas Keh observed that “competition is starting to take its toll on Nvidia’s position, although it is not very material at this point.” Nvidia’s challengers have strategically targeted the inference market, which is expected to grow larger than training in the long run. Chip startup Tenstorrent secured nearly $700 million in funding, while Etched raised $120 million last year.

Investors are increasingly concerned about custom AI chips developed by cloud giants like Google and Amazon potentially eroding Nvidia’s dominance, particularly in inference computing. Keh told Business Insider that analysts have heard Nvidia’s market share in inference could decline to 50% as the competitive landscape evolves. Nvidia declined to comment on these projections.

Key Quotes

Reasoning models can consume 100x more compute. Future reasoning can consume much more compute.

Nvidia CEO Jensen Huang made this statement during Wednesday’s earnings call, directly addressing investor concerns that DeepSeek’s efficient models might reduce demand for AI computing power. The assertion is crucial because it reframes the efficiency narrative: even if training becomes more efficient, inference and reasoning workloads will keep driving steep growth in compute demand.

Nearly every AI developer is applying R1, or chain of thought and reinforcement learning techniques like R1 to scale their model’s performance.

Huang explained how DeepSeek’s open-source R1 model has accelerated industry-wide adoption of reasoning techniques. This matters because widespread adoption of these compute-intensive methods validates Nvidia’s position that demand for powerful chips will continue growing despite efficiency improvements in training.

The vast majority of our compute today is actually inference, and Blackwell takes all of that to a new level.

Huang highlighted the shift toward inference workloads while promoting Nvidia’s newest Blackwell chip generation. This statement is significant because it acknowledges the market transition while positioning Nvidia’s latest hardware as the solution for the inference-dominated future.

Competition is starting to take its toll on Nvidia’s position, although it is not very material at this point.

Third Bridge analyst Lucas Keh offered this assessment following Nvidia’s earnings call, suggesting that while Nvidia remains dominant, competitive pressures from custom chips and specialized startups are beginning to impact the company’s market position, particularly in the growing inference segment.

Our Take

Huang’s 100x compute claim is both a defense and a strategic repositioning. By embracing reasoning models as compute-hungry rather than efficiency-threatening, Nvidia is attempting to control the narrative around DeepSeek’s disruption. However, the analyst projection of 50% inference market share reveals the real tension: Nvidia’s near-monopoly in training may not translate into comparable dominance in inference. The proliferation of inference-focused startups and custom cloud chips suggests a more competitive, fragmented future. What’s particularly telling is Nvidia’s refusal to comment on market share projections, a departure from the company’s typically confident posture. The AI infrastructure gold rush is entering a new phase where efficiency and specialization matter as much as raw power. Companies betting solely on Nvidia’s continued dominance may need to reconsider their strategies as the inference market matures and diversifies.

Why This Matters

This story represents a critical inflection point for the AI infrastructure industry. Huang’s assertion that reasoning models require 100x more compute directly counters the narrative that emerged after DeepSeek’s launch—that efficiency gains might reduce demand for expensive AI hardware. This has massive implications for the estimated $1 trillion AI infrastructure buildout currently underway.

The shift from training to inference as the dominant workload fundamentally changes the competitive landscape. While Nvidia has maintained near-monopolistic control over AI training chips, the inference market is more fragmented, with custom chips from cloud providers and specialized startups gaining traction. The potential decline of Nvidia’s inference market share to 50% signals a maturing market where no single player may dominate.

For businesses investing in AI infrastructure, this means planning for significantly higher computational costs as reasoning models become standard. The 100x compute multiplier suggests that operational expenses for AI applications could skyrocket, potentially limiting which companies can afford to deploy advanced AI at scale. This could consolidate power among well-funded tech giants while creating barriers for smaller innovators, fundamentally shaping the future competitive dynamics of the AI industry.
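As a rough illustration of that cost pressure, the hypothetical sketch below runs the arithmetic for an application serving a fixed volume of queries; the per-token price and traffic figures are assumptions chosen only to make the numbers concrete, not published pricing.

```python
# Hypothetical cost arithmetic: how a 100x token multiplier flows through to
# operating expense. The per-token price and traffic volume are assumptions
# chosen to make the arithmetic concrete, not published figures.

PRICE_PER_MILLION_TOKENS = 2.00   # assumed $ per 1M generated tokens
QUERIES_PER_DAY = 1_000_000       # assumed application traffic

def daily_cost(tokens_per_query: int) -> float:
    """Daily spend on generated tokens at the assumed price and traffic."""
    total_tokens = tokens_per_query * QUERIES_PER_DAY
    return total_tokens / 1e6 * PRICE_PER_MILLION_TOKENS

print(f"standard model (300 tokens/query):     ${daily_cost(300):,.0f}/day")
print(f"reasoning model (30,000 tokens/query): ${daily_cost(30_000):,.0f}/day")
```

At the assumed price, a 100x jump in tokens per query turns a $600-a-day workload into a $60,000-a-day one, the kind of step change that favors well-funded giants over smaller innovators.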


Source: https://www.businessinsider.com/nvidia-ceo-jensen-huang-says-reasoning-models-require-more-compute-2025-2