Nvidia stock surged more than 3% on Monday, approaching record highs as investor enthusiasm builds around the company’s next-generation Blackwell GPU product cycle. Shares reached an intraday high of $139.41, coming within 1% of the all-time high of $140.76 set in mid-June.
The rally was fueled by multiple positive Wall Street research reports and new pricing details for Nvidia’s highly anticipated Blackwell lineup. According to Wells Fargo, initial list pricing for Nvidia’s Blackwell-based DGX B200 systems is approximately $500,000, a premium of more than 40% over the prior-generation DGX H100 systems. Wells Fargo analyst Aaron Rakers called this pricing an “incremental positive,” noting it should alleviate investor concerns about Nvidia’s gross margin dynamics.
Despite the stock’s impressive run, valuation metrics suggest room for growth. Nvidia currently trades at a forward price-to-earnings multiple of about 35x, which Citi notes is below both its three-year and five-year averages, suggesting the stock is not expensive relative to its own history.
The timing of Blackwell’s release coincides with an intensifying AI infrastructure buildout among cloud hyperscalers. Bank of America highlighted that Microsoft, Amazon, and Alphabet are engaged in an “AI arms race,” with capital expenditure estimates rising dramatically. Since March, for every $1 of upward revision in 2024 sales estimates for hyperscalers, there has been $19 of upward revision in capex estimates, underscoring the massive investment in AI infrastructure.
Goldman Sachs raised its price target for Nvidia to $150 from $135, arguing the company’s competitive moat is strengthening. The firm cited Nvidia’s large installed base, which creates a virtuous cycle attracting more developers, along with the company’s ability to innovate at both the chip and data center levels, plus its robust software offerings.
Concerns about competition from custom silicon (ASICs) being developed by Nvidia’s largest customers appear overblown, according to Citi. The firm maintains that Nvidia is “still king,” arguing that custom silicon may only be suited to large hyperscalers and does not diminish the need for GPUs in both training and inference workloads. Investors will closely watch Nvidia’s third-quarter earnings release in late November for updates on Blackwell’s launch progress.
Key Quotes
“We do think the 40%+ higher pricing of the DGX B200 vs DGX H100 systems could be considered higher than expectations.”
Wells Fargo analyst Aaron Rakers commented on the premium pricing of Nvidia’s new Blackwell systems, suggesting the higher prices should ease investor concerns about the company’s profit margins and demonstrate strong demand for next-generation AI chips.
“Capex estimates continued to rise: since March, for every $1 of upward revision in 2024 sales estimates for hyperscalers, there was $19 of upward revision in capex estimates.”
Bank of America highlighted the dramatic acceleration in AI infrastructure spending among cloud providers, showing that capital expenditure growth is far outpacing revenue growth as companies race to build AI capabilities.
“Nvidia still king. Due to the limitations in the hardware use cases, building custom silicon may be only suited for large hyperscalers. That nevertheless does not take away from the need for GPUs in both training and inference.”
Citi analysts dismissed concerns about competition from custom chips developed by Nvidia’s customers, arguing that GPUs remain essential for AI workloads and that Nvidia’s technological leadership remains intact despite efforts to develop alternatives.
Our Take
Nvidia’s near-record stock performance reflects a critical inflection point in AI infrastructure investment. The willingness of hyperscalers to pay premiums of more than 40% for Blackwell systems demonstrates that performance gains in AI computing remain more valuable than cost optimization, a dynamic that favors Nvidia’s innovation-driven strategy.
What’s particularly noteworthy is the 19:1 ratio of capex-to-revenue revisions among cloud providers. This suggests we’re witnessing a speculative infrastructure buildout in which companies invest heavily in AI capacity ahead of proven revenue streams, a bet that AI will fundamentally transform their businesses.
The competitive moat analysis from Goldman Sachs highlights an often-overlooked aspect: Nvidia’s ecosystem advantage extends beyond hardware. The combination of CUDA software, developer tools, and data center-level optimization creates switching costs that custom silicon alone cannot overcome. This positions Nvidia not just as a chip supplier but as the foundational platform for the AI era.
Why This Matters
This development is significant because Nvidia remains the dominant force powering the global AI buildout. The strong demand and premium pricing for Blackwell systems demonstrate that enterprises and cloud providers are willing to pay substantially more for cutting-edge AI infrastructure, validating the continued growth trajectory of AI investment.
The “AI arms race” among tech giants Microsoft, Amazon, and Alphabet signals that AI infrastructure spending is accelerating rather than plateauing, with capex revisions outpacing revenue revisions by a 19:1 ratio. This suggests the AI buildout is still in early innings, with massive capital deployment ahead.
Nvidia’s strengthening competitive moat has broader implications for the AI ecosystem. Despite efforts by major customers to develop custom chips, Nvidia’s combination of hardware performance, software ecosystem, and developer community creates high switching costs. This dominance means Nvidia will likely continue setting the pace for AI capability advancement, influencing what’s possible across industries from healthcare to autonomous vehicles to enterprise software. The company’s success also reflects growing confidence that AI investments will generate substantial returns.
Recommended Reading
For those interested in learning more about artificial intelligence, machine learning, and effective AI communication, here are some excellent resources: