Amazon Web Services (AWS) has launched its most advanced AI chip to date, Trainium 2, but the company insists it’s not trying to compete directly with industry leader NVIDIA. In an exclusive interview at AWS’s re:Invent conference, Gadi Hutt, senior director of customer and product engineering at AWS’s chip-designing subsidiary Annapurna Labs, emphasized that the goal is to provide customers with lower-cost alternatives rather than unseating NVIDIA from its dominant position.
The Trainium 2 chip, unveiled this week alongside a new supercomputer cluster called Project Rainier, offers approximately 40% cost savings compared to NVIDIA’s GPUs. Despite this competitive pricing, Hutt maintains that “it’s not about unseating Nvidia” but rather “giving customers choices.” AWS has invested tens of billions of dollars in generative AI infrastructure, and the new chips represent the company’s most significant effort yet to provide alternatives in the AI hardware market.
Anthropic, the AI startup in which Amazon recently invested an additional $4 billion, will be Project Rainier’s first customer. The partnership has been instrumental in shaping AWS’s chip development, with Anthropic providing feedback on features and capabilities needed for building foundation models. According to Hutt, Anthropic’s expertise has helped AWS “home in on building chips that are really good at what they do.”
Hutt acknowledged that NVIDIA’s GPUs will remain dominant for the foreseeable future, particularly for advanced workloads and experimental research. He explained that GPUs are “more of a general-purpose processor of machine learning” and that all researchers and data scientists know how to use NVIDIA products well. Trainium chips, by contrast, are optimized for customers with large-scale deployments who want to control costs while maintaining high performance.
The AWS executive also addressed the company’s relationships with other chip partners. Despite Intel CEO Pat Gelsinger’s recent retirement, Hutt confirmed AWS would continue working with the struggling chip giant due to sustained customer demand for Intel’s server chips. However, he noted that AMD’s AI chips still haven’t been deployed on AWS because customers haven’t shown strong demand for them yet.
AWS CEO Matt Garman previously stated that the “vast majority of workloads will continue to be on Nvidia,” a statement Hutt confirmed as accurate. The Trainium chips target specific use cases where customers have large spending and want better cost control, rather than trying to capture all AI workloads. Hutt emphasized that AWS wants to “continue to be the best place for GPUs and, of course, Trainium when customers need it.”
Key Quotes
“It’s not about unseating Nvidia. Nvidia is a very important partner for us. It’s really about giving customers choices.”
Gadi Hutt, senior director at AWS’s Annapurna Labs, explained the company’s positioning strategy for Trainium chips, emphasizing collaboration over competition with NVIDIA despite launching a significantly cheaper alternative.
“The market is very big, so there’s room for multiple vendors here. We’re not forcing anybody to use those chips, but we’re working very hard to ensure that our major tenets, which are high performance and lower cost, will materialize to benefit our customers.”
Hutt articulated AWS’s market philosophy, suggesting the AI chip market is expansive enough to accommodate multiple players without direct zero-sum competition, focusing instead on customer value through performance and cost optimization.
“Because they’re such experts in building foundation models, this really helps us home in on building chips that are really good at what they do.”
Discussing the partnership with Anthropic, Hutt revealed how the AI startup’s expertise has directly influenced Trainium chip development, demonstrating the collaborative approach between cloud providers and AI companies in hardware innovation.
“Usually the customers we get are the ones that are seeing increased costs as an issue and are trying to look for alternatives.”
Hutt identified the target customer profile for Trainium chips, clarifying that AWS is focusing on cost-conscious customers with large-scale deployments rather than trying to capture all AI workloads from NVIDIA.
Our Take
AWS’s positioning strategy reveals a sophisticated understanding of market dynamics that goes beyond simple competition. By explicitly stating they’re not competing with NVIDIA, AWS is carving out a sustainable niche while avoiding direct confrontation with an entrenched market leader. The 40% cost advantage is significant enough to attract price-sensitive customers without claiming technical superiority that would be difficult to prove.

The Anthropic partnership is particularly strategic—having a high-profile AI company as the first customer provides crucial validation and creates a development feedback loop. However, the admission that AMD’s AI chips haven’t been deployed due to lack of customer demand suggests AWS is being selective about which battles to fight. This pragmatic approach, combined with continued investment in NVIDIA partnerships, positions AWS to benefit regardless of which chip architecture ultimately dominates, while building optionality for customers concerned about vendor lock-in and rising AI infrastructure costs.
Why This Matters
This story reveals the evolving dynamics of the AI chip market and challenges the narrative of direct competition between cloud providers and NVIDIA. AWS’s strategy of positioning Trainium as a complementary option rather than a replacement demonstrates the complexity of the AI infrastructure landscape. The 40% cost savings offered by Trainium 2 could significantly impact AI development economics, potentially democratizing access to large-scale AI training and inference for companies concerned about escalating costs.
The partnership with Anthropic as Project Rainier’s first customer signals how cloud providers are working closely with leading AI companies to co-develop optimized hardware solutions. This collaboration model could accelerate AI innovation by creating feedback loops between chip designers and AI researchers. Additionally, AWS’s continued investment in multiple chip partnerships—including NVIDIA, Intel, and AMD—reflects the reality that the AI market is large enough to support multiple vendors, each serving different customer needs and workloads. This diversification strategy may prove crucial as AI adoption continues to expand across industries.
Recommended Reading
Related Stories
- Jensen Huang: TSMC Helped Fix Design Flaw with Nvidia’s Blackwell AI Chip
- Biden hails $20B investment by computer chip maker in Arizona plant
- EnCharge AI Secures $100M Series B to Revolutionize Energy-Efficient AI Chips
- Pitch Deck: TensorWave raises $10M to build safer AI compute chips for Nvidia and AMD
- Amazon to Invest Additional $4 Billion in AI Startup Anthropic
Source: https://www.businessinsider.com/aws-exec-explains-why-nvidia-is-not-competitor-trainium-chip-2024-12