Amazon Web Services (AWS) has achieved a significant milestone with its custom-designed Graviton chips, which now power workloads for more than 90% of its 1,000 largest Elastic Compute Cloud (EC2) customers, according to Rahul Kulkarni, AWS’s director of compute and AI/ML. This represents a dramatic expansion since the chip line’s 2018 launch and underscores the growing importance of custom silicon in cloud computing infrastructure.
Graviton’s success stems from multiple competitive advantages. The chips use Arm-based designs, which deliver better cost and energy efficiency than conventional x86 processors from Intel and AMD. By designing its own chips, AWS reduces data-center operating costs while offering customers better price-performance. Major enterprises including Epic Games, Databricks, and Pinterest have become significant Graviton adopters, drawn by the combination of performance, efficiency, and value.
The strategic importance of custom silicon dates back to 2013, when James Hamilton, a senior vice president and distinguished engineer at AWS, authored an internal six-page strategy document advocating for chip design. AWS accelerated this initiative by acquiring Israel-based chip designer Annapurna Labs in 2015. According to Bernstein Research, Amazon has become the “most successful” designer of Arm-based server chips, supplying over 50% of such chips worldwide, with Graviton accounting for roughly 20% of AWS’s CPU usage as of mid-2022.
A crucial development is Graviton’s expanding role in AI workloads. While originally designed for general computing purposes, Kulkarni revealed that a growing number of customers are now using Graviton for CPU-based AI inference and machine-learning frameworks. The fourth-generation Graviton chips have added features and capabilities that enable AI inference use cases without requiring dedicated machine-learning processors. “It’s a new revenue channel that goes beyond what AWS had anticipated Graviton to be used for,” Kulkarni explained.
This expansion comes as AWS doubles down on custom chip development despite broader cost-cutting measures across Amazon. CEO Andy Jassy called Graviton “very successful” during an August analyst call and suggested AWS’s dedicated AI chips—Inferentia and Trainium—could follow a similar growth trajectory, though they face stiff competition from Nvidia’s established position in the AI chip market. Kulkarni emphasized that custom silicon remains “one of the most strategic areas” for AWS, with continued aggressive investment planned.
Key Quotes
“It just continues to show how Graviton is gaining traction.”
Rahul Kulkarni, AWS’s director of compute and AI/ML, emphasized the momentum behind Graviton adoption, highlighting that over 90% of the top 1,000 EC2 customers now use the custom chips—a testament to their market acceptance and performance advantages.
“It’s a new revenue channel that goes beyond what AWS had anticipated Graviton to be used for.”
Kulkarni revealed that Graviton’s expansion into AI inference and machine-learning workloads represents an unexpected but valuable evolution, opening new business opportunities beyond the chips’ original general computing purpose.
“It’s one of the most strategic areas for us. We will absolutely continue to drive innovation in custom silicon as we have been doing for the past 10-plus years.”
Kulkarni underscored AWS’s long-term commitment to custom chip development, indicating that despite Amazon’s broader cost-cutting initiatives, the custom silicon business remains protected and prioritized for continued aggressive investment.
“We have a lot of engagement going on in custom silicon, and that’s an area that we will continue to invest in at a very aggressive pace going forward.”
This statement from Kulkarni signals AWS’s strategic intent to expand its custom chip portfolio, suggesting that Graviton’s success has validated the business model and will drive further innovation in specialized processors for cloud workloads.
Our Take
AWS’s Graviton success story reveals a critical inflection point in cloud computing and AI infrastructure. The 90% adoption rate among top customers isn’t just impressive—it’s transformative, indicating that custom silicon has become table stakes for hyperscale cloud providers. What’s particularly noteworthy is the organic expansion into AI inference workloads, which suggests that the line between general computing and AI-specific hardware is blurring.
This positions AWS strategically against competitors who rely more heavily on third-party chip vendors. While Nvidia dominates AI training, the inference market remains more fragmented and cost-sensitive—exactly where Graviton’s efficiency advantages matter most. The fact that major enterprises like Epic Games and Databricks have embraced Graviton for production workloads validates its reliability and performance.
Looking ahead, this could reshape the semiconductor landscape, pressuring Intel and AMD while accelerating Arm’s data center penetration. For businesses, it signals that AI deployment costs may decrease significantly, potentially accelerating enterprise AI adoption.
Why This Matters
This development signals a fundamental shift in cloud computing infrastructure and has major implications for the AI industry. AWS’s success with Graviton demonstrates that hyperscalers can effectively compete with traditional chip manufacturers, potentially disrupting the semiconductor industry’s established order. The fact that 90% of AWS’s largest customers have adopted Graviton indicates that custom silicon has moved from experimental to mission-critical status.
The expansion into AI inference workloads is particularly significant as enterprises seek cost-effective alternatives to expensive GPU-based solutions for deploying AI models at scale. With AI inference representing a massive and growing market, Graviton’s ability to handle these workloads on general-purpose CPUs could democratize AI deployment and reduce barriers to entry for businesses.
For the broader tech industry, this validates the Arm architecture’s viability in data centers and AI applications, potentially accelerating the shift away from x86 dominance. As companies face pressure to reduce energy consumption and operational costs while scaling AI capabilities, AWS’s integrated approach of custom chips optimized for both traditional computing and AI workloads may become the new competitive standard in cloud services.
Related Stories
- Biden hails $20B investment by computer chip maker in Arizona plant
- Jensen Huang: TSMC Helped Fix Design Flaw with Nvidia’s Blackwell AI Chip
- EnCharge AI Secures $100M Series B to Revolutionize Energy-Efficient AI Chips
- Pitch Deck: TensorWave raises $10M to build safer AI compute chips for Nvidia and AMD
Source: https://www.businessinsider.com/aws-graviton-chips-used-by-90-percent-top-1000-customers-2024-10