AMD CEO Lisa Su made headlines at CES 2026 by revealing that the world will need more than 10 yottaflops of computing power over the next five years to sustain AI’s explosive growth. Speaking at her keynote address in Las Vegas on Tuesday, Su introduced the audience to a unit of measurement most had never encountered before.
What is a yottaflop? Su described it as "a one followed by 24 zeros" calculations per second. To put this in perspective, 10 yottaflops is 10,000 times more compute power than existed in 2022. In computing terminology, a flop (floating-point operation) is a single basic mathematical calculation, so a yottaflop works out to one septillion calculations per second: enough theoretical power to run complex, atom-level simulations of entire planets.
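For readers unfamiliar with the unit ladder, here is a minimal back-of-the-envelope sketch in Python. The prefixes are standard SI; the only check performed is that a yottaflop really is the "one followed by 24 zeros" (one septillion) figure described above.

```python
# Back-of-the-envelope sketch of the flops unit ladder (standard SI prefixes).
PETA  = 10**15   # petaflop:  quadrillion operations per second
EXA   = 10**18   # exaflop:   quintillion operations per second
ZETTA = 10**21   # zettaflop: sextillion operations per second
YOTTA = 10**24   # yottaflop: septillion operations per second

yottaflop = 1 * YOTTA
print(f"1 yottaflop = {yottaflop:.0e} flops")              # 1e+24, i.e. a one followed by 24 zeros
print(f"1 yottaflop = {yottaflop // ZETTA:,} zettaflops")  # 1,000
```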
The scale of AI’s growth is unprecedented. In 2022, global AI compute capacity stood at approximately one zettaflop (a one followed by 21 zeros). By 2025, that figure had surged past 100 zettaflops, a 100-fold increase in just three years. “There’s just never, ever been anything like this in the history of computing,” Su emphasized during her presentation.
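As a rough sanity check on those figures, the arithmetic below uses only the numbers quoted in this article; the implied yearly growth multiple is our own derivation, not something Su stated.

```python
# Rough growth arithmetic from the figures in the article (not AMD's own numbers).
zettaflop = 10**21
yottaflop = 10**24

compute_2022 = 1 * zettaflop     # ~1 zettaflop of global AI compute in 2022
compute_2025 = 100 * zettaflop   # >100 zettaflops by 2025
target       = 10 * yottaflop    # Su's ~10-yottaflop figure

print(target / compute_2022)                     # 10000.0 -> the "10,000x more than 2022" claim
print(compute_2025 / compute_2022)               # 100.0   -> the 100-fold jump from 2022 to 2025
print((compute_2025 / compute_2022) ** (1 / 3))  # ~4.64   -> implied yearly multiple, 2022-2025
```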
For context, 10 yottaflops would be roughly 5.6 million times the performance of today’s most powerful supercomputer, the US Department of Energy’s El Capitan system. This astronomical leap in computing capability underscores the massive infrastructure investments required to support AI advancement.
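The El Capitan comparison can be reproduced with the same kind of arithmetic. The roughly 1.8-exaflop figure below is our assumption, a round number in the neighborhood of El Capitan's publicly reported benchmark performance; the article does not state which figure it used.

```python
# Reproducing the El Capitan comparison under an assumed ~1.8 exaflops.
exaflop   = 10**18
yottaflop = 10**24

el_capitan = 1.8 * exaflop   # assumed benchmark performance (not from the article)
target     = 10 * yottaflop  # Su's ~10-yottaflop figure

print(f"{target / el_capitan:,.1f}x")  # ~5,555,555.6x, i.e. roughly 5.6 million times
```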
However, significant challenges loom on the horizon. Powering today’s AI compute infrastructure is already straining the US power grid, and the build-out of energy infrastructure represents a major bottleneck in scaling up AI computing power. The energy demands of AI data centers have become a critical concern for the industry’s sustainable growth.
During her keynote, Su also unveiled AMD’s next generation of AI chips, including the MI455 GPU, as the company intensifies its competition in the data-center hardware market. AMD is positioning itself as a key supplier for major AI companies, with customers including OpenAI, as it challenges NVIDIA’s dominance in the AI chip sector.
Key Quotes
A yottaflop is a one followed by 24 zeros. So 10 yottaflops is 10,000 times more compute than we had in 2022.
AMD CEO Lisa Su explained the massive scale of computing power needed for AI’s future at CES 2026, introducing the audience to a measurement unit so large that no one in attendance appeared familiar with it when she asked for a show of hands.
There’s just never, ever been anything like this in the history of computing.
Su emphasized the unprecedented nature of AI’s growth trajectory, noting that global AI compute capacity has already increased 100-fold from one zettaflop in 2022 to more than 100 zettaflops by 2025—a rate of expansion never before seen in the computing industry.
Our Take
Su’s yottaflop prediction reveals both the ambition and vulnerability of the AI industry. While the 10,000-fold compute increase sounds impressive, the energy infrastructure challenge she acknowledged could become AI’s Achilles heel. The industry is essentially in a race between Moore’s Law and the laws of thermodynamics—can chip efficiency improve fast enough to make yottaflop-scale computing sustainable?
AMD’s timing is strategic. By framing the conversation around massive future compute needs while unveiling new AI chips, Su positions AMD as essential to AI’s future. This narrative benefits AMD as it challenges NVIDIA’s market dominance. However, the real story may be that the AI industry’s appetite for computing power is outpacing our ability to power it sustainably—a constraint that could fundamentally reshape which AI applications prove viable and which remain theoretical.
Why This Matters
This announcement highlights the staggering scale of infrastructure investment required to sustain AI’s rapid evolution. Su’s 10 yottaflop prediction isn’t just a technical milestone—it represents a fundamental challenge to the AI industry’s growth trajectory. The 10,000-fold increase in computing power needed by 2030 will require unprecedented coordination between chip manufacturers, data center operators, and energy providers.
The energy bottleneck Su identified poses existential questions for AI development. As AI compute already strains power grids, scaling to yottaflop-level performance will demand revolutionary advances in energy efficiency and infrastructure. This could reshape where data centers are built, accelerate nuclear and renewable energy adoption, and potentially slow AI deployment if energy solutions don’t materialize.
For the semiconductor industry, this represents a massive market opportunity. AMD’s aggressive push into AI chips with the MI455 GPU signals intensifying competition with NVIDIA, potentially benefiting customers through innovation and competitive pricing. The race to deliver yottaflop-scale computing will drive chip design breakthroughs and determine which companies dominate the next era of technology.
Related Stories
- Nvidia CEO Jensen Huang Reveals Public Speaking Struggles Despite AI Success
- Meta and Nvidia Billionaires’ Wealth Soars $152B in AI Boom
- Groq Investor Warns of Data Center Crisis Threatening AI Industry
- Big Tech’s 2025 AI Plans: Meta, Apple, Tesla, Google Unveil Roadmap
Source: https://www.businessinsider.com/amd-ceo-lisa-su-ai-10-yottaflops-compute-definition-2026-1