The AI industry is witnessing an unprecedented acceleration in innovation as OpenAI’s groundbreaking o1 model, unveiled in September 2024, has already spawned multiple competitors using similar reasoning techniques. Within just two months, Chinese company DeepSeek released a rival model, and by December, Google launched Gemini 2.0 Flash Thinking, both employing inference-time compute approaches that mirror o1’s capabilities.
OpenAI’s o1 model introduced an approach called inference-time compute, which spends additional computation at answer time to tackle complex problems by breaking them into manageable tasks. The model produces a “chain of thought” or “chain of reasoning” in which each step is addressed sequentially, with the ability to backtrack, check previous steps, correct errors, and even pursue approaches that fail before finding alternatives, mimicking human problem-solving behavior.
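One way to picture inference-time compute: instead of committing to a single answer, the system spends extra computation generating several candidate answers and checking each one. The sketch below is purely illustrative, with a toy propose/verify pair standing in for a real model and OpenAI has not published o1's actual mechanism:

```python
# Illustrative sketch only -- not OpenAI's implementation. "Inference-time
# compute" means spending extra computation when answering: sample several
# candidate answers from an imperfect solver, verify each one, and return
# the first that checks out, rather than trusting a single guess.

def propose(question, attempt):
    """Stand-in for a model's sampled guess at sqrt(question); often wrong."""
    offsets = [-1, 1, 2, 0]                      # toy "sampling noise"
    return round(question ** 0.5) + offsets[attempt % len(offsets)]

def verify(question, answer):
    """Cheap check of a candidate (verifying is easier than solving)."""
    return answer * answer == question

def answer_with_extra_compute(question, n_samples=8):
    for attempt in range(n_samples):             # more samples = more compute
        candidate = propose(question, attempt)
        if verify(question, candidate):          # keep only verified answers
            return candidate
    return None                                  # every candidate failed

print(answer_with_extra_compute(144))            # prints 12
```

The key trade-off this illustrates: raising `n_samples` buys accuracy with computation at inference time rather than with a larger trained model.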
DeepSeek’s rapid emergence surprised the AI community. Released on November 20, the Chinese model not only replicated o1’s capabilities but added transparency by showing users every step of its thought process through “DeepThink mode.” Charlie Snell, an AI researcher at UC Berkeley who coauthored a Google DeepMind paper on inference-time compute, confirmed DeepSeek performs well on complex mathematical problems. When he asked OpenAI employees about DeepSeek, they acknowledged it “looks like the same thing” but were puzzled about how it was developed so quickly.
Google’s Gemini 2.0 Flash Thinking followed shortly after, also displaying its reasoning steps to users—a feature praised by OpenAI cofounder Andrej Karpathy, who noted that “unlike o1 the reasoning traces of the model are shown.” He emphasized that seeing the model “actively think through different possibilities, ideas, debate themselves” adds significant value for users.
This rapid commoditization raises critical questions about AI economics. Rahul Sonwalkar, CEO of Julius AI, observed: “Companies spend massive amounts building these new models, and within a few months they become a commodity.” The proliferation of similar capabilities has driven AI model pricing down dramatically over the past year, potentially undermining the justification for spending hundreds of millions or billions on next-generation models.
OpenAI responded to the competition by previewing o3, an o1 successor, on Friday. Francois Chollet, a respected AI expert, called the update a “significant breakthrough,” suggesting the reasoning race continues to accelerate despite the commoditization concerns.
Key Quotes
“It’s amazing how quickly AI model improvements get commoditized. Companies spend massive amounts building these new models, and within a few months they become a commodity.”
Rahul Sonwalkar, CEO of startup Julius AI, highlighted the economic challenge facing AI companies as innovations are rapidly replicated, potentially undermining the business case for massive R&D investments.
“They were probably the first ones to reproduce o1. I’ve asked people at OpenAI what they think of it. They say it looks like the same thing, but they don’t know how DeepSeek did this so fast.”
Charlie Snell, an AI researcher at UC Berkeley who coauthored a Google DeepMind paper on inference-time compute, revealed that even OpenAI employees were surprised by how quickly DeepSeek replicated their breakthrough technology.
“The prominent and pleasant surprise here is that unlike o1 the reasoning traces of the model are shown. As a user I personally really like this because the reasoning itself is interesting to see and read — the models actively think through different possibilities, ideas, debate themselves, etc., it’s part of the value add.”
Andrej Karpathy, an OpenAI cofounder, praised Google’s Gemini 2.0 Flash Thinking model for its transparency in showing reasoning steps, suggesting this feature provides additional value that OpenAI’s o1 lacks.
“This is getting really time-consuming. Maybe I need to consider a different strategy. Instead of combining two numbers at a time, perhaps I should look for a way to group them differently or use operations in a nested manner.”
This excerpt from DeepSeek’s chain of thought demonstrates the model’s human-like problem-solving approach, showing self-awareness about its process and the ability to pivot strategies when encountering difficulties.
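The search the excerpt describes, combining numbers with operations, hitting dead ends, and pivoting to another grouping, resembles a classic backtracking search. The sketch below is that classic algorithm, not DeepSeek’s actual mechanism, shown only to make the try/check/backtrack loop concrete:

```python
import itertools

def solve(numbers, target):
    """Find an arithmetic expression over `numbers` that equals `target`.

    `numbers` is a list of (value, expression-string) pairs. Each call
    combines two numbers with an operator and recurses on the rest; when a
    branch dead-ends it returns None and the caller tries the next option,
    i.e. it backtracks, as in the chain-of-thought excerpt above.
    """
    if len(numbers) == 1:
        value, expr = numbers[0]
        return expr if abs(value - target) < 1e-6 else None
    for i, j in itertools.permutations(range(len(numbers)), 2):
        (a, ea), (b, eb) = numbers[i], numbers[j]
        rest = [numbers[k] for k in range(len(numbers)) if k not in (i, j)]
        candidates = [(a + b, f"({ea}+{eb})"), (a - b, f"({ea}-{eb})"),
                      (a * b, f"({ea}*{eb})")]
        if abs(b) > 1e-6:                       # avoid division by zero
            candidates.append((a / b, f"({ea}/{eb})"))
        for value, expr in candidates:
            found = solve(rest + [(value, expr)], target)
            if found is not None:
                return found                    # this branch succeeded
    return None                                 # all branches failed: backtrack

print(solve([(4, "4"), (6, "6"), (8, "8")], 24))   # finds e.g. (6*(8-4))
```

The point of the analogy: exhaustive backtracking is cheap for three numbers but explodes combinatorially, which is why a model that can judge when a strategy is “getting really time-consuming” and regroup, rather than enumerating blindly, is notable.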
Our Take
The rapid commoditization of OpenAI’s o1 reasoning breakthrough reveals a fundamental tension in AI development: the massive capital required to innovate versus the impossibility of maintaining competitive moats. This dynamic may accelerate consolidation, favoring tech giants with diversified revenue streams who can sustain losses on model development while monetizing through integrated ecosystems.

DeepSeek’s achievement is particularly noteworthy—a Chinese company matching OpenAI’s capabilities in two months challenges assumptions about Western AI dominance and suggests that architectural innovations, once published or reverse-engineered, spread faster than ever. The transparency advantage offered by DeepSeek and Google’s models, showing reasoning traces that o1 hides, could become a differentiator as users increasingly value explainability.

OpenAI’s quick o3 preview suggests they’re feeling competitive pressure, potentially accelerating their release cycles. This reasoning race ultimately benefits users through rapid capability improvements and falling prices, but raises questions about long-term sustainability for pure-play AI model companies.
Why This Matters
This development represents a pivotal moment in AI’s competitive landscape, revealing how quickly technological advantages evaporate in the modern AI race. The rapid replication of OpenAI’s o1 reasoning capabilities by DeepSeek and Google demonstrates that even breakthrough innovations face immediate commoditization, fundamentally challenging the economics of AI development.
The inference-time compute approach marks a significant evolution in AI capabilities, enabling models to tackle complex, multi-step problems more like humans do. This advancement has profound implications for applications requiring sophisticated reasoning—from scientific research and mathematical problem-solving to complex business analytics and strategic planning.
For businesses and investors, this trend raises critical questions about sustainable competitive advantages in AI. If billion-dollar innovations become commodities within months, companies may need to rethink their massive infrastructure investments and focus instead on application-layer differentiation, data advantages, or integration capabilities. The plummeting prices for AI model access could democratize advanced AI capabilities but may also squeeze margins for model developers, potentially reshaping the industry’s economic structure and consolidating power among companies with the deepest pockets to sustain prolonged investment cycles.
Recommended Reading
For those interested in learning more about artificial intelligence, machine learning, and effective AI communication, here are some excellent resources: