Gruve, an AI infrastructure startup, has secured $50 million in Series A follow-on funding to address the critical power shortage facing the artificial intelligence industry as it transitions from training to inference workloads. The round, led by Xora Innovation (backed by Singapore’s Temasek), brings Gruve’s total funding to $87.5 million and included participation from Mayfield, Cisco Investments, Acclimate Ventures, and AI Space.
Founded in 2024 by serial entrepreneur Tarun Raisoni, who previously built and sold data center startups Rahi and ZPE in nine-figure deals, Gruve tackles what Raisoni identifies as “the biggest challenge today in AI”—insufficient power infrastructure. Rather than building new facilities, Gruve partners with data center and colocation providers such as Lineage and OpenColo to tap their unused power capacity and space.
Gruve now commands access to approximately 500 megawatts of power across a network of data centers strategically positioned in major US cities. Currently, the company offers 30 megawatts available for immediate deployment across four operational sites in California, New Jersey, Texas, and Washington, with customer workloads already running at the California and New Jersey locations.
The company’s geographic distribution strategy is central to its value proposition. Gruve’s proprietary software intelligently routes AI inference requests to the nearest server location, delivering faster transmission speeds and reduced operational costs—critical factors as AI applications scale globally.
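The article doesn’t detail how Gruve’s routing software works, but nearest-site routing of the kind described above can be sketched in a few lines. Everything here is illustrative: the `Site` and `route_request` names, the capacity flag, and the coordinates (city centers standing in for the four operational regions) are assumptions, not Gruve’s actual system.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Site:
    name: str
    lat: float
    lon: float
    has_capacity: bool = True  # hypothetical spare-capacity signal

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def route_request(client_lat, client_lon, sites):
    """Pick the nearest site that currently has spare capacity."""
    candidates = [s for s in sites if s.has_capacity]
    if not candidates:
        raise RuntimeError("no site with spare capacity")
    return min(candidates,
               key=lambda s: haversine_km(client_lat, client_lon, s.lat, s.lon))

# The four operational regions named in the article; coordinates are
# illustrative city centers, not Gruve's actual facilities.
SITES = [
    Site("california", 37.77, -122.42),
    Site("new-jersey", 40.73, -74.17),
    Site("texas", 32.78, -96.80),
    Site("washington", 47.61, -122.33),
]

# A client near Denver lands on the Texas site, the closest of the four.
print(route_request(39.74, -104.99, SITES).name)
```

Production systems typically route on measured network latency and load rather than raw geographic distance, but the distance-based version captures the core idea: shorter paths mean faster responses and lower transit costs.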
Unlike traditional cloud giants, Gruve provides hands-on engineering support for companies lacking in-house machine learning and data science expertise. The company typically collaborates with neoclouds that supply the hardware, while Gruve handles deployment, operations, and day-to-day management. Its diverse customer base includes neoclouds, AI startups, and major corporations such as Bio-Rad, PayPal, Cisco, and Stanford Health Care.
With approximately 600 employees (70% based in India, focused on security operations), Gruve plans to deploy the new capital toward hiring engineers and machine learning researchers to enhance its inference software. The company has also announced expansion plans for Japan and Western Europe, positioning itself as a global solution to AI’s infrastructure challenges.
Key Quotes
“The biggest challenge today in AI is we don’t have enough power.”
Tarun Raisoni, Gruve’s CEO and cofounder, identified the fundamental infrastructure problem his company aims to solve. This statement underscores the industry-wide recognition that power availability—not just computational capability—has become the primary constraint on AI deployment.
“We have found the stranded power, and we are bringing the software to stitch it together.”
Raisoni explained Gruve’s core innovation: identifying unused power capacity across existing data centers and creating software infrastructure to make it accessible for AI workloads. This approach offers a faster path to scaling AI infrastructure than building new facilities from scratch.
Our Take
Gruve’s rapid rise—from founding in 2024 to controlling 500 megawatts and securing major enterprise clients—reveals how desperate the market is for AI inference infrastructure. The company’s success also validates a crucial insight: the AI infrastructure problem isn’t just about building more capacity, but intelligently utilizing what already exists. Raisoni’s track record of successful exits in the data center space adds credibility, but the real test will be whether Gruve’s distributed model can maintain performance and security at scale. The heavy investment in India-based security operations (70% of staff) suggests the company understands the compliance and trust challenges inherent in distributed AI infrastructure. As AI inference workloads explode—potentially surpassing training costs within years—Gruve is positioning itself at the center of a multi-billion dollar infrastructure transformation. The question isn’t whether this market will grow, but whether Gruve can maintain its first-mover advantage as cloud giants inevitably respond.
Why This Matters
This funding round highlights a critical inflection point in the AI industry: the shift from model training to inference deployment. As AI moves from research labs to real-world applications, the infrastructure demands fundamentally change. While training requires massive computational bursts, inference requires sustained, distributed power—a challenge traditional data centers weren’t designed to solve.
Gruve’s approach of unlocking “stranded” power capacity represents an innovative solution to what many consider AI’s biggest bottleneck. With major tech companies struggling to secure sufficient energy for AI operations, Gruve’s 500-megawatt network offers immediate relief. The company’s success in attracting enterprise clients like PayPal and Stanford Health Care demonstrates that AI infrastructure is no longer just a tech company concern—it’s becoming essential across industries.
The geographic distribution strategy also addresses latency and cost issues that could limit AI adoption. As regulations around AI deployment tighten globally, having distributed infrastructure in multiple jurisdictions becomes increasingly valuable. Gruve’s expansion plans into Japan and Europe position it ahead of this trend, potentially making it a key enabler of global AI deployment at a time when power constraints threaten to slow AI innovation.
Related Stories
- Groq Investor Warns of Data Center Crisis Threatening AI Industry
- Blackstone’s AI Data Center Bets Drive Record Growth in 2025
- Meta Q4 Earnings: Zuckerberg Bets Big on AI with $135B Capex Plan
- Big Tech’s 2025 AI Plans: Meta, Apple, Tesla, Google Unveil Roadmap
Source: https://www.businessinsider.com/gruve-raises-50m-ai-power-infrastructure-2026-1