The article discusses the emerging competition Nvidia faces in the market for artificial intelligence (AI) accelerators. Startups such as SambaNova, Groq, and Cerebras are developing chips that they claim will deliver faster inference than Nvidia's offerings by 2024. Inference, the process of using an already-trained AI model to make predictions on new data, is a crucial stage of AI deployment, and these startups challenge Nvidia's dominance with chips designed specifically for such workloads. SambaNova's chips use a novel architecture that promises higher performance and efficiency; Groq's Tensor Streaming Processor aims to outperform GPUs on AI inference; and Cerebras builds the largest chip ever made, targeting both AI training and inference. While Nvidia remains a formidable player, if these claims hold up, the startups could disrupt the AI accelerator market and pose a significant challenge to Nvidia's leadership.
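To make the training/inference distinction concrete: inference is just a forward pass through a model whose weights are already fixed. A minimal sketch in Python, using a tiny linear classifier with hypothetical weights (standing in for the output of a real training run, not any specific vendor's model):

```python
import numpy as np

# Hypothetical weights and bias of an already-trained 2-input,
# 2-class linear classifier. In a real deployment these would be
# loaded from a checkpoint produced by a training run.
W = np.array([[0.5, -0.2],
              [0.1,  0.4]])
b = np.array([0.1, -0.1])

def infer(x: np.ndarray) -> int:
    """Inference: one forward pass, no weight updates."""
    logits = x @ W + b          # single matrix multiply plus bias
    return int(np.argmax(logits))  # predicted class index

# Run the trained model on a new input.
x = np.array([1.0, 2.0])
print(infer(x))  # → 0
```

Accelerators optimized for inference focus on exactly this read-only forward pass, which has different memory and latency characteristics from training, where weights are repeatedly updated.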