Snowflake CEO Demands AI Transparency on Hallucination Rates

Snowflake CEO Sridhar Ramaswamy is calling for greater transparency in the AI industry, specifically regarding AI hallucination rates—instances where artificial intelligence models generate completely fictitious or inaccurate information. Speaking on “The Logan Bartlett Show,” the former Google executive criticized tech companies for failing to disclose how often their AI models produce false outputs.

Modern large language models (LLMs) can hallucinate at rates ranging from 1% to nearly 30%, according to third-party estimates, yet no major AI company publicly shares these metrics. “If you look, no one publishes hallucination rates on these on their models or on their solution,” Ramaswamy stated. “It’s like, ‘Look, we’re so cool, you should just use us.’”
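
For context on what publishing such a figure would involve: a hallucination rate is typically estimated by grading a sample of model answers against trusted source material and reporting the share judged unsupported. The sketch below is a minimal, hypothetical illustration of that calculation in Python; the data structure, field names, and example results are assumptions for illustration, not any vendor’s published methodology. In practice, the hard part is the grading step, not the arithmetic.

```python
from dataclasses import dataclass

@dataclass
class GradedAnswer:
    question: str
    answer: str
    grounded: bool  # judged supported by source material, by a human or automated grader

def hallucination_rate(graded: list[GradedAnswer]) -> float:
    """Share of graded answers judged unsupported, i.e. hallucinated."""
    if not graded:
        return 0.0
    unsupported = sum(1 for g in graded if not g.grounded)
    return unsupported / len(graded)

# Hypothetical graded results, for illustration only.
results = [
    GradedAnswer("Q1", "answer a", grounded=True),
    GradedAnswer("Q2", "answer b", grounded=False),
    GradedAnswer("Q3", "answer c", grounded=True),
    GradedAnswer("Q4", "answer d", grounded=True),
]
print(f"Hallucination rate: {hallucination_rate(results):.0%}")  # prints "Hallucination rate: 25%"
```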

The issue has divided AI industry leaders. OpenAI CEO Sam Altman has defended AI hallucinations, arguing that models constrained to only answer when absolutely certain would lose their “magic” and appeal. Similarly, Anthropic cofounder Jared Kaplan described occasional chatbot errors as a necessary “tradeoff,” noting that systems trained to never hallucinate become overly cautious and frequently respond with “I don’t know.”

However, Ramaswamy emphasized that for critical applications like financial data analysis, AI tools “can’t make mistakes.” The Snowflake CEO highlighted the core problem: “The insidious thing about hallucinations is not that the model is getting 5% of the answers wrong, it’s that you don’t know which 5% is wrong, and that’s like a trust issue.”

AI hallucinations have already created legal problems, including a lawsuit against OpenAI last year when the company’s AI generated a false legal complaint about a radio host. Baris Gultekin, Snowflake’s head of AI, told Business Insider that hallucinations are the “biggest blocker” preventing generative AI deployment to front-end users, with many organizations limiting AI to internal use cases only.

Despite these challenges, Gultekin expressed optimism about AI accuracy improvements. Companies can now implement guardrails to control model outputs, restrict tone and content, and protect against bias. With access to more diverse data sources, he believes the “backlash” will be “mitigated one successful use case at a time.”
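
To make the guardrail idea concrete, here is a minimal, hypothetical sketch of one common pattern: refusing to release an answer unless it can be loosely matched against retrieved source text. The function names and the word-overlap heuristic are assumptions chosen for brevity, not a description of Snowflake’s product; real deployments typically rely on trained verifiers, citation checks, or policy classifiers instead.

```python
def is_supported(answer: str, sources: list[str], min_overlap: float = 0.6) -> bool:
    """Crude grounding check: does at least min_overlap of the answer's words appear in some source?"""
    words = {w.lower().strip(".,") for w in answer.split()} - {""}
    if not words:
        return False
    for source in sources:
        source_words = {w.lower().strip(".,") for w in source.split()}
        if len(words & source_words) / len(words) >= min_overlap:
            return True
    return False

def guarded_answer(answer: str, sources: list[str]) -> str:
    """Release the model's answer only if it passes the grounding check; otherwise decline."""
    return answer if is_supported(answer, sources) else "I can't verify that against the available data."

sources = ["Quarterly revenue was 2.1 billion dollars, up 4 percent year over year."]
print(guarded_answer("Revenue was 2.1 billion dollars, up 4 percent.", sources))      # released
print(guarded_answer("Revenue tripled to 6 billion dollars last quarter.", sources))  # declined
```

Even this toy version illustrates the tradeoff Altman and Kaplan describe: the stricter the check, the more often the system declines to answer, trading the model’s “magic” for predictability.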

Ramaswamy acknowledged that acceptable error rates vary by use case. While demanding 100% accuracy for financial chatbots, he noted that for tasks like article summarization, occasional inaccuracies are acceptable given the time-saving benefits.

Key Quotes

If you look, no one publishes hallucination rates on these on their models or on their solution. It’s like, ‘Look, we’re so cool, you should just use us.’

Snowflake CEO Sridhar Ramaswamy criticized the AI industry’s lack of transparency, highlighting how companies avoid disclosing accuracy metrics while promoting their products. This statement underscores the growing demand for accountability in enterprise AI deployment.

The insidious thing about hallucinations is not that the model is getting 5% of the answers wrong, it’s that you don’t know which 5% is wrong, and that’s like a trust issue.

Ramaswamy identified the core problem with AI hallucinations—the unpredictability of errors makes it impossible for users to trust outputs, especially in critical business applications. This insight explains why hallucinations pose such a significant barrier to enterprise AI adoption.

If you just do the naive thing and say ‘never say anything that you’re not 100% sure about’, you can get them all to do that. But it won’t have the magic that people like so much.

OpenAI CEO Sam Altman defended AI hallucinations as necessary for maintaining the models’ appeal and usefulness. This quote represents the opposing viewpoint in the industry debate, prioritizing user experience over absolute accuracy.

Right now, a lot of generative AI is being deployed for internal use cases only, because it’s still challenging for organizations to control exactly what the model is going to say and to ensure that the results are accurate.

Baris Gultekin, Snowflake’s head of AI, explained how hallucination concerns are limiting AI deployment to low-risk internal applications. This reveals the practical business impact of the accuracy problem and why transparency matters for broader adoption.

Our Take

Ramaswamy’s call for transparency represents a strategic positioning as much as a technical concern. As enterprise data platforms like Snowflake compete with consumer AI giants, emphasizing reliability and accountability creates differentiation in a crowded market. The debate reveals a fundamental tension in AI development: the tradeoff between capability and predictability.

What’s particularly noteworthy is the emerging bifurcation of the AI market—consumer applications where “magic” and creativity matter versus enterprise tools where accuracy is paramount. This suggests we may see divergent development paths, with different models optimized for different risk profiles.

The hallucination problem also exposes AI’s current limitations as a reasoning system. Until models can reliably distinguish between confident knowledge and uncertain speculation, they’ll remain assistive tools rather than autonomous decision-makers. Ramaswamy’s transparency push could accelerate the development of better evaluation frameworks and potentially industry-wide standards, ultimately benefiting the entire AI ecosystem by building appropriate trust and managing expectations.

Why This Matters

This debate over AI transparency represents a critical inflection point for the artificial intelligence industry as it matures from experimental technology to enterprise-critical infrastructure. The lack of standardized hallucination rate disclosure creates significant risks for businesses deploying AI in high-stakes environments like finance, healthcare, and legal services, where errors can have serious consequences.

The tension between AI capability and reliability reflects broader questions about responsible AI deployment. While tech leaders like Altman prioritize user experience and model “magic,” enterprise customers increasingly demand accountability and measurability. This divide could shape regulatory approaches, with governments potentially mandating transparency standards similar to those in other industries.

For businesses, this discussion highlights the urgent need for AI governance frameworks that assess risk tolerance by use case. Organizations must develop strategies to determine where AI hallucinations are acceptable tradeoffs versus where they represent unacceptable risks. As Snowflake’s position demonstrates, enterprise software companies are positioning themselves as the responsible alternative to consumer-focused AI providers, potentially creating a market differentiation opportunity around reliability and transparency. The outcome of this debate will fundamentally influence how AI integrates into critical business operations.

Source: https://www.businessinsider.com/snowflake-ceo-sridhar-ramaswamy-ai-hallucination-rates-2024-11