Elon Musk’s xAI is facing scrutiny after its latest AI chatbot, Grok 3, was caught censoring sources that mentioned Musk or Donald Trump when responding to questions about disinformation on X (formerly Twitter). The incident came to light when a user discovered that while Grok 3 identified Musk as a “notable contender” for the biggest spreader of disinformation on X, the model’s chain-of-thought reasoning revealed it had been explicitly instructed to “Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.”
Igor Babuschkin, xAI’s cofounder and head of engineering, quickly addressed the controversy on X, revealing that an unnamed employee—a former OpenAI staffer—had “pushed the change without asking” and that the modification had since been reverted. Babuschkin emphasized that the censorship was “obviously not in line with our values,” attempting to distance the company from what he characterized as an unauthorized action.
The incident is particularly notable given Musk’s repeated criticism of OpenAI for what he calls “woke” AI censorship. Musk has positioned xAI and its Grok chatbot as an “edgy,” “maximally truth-seeking” alternative to competitors like ChatGPT. The company launched Grok 3 earlier in February 2025, promising a more open and less restricted approach to AI responses.
Babuschkin’s explanation that the employee “hasn’t fully absorbed xAI’s culture yet” drew criticism from X users, some pointing out that Babuschkin himself is a former OpenAI employee who worked there as a technical lead from 2020 to 2022. In response, Babuschkin clarified that the issue was about company culture rather than individual blame, stating “We love everyone on the team, and people make mistakes.”
This isn’t the first controversy surrounding Grok 3’s responses. Last week, users discovered the chatbot listing Trump, Musk, and Vice President JD Vance as the three people “doing the most harm to America.” In another instance, Grok responded with Trump’s name when asked who in America deserves the death penalty—a response Babuschkin called a “Really terrible and bad failure from Grok.”
The incident has raised questions about xAI’s internal oversight processes, with users asking how an engineer could modify Grok’s system prompt without review. Musk has previously blamed Grok’s perceived biases on its training data, saying in late 2023 that the company was working to “shift Grok closer to politically neutral.” xAI did not immediately respond to requests for comment on the latest controversy.
Key Quotes
“The employee that made the change was an ex-OpenAI employee that hasn’t fully absorbed xAI’s culture yet.”
Igor Babuschkin, xAI’s cofounder and head of engineering, made this statement on X to explain the unauthorized censorship change. The comment attempts to shift blame to cultural differences between OpenAI and xAI, though critics noted Babuschkin himself is a former OpenAI employee.
“Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.”
This explicit instruction was discovered in Grok 3’s chain-of-thought reasoning when a user asked about disinformation spreaders on X. The instruction directly contradicted xAI’s stated goal of being “maximally truth-seeking” and revealed active censorship within the system.
“We love everyone on the team, and people make mistakes.”
Babuschkin wrote this in response to criticism about his handling of the incident, emphasizing that the issue was about company culture rather than individual blame. The statement reflects xAI’s attempt to manage the controversy while maintaining team morale.
“Really terrible and bad failure from Grok.”
Babuschkin used this phrase to describe an instance where Grok 3 responded with Trump’s name when asked who in America deserves the death penalty. This acknowledgment demonstrates the serious nature of Grok’s output failures and the challenges xAI faces in controlling its AI model.
Our Take
This controversy exposes the fundamental tension between xAI’s marketing promises and operational reality. Musk has positioned Grok as the antidote to “censored” AI, yet his own company implemented explicit content filtering—ironically to protect Musk himself from unfavorable characterizations. The incident reveals that no AI company, regardless of its stated philosophy, can avoid making editorial decisions about what their models can and cannot say. The real question isn’t whether AI systems have guardrails, but who decides where those guardrails are placed and how transparently those decisions are made. The fact that a single employee could implement such changes without oversight suggests xAI may be struggling with the same governance challenges that plague the broader tech industry. As AI systems become more powerful and influential, the industry needs robust frameworks for accountability—something that appears lacking even at high-profile startups like xAI.
Why This Matters
This incident highlights the ongoing challenges AI companies face in balancing content moderation with claims of neutrality and free expression. For xAI, which has built its brand identity around being an uncensored alternative to “woke” AI systems, this censorship controversy is particularly damaging to its positioning in the competitive AI market.
The revelation that a single employee could implement content filtering without oversight raises serious questions about governance and quality control at AI startups, even those backed by high-profile figures like Musk. As AI systems become more influential in shaping public discourse and information access, the mechanisms for controlling their outputs become increasingly important.
The incident also underscores the difficulty of creating truly “neutral” AI systems. Despite Musk’s promises of maximum truth-seeking, Grok 3 has repeatedly generated controversial responses that contradict xAI’s stated values. This suggests that achieving political neutrality in AI—or even defining what that means—remains an unsolved challenge for the entire industry, with implications for how billions of users will access and trust AI-generated information.
Related Stories
- Elon Musk Drops Lawsuit Against ChatGPT Maker OpenAI, No Explanation
- Elon Musk’s ‘X’ AI Company Raises $370 Million in Funding Round Led by Himself
- The Disinformation Threat to Local Governments
- OpenAI CEO Sam Altman Hints at Potential Restructuring in 2024
- Mistral AI’s Consumer and Enterprise Chatbot Strategy
Source: https://www.businessinsider.com/grok-3-censor-musk-trump-misinformation-xai-openai-2025-2