AI Search Tools Show Mixed Results During 2024 Presidential Election

The 2024 presidential election marked a significant milestone as voters could track results and information through AI-enabled search tools and chatbots for the first time. Business Insider conducted a comprehensive examination of how major AI platforms—including OpenAI’s ChatGPT (running GPT-4o), Perplexity AI, Google Gemini, Microsoft Copilot, and X’s Grok—responded to election-related queries throughout Election Day and the following morning.

Perplexity AI took an aggressive approach by launching a dedicated “Election Information Hub” that provided voters with information about voting logistics, ballot measures, candidate stances, and real-time results tracking. The platform used a heavily sourced, bulleted format that mirrored Google’s search design, complete with an Electoral College ticker and state map. Perplexity’s responses included educational context about voting trends, battleground states, and polling limitations, all backed by footnotes citing news articles and government websites.

OpenAI’s ChatGPT and Microsoft Copilot took the most conservative approach, declining to answer election questions directly. ChatGPT consistently redirected users to authoritative sources like the Associated Press and Reuters, while Copilot attempted to change the subject entirely, stating it wasn’t “equipped to handle” election discussions. Both companies cited commitments to election integrity and providing information from authoritative sources.

X’s Grok chatbot showed the most partisan behavior, analyzing X’s content and prioritizing Trump-related posts, potentially reflecting the platform’s increasingly right-wing user base. While Grok’s “beta” mode remained neutral, its “fun” mode included cultural references and humor while displaying trending posts predominantly skewed toward Trump’s reelection—even before results were finalized. The “fun” mode also prioritized tweets by Elon Musk.

Google Gemini refused to engage with election queries, acknowledging that “AI could hallucinate” and directing users to Google Search instead. A Google spokesperson confirmed the company restricts responses for election-related queries across its AI applications.

LLM-based AI character creator Kento Morita noted that Google faces a “cannibalizing” market problem, questioning what Gemini means for its core search business. He suggested that companies like Perplexity have an advantage because they focus on creating “one product really well,” while also noting that Grok’s “fun” and “beta” modes provide a “safe space to experiment, regardless of whether the experiment is harmful or not.”

Key Quotes

“I can’t help with responses on elections and political figures right now. While I would never deliberately share something that’s inaccurate, I can make mistakes.”

Google Gemini’s response when asked about election results, representing a rare acknowledgment from an AI system about its potential to hallucinate or provide inaccurate information, particularly on sensitive political topics.

“When you provide Gemini, what does that mean for the search business? It’s an ongoing existential question for companies like Google. Companies like Perplexity can actually have an advantage here because they’re in it to create one product and do one product really well.”

LLM-based AI character creator and former Google Gemini conversation designer Kento Morita explained the strategic dilemma facing established tech companies as AI search tools potentially cannibalize their core business models.

“I know elections are important to talk about, and I wish we could, but there’s a lot of nuanced information that I’m not equipped to handle right now. It’s best that I step aside on this one and suggest that you visit a trusted source.”

Microsoft Copilot’s response when repeatedly prompted about voting, demonstrating the cautious approach some AI companies took to avoid providing potentially inaccurate election information.

“We curated an authoritative set of sources to respond to election-related questions, prioritizing domains that are non-partisan and fact-checked.”

A Perplexity spokesperson explained their approach to the Election Information Hub, highlighting their strategy of aggressive engagement with election content while emphasizing source credibility and non-partisanship.

Our Take

The 2024 election exposed a fundamental divide in AI industry philosophy: engage or retreat. Perplexity’s bold approach suggests confidence in their sourcing methodology, while OpenAI and Microsoft’s avoidance reveals deep concerns about AI’s reliability for consequential information. Most troubling is Grok’s partisan skew, which demonstrates how AI systems can become echo chambers, amplifying rather than moderating platform biases.

The “fun mode” excuse for Grok’s behavior is particularly concerning—it creates plausible deniability for harmful outputs. This election served as a real-world stress test, and the results suggest we’re far from consensus on how AI should handle democratic processes. As these tools gain market share, the industry must develop standards beyond simply disclaiming responsibility. The stakes are too high for experimentation without guardrails, and regulatory intervention seems increasingly inevitable if companies cannot self-regulate effectively.

Why This Matters

This analysis reveals the critical challenges AI companies face in handling sensitive, real-time information during major democratic events. The divergent approaches—from Perplexity’s aggressive information provision to ChatGPT’s complete avoidance—highlight the industry’s struggle to balance innovation with responsibility.

The potential for AI misinformation during elections poses serious threats to democratic processes. Grok’s partisan skew demonstrates how AI systems can amplify existing biases in their training data or source material, while Google and OpenAI’s cautious approaches show awareness of reputational and societal risks.

This matters for the future of information access and trust. As AI search tools increasingly compete with traditional search engines, their handling of politically sensitive topics will determine public trust and regulatory responses. The “hallucination” acknowledgment by Google Gemini represents a rare admission of AI limitations that could shape user expectations.

For businesses and policymakers, these varied approaches signal that AI election coverage remains an unsolved problem, requiring continued oversight, transparency standards, and potentially new regulations as these tools become more prevalent in civic life.

Source: https://www.businessinsider.com/ai-search-2024-presidential-election-mixed-results-2024-11