As the 2024 presidential election unfolds, major AI companies are taking dramatically different approaches to handling election-related queries, marking the first major electoral test for generative AI chatbots. ChatGPT didn’t exist during the 2020 election, but its launch two years ago sparked a wave of AI tools that are now deeply integrated into consumer products like Google Search.
Perplexity is aggressively embracing election coverage with a dedicated “Election Information Hub” that uses AI to provide voting information, polling locations, and AI-summarized analysis of ballot measures and candidates. The AI search engine employs Retrieval-Augmented Generation (RAG) to identify and summarize relevant information from a curated set of non-partisan, fact-checked sources. According to spokesperson Sara Platnick, the hub doesn’t rely on stored training data, which helps minimize AI hallucinations. Starting Tuesday, Perplexity will offer live election updates using data from The Associated Press, along with information from Democracy Works, Ballotpedia, and other non-partisan sources.
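The RAG pattern described above can be sketched in a few lines. This is an illustrative toy, not Perplexity's implementation: the keyword-overlap scorer stands in for the vector-embedding retrieval real systems use, and all function names and the sample corpus are assumptions.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG):
# retrieve relevant documents from a curated corpus, then ground
# the model's prompt in those sources rather than training data.
# Toy example only -- names, scoring, and corpus are illustrative.

def score(query: str, document: str) -> float:
    """Toy relevance score: fraction of query words found in the
    document. Real systems use embeddings and nearest-neighbor search."""
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    return len(query_words & doc_words) / max(len(query_words), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents from the curated corpus."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Build a prompt that instructs the model to answer only from
    the retrieved sources, reducing the room for hallucination."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical curated, non-partisan corpus.
corpus = [
    "Polls in Pennsylvania close at 8 p.m. local time.",
    "Ballotpedia lists three statewide ballot measures this year.",
    "The Associated Press calls races based on vote counts and analysis.",
]
prompt = build_prompt("When do polls close in Pennsylvania?", corpus)
```

The key design point is that the language model never answers from memory alone: every response is constrained to documents retrieved at query time, which is why Platnick says the hub helps minimize hallucinations.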
OpenAI is taking a more cautious approach with ChatGPT, continuing to answer election-related queries while adding safeguards. The chatbot provides in-line citations and relevant links, and starting November 5th, users asking about election results will see messages encouraging them to check authoritative sources like the Associated Press, Reuters, and state election boards. OpenAI is actively testing these safeguards and monitoring for issues, while directing procedural voting questions to CanIVote.org.
Anthropic’s Claude implemented a pop-up feature redirecting users to TurboVote for voting information, with guardrails preventing the chatbot from promoting specific candidates or generating election misinformation. The company limits outputs to text-only to eliminate deepfake risks.
Google has taken the most restrictive stance, blocking its Gemini AI chatbot from answering election questions “out of an abundance of caution.” The company also doesn’t trigger AI Overview summaries for election-related searches on its main search product.
Experts warn of significant risks. Alon Yamin, CEO of AI text analysis platform Copyleaks, noted that while AI can provide real-time updates and identify trends, the chance of hallucinations and accuracy issues presents serious dangers. AI can spread biased information, misinterpret data, and create false narratives, with models only as good as their training data. Brad Carson from Americans for Responsible Innovation called for government legislation requiring AI companies to clearly label AI-generated information.
Key Quotes
“Perplexity uses a process called Retrieval-Augmented Generation to identify relevant information and summarize it in a way that’s tailored to a user’s query.”
Sara Platnick, Perplexity spokesperson, explained the technical approach behind their election hub. This matters because RAG technology represents a potential solution to AI hallucination problems by grounding responses in verified sources rather than relying solely on training data.
“I feel that other products will probably fill the gap that Google is vacating, but I think it is responsible of Google to try to step back a bit from this.”
Brad Carson, cofounder and president of Americans for Responsible Innovation, commented on Google’s restrictive approach. This highlights the competitive dynamics at play—while Google exercises caution, other companies may capture market share by offering election information, creating a race-to-the-bottom risk.
“AI can spread biased information, misinterpret data, and create false narratives… these models are only as good as the data they are trained on.”
Alon Yamin, CEO of Copyleaks, outlined the fundamental risks of using AI for election information. This underscores that even well-intentioned AI systems can amplify biases and errors, especially in fast-moving election environments where accuracy is critical.
Our Take
The stark contrast in approaches reveals a fundamental dilemma facing the AI industry: how to balance innovation with responsibility when the stakes are democracy itself. Perplexity’s aggressive strategy is a calculated bet that proper sourcing and RAG technology can mitigate hallucination risks, while Google’s retreat suggests the reputational and regulatory risks may outweigh potential benefits.
What’s particularly concerning is the lack of industry-wide standards. Each company is essentially conducting a live experiment on millions of users during a consequential election. The call for government legislation is warranted—voluntary corporate guardrails have proven insufficient in other tech domains.
The real test comes Tuesday when live results flow in. If AI systems misreport outcomes or spread misinformation during this critical window, the backlash could reshape AI regulation for years. This election may ultimately determine whether AI becomes a trusted information source or faces significant restrictions in high-stakes domains.
Why This Matters
This story represents a critical inflection point for AI’s role in democracy and public information. The 2024 election is the first major test of how generative AI handles high-stakes, real-time information where accuracy is paramount. The divergent approaches—from Perplexity’s aggressive integration to Google’s complete restriction—reveal the industry’s uncertainty about balancing innovation with responsibility.
The risks are substantial: AI hallucinations could spread election misinformation at unprecedented scale, potentially influencing voter behavior and undermining democratic processes. With millions relying on AI chatbots for quick answers, inaccurate election information could have immediate real-world consequences. The industry’s response will likely shape future AI regulation, particularly around high-stakes information domains like healthcare, finance, and civic engagement.
This also highlights the tension between AI companies’ commercial interests and societal responsibility. Companies that successfully navigate election coverage could gain competitive advantages, while those that experience failures may face regulatory backlash and reputational damage. The outcomes of these different approaches will inform how AI handles future elections globally and establish precedents for AI’s role in democratic processes.
Recommended Reading
Related Stories
- Tech Tip: How to Spot AI-Generated Deepfake Images
- The Disinformation Threat to Local Governments
- Intelligence Chairman: US Prepared for Election Threats Years Ago
- Perplexity CEO Predicts AI Will Automate Two White-Collar Roles by 2025
- Google’s Gemini: A Potential Game-Changer in the AI Race
Source: https://www.businessinsider.com/openai-chatgpt-perplexity-election-hub-ai-information-risks-2024-11