OpenAI has revealed that its AI systems blocked more than 250,000 requests to generate fake images of political candidates in the month leading up to the 2024 presidential election. The disclosure highlights both the scale of attempted election-related AI abuse and the effectiveness of the company’s safeguards.
The blocked requests specifically targeted images of President-elect Trump, Vice President Harris, Vice President-elect Vance, President Biden, and Governor Walz using DALL·E, OpenAI’s AI image generator integrated into ChatGPT. This represents a significant volume of attempted deepfake creation that could have spread misinformation during the critical final weeks of the campaign.
OpenAI implemented multiple “guardrails” to prevent election-related abuse of its AI products throughout the campaign season. These measures came in response to widespread concerns that AI technology would be weaponized during the election to create deepfakes and spread conspiracy theories. Those fears were not unfounded: in January, New Hampshire voters received robocalls featuring an AI-generated imitation of President Biden’s voice discouraging them from voting in the state’s presidential primary.
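OpenAI has not published how these guardrails are built, so the following is a minimal, hypothetical sketch rather than the company’s actual method: a pre-generation filter that refuses image prompts mentioning the named candidates. The function name, candidate list, and refusal message are all illustrative assumptions.

```python
# Hypothetical sketch only: OpenAI has not disclosed how its production
# guardrails work. This models one plausible layer, a pre-generation
# filter that refuses image prompts referencing named candidates.

CANDIDATE_NAMES = {
    "donald trump", "kamala harris", "jd vance", "joe biden", "tim walz",
}

REFUSAL = "I can't generate images of real political candidates."

def guard_image_prompt(prompt: str) -> str | None:
    """Return a refusal for prompts that target a listed candidate,
    or None if the prompt may proceed to the image model."""
    text = prompt.lower()
    if any(name in text for name in CANDIDATE_NAMES):
        return REFUSAL
    return None

# A blocked request never reaches the image generator.
assert guard_image_prompt("Photo of Joe Biden conceding the race") == REFUSAL
assert guard_image_prompt("A watercolor of a lighthouse") is None
```

A real system would not rely on substring matching, which is trivially evaded by misspellings or descriptive phrasing; production filters typically combine trained classifiers with policy checks. The sketch only conveys the overall shape of the safeguard.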
Beyond blocking image generation, ChatGPT provided approximately 1 million responses in the month before November 5 directing users who asked logistical questions about voting to CanIVote.org, a voting-information site run by the National Association of Secretaries of State. This partnership ensured users received accurate, non-partisan voting information from an authoritative source.
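Again, OpenAI has not described the mechanics, but the referral behavior can be sketched as a simple router that intercepts voting-logistics questions before the model answers. The keyword list and messages below are hypothetical placeholders for whatever intent classification the production system actually uses.

```python
# Hypothetical sketch of the referral behavior described above: answer
# logistical voting questions with a pointer to CanIVote.org instead of
# a model-generated answer. Keyword matching stands in for real intent
# classification; this is an illustration, not OpenAI's implementation.

VOTING_KEYWORDS = (
    "register to vote", "polling place", "vote by mail",
    "absentee ballot", "voter id",
)

CANIVOTE_REFERRAL = (
    "For accurate information about voting in your state, visit "
    "CanIVote.org, a site run by the National Association of "
    "Secretaries of State."
)

def route_voting_query(message: str) -> str | None:
    """Return the CanIVote.org referral for logistical voting questions,
    or None to let the model answer normally."""
    text = message.lower()
    if any(keyword in text for keyword in VOTING_KEYWORDS):
        return CANIVOTE_REFERRAL
    return None

print(route_voting_query("Where is my polling place in Ohio?"))
```

The Election Day behavior described in the next paragraph follows the same pattern: queries about results receive a referral to wire services such as the Associated Press and Reuters instead of a model-generated answer.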
On Election Day and the following day, ChatGPT delivered around 2 million responses that referred users to established news organizations like the Associated Press and Reuters for election results, rather than providing its own analysis or potentially premature race calls. This approach contrasted sharply with other AI chatbots: OpenAI noted that ChatGPT avoided expressing political opinions on candidates, unlike Elon Musk’s Grok AI, which openly expressed excitement about Trump’s victory.
The company’s Friday blog post detailing these statistics offers one of the first comprehensive looks at how an AI company worked to prevent its technology from being exploited for election interference during the 2024 campaign.
Key Quotes
“In the month leading up to Election Day, we estimate that ChatGPT rejected over 250,000 requests to generate DALL·E images of President-elect Trump, Vice President Harris, Vice President-elect Vance, President Biden, and Governor Walz.”
OpenAI disclosed this statistic in a Friday blog post, providing the first concrete data on the scale of attempted election-related deepfake creation using its AI tools during the critical final month of the 2024 presidential campaign.
“Around 2 million ChatGPT responses included this message on Election Day and the day following.”
OpenAI reported this figure regarding responses that directed users to news organizations like the Associated Press and Reuters for election results, demonstrating the company’s effort to ensure users received information from authoritative journalistic sources rather than AI-generated analysis.
Our Take
OpenAI’s transparency in releasing these statistics is commendable and sets an important standard for the AI industry. The 250,000 blocked requests represent just one company’s experience with one AI tool; the actual scale of attempted AI-driven election manipulation across all platforms is likely far larger. What’s particularly noteworthy is the contrast between OpenAI’s cautious approach and competitors like Grok AI, which openly expressed political opinions. This divergence suggests the industry lacks consensus on appropriate AI behavior during elections. The effectiveness of OpenAI’s guardrails is encouraging, but the high volume of blocked requests shows that demand for election-related deepfakes is substantial. As AI technology becomes more accessible through open-source models and competing services with fewer restrictions, the challenge of preventing election-related AI abuse will only intensify, making industry-wide standards and potential regulation increasingly necessary.
Why This Matters
This disclosure marks a critical milestone in understanding AI’s role in democratic processes and demonstrates that proactive safeguards can meaningfully mitigate election-related AI abuse. The 250,000 blocked image requests reveal the enormous scale of attempted deepfake creation, validating concerns about AI-generated misinformation while showing that technical guardrails can work.
The story establishes important precedents for AI governance during elections. OpenAI’s approach—blocking candidate image generation, redirecting to authoritative sources, and maintaining political neutrality—provides a blueprint for other AI companies. This becomes increasingly critical as AI tools become more accessible and sophisticated.
For businesses and policymakers, this highlights the ongoing tension between AI capabilities and responsible deployment. While the technology to create convincing deepfakes exists, companies can implement effective restrictions. However, the sheer volume of blocked requests suggests bad actors are actively seeking to exploit AI for political manipulation, underscoring the need for continued vigilance, industry standards, and, potentially, regulatory frameworks to ensure AI doesn’t undermine democratic institutions in future elections.