As the world braced for a potential tsunami of AI-generated disinformation during the massive 2024 election cycle, AI expert Oren Etzioni prepared for the worst. With major democracies including India, Indonesia, and the United States heading to the polls, and generative AI having been publicly available for about a year, concerns about AI deepfakes disrupting democratic processes reached fever pitch.
Etzioni, who has studied and worked on artificial intelligence for over a decade, responded by founding TrueMedia.org—a nonprofit organization that leverages AI-detection technologies to help users determine the authenticity of online videos, images, and audio content. The platform launched its early beta version in April 2024, positioning itself as a critical defense against the anticipated wave of synthetic media manipulation.
Surprisingly, the expected barrage never materialized. “It really wasn’t nearly as bad as we thought,” Etzioni admitted, though he cautions that the reprieve may be temporary. He attributes the relatively quiet election cycle to two primary factors: traditional disinformation methods remain effective without requiring sophisticated AI tools, and current generative AI technology hasn’t quite reached the sophistication needed to create truly convincing deepfake videos at scale.
“Out-and-out lies and conspiracy theories were prevalent, but they weren’t always accompanied by synthetic media,” Etzioni explained. He notes that creating the most realistic and egregious deepfake videos remains technically challenging, and the knowledge of how to produce them hasn’t fully penetrated malicious online communities.
However, Etzioni is certain that high-end AI video-generation capabilities will eventually arrive, whether in the next major election cycle or the one following. This inevitability has prompted TrueMedia to share key lessons from 2024: democracies remain unprepared for worst-case AI deepfake scenarios, purely technical solutions are insufficient, AI regulation is necessary, and social media platforms must play a more active role.
TrueMedia currently achieves approximately 90% accuracy in detecting synthetic media, though users frequently request higher precision. Etzioni acknowledges that 100% accuracy is impossible, necessitating human analysts for edge cases where users question the technology’s decisions. The organization plans to publish research on its detection efforts and is exploring licensing deals for its AI models, which have been refined through analyzing numerous uploads and deepfakes throughout the election season.
Key Quotes
“We’re going into the jungle without bug spray.”
Oren Etzioni used this metaphor to describe his concerns heading into the 2024 election cycle, illustrating how unprepared democracies were for potential AI-powered disinformation campaigns despite knowing the threat existed.
“It really wasn’t nearly as bad as we thought. That was good news, period.”
Etzioni reflected on the surprisingly muted impact of AI deepfakes during the 2024 elections, acknowledging relief while remaining cautious about future threats as the technology continues to evolve.
“Some of the most egregious videos that are truly realistic — those are still pretty hard to create. There’s another lap to go before people can generate what they want easily and have it look the way they want.”
Etzioni explained why AI deepfakes didn’t flood the 2024 elections, noting that current generative AI technology hasn’t yet reached the sophistication needed to create highly convincing fake videos at scale, though he expects this capability to arrive soon.
“Out-and-out lies and conspiracy theories were prevalent, but they weren’t always accompanied by synthetic media.”
This observation from Etzioni highlights that traditional disinformation tactics remain effective without requiring sophisticated AI tools, suggesting that when deepfake technology does mature, the threat will compound existing problems rather than replace them.
Our Take
Etzioni’s experience with TrueMedia offers a sobering preview of democracy’s AI future. The 2024 near-miss shouldn’t breed complacency; it’s a warning shot. Particularly concerning is the inevitability Etzioni describes: high-quality deepfake capabilities will arrive, and when they do, our institutions will remain fundamentally unprepared. TrueMedia’s 90% detection accuracy, while impressive, exposes a critical vulnerability: in an election, a 10% miss rate means roughly one in ten deepfakes slips through undetected, potentially enough to swing outcomes or erode trust irreparably. The acknowledgment that purely technical solutions are insufficient is crucial; it points to regulatory frameworks and platform accountability that do not yet exist. Most troubling is the recognition that traditional lies already work without AI enhancement, suggesting deepfakes will amplify rather than replace existing disinformation ecosystems. Licensing interest in TrueMedia’s models indicates growing awareness, but awareness without action is insufficient. The clock is ticking on implementing comprehensive safeguards before the next election cycle.
Why This Matters
This story represents a critical inflection point in understanding AI’s role in democratic processes. While 2024 provided a temporary reprieve from AI-powered election interference, Etzioni’s warnings underscore that this is merely a grace period before the technology matures. The 90% detection accuracy achieved by TrueMedia highlights both the promise and limitations of AI-powered countermeasures—a reality that demands urgent attention from policymakers, tech platforms, and civil society.
The article reveals a troubling truth: democracies are structurally unprepared for sophisticated AI deepfakes, and purely technical solutions won’t suffice. This necessitates a multi-stakeholder approach combining regulation, platform accountability, and public awareness. As generative AI capabilities continue advancing rapidly, the window for implementing robust safeguards is narrowing. The fact that traditional disinformation remains effective without AI suggests that when bad actors do master deepfake technology, the impact could be far more damaging. Future elections face an unprecedented threat that requires immediate, coordinated action across government, technology, and media sectors.
Related Stories
- Tech Tip: How to Spot AI-Generated Deepfake Images
- Intelligence Chairman: US Prepared for Election Threats Years Ago
- The Disinformation Threat to Local Governments
- Jenna Ortega Speaks Out Against Explicit AI-Generated Images of Her
Source: https://www.businessinsider.com/ai-deepfakes-election-year-truemedia-oren-etzioni-2024-12