Despite months of dire warnings from experts about artificial intelligence disrupting elections worldwide, the anticipated chaos has largely failed to materialize in 2024. Instead of sophisticated deepfakes fooling voters and creating widespread misinformation, AI’s most significant electoral impact may have been inadvertently pushing Taylor Swift to endorse Democratic presidential nominee Kamala Harris.
In an Instagram post announcing her support for Harris, Swift revealed that her decision was influenced by an AI-generated image posted by Donald Trump showing her in an oversized American flag hat with the phrase “Taylor Wants You To Vote For Donald Trump.” The pop megastar stated that the fake image “really conjured up my fears around AI, and the dangers of spreading misinformation,” leading her to conclude she needed to be “very transparent” about her actual voting plans.
The reality of AI’s election interference has been far less dramatic than predicted. While there have been isolated incidents—including a phony Joe Biden robocall in New Hampshire and a deepfaked Kamala Harris campaign video—these attempts haven’t successfully fooled voters. Most AI-generated election content has taken the form of obvious memes and satirical videos on social media, with fact-checkers and platforms like X’s Community Notes quickly debunking any remotely convincing AI content.
Even foreign disinformation campaigns using AI have proven less effective than feared. Meta’s latest Adversarial Threat Report noted that while Russian, Chinese, and Iranian operations have incorporated AI, their “GenAI-powered tactics” have provided “only incremental productivity and content-generation gains.” Similarly, Microsoft’s August Threat Intelligence Report found that Russian and Chinese influence operations “have employed generative AI—but with limited to no impact.”
Microsoft observed that many actors have “pivoted back to techniques that have proven effective in the past—simple digital manipulations, mischaracterization of content, and use of trusted labels or logos atop false information.”
International elections have shown similar patterns. The Australian Strategic Policy Institute’s analysis of the UK’s July election found voters never faced the feared “tsunami of AI fakes targeting political candidates,” with only a handful of viral examples during the campaign. The UK’s Alan Turing Institute reported that of 112 national elections since early 2023, only 19 showed AI interference, with researchers warning that media amplification “risks amplifying public anxieties and inflating the perceived threat of AI to electoral processes.”
Key Quotes
It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.
Taylor Swift explained in her Instagram post why an AI-generated image falsely showing her support for Trump prompted her to publicly endorse Kamala Harris. This quote illustrates how AI misuse can backfire and create unintended political consequences.
In total, we’ve seen nearly all actors seek to incorporate AI content in their operations, but more recently many actors have pivoted back to techniques that have proven effective in the past—simple digital manipulations, mischaracterization of content, and use of trusted labels or logos atop false information.
Microsoft’s Threat Intelligence Report reveals that foreign influence operations have largely abandoned AI tactics in favor of traditional disinformation methods, suggesting that generative AI hasn’t provided the expected advantages in election interference campaigns.
Existing examples of AI misuse in elections are scarce, and often amplified through the mainstream media. This risks amplifying public anxieties and inflating the perceived threat of AI to electoral processes.
Researchers from the UK’s Alan Turing Institute warn that media coverage may be exaggerating AI’s electoral threat. Their study found only 19 out of 112 elections since 2023 showed AI interference, suggesting the actual risk is significantly lower than public perception.
While there’s no evidence these examples swayed any large number of votes, there were spikes in online harassment against the people targeted by the fakes as well as confusion among audiences over whether the content was authentic.
Sam Stockwell from the Australian Strategic Policy Institute highlights that while AI deepfakes haven’t changed election outcomes, they still cause real harm through harassment and confusion, pointing to secondary effects that deserve attention.
Our Take
The 2024 election cycle serves as an important stress test for AI’s capabilities in real-world manipulation scenarios, and the technology appears to have failed to live up to both its promise and its threat. This outcome suggests that the AI industry and media may have overestimated the technology’s persuasive power while underestimating human critical thinking and existing safeguards.
What’s particularly noteworthy is the irony of AI’s biggest electoral impact: the Taylor Swift endorsement represents an own-goal for Trump’s campaign, where misuse of AI technology backfired spectacularly. This demonstrates that in the current environment, clumsy or obvious AI manipulation may be more likely to galvanize opposition than achieve its intended effect.
However, we shouldn’t become complacent. The research indicates that while AI hasn’t been a game-changer yet, bad actors continue experimenting with these tools. The real concern may not be the 2024 election, but rather the cumulative erosion of trust in media and democratic institutions as AI-generated content becomes more sophisticated and ubiquitous.
Why This Matters
This story represents a crucial reality check for the AI industry and policymakers who have been preparing for worst-case scenarios around election interference. The gap between predicted AI-driven electoral chaos and actual impact reveals important insights about both AI’s current limitations and human resilience to obvious manipulation.
The findings suggest that while AI tools have become more accessible, their effectiveness in sophisticated influence operations remains limited. Voters appear more discerning than anticipated, and existing fact-checking mechanisms—both human and automated—are proving adequate for identifying AI-generated content. This has significant implications for how tech companies, governments, and platforms should allocate resources for election security.
However, the Taylor Swift incident demonstrates that AI’s indirect effects may be more consequential than direct manipulation attempts. Her endorsement, potentially reaching millions of followers, was triggered by AI misuse—showing how the technology’s impact can cascade in unexpected ways. The research also warns that while individual election results may not be swayed, the cumulative effect of AI-generated content could still “damage the broader democratic system” through erosion of trust and increased harassment of targeted individuals.
Recommended Reading
Related Stories
- Tech Tip: How to Spot AI-Generated Deepfake Images
- The Disinformation Threat to Local Governments
- Intelligence Chairman: US Prepared for Election Threats Years Ago
- Jenna Ortega Speaks Out Against Explicit AI-Generated Images of Her
Source: https://www.businessinsider.com/ai-election-taylor-swift-kamala-harris-2024-9