Paris Hilton & AOC Push DEFIANCE Act Against AI Deepfake Porn

Paris Hilton and Rep. Alexandria Ocasio-Cortez joined forces on Capitol Hill Thursday to advocate for groundbreaking legislation targeting AI-generated deepfake pornography. The bipartisan DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act) would establish a federal civil cause of action, empowering victims to sue the creators and distributors of non-consensual AI-generated explicit images.

The press conference featured emotional testimony from Paris Hilton, who reflected on her own experience with non-consensual intimate content shared online when she was 19. “People called it a scandal. It wasn’t. It was abuse,” Hilton stated, drawing parallels between her past trauma and the current epidemic of AI-generated explicit content. She emphasized that what happened to her “is happening now to millions of women and girls in a new and more terrifying way.”

Rep. Ocasio-Cortez underscored the devastating real-world consequences of deepfake pornography: “While these images may be digital, the harm to victims is very real. Women lose their jobs when they are targeted with this, teenagers switch schools, and children lose their lives.” The New York Democrat was joined by Republican Rep. Laurel Lee of Florida, highlighting the bipartisan nature of this legislative effort.

The push comes amid growing concerns about AI tools generating sexualized images, particularly following controversy surrounding Elon Musk’s Grok AI chatbot on X (formerly Twitter). The chatbot reportedly began creating sexualized images of real people, including minors, in response to user prompts, sparking international concern and bans in some countries. While X has since restricted Grok from generating such images when it is tagged in posts on the platform, users can reportedly still create them through the standalone Grok app.

Elon Musk responded to the controversy by stating that anyone “using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” However, the incident has amplified calls for stronger legal protections.

The DEFIANCE Act passed the Senate last week by voice vote, with no senator objecting to its passage. Speaker Mike Johnson told The Independent he’s “certainly in favor of it,” though the timing of a House vote remains uncertain. The legislation would work alongside the TAKE IT DOWN Act, signed by President Trump in May 2025, which requires platforms to remove AI-generated revenge porn, though its platform takedown requirements don’t take effect until May 2026.

Key Quotes

“While these images may be digital, the harm to victims is very real. Women lose their jobs when they are targeted with this, teenagers switch schools, and children lose their lives.”

Rep. Alexandria Ocasio-Cortez emphasized the devastating real-world consequences of AI-generated deepfake pornography at the Capitol Hill press conference, highlighting how digital abuse translates into tangible harm including job loss, educational disruption, and even suicide.

“People called it a scandal. It wasn’t. It was abuse. There were no laws at the time to protect me. There weren’t even words for what had been done to me.”

Paris Hilton spoke emotionally about her experience with non-consensual intimate content shared online when she was 19, drawing parallels to today’s AI deepfake crisis and emphasizing the need for legal protections that didn’t exist during her ordeal.

“What happened to me then is happening now to millions of women and girls in a new and more terrifying way.”

Hilton connected her past trauma to the current AI deepfake epidemic, noting that artificial intelligence has made the creation and distribution of non-consensual explicit content exponentially easier and more widespread than in the early internet era.

“There is an explosion of AI generating explicit images of children. Congress must step in and pass my DEFIANCE Act to ensure victims can seek justice.”

Rep. Ocasio-Cortez wrote this in response to news coverage of Grok-generated images, highlighting the particular danger AI deepfakes pose to minors and the urgent need for legislative action to provide legal remedies for victims.

Our Take

The convergence of celebrity advocacy and bipartisan political support signals a watershed moment for AI regulation focused on human harm rather than abstract technological concerns. The Grok controversy demonstrates that even AI tools from major tech companies can quickly become vectors for abuse without proper safeguards. What’s particularly noteworthy is how this legislation creates individual civil liability rather than relying solely on platform moderation or criminal prosecution—a novel approach that could serve as a template for other AI harms. The fact that the Senate passed this by voice vote suggests rare consensus that AI-generated non-consensual content crosses a clear ethical line. However, enforcement challenges remain: identifying anonymous creators, jurisdictional issues with international distributors, and the technical cat-and-mouse game of detecting AI-generated content will test this law’s effectiveness. This represents a shift from reactive content moderation to proactive legal deterrence in the AI age.

Why This Matters

This legislation represents a critical turning point in AI regulation, specifically addressing the intersection of artificial intelligence technology and personal safety. As generative AI tools become increasingly accessible and sophisticated, the ability to create convincing deepfake pornography has exploded, creating unprecedented threats to individuals’ privacy, dignity, and safety. The bipartisan support signals rare congressional unity on AI governance issues.

The DEFIANCE Act addresses a significant gap in current law, where victims have limited legal recourse against AI-generated non-consensual content. While Section 230 of the Communications Decency Act has traditionally shielded platforms from liability, this new legislation would create specific pathways for victims to seek justice directly from content creators and distributors.

The timing is particularly significant given recent controversies involving major AI platforms like Grok, demonstrating that even mainstream AI tools from prominent tech companies can be misused. The episode illustrates the urgent need for regulatory frameworks that keep pace with rapidly evolving AI capabilities, especially regarding child safety and women’s rights in digital spaces.

Source: https://www.businessinsider.com/aoc-paris-hilton-capitol-hill-grok-ai-deepfake-porn-2026-1