South Korea Criminalizes Deepfake Porn Viewing with Prison Time

South Korean lawmakers have enacted groundbreaking legislation that criminalizes not just the creation, but also the possession and viewing of deepfake pornography, marking one of the world’s most comprehensive legal responses to AI-generated sexual content. The new law, passed on Thursday and awaiting presidential approval, imposes severe penalties of up to three years in prison or fines reaching 30 million won (approximately $22,870) for anyone caught watching, saving, or purchasing deepfake pornographic material.

This legislation significantly expands South Korea’s existing legal framework, which already criminalized the creation of sexually explicit deepfakes with penalties of up to five years in prison or fines of 50 million won ($38,109). The new consumer-focused approach represents a paradigm shift in how governments are addressing the proliferation of AI-generated sexual content, targeting demand rather than just supply.

The legislative action comes amid a deepfake crisis in South Korea, where AI-generated pornography has become alarmingly prevalent. According to a 2023 report by Security Hero, a US-based identity theft protection startup, South Korean singers and actresses were the most commonly targeted group globally, comprising 53% of individuals featured in deepfake pornography. The report also revealed that of the 95,820 deepfake videos online in 2023, a staggering 98% were pornographic in nature.

The urgency of the situation became apparent earlier this month when South Koreans protested in Seoul demanding an end to non-consensual deepfake porn, which is frequently distributed through Telegram chatrooms. Authorities discovered a vast network of these chatrooms last month, some targeting school and university staff and students, prompting a nationwide crackdown. South Korean regulators met with Telegram, resulting in the removal of 148 videos.

This legislative development reflects a global trend toward stricter regulation of AI-generated content. In the United States, bipartisan legislation led by Senators Ted Cruz and Amy Klobuchar aims to criminalize the publication of non-consensual, sexually exploitative images, including AI-generated deepfakes. South Korea’s comprehensive approach, however, sets a new international benchmark by holding consumers accountable alongside creators.

Key Quotes

"South Korean singers and actresses were the most commonly targeted group, making up 53% of the individuals featured in deepfake pornography."

This finding from Security Hero’s 2023 report reveals the disproportionate targeting of South Korean public figures, highlighting why the country has taken such aggressive legislative action against deepfake pornography.

"The total number of deepfake videos online was 95,820, with 98% of those being pornographic in nature."

Security Hero’s data from 2023 underscores the overwhelming use of deepfake technology for creating non-consensual sexual content, demonstrating that this AI application is primarily being weaponized for exploitation rather than legitimate purposes.

Our Take

South Korea’s dual-pronged approach—targeting both creators and consumers of deepfake pornography—represents the most comprehensive legislative response to AI-generated sexual exploitation to date. This is particularly significant because it acknowledges that demand drives supply in the deepfake economy. By criminalizing viewing and possession, lawmakers are attempting to eliminate the market incentive for creating such content. However, enforcement will be challenging, requiring sophisticated detection capabilities and international cooperation, especially given the role of platforms like Telegram in distribution.

The legislation also raises important questions about how AI companies will be held accountable for misuse of their technologies. As generative AI becomes more powerful and accessible, we’re likely to see more governments adopt similar frameworks, potentially creating a patchwork of regulations that AI companies must navigate. This could accelerate the development of technical safeguards and watermarking systems to prevent misuse of AI image and video generation tools.

Why This Matters

This legislation represents a watershed moment in AI regulation, demonstrating how governments are evolving their approach to combat the dark side of generative AI technology. By criminalizing consumption alongside creation, South Korea is addressing the economic incentive structure that fuels deepfake pornography production. This matters because deepfake technology has become increasingly accessible and sophisticated, enabling bad actors to create convincing fake content with minimal technical expertise.

The law’s significance extends beyond South Korea’s borders, potentially setting a precedent for other nations grappling with similar issues. As AI-generated content becomes more prevalent and realistic, the legal frameworks established now will shape how societies balance technological innovation with individual rights and dignity. The focus on protecting victims—particularly the targeting of South Korean entertainers and students—highlights how AI technology can amplify existing patterns of harassment and exploitation.

For the AI industry, this development signals that regulatory scrutiny will intensify, particularly for technologies that can be weaponized for harm. Companies developing generative AI tools may face increased pressure to implement safeguards, and platforms hosting user-generated content will need robust detection and removal systems for deepfake material.


Source: https://www.businessinsider.com/south-korea-threatens-deepfake-porn-viewers-three-years-prison-fine-2024-9