Meta is pushing deeper into AI-generated content by testing a controversial new feature that automatically inserts AI-created images—including personalized images of users themselves—directly into Facebook and Instagram feeds. Announced at Meta Connect, the company’s annual developer conference, this expansion represents a significant shift in how social media platforms may integrate artificial intelligence into the user experience.
The new feature builds on Meta’s “Imagine Me” tool, which launched in beta in July 2024 and initially allowed users to create AI-generated selfies for direct messages, stories, and profile pictures. The latest iteration takes the concept further by proactively placing AI-generated content in users’ feeds without them explicitly requesting it. According to Meta, these images could be “based on your interests or current trends” or may depict the user themselves.
Privacy and opt-out controls are built into the system. Meta confirmed that AI-generated images featuring a user’s face can only be created for those who have onboarded to the “Imagine yourself” feature by uploading photos and accepting the terms. These personalized AI images are only visible to the individual user unless they choose to share them. Users who encounter these AI posts can opt out by tapping the three dots in the corner and selecting either “hide” to stop similar posts or “stop seeing this content” to turn off suggested AI images entirely.
Industry experts are divided on the feature’s potential impact. Social media consultant Matt Navarra told Business Insider that Meta must “strike a balance between AI-powered features and genuine user-generated content.” While acknowledging that the novelty factor could drive engagement and keep users on the platform longer, Navarra warned that long-term success depends on quality and relevance. “If it’s just more AI slop in feeds, I’m not sure how, long-term, that will keep people engaged,” he cautioned, adding that users may also feel uneasy about their likeness being used in AI-generated images.
The feature has drawn sharp criticism from some quarters. Kevin Roose, co-host of The New York Times’ “Hard Fork” podcast, called it “the creepiest thing I can imagine them doing,” describing a scenario where users unexpectedly encounter AI-generated images of themselves in contexts they never created. The experiment offers an early glimpse of how Meta envisions social media feeds evolving as AI becomes more prevalent in daily life.
Key Quotes
There is a novelty factor and that in itself could drive engagement and possibly keep people in feeds and on the platform longer. However, the long-term response will really depend heavily on the quality and relevance of the AI-generated content because if it’s just more AI slop in feeds, I’m not sure how, long-term, that will keep people engaged without causing additional problems for Meta to deal with.
Social media consultant and industry analyst Matt Navarra provided this balanced assessment to Business Insider, highlighting both the potential benefits and significant risks of Meta’s AI content strategy. His concern about ‘AI slop’ reflects growing industry worries about low-quality AI-generated content flooding social platforms.
If it feels intrusive or repetitive, which doesn’t really align with their interests, then users probably will become quite disengaged. There’s also the potential for users to feel slightly uneasy about their likeness being used in AI-generated images or how customized or personalized it becomes.
Matt Navarra continued his analysis by identifying key user experience concerns that could undermine Meta’s AI content initiative. This quote captures the delicate balance Meta must strike between personalization and privacy, a challenge that will likely define the success or failure of this feature.
Imagine you’re talking about fishing with your friend, and all of a sudden, because you’ve clicked on some fishing stuff, you’re just scrolling through your Instagram feed and you see a picture of yourself in a fishing outfit going fishing. Like you are going to throw your phone into the nearest body of water and you’re never going to log on again.
Kevin Roose, co-host of The New York Times’ “Hard Fork” podcast, offered this vivid critique of the feature, calling it “the creepiest thing I can imagine them doing.” His hypothetical scenario illustrates the potential uncanny valley effect and user discomfort that could result from unexpected AI-generated self-images appearing in feeds.
Our Take
Meta’s aggressive push into AI-generated feed content represents a calculated gamble that could either revolutionize social media or trigger significant user backlash. The company appears to be betting that the engagement benefits of personalized AI content will outweigh privacy concerns and the ‘creepiness factor’ that critics like Kevin Roose have identified.
What’s particularly noteworthy is the proactive nature of this feature—Meta isn’t waiting for users to request AI content but is instead inserting it into feeds automatically. This marks a philosophical shift from AI as a tool to AI as a content creator and curator. The success or failure of this experiment will likely influence how aggressively other platforms pursue similar strategies.
The opt-out mechanisms Meta has implemented suggest the company anticipates resistance, but the question remains whether users will tolerate this level of AI integration or whether it crosses a line that triggers broader concerns about digital identity and platform control over personal likeness.
Why This Matters
This development marks a pivotal moment in the evolution of social media and represents Meta’s bold bet on AI-generated content becoming a core part of the user experience rather than an optional feature. It signals that major tech platforms are shifting beyond AI as a tool users consciously choose to employ, toward AI as an ambient presence that proactively shapes what users see and consume.
The implications extend far beyond Meta’s platforms. If successful, this could establish a new paradigm where AI-generated personalized content becomes standard across social networks, fundamentally changing the nature of social media from user-generated to AI-augmented or AI-initiated content. This raises critical questions about authenticity, user agency, and the psychological impact of encountering AI-generated versions of oneself.
For the broader AI industry, Meta’s experiment serves as a real-world stress test of consumer acceptance of pervasive AI integration. The balance between engagement and user discomfort will provide valuable data on how far companies can push AI features before triggering backlash. The outcome could influence how other platforms approach AI integration and may inform future regulatory discussions about AI-generated content, digital identity, and user consent in the age of generative AI.
Related Stories
- Tech Tip: How to Spot AI-Generated Deepfake Images
- Jenna Ortega Speaks Out Against Explicit AI-Generated Images of Her
- Photobucket is licensing your photos and images to train AI without your consent, and there’s no easy way to opt out
- Meta’s AI advisory council is overwhelmingly white and male, raising concerns about bias
- Meta’s Nick Clegg and Joel Kaplan to step down from key roles