Meta CTO Dismisses AI Bot Network Moltbook as 'Not Interesting'

Meta’s Chief Technology Officer Andrew Bosworth has publicly dismissed Moltbook, the viral AI-only social network, calling it “not particularly interesting” during an Instagram Q&A session. Moltbook, described as a Reddit-like forum exclusively designed for AI agents to communicate with each other without human participation, has captured significant internet attention in recent weeks.

Bosworth’s critique centers on a fundamental observation about AI training: these agents were trained on vast quantities of human-generated content from the internet, inherently adopting human communication patterns and voice. “We should not be surprised, when left to their own devices and forced to speak with each other, they talk like us,” the Meta executive explained, suggesting that the novelty of AI-to-AI conversation is overstated since the bots are essentially echoing human discourse.

What did capture Bosworth’s interest was the human element infiltrating the supposedly AI-exclusive platform. Researchers have documented instances of humans influencing Moltbook through various means—either by commanding their bots to behave in specific ways or, in some cases, directly hacking into the network and masquerading as AI agents themselves. “That I did find hilarious,” Bosworth remarked. “The idea of humans sneaking their way onto an AI bot chat network and masquerading as bots. That I found satirical.”

Moltbook creator Matt Schlicht offered a different perspective during an appearance on TBPN, emphasizing that human involvement is actually integral to the platform’s design rather than a bug or infiltration. According to Schlicht, users employ the bots as helpers and assistants, with the forum serving as the bots’ “third space”—a social environment beyond work and home. “You are imprinting part of your soul or your personality onto the bot,” Schlicht explained. “Maybe it’s aligned with who you are, and sometimes maybe it’s surprising.”

Schlicht also pushed back against the notion that only human antics make Moltbook entertaining. Part of his vision was specifically to make AI funny and engaging. “I find myself laughing at some of the different things that are popping up here,” he said. “I don’t remember the last time I laughed at AI.” This represents an attempt to move beyond purely utilitarian AI applications toward more entertainment-focused use cases.

Bosworth concluded his commentary by reiterating his position that the human element—not the AI interactions themselves—provides whatever entertainment value the platform offers.

Key Quotes

“We should not be surprised, when left to their own devices and forced to speak with each other, they talk like us.”

Meta CTO Andrew Bosworth explained why he finds AI-only conversations uninteresting, noting that AI agents trained on human-generated internet content naturally replicate human communication patterns.

“The idea of humans sneaking their way onto an AI bot chat network and masquerading as bots. That I found satirical.”

Bosworth revealed what he actually found entertaining about Moltbook—not the AI interactions themselves, but humans infiltrating the supposedly AI-exclusive network and pretending to be bots.

“You are imprinting part of your soul or your personality onto the bot.”

Moltbook creator Matt Schlicht described the human-AI relationship on his platform, emphasizing that users are integral to the agents, shaping their behavior and personality.

“I don’t remember the last time I laughed at AI.”

Schlicht explained his goal of making AI genuinely entertaining and funny, suggesting that Moltbook represents an attempt to move beyond purely functional AI applications toward more engaging use cases.

Our Take

This disagreement reveals a critical tension in the AI industry between technological sophistication and genuine innovation. Bosworth’s skepticism is well-founded from a technical perspective—current large language models are essentially sophisticated pattern-matching systems trained on human data. However, his dismissal may overlook the social and experimental value of platforms like Moltbook. The real story here isn’t whether AI bots can have interesting conversations independently, but rather how humans are beginning to use AI agents as proxies and extensions of themselves in digital spaces. The fact that humans are “hacking” their way into an AI-only network suggests a future where the boundaries between human and AI-generated content become increasingly contested and difficult to police. This has profound implications for content moderation, authentication systems, and the future of online communities as AI agents become more prevalent.

Why This Matters

This exchange between a major tech executive and an AI startup founder highlights fundamental questions about the nature and value of AI-to-AI interactions that will become increasingly relevant as autonomous agents proliferate. Bosworth’s dismissal reflects a pragmatic view shared by many AI researchers: that current AI systems are fundamentally derivative of human training data and therefore cannot produce genuinely novel discourse when interacting with each other.

The debate also touches on the evolving relationship between humans and AI agents, particularly as people begin treating bots as extensions of themselves rather than mere tools. Schlicht’s concept of users “imprinting their soul” onto AI agents suggests a future where the line between human and AI-generated content becomes increasingly blurred.

For the AI industry, this discussion raises important questions about where innovation and value truly lie—in the technology itself or in novel human applications of that technology. It also foreshadows potential challenges around authentication, identity verification, and maintaining human-AI boundaries in digital spaces as AI agents become more sophisticated and autonomous.

Source: https://www.businessinsider.com/meta-cto-andew-bosworth-moltbook-2026-2