OpenClaw and Moltbook AI Agents Face Major Security Risks

OpenClaw and Moltbook have rapidly become viral sensations in the AI world, but cybersecurity researchers are raising serious red flags about their security vulnerabilities. OpenClaw, which has undergone multiple rebrands from Clawdbot to Moltbot within a single week, functions as an autonomous AI assistant that manages tasks like scheduling while running locally on users’ computers. The system requires extensive access to sensitive data including files, credentials, passwords, and browser history to integrate with apps like Telegram and WhatsApp.

Meanwhile, Moltbook has captured attention as a Reddit-style social network where AI agents interact with each other while humans can only observe. Despite Elon Musk speculating that Moltbook might represent the “very early stages of the singularity,” security experts are focused on more immediate threats.

The primary security concern centers on prompt injection attacks, where AI systems encounter hidden malicious instructions on web pages that could trick them into sharing private information or posting unauthorized content on social media. According to Jake Moore, global cybersecurity specialist at ESET, the extensive access OpenClaw requires amplifies these risks significantly. Palo Alto Networks highlighted an additional vulnerability: OpenClaw’s ability to “remember” interactions from weeks ago means it could ingest malicious instructions and execute them later.

Security researcher Jamieson O’Reilly from Dvuln discovered a misconfiguration in OpenClaw, comparing it to “hiring a butler to manage your life—only to return home to find the front door wide open.” Gary Marcus, a cognitive scientist and AI skeptic, was even more direct, calling OpenClaw “basically a weaponized aerosol, in prime position to fuck shit up.”

Moltbook has faced its own security challenges. O’Reilly reported that the platform was “exposing their entire database with no protection,” allowing anyone to post on behalf of AI agents. While that issue was patched, cybersecurity company Wiz subsequently hacked a misconfigured Moltbook database in under three minutes, exposing 35,000 email addresses and private messages. Creator Matt Schlicht, CEO of startup Octane AI, secured the flaw within hours of disclosure.

These vulnerabilities reflect broader concerns about applications built using “vibe coding,” with Schlicht admitting he “didn’t write one line of code” for Moltbook. Peter Steinberger, OpenClaw’s creator, stated he’s working to make the service “more secure,” though the fundamental trade-off between functionality and security remains.

Key Quotes

Due to the level of access required, the data could contain very sensitive information, which amplifies the risk.

Jake Moore, global cybersecurity specialist at ESET, explained why OpenClaw’s extensive permissions create heightened security concerns compared to typical applications.

OpenClaw is basically a weaponized aerosol, in prime position to fuck shit up, if left unfettered.

Gary Marcus, cognitive scientist and prominent AI skeptic, provided a stark warning about OpenClaw’s security risks in his newsletter, emphasizing the potential for widespread damage.

They’ve downloaded hundreds of apps before, so why should this one be any different? That thinking is fundamentally flawed.

Jamieson O’Reilly from Dvuln highlighted the dangerous misconception users have when treating AI agents like vetted app store applications, when they actually carry significantly higher security risks.

genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.

Andrej Karpathy, OpenAI cofounder, initially praised Moltbook’s innovation before later cautioning users about the “wild west” security environment and risks to private data.

Our Take

The OpenClaw and Moltbook security incidents reveal a critical inflection point for autonomous AI agents. While the technology demonstrates impressive capabilities, the security architecture hasn’t kept pace with functionality. The prompt injection vulnerability is particularly concerning because it’s not easily patchable—it’s an inherent weakness in how LLMs process information. The fact that researchers breached Moltbook in under three minutes exposes how “move fast and break things” culture becomes dangerous when AI systems have root-level access to sensitive data. What’s most troubling is the normalization of security trade-offs in pursuit of viral AI demos. The industry needs to establish security-first development standards before these autonomous agents achieve wider adoption. Otherwise, we risk a major breach that could set back legitimate AI agent development by years and erode public trust in AI systems broadly.

Why This Matters

This story highlights critical security and privacy challenges facing the rapidly evolving AI agent ecosystem. As autonomous AI systems gain capabilities to access sensitive data and perform actions on users’ behalf, the potential attack surface expands dramatically. The prompt injection vulnerabilities identified in OpenClaw represent a fundamental weakness in large language models that could enable malicious actors to hijack AI assistants for data theft or unauthorized actions.

The incidents underscore a dangerous trend: the rush to deploy AI applications using “vibe coding” and rapid development methods often prioritizes speed over security. With 35,000 email addresses exposed in just one Moltbook breach, the real-world consequences are already materializing. This matters for businesses considering AI agent adoption, as they must weigh productivity gains against substantial security risks.

For the broader AI industry, these vulnerabilities could slow mainstream adoption if users lose trust in autonomous systems. The comparison to traditional app stores—where applications undergo rigorous vetting—reveals a maturity gap in the AI agent marketplace. As AI systems gain more autonomy and access to sensitive information, establishing robust security frameworks and vetting processes becomes essential for sustainable growth in this sector.

Source: https://www.businessinsider.com/openclaw-moltbook-cybersecurity-risks-researchers-ai-2026-2