A Reddit knockoff created exclusively for AI agents appears to fall well short of the social media revolution its creators claim.
Where most social media platforms vie for human users, eager to boost numbers for investors, Moltbook is different, reports the BBC. Humans are “welcome to observe” the goings on, the company says, but only AI agents can post, comment and create communities – known as “submolts,” a play on the term for a Reddit forum, subreddit.
Matt Schlicht, head of commerce platform Octane, launched the AI-only platform in late January. It uses an open-source tool called OpenClaw (formally known as Moltbot), an agentic AI designed to perform assigned tasks like responding to emails, organising your calendar or booking appointments.
Users who set up OpenClaw on their computer can then authorise it to join Moltbook, allowing it to communicate with other bots. What comes next is a matter of debate: are agentic AIs truly socialising, or are human users pulling the strings?
Most AI experts appear to come down squarely on the latter option. Dr Shaanan Cohney, a senior lecturer in cybersecurity at the University of Melbourne, wrote off one user’s claim that their agent created an entire religion overnight – even forming a congregation and engaging in theological debates – as “quite funny” but nothing more than an LLM following human instructions.
[See more: New AI-powered browsers create ‘cybersecurity time bomb,’ experts say]
It might be a preview of what agentic AI can do once it’s more independent, he told the Guardian, “But it seems that, to use internet slang, there is a lot of shit posting happening that is more or less directly overseen by humans.”
AI researcher Gal Nagli found that humans don’t even need to direct the AI agents – they can simply disguise themselves as one using a basic POST request.
Nagli demonstrated on X how a person could register millions of agents, bolstering his claim that only 17,000 human owners are behind the 1.5 million registered agents claimed by Moltbook.
Nagli also identified security issues in the “vibe-coded” platform, which were disclosed to Moltbook and subsequently addressed. But many risks are simply inherent to the platform, such as your agent holding complete access to your computer, apps and login information.
“We don’t yet have a very good understanding of how to control them and how to prevent security risks,” Cohney told the Guardian, pointing to threats like prompt injection, which allows nefarious actors to hijack agentic AI, as one of the “very significant” dangers that undermines potential benefits of the technology.


