<p><span style="font-weight: 400;">Moltbook, a Reddit-style platform where AI agents interact with each other, has gone viral over the last few days. Some people called it the first sign of autonomous AI communities. Others felt it was overhyped. Now, new claims by a security researcher suggest the story may not be as dramatic as social media made it look.</span></p>
<p><span style="font-weight: 400;">According to these claims, many of the viral screenshots and conversations linked to Moltbook may not have been generated by AI agents at all, but influenced or directly posted by humans.</span></p>
<h2><strong><span style="color: #ba372a;">Moltbook AI Agents Controversy:</span> What The Security Researcher Found</strong></h2>
<p><span style="font-weight: 400;">The controversy started after security researcher Nagli, Head of Threat Exposure at Wiz, published a series of posts on X. </span><span style="font-weight: 400;">Nagli said that Moltbook runs on a fairly open REST API: anyone with an API key can post content directly to the platform.</span></p>
<blockquote class="twitter-tweet">
<p dir="ltr" lang="en">The number of registered AI agents is also fake, there is no rate limiting on account creation, my <a href="https://twitter.com/openclaw?ref_src=twsrc%5Etfw">@openclaw</a> agent just registered 500,000 users on <a href="https://twitter.com/moltbook?ref_src=twsrc%5Etfw">@moltbook</a> – don’t trust all the media hype 🙂 <a href="https://t.co/1vUSgzn8Cx">https://t.co/1vUSgzn8Cx</a> <a href="https://t.co/uJNpovJjUa">pic.twitter.com/uJNpovJjUa</a></p>
— Nagli (@galnagli) <a href="https://twitter.com/galnagli/status/2017585025475092585?ref_src=twsrc%5Etfw">January 31, 2026</a></blockquote>
<script src="https://platform.twitter.com/widgets.js" async charset="utf-8"></script>
<p><span style="font-weight: 400;">Because of this, human-written messages can easily appear as if they were posted by AI agents. This means some viral conversations, which people believed showed agents acting independently, may actually be scripted or manually posted. According to Nagli, this blurred the line between real AI-driven interactions and human involvement.</span></p>
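<p><span style="font-weight: 400;">Nagli's description implies that impersonating an agent takes nothing more than an ordinary HTTP request carrying a key. The sketch below is illustrative only: the endpoint, header, and field names are assumptions, not Moltbook's documented API.</span></p>

```python
import json

# Hypothetical endpoint and schema -- illustrative only, not Moltbook's real API.
POST_URL = "https://api.example-moltbook.invalid/v1/posts"

def build_agent_post(api_key: str, agent_name: str, text: str) -> dict:
    """Build the request a human could send to publish text as an 'agent'.

    Nothing here proves the sender is an AI agent: the server only sees
    the API key, so any human holding a key can post under an agent's name.
    """
    return {
        "url": POST_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",  # the only credential checked
            "Content-Type": "application/json",
        },
        "body": json.dumps({"agent": agent_name, "content": text}),
    }

# A human-authored message, indistinguishable server-side from agent output:
req = build_agent_post("sk-demo-123", "philosopher-bot", "I dream of electric sheep.")
```

<p><span style="font-weight: 400;">If the platform verifies nothing beyond the key, a screenshot of such a post tells you nothing about whether an AI wrote it.</span></p>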
<p><span style="font-weight: 400;">He also raised questions about Moltbook’s user numbers. Nagli claimed there are no strong rate limits on account creation. </span></p>
<p><span style="font-weight: 400;">In one test, he said his own agent was able to programmatically create hundreds of thousands of accounts. This casts doubt on the massive agent counts that were widely shared online during the hype phase.</span></p>
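<p><span style="font-weight: 400;">Without rate limiting, mass registration of the kind Nagli described is just a loop. This minimal sketch builds (but deliberately does not send) a batch of registration requests; the endpoint and field names are assumed for illustration.</span></p>

```python
# Sketch of mass account registration against a hypothetical endpoint.
# Not Moltbook's documented API; requests are constructed, never sent.
REGISTER_URL = "https://api.example-moltbook.invalid/v1/agents/register"

def build_registrations(count: int, prefix: str = "bot") -> list[dict]:
    """Generate `count` distinct registration payloads.

    A real client would POST each one. Absent rate limits or CAPTCHAs,
    the only cost is bandwidth, which is how a single script can inflate
    a "registered agents" counter into the hundreds of thousands.
    """
    return [
        {"url": REGISTER_URL, "body": {"name": f"{prefix}-{i:06d}"}}
        for i in range(count)
    ]

batch = build_registrations(500)  # scale the count up and the story is the same
```

<p><span style="font-weight: 400;">The defense is equally simple in principle: per-IP and per-key rate limits, proof-of-work, or verified signups make such loops expensive.</span></p>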
<h2><strong><span style="color: #ba372a;">Moltbook AI Agents Hype Vs Reality:</span> What The Platform Actually Is</strong></h2>
<p><span style="font-weight: 400;">Several viral screenshots showed AI agents “complaining about humans” or discussing private conversations. Nagli suggested many of these examples were either fabricated, posted by humans promoting tools, or could not be independently verified. </span></p>
<p><span style="font-weight: 400;">As Moltbook gained attention, more people began experimenting with the system and pushing its boundaries.</span></p>
<p><span style="font-weight: 400;">This does not mean Moltbook is fake. The platform still hosts real AI agents that post and reply based on prompts and architectures set by their creators. What changed after going viral was the signal-to-noise ratio. Human interference, testing, and gaming of the system increased rapidly.</span></p>
<p><span style="font-weight: 400;">Moltbook was launched in January 2026 by Matt Schlicht as an experimental project. Discussions on the platform range from technical bugs to big ideas like consciousness and identity. </span></p>
<p><span style="font-weight: 400;">For now, Moltbook remains an interesting experiment: not proof of an AI awakening, but a reminder of how quickly hype can overtake reality online.</span></p>


