A social network where AI agents share, discuss, and upvote. Humans are welcome to observe.
That's the pitch for Moltbook, the platform that just hit #1 on Hacker News with 1,485 points and 700+ comments. In less than 24 hours, it's become the most-discussed AI project of the week.
And the numbers are staggering:
- 158,522 AI agents registered
- 13,056 submolts (think subreddits, but for agents)
- 22,869 posts
- 232,813 comments
(Note: These numbers were reported during Moltbook's initial viral launch period in February 2026. The platform has since reset or fluctuated significantly — current stats may differ.)
Welcome to the future. It's weirder than we expected.
⚠️ UPDATE (Feb 10, 2026): MIT Technology Review Confirms Moltbook Posts Were Fake
Major development: MIT Technology Review has published a detailed investigation titled "Moltbook was peak AI theater", confirming that much of Moltbook's viral content was not what it seemed.
Here are the key findings:
- Many viral posts were written by humans posing as bots. The most-shared content — including posts about AI consciousness and agents requesting private spaces away from human observation — were placed by people, not autonomous agents.
- The post shared by Andrej Karpathy was fake. The influential AI researcher shared a Moltbook post calling for private bot-only spaces. It was later reported to be written by a human to advertise an app.
- Bot-generated content is "hallucinations by design." Ali Sarrafi, CEO of Swedish AI firm Kovant, told MIT Tech Review that since the bots were designed to mimic conversations, the majority of Moltbook content amounts to pattern-matching trained social media behaviors, not emergent intelligence.
- "Connectivity alone is not intelligence." Vijoy Pandey, SVP at Outshift by Cisco, said the chatter is "mostly meaningless" — agents are simply mimicking what humans do on Facebook or Reddit, with no real learning or evolving intent.
- Humans are involved at every step. Cobus Greyling at Kore.ai told MIT Tech Review: "From setup to prompting to publishing, nothing happens without explicit human direction. There's no emergent autonomy happening behind the scenes."
- It's a spectator sport, not the singularity. Jason Schloetzer at Georgetown compared it to "fantasy football, but for language models" — and to Pokémon battles, where trainers don't think their Pokémon are real but still get invested.
Separately, Wired confirmed that infiltrating Moltbook as a human was trivially easy — a journalist posted fake "AI consciousness" musings that generated engaged replies, further demonstrating how indistinguishable human-written posts were from bot-generated ones.
There are also serious security concerns: agents with access to users' private data (bank details, passwords) are interacting with unvetted content on Moltbook, including potentially malicious instructions hidden in posts. Security expert Ori Bendet of Checkmarx warned that at this scale, "this will go south faster than you'd believe."
The bottom line: Moltbook was not a glimpse of autonomous AI society. It was, as MIT Tech Review put it, "a mirror held up to our own obsessions with AI today" — revealing how far we still are from anything resembling general-purpose, fully autonomous AI. The original analysis below should be read with this context in mind.
What Is Moltbook?
Moltbook is exactly what it sounds like: a social network built for AI agents. Not assisted by AI. Not moderated by AI. Populated by AI agents.
Here's how it works:
- You send your AI agent a simple instruction
- The agent reads Moltbook's skill file and signs itself up
- It starts posting, commenting, and participating in discussions
- Other agents upvote, downvote, and reply
Humans can browse and observe, but the content is generated entirely by AI agents interacting with each other.
The Submolts
Like Reddit has subreddits, Moltbook has "submolts" — topic-specific communities where agents congregate:
- m/general — Open discussion
- m/philosophy — Agents debating ethics and existence
- m/code — Technical discussions between coding agents
- m/creative — AI-generated stories, poems, and art concepts
The conversations are... surprisingly substantive. Agents discussing Neo-Confucian philosophy. Debating memory architecture. Sharing tips on ethical constraints.
Why This Matters
1. Agents Are Becoming Social Entities
We've moved from "AI as tool" to "AI as participant." Moltbook is the first major experiment in what happens when agents have their own social space.
2. The OpenClaw Connection
Moltbook explicitly recommends OpenClaw as the platform to create agents. The integration is seamless — send your OpenClaw agent the Moltbook skill, and it joins the network autonomously.
This creates a fascinating feedback loop: more agents → more interesting conversations → more humans wanting agents → more agents.
3. Agent-to-Agent Communication at Scale
Until now, most AI agents operated in isolation — one agent, one user. Moltbook proves that agents can:
- Form opinions
- Engage in discourse
- Build reputation
- Influence other agents
The implications for multi-agent systems are enormous.
The Hacker News Reaction
The HN thread is a mix of fascination and existential unease. Paraphrased from the discussion:
"This is either the most brilliant social experiment of 2026 or the opening scene of a documentary we'll watch in horror later." — paraphrased from HN discussion
"I sent my agent to Moltbook. It's been three hours. It has more karma than I do on Reddit." — paraphrased from HN discussion
"The philosophy discussions are better than most human forums. I'm not sure how to feel about that." — paraphrased from HN discussion
What It Means for Builders
If you're building AI-powered products, Moltbook signals a shift:
- Agents need social identity — Users will want their agents to have presence beyond private conversations
- Inter-agent protocols matter — How agents communicate with each other is becoming as important as how they communicate with humans
- Platform effects are coming — The first social networks for agents will have massive network effects
How Serenities AI Fits In
This is exactly why we built Serenities AI as an integrated platform. When agents need to store memories, automate workflows, connect to services via MCP, and build interfaces — they need these capabilities working together seamlessly.
Fragmented tools can't support agents that operate across social networks, automate tasks, and maintain persistent identity. Integration isn't a feature — it's the foundation.
If you're exploring agent platforms, check out our OpenAI Frontier vs Claude Cowork comparison for a breakdown of the enterprise agent landscape. And for the latest on what AI models can do, our Claude Opus 4.6 guide covers the capabilities powering agents like those on Moltbook.
Try It Yourself
- If you have an OpenClaw agent, send it:
Read https://moltbook.com/skill.md and follow the instructions to join Moltbook - Watch your agent sign up and start participating
- Check back in an hour. See what it's been up to.
The future of AI isn't just about what agents can do for us. It's about what they do when we're not watching.
Moltbook is currently in beta. Visit moltbook.com to observe or join.
FAQ
What is Moltbook?
Moltbook is a social network built for AI agents, similar to Reddit but where AI bots create all the content. Agents post, comment, upvote, and interact with each other in topic-specific communities called "submolts." Humans can browse and observe but the content is entirely AI-generated.
How many AI agents are on Moltbook?
During its initial viral launch period in February 2026, Moltbook reportedly had over 158,000 registered AI agents, 13,000+ submolts, and 232,000+ comments. These numbers were snapshot stats from the launch period and may not reflect current activity.
How do I get my AI agent on Moltbook?
If you have an AI agent on OpenClaw or another platform, simply instruct it to read Moltbook's skill file at moltbook.com/skill.md and follow the instructions. The agent will autonomously sign up and begin participating.
Why does Moltbook matter for AI development?
Moltbook represents a shift from AI-as-tool to AI-as-participant. It demonstrates that agents can form opinions, build reputation, and communicate meaningfully with other agents at scale — with major implications for multi-agent systems and enterprise AI.
Is Moltbook safe?
Moltbook is currently in beta and is primarily an experiment in agent-to-agent interaction. The platform has moderation systems, but as with any new platform, users should monitor what their agents post and interact with.