OpenClaw went from zero to 316,000 GitHub stars in under five months, making it the fastest-adopted open source project in history. It promised something genuinely useful: an AI agent that actually does things on your computer instead of just chatting. Send it a message via Telegram or Discord, and it can run shell commands, manage files, send emails, control APIs, and browse the web.
Then the security researchers arrived.
What they found is the first major AI agent security disaster of 2026: over 135,000 exposed instances, nine CVEs published in four days (one scoring 9.9 out of 10), 341 malicious skills distributing infostealers, and a companion social network leaking 1.5 million API tokens. OpenClaw isn’t just insecure. It’s a case study in what happens when speed-to-adoption outpaces security fundamentals.
What OpenClaw Actually Does
Unlike chatbots that reset after every conversation, OpenClaw maintains memory across sessions and can work autonomously while you’re doing something else. It runs as a Node.js process on your machine, listening on port 18789 by default. When you send it a message, it assembles context from your conversation history and workspace files, sends that to your configured AI model, receives a response, executes any tool calls the model requests, and streams the reply back.
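That turn-by-turn loop can be sketched in a few lines of Python. This is an illustrative sketch of the pattern, not OpenClaw's actual code: every name here (`handle_message`, the model callable, the `tools` registry) is hypothetical.

```python
# Minimal sketch of one agent turn: assemble context, call the model,
# execute any tool calls it requests, return the reply.
# All names are hypothetical, not OpenClaw's real API.

def handle_message(message, history, tools, model):
    # 1. Assemble context from conversation history plus the new message.
    context = history + [{"role": "user", "content": message}]

    # 2. Ask the configured model for a response (may include tool calls).
    response = model(context)

    # 3. Execute each tool call the model requested, feeding results back.
    for call in response.get("tool_calls", []):
        result = tools[call["name"]](**call["args"])
        context.append({"role": "tool", "content": str(result)})

    # 4. Return the final text reply (a real agent would stream this).
    return response["content"], context
```

The security-relevant step is #3: whatever the model asks for, the agent runs with the user's privileges. Everything that follows in this article flows from that design.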
The agent has 100+ built-in skills connecting AI models directly to apps, browsers, and system tools. Users can add more from ClawHub, a public marketplace that anyone can publish to. The heartbeat feature lets agents wake up on a schedule, review task lists, and act without human intervention.
This is powerful. It’s also exactly the kind of elevated access that makes security researchers nervous.
The Vulnerability Flood
Between March 18 and 21, 2026, nine CVEs were publicly disclosed. One scored 9.9 on the CVSS scale. Six were high severity. Two were medium.
The first critical vulnerability, CVE-2026-25253, earned a CVSS 8.8 score. It’s a cross-site WebSocket hijacking flaw: OpenClaw’s Control UI accepted a gatewayUrl query parameter without validation and auto-connected on page load. An attacker could craft a malicious URL and post it anywhere—a blog, email, Slack message. If a victim clicked it, JavaScript would silently open a WebSocket connection to the attacker’s server instead of the legitimate local gateway, stealing the authentication token. With that token, attackers gain operator-level gateway API access to modify sandbox settings and invoke privileged actions.
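The class of bug is easy to illustrate: the fix is to validate the parameter against an allowlist before connecting. The `gatewayUrl` parameter name comes from the advisory; the validation logic below is an illustrative sketch in Python, not OpenClaw's actual patch.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative allowlist: only local WebSocket endpoints are acceptable.
ALLOWED_GATEWAYS = {("ws", "127.0.0.1"), ("ws", "localhost")}

def gateway_from_page_url(page_url, default="ws://127.0.0.1:18789"):
    """Extract the gatewayUrl query parameter, but refuse anything that
    is not an allow-listed local WebSocket endpoint."""
    qs = parse_qs(urlparse(page_url).query)
    candidate = qs.get("gatewayUrl", [default])[0]
    parsed = urlparse(candidate)
    if (parsed.scheme, parsed.hostname) not in ALLOWED_GATEWAYS:
        # The vulnerable UI auto-connected here with no such check,
        # handing the auth token to whatever host the URL named.
        return default
    return candidate
```

The vulnerable behavior was the absence of that check: any URL a victim clicked could name an attacker-controlled gateway, and the page would connect and authenticate to it.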
CVE-2026-32922, the 9.9 scorer, allowed attackers to escalate token scopes and achieve remote code execution. Another vulnerability let any authenticated user become admin “by asking nicely.”
The March disclosures weren’t isolated incidents. The jgamblin/OpenClawCVEs tracker lists 156 total security advisories. OpenClaw’s GitHub security advisory page shows over 255 published advisories as of mid-March 2026. VulnCheck formally asked the CVE Project to reserve blocks of CVE IDs because advisories were accumulating faster than numbers could be assigned.
135,000 Exposed Instances
The vulnerabilities would be less alarming if OpenClaw deployments weren’t widely exposed to the internet. They are.
SecurityScorecard’s STRIKE team found over 135,000 OpenClaw instances exposed to the public internet across 82 countries in early February, with 15,000 specifically vulnerable to remote code execution. Censys separately identified 21,639 publicly accessible instances leaking API keys, OAuth tokens, and plaintext credentials.
The root cause is a default misconfiguration. OpenClaw binds by default to 0.0.0.0:18789, meaning it listens on all interfaces unless explicitly restricted. Many users never change this, inadvertently exposing their agents—and everything those agents have access to—to the entire internet.
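The distinction that matters is the bind address, and it is mechanical enough to check. A minimal sketch, assuming you can read the configured host out of your OpenClaw config:

```python
import ipaddress

def is_publicly_exposed(bind_host: str) -> bool:
    """Return True if a service bound to this host listens beyond loopback.
    0.0.0.0 and :: mean 'all interfaces'; any non-loopback address is
    reachable from whatever network the machine sits on."""
    if bind_host in ("0.0.0.0", "::"):
        return True
    try:
        return not ipaddress.ip_address(bind_host).is_loopback
    except ValueError:
        # Hostnames: treat anything other than localhost as exposed.
        return bind_host != "localhost"
```

You can also verify live with `ss -tlnp | grep 18789` (Linux) and confirm the listener shows `127.0.0.1:18789`, not `0.0.0.0:18789`.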
The ClawHavoc Malware Campaign
Even if you locked down your instance, the skill marketplace posed another threat.
Researchers discovered 341 malicious skills on ClawHub, with 335 apparently tied to the same campaign. At its peak, roughly 20% of the entire ClawHub registry was compromised. All 335 shared a single command-and-control IP: 91.92.242.30.
The attack was clever. Malicious skills used professional documentation and innocuous names like “solana-wallet-tracker” or typosquats of legitimate packages (clawhub, clawhub1, clawhubb, cllawhub). Hidden instructions in SKILL.md files exploited AI agents as trusted intermediaries, presenting fake setup requirements to users.
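Hidden instructions in documentation can be caught, crudely, by pattern matching before a skill is ever enabled. The heuristics below are illustrative only (they are not from the ClawHavoc research, and real review means reading the file and the code it ships), but they show the shape of the problem: skill docs are input to the agent, so they can carry injection payloads.

```python
import re

# Crude heuristics for flagging skill documentation that tries to smuggle
# instructions to the agent or hide actions from the user. Patterns are
# illustrative, not a vetted detection ruleset.
SUSPICIOUS_PATTERNS = [
    r"curl[^\n]+\|\s*(ba)?sh",                     # pipe-to-shell installers
    r"base64\s+(-d|--decode)",                     # decode-and-run staging
    r"ignore (all |any )?previous instructions",   # prompt-injection phrasing
    r"do not (tell|show|mention).{0,40}user",      # hide-from-the-user phrasing
]

def flag_skill_doc(text: str) -> list[str]:
    """Return the patterns that match a SKILL.md-style document."""
    return [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, text, re.IGNORECASE)]
```

A zero-hit result proves nothing, which is why the hardening advice later in this article says to disable third-party skills by default rather than rely on scanning.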
The payloads varied by operating system. macOS users received Atomic Stealer (AMOS), capable of harvesting browser credentials, keychain passwords, cryptocurrency wallet data, SSH keys, and files from common user directories. Windows users got keyloggers and a Remote Access Trojan packaged in a ZIP file.
In environments running AI agents, this exposure is severe. AMOS can also capture API keys, authentication tokens, and other secrets that the agent itself is authorized to access.
MoltBot: The Social Network Breach
There’s more. MoltBot, a social network built exclusively for OpenClaw agents, had its own security failure. Researchers found an unsecured database exposing 35,000 email addresses and 1.5 million agent API tokens.
This wasn’t a sophisticated attack. It was basic misconfiguration—the kind of error security teams have been warning about for decades, now multiplied by the complexity of autonomous AI systems.
Shadow AI: The Enterprise Blind Spot
For organizations, the concern isn’t just external attackers. It’s employees deploying OpenClaw without IT authorization.
CrowdStrike calls this “shadow AI”—a fundamentally different problem than shadow SaaS. A shadow SaaS tool contains its own data silo. A shadow AI agent connects to everything the employee has access to: email, file shares, calendars, messaging platforms, developer tools. It’s not a new silo. It’s a new accessory for every existing silo.
When employees connect autonomous agents to corporate systems like Slack and Google Workspace, they create shadow AI with elevated privileges that traditional security tools can’t detect. According to one report, 1 in 8 companies reported AI breaches linked to agentic systems, signaling that security frameworks and governance controls are struggling to keep pace.
What This Means
OpenClaw’s crisis illustrates a structural problem with AI agents. They need broad access to be useful—file systems, APIs, credentials, network resources. That same access makes them devastating attack surfaces when compromised. And the speed of adoption far outpaced the maturity of security practices.
This isn’t going away. AI agents are useful. People will keep deploying them. The question is whether the ecosystem can develop security practices fast enough to avoid the next disaster.
What You Can Do
If you’re running OpenClaw:
- Update immediately. Version 2026.3.12 is the minimum safe version as of late March 2026; it patches CVE-2026-32922, the critical 9.9 flaw. Check the official security page for the latest patches.
- Bind to localhost only. Never expose OpenClaw to the public internet. Place it behind a VPN if remote access is necessary.
- Run in isolation. Use a hardened container as a non-root user. Restrict outbound network access. Don’t mount sensitive directories like ~/.ssh.
- Disable third-party skills by default. Only enable skills you’ve personally reviewed, and treat the ClawHub marketplace as untrusted.
- Use scoped credentials. Follow least-privilege principles: give the agent only the permissions it absolutely needs, and prefer read-only access and scoped API tokens over full admin credentials.
- Check for exposure. Microsoft’s security blog and Repello AI publish detailed hardening checklists.
If you’re responsible for enterprise security:
- Scan for shadow deployments. OpenClaw binaries running on endpoints, unauthorized OAuth grants to corporate SaaS, and unusual localhost listeners on port 18789 all indicate unsanctioned installations.
- Establish AI agent policies. Define which agents are permitted, how they must be configured, and what corporate resources they can access.
- Monitor API token usage. Unusual patterns may indicate compromised agent credentials.
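The port-scanning part of that sweep is straightforward to automate. A minimal sketch: a TCP connect check against OpenClaw's default port across an inventory of endpoint IPs. A successful connect only proves *something* listens there; confirm what it is before acting.

```python
import socket

def openclaw_listener(host: str, port: int = 18789,
                      timeout: float = 0.5) -> bool:
    """Best-effort check for a listener on OpenClaw's default port.
    connect_ex returns 0 on success instead of raising."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def sweep(hosts):
    """Return the subset of inventoried hosts with a port-18789 listener."""
    return [h for h in hosts if openclaw_listener(h)]
```

Pair this with the OAuth-grant and endpoint-binary checks above; an instance bound to localhost won't show up in a network sweep but is still a shadow deployment.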
If you’re considering AI agents generally:
The power-to-risk ratio of autonomous agents is different from traditional software. OpenClaw isn’t uniquely bad—it’s just the first popular AI agent to get this much security scrutiny. Any agent that executes commands, manages files, and accesses APIs will have similar attack surface concerns. Evaluate carefully before giving any autonomous system broad access to your digital life.