Anthropic made a packaging mistake. Within 48 hours, criminal groups had turned it into a malware delivery system targeting the exact developers most likely to go poking around leaked source code. Separately, four unpatched vulnerabilities in CrewAI — one of the most popular open-source agent frameworks — can be chained from a single prompt injection into full remote code execution on the host machine.
Here’s what happened, what it means, and what you should do about it.
The Claude Code Leak: From Accident to Attack in Two Days
On March 31, someone on Anthropic’s release team published Claude Code version 2.1.88 to npm with a 59.8 MB JavaScript source map file still attached. That file contained 513,000 lines of fully unobfuscated TypeScript across 1,906 files — internal orchestration logic, security architecture, the works.
The root cause was mundane: someone forgot to add *.map to .npmignore or properly configure the files field in package.json. The kind of mistake that would fail a code review at most companies.
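The standard guardrail here is an explicit allowlist rather than an ignore file: when package.json declares a files field, npm packs only what that field names. A minimal sketch (the package name and paths are illustrative):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "bin/"
  ]
}
```

Running npm pack --dry-run before publishing prints the exact file list that would ship, which makes a stray .map file easy to catch in CI.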
Security researcher Chaofan Shou disclosed the leak publicly on X, and within hours the codebase was downloaded from Anthropic’s own Cloudflare R2 bucket, mirrored to GitHub, and forked tens of thousands of times.
Anthropic’s statement was brief: “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.” They confirmed no customer data or credentials were exposed.
That’s where a normal leak story would end. This one didn’t.
Vidar and GhostSocks: The Weaponized Repositories
By April 1, threat actors had created fake GitHub repositories claiming to host “unlocked” Claude Code builds with enterprise features and no usage restrictions. The repositories were optimized for Google search visibility — within a day, they appeared near the top of results for anyone searching “leaked Claude Code.”
The bait was a 7-Zip archive containing ClaudeCode_x64.exe, a Rust-based executable that deployed two payloads:
- Vidar — a commodity information stealer that harvests browser passwords, cookies, cryptocurrency wallets, and credit card data
- GhostSocks — a network proxy tool that routes traffic through infected machines, turning them into nodes in a residential proxy network
Security researchers at Zscaler identified at least two similar repositories under different accounts (idbzoomh and my3jie), suggesting the same operator was testing multiple delivery routes. The archives were being updated frequently as of April 2, meaning the campaign remained active.
This is a textbook case of “event-jacking” — attackers riding a viral news cycle to reach victims whose curiosity has overwhelmed their caution. Developers searching for the leaked code were the perfect targets: technically sophisticated enough to be valuable, and primed to download unfamiliar executables.
CrewAI: Four CVEs, No Patches
While the Claude Code drama played out in public, a quieter but arguably more dangerous set of vulnerabilities surfaced in CrewAI, the popular open-source Python framework for building multi-agent AI systems.
Security researcher Yarden Porat of Cyata discovered four vulnerabilities tracked under CERT/CC advisory VU#221883. All four stem from the Code Interpreter tool, which is supposed to run Python code safely inside a Docker container. The problem is what happens when Docker isn’t available.
CVE-2026-2275: Insecure sandbox fallback. When the Code Interpreter can’t reach Docker, it silently falls back to SandboxPython — an environment that allows arbitrary C function calls via ctypes. If a developer has set allow_code_execution=True (which many do), an attacker with prompt injection access gets native code execution.
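To see why unrestricted ctypes access amounts to native code execution, consider this minimal illustration (not CrewAI code): loading the C library already mapped into the process hands the caller every function it exports.

```python
import ctypes

# Load the process's own C library (POSIX). With this handle, *any*
# exported C function can be called directly -- which is why a "sandbox"
# that permits ctypes is no sandbox at all.
libc = ctypes.CDLL(None)

# A harmless call for demonstration; mmap, mprotect, and system are
# equally reachable through the same handle.
result = libc.abs(-42)
```

The call itself is trivial; the point is that nothing in the Python layer constrains which C function gets invoked next.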
CVE-2026-2287: Docker runtime verification failure. CrewAI doesn’t check whether Docker is still running during execution. If Docker stops mid-session, the framework falls back to the insecure sandbox without warning. Same result: remote code execution.
CVE-2026-2285: Arbitrary file read. The JSON loader tool reads files without validating paths. An attacker can read any file on the server, including credentials, environment variables, and private keys.
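The standard defense for this class of bug is to resolve every requested path and confirm it stays under an allowed root before reading. A minimal sketch (the root directory and helper name are illustrative, not CrewAI's API; requires Python 3.9+ for is_relative_to):

```python
from pathlib import Path

def is_confined(user_path: str, root: str = "/srv/agent-data") -> bool:
    """Return True only if user_path, fully resolved, stays inside root."""
    root_dir = Path(root).resolve()
    # Joining then resolving collapses ../ sequences and absolute-path
    # tricks; resolve() also follows symlinks for paths that exist.
    candidate = (root_dir / user_path).resolve()
    return candidate.is_relative_to(root_dir)
```

Note that checking the raw string (e.g. rejecting "..") is not enough; resolution has to happen first, or encoded traversals and symlinks slip through.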
CVE-2026-2286: Server-side request forgery. The RAG search tools don’t validate URLs at runtime, allowing attackers to hit internal services and cloud metadata endpoints — the classic SSRF pattern that leads to cloud account takeover.
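Guarding against this pattern means rejecting URLs whose host falls in a private, loopback, or link-local range (169.254.169.254, the cloud metadata endpoint, is link-local). A minimal sketch; a real deployment also needs to pin DNS resolution, since a hostname can resolve to an internal address after this check passes:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject non-HTTP schemes and literal IPs in internal ranges."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # blocks file://, gopher://, etc.
    host = parsed.hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # A hostname, not a literal IP: resolve it and re-check the
        # resulting address in production to prevent DNS rebinding.
        return True
    return not (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved)
```
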
The attack chain works like this: an attacker sends a crafted prompt (directly or through an indirect injection in data the agent processes). The prompt triggers code execution. If Docker isn’t running or has stopped, CrewAI’s fallback gives the attacker a shell. From there, they can read local files (CVE-2026-2285), reach internal services (CVE-2026-2286), and exfiltrate whatever they find.
No official patch exists yet. CrewAI’s maintainers have said they plan to block vulnerable modules, enforce fail-closed configurations, and add runtime warnings. Until those changes ship, any CrewAI deployment with the Code Interpreter enabled is exposed.
$45 Million in AI Trading Agent Losses
In a related development, protocol-level weaknesses in AI trading agents on Solana triggered over $45 million in losses when attackers compromised the agents’ long-term memory and tool-connection protocols. Once inside, the AI agents obediently executed large token transfers — over 261,000 SOL tokens worth approximately $27–30 million — because their permission systems allowed it.
The pattern is the same one showing up in CrewAI: AI agents with broad permissions, no fail-safe isolation, and trust boundaries that evaporate under adversarial conditions.
What This Means
Three themes keep repeating in AI security incidents, and this week’s batch hits all of them:
Supply chain attacks are getting faster. The window between Anthropic’s leak and the first malware-laden repositories was roughly 24 hours. Attackers have automated their response to high-profile incidents, with pre-built infrastructure ready to deploy fake repositories the moment a trending topic emerges.
Agent frameworks aren’t built for adversarial conditions. CrewAI’s sandbox-or-nothing architecture assumes Docker will always be available. When that assumption breaks, the framework doesn’t fail safely — it fails open, handing attackers exactly the access they want. This isn’t unique to CrewAI. Most agent frameworks were built by teams optimizing for developer convenience, not security.
AI agents with broad permissions are attack multipliers. The crypto losses demonstrate what happens when agents can move money. The CrewAI vulnerabilities show what happens when agents can execute code. As agents get more capable and more connected, the blast radius of a single compromise expands accordingly.
What You Can Do
If you searched for the Claude Code leak:
- If you downloaded anything, scan your system with your endpoint protection tool immediately
- Check browser extensions, saved passwords, and cryptocurrency wallets for signs of compromise
- Treat any executable downloaded from an unofficial source as malicious
If you use CrewAI:
- Disable the Code Interpreter tool until patches ship
- If you must use it, ensure Docker is running and monitor that it stays running
- Set allow_code_execution=False unless absolutely necessary
- Audit what files and network access your agents can reach
- Watch the CERT/CC advisory for patch updates
For everyone building with AI agents:
- Treat every agent as an untrusted process. Limit file access, network access, and execution privileges to the minimum required
- Don’t rely on Docker-or-bust architectures. If your sandbox fails, your agent should stop — not fall back to running unsandboxed
- Validate all inputs to agent tools, especially URLs and file paths
- Monitor agent behavior in production. If an agent starts reading files or making network requests it shouldn’t, kill the session
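The fail-closed principle above reduces to a few lines: decide where code will run before running it, and treat a missing sandbox as a hard error rather than a reason to fall back. A sketch with hypothetical names, not any framework's API:

```python
def choose_executor(docker_available: bool) -> str:
    """Fail closed: no sandbox means no execution, never a silent fallback."""
    if not docker_available:
        raise RuntimeError("Sandbox unavailable; refusing to execute agent code")
    return "docker"
```

In practice, docker_available would come from an active probe (e.g. shelling out to docker info) run immediately before each execution rather than once at startup, since the CrewAI findings show the sandbox can disappear mid-session.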
The tools are getting more powerful. The attacks are getting faster. The security posture of most AI deployments hasn’t kept up.