The week’s AI security news reads like a checklist of everything that can go wrong when you build fast and patch later. Microsoft’s Azure AI Foundry earned the rare distinction of a perfect CVSS 10 score. Langflow was getting exploited in the wild within 20 hours of its vulnerability going public. And LiteLLM, still cleaning up from a supply chain attack that backdoored its PyPI package, disclosed three more vulnerabilities of its own.
Azure AI Foundry: CVSS 10, No Authentication Required
On April 3, Microsoft disclosed CVE-2026-32213, an improper authorization vulnerability in Azure AI Foundry — the company’s flagship platform for building and deploying AI models in the cloud.
The CVSS score: 10 out of 10. The maximum possible.
The flaw lets an unauthenticated attacker escalate privileges over the network. No credentials needed, no prior access required. The vulnerability stems from inadequate authorization validation — the system simply doesn’t check whether a user has permission to access certain resources or perform certain actions. Potential attack vectors include insecure direct object references on sensitive resources and API endpoints that skip role verification before executing privileged operations.
Microsoft says the vulnerability requires no customer action to resolve — meaning it’s a server-side fix they can deploy themselves. No public proof-of-concept existed at the time of disclosure, but a CVSS 10 on a major cloud AI platform isn’t the kind of thing that stays unexploited for long.
This is the platform Microsoft positions as the secure way to build enterprise AI. A perfect severity score on an authentication bypass doesn’t inspire confidence in that pitch.
Langflow: From Advisory to Exploitation in 20 Hours
If Azure AI Foundry’s vulnerability is about potential risk, Langflow’s is about confirmed damage.
CVE-2026-33017 is an unauthenticated remote code execution flaw in Langflow, the popular open-source framework for building AI agent workflows. CVSS score: 9.3. The vulnerability sits in the POST /api/v1/build_public_tmp/{flow_id}/flow endpoint, which lets unauthenticated users build public flows. The problem: the endpoint accepts attacker-supplied flow data containing arbitrary Python code in node definitions, then executes that code server-side without sandboxing.
One HTTP request. No credentials. Full code execution.
Within 20 hours of the advisory’s publication, Sysdig’s Threat Research Team observed the first exploitation attempts in the wild. No public proof-of-concept code existed at the time — attackers built working exploits directly from the advisory description and started scanning the internet for exposed instances.
CISA added it to the Known Exploited Vulnerabilities catalog and gave federal agencies until April 8 to patch or stop using the product. The fix: upgrade to Langflow 1.9.0 or later. Versions up to 1.8.1 are affected.
The speed here is the story. Twenty hours from disclosure to active exploitation, without a public PoC. That timeline is shrinking every quarter.
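Because exploitation arrives as one POST to a known path, the request itself is the most useful detection signal. The following is an added example, not Langflow's or Sysdig's code: it scans constructed reverse-proxy access-log lines (combined log format is assumed here; the real deployment's log shape may differ) for POSTs to the /api/v1/build_public_tmp/{flow_id}/flow endpoint named above, and ranks sources by how many distinct flow ids they touched, since one client cycling through many ids looks like scanning rather than a public flow in normal use. The IP addresses are documentation ranges and the flow ids are placeholders.

```python
import re
from typing import Dict, Iterable, List, Tuple

# Combined-log-format parser. This format is an assumption for the example; adapt the
# regex to whatever proxy or application log actually fronts your Langflow instance.
LOG_RE = re.compile(
    r'^(?P<src>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>[A-Z]+) (?P<path>\S+) [^"]*" (?P<status>\d{3})'
)
# The endpoint from the advisory; the regex also matches when a query string follows.
BUILD_PUBLIC_RE = re.compile(
    r"^/api/v1/build_public_tmp/(?P<flow_id>[^/?]+)/flow(?:[/?]|$)"
)

# Constructed sample log lines (documentation IPs, placeholder flow ids).
SAMPLE_LOG = """\
203.0.113.7 - - [04/Apr/2026:09:12:01 +0000] "GET /api/v1/flows HTTP/1.1" 401 0
203.0.113.7 - - [04/Apr/2026:09:12:02 +0000] "POST /api/v1/build_public_tmp/00000000-0000-4000-8000-000000000001/flow HTTP/1.1" 200 148
203.0.113.7 - - [04/Apr/2026:09:12:03 +0000] "POST /api/v1/build_public_tmp/00000000-0000-4000-8000-000000000002/flow HTTP/1.1" 404 22
203.0.113.7 - - [04/Apr/2026:09:12:04 +0000] "POST /api/v1/build_public_tmp/00000000-0000-4000-8000-000000000003/flow HTTP/1.1" 200 148
10.20.0.4 - - [04/Apr/2026:09:30:10 +0000] "POST /api/v1/build_public_tmp/00000000-0000-4000-8000-000000000001/flow HTTP/1.1" 200 148
10.20.0.4 - - [04/Apr/2026:09:31:44 +0000] "GET /api/v1/build_public_tmp/00000000-0000-4000-8000-000000000001/flow HTTP/1.1" 405 0
"""


def parse_line(line: str) -> Dict[str, str]:
    """Split one access-log line into fields; empty dict when it does not match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else {}


def find_public_build_hits(lines: Iterable[str]) -> List[Dict[str, str]]:
    """Every POST to the public-build endpoint, regardless of status code.

    Failed attempts (404, 5xx) are kept on purpose: on an unpatched instance each
    one is still an execution attempt worth looking at.
    """
    hits = []
    for line in lines:
        rec = parse_line(line)
        if not rec or rec["method"] != "POST":
            continue
        m = BUILD_PUBLIC_RE.match(rec["path"])
        if not m:
            continue
        hits.append({"src": rec["src"], "ts": rec["ts"],
                     "flow_id": m["flow_id"], "status": rec["status"]})
    return hits


def rank_sources(hits: List[Dict[str, str]]) -> List[Tuple[str, int]]:
    """Sources ordered by the number of distinct flow ids they posted to."""
    per_src: Dict[str, set] = {}
    for h in hits:
        per_src.setdefault(h["src"], set()).add(h["flow_id"])
    return sorted(((src, len(ids)) for src, ids in per_src.items()),
                  key=lambda item: (-item[1], item[0]))


def demo() -> Dict[str, object]:
    """Run the detector over the constructed sample and return hits plus ranking."""
    hits = find_public_build_hits(SAMPLE_LOG.splitlines())
    return {"hits": hits, "ranking": rank_sources(hits)}


if __name__ == "__main__":
    for src, n in demo()["ranking"]:
        print(f"{src}: {n} distinct flow id(s) posted to")
```

Public flows are a legitimate feature, so a single hit is a triage signal rather than an alert on its own; on an instance at 1.8.1 or earlier, or from a source that walks through several flow ids in seconds, it should be treated as an attempted compromise.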
LiteLLM: Three Vulnerabilities After the Backdoor
LiteLLM — the AI gateway proxy used to route API calls across different model providers — has had a rough few weeks. In late March, the TeamPCP threat actor published backdoored versions of LiteLLM on PyPI (versions 1.82.7 and 1.82.8) after stealing maintainer credentials through a compromised Trivy GitHub Action in LiteLLM’s CI/CD pipeline.
The malicious packages were live for about three hours before PyPI quarantined them, but the payload was severe: a three-stage attack that harvested SSH keys, cloud credentials, Kubernetes secrets, and cryptocurrency wallets, deployed privileged pods across Kubernetes nodes, and installed a persistent systemd backdoor.
Now, in April, LiteLLM has disclosed three additional vulnerabilities fixed in version 1.83.0:
CVE-2026-35030 (Critical): An authentication bypass in the OIDC userinfo cache. When JWT auth was enabled, the cache used only the first 20 characters of tokens as keys. Since JWTs from the same signing algorithm share identical header prefixes, an attacker could forge a token that collided with another user’s cache entry and inherit their session. Most deployments weren’t affected — JWT auth is off by default — but any that had it enabled were wide open.
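The collision is easy to see once you mint two tokens. The following is an added example, not LiteLLM's code: it builds two HS256 JWTs for different users with a constructed demo secret, caches the first user's userinfo under a 20-character prefix key (the flawed scheme as described above), and shows the second user's token hitting the same entry. Keying on a digest of the whole token, shown as the fix, gives a cache miss instead. The cache class and the secret are this example's own.

```python
import base64
import hashlib
import hmac
import json
from typing import Callable, Dict, Optional

# Constructed demo secret; nothing here is LiteLLM's real key material or code.
DEMO_SECRET = b"constructed-demo-signing-secret"


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_hs256_jwt(claims: dict) -> str:
    """Mint a compact HS256 JWT (header.payload.signature) for the demo."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"},
                                separators=(",", ":")).encode())
    payload = _b64url(json.dumps(claims, separators=(",", ":")).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(DEMO_SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def weak_cache_key(token: str) -> str:
    """The flawed scheme from the advisory: key the cache on a 20-character prefix.

    Every HS256 token starts with the same base64url header, so the prefix is
    identical across users and the cache cannot tell them apart.
    """
    return token[:20]


def strong_cache_key(token: str) -> str:
    """Key on a digest of the whole token; two distinct tokens cannot share an entry."""
    return hashlib.sha256(token.encode()).hexdigest()


class UserInfoCache:
    """Minimal userinfo cache; the key function decides whether collisions are possible."""

    def __init__(self, key_fn: Callable[[str], str]) -> None:
        self._key_fn = key_fn
        self._store: Dict[str, dict] = {}

    def put(self, token: str, userinfo: dict) -> None:
        self._store[self._key_fn(token)] = userinfo

    def lookup(self, token: str) -> Optional[dict]:
        return self._store.get(self._key_fn(token))


def shared_prefix_length(a: str, b: str) -> int:
    """Length of the common leading substring of two tokens."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n


def demo_collision(key_fn: Callable[[str], str]) -> Optional[str]:
    """Cache the admin's userinfo, then look it up with a different user's token.

    Returns the 'sub' the cache hands back for the second token, or None on a miss.
    """
    admin_token = make_hs256_jwt({"sub": "alice", "role": "proxy_admin"})
    other_token = make_hs256_jwt({"sub": "bob", "role": "viewer"})
    cache = UserInfoCache(key_fn)
    cache.put(admin_token, {"sub": "alice", "role": "proxy_admin"})
    hit = cache.lookup(other_token)
    return hit["sub"] if hit else None


if __name__ == "__main__":
    print("prefix key resolves bob's token to:", demo_collision(weak_cache_key))
    print("digest key resolves bob's token to:", demo_collision(strong_cache_key))
```

The digest key fixes the collision, but a userinfo cache should also be keyed only after the token's signature has been verified; otherwise an unsigned token with the right shape can still be used to probe the cache.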
CVE-2026-35029 (High): A privilege escalation where any authenticated user could modify proxy configuration through the /config/update endpoint without role verification. The fix now requires the proxy_admin role.
Password hash exposure (High): LiteLLM was storing credentials as unsalted SHA-256 hashes or plaintext, and exposing those hashes through API endpoints. The system also accepted raw hashes during login without re-hashing, meaning anyone who obtained the hash could authenticate directly. They’ve migrated to scrypt with random salts.
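The two defects described above, unsalted SHA-256 storage and login that accepts the raw hash, compound each other: any read of the hash is a working credential. The following is an added example, not LiteLLM's code: it shows a verifier with both defects beside a salted scrypt scheme that always re-derives from the submitted secret, plus a migrate-on-login helper that upgrades a legacy record only when the real password is presented. The scrypt parameters and record format are this example's choice, not LiteLLM's.

```python
import hashlib
import hmac
import os
from typing import Dict, Optional

# --- The scheme the advisory describes as flawed ---

def legacy_store(password: str) -> str:
    """Unsalted SHA-256: identical passwords yield identical records, and precomputed tables apply."""
    return hashlib.sha256(password.encode()).hexdigest()


def legacy_verify(stored: str, submitted: str) -> bool:
    """Accepts either the password or its hash, so a leaked record works as a credential."""
    if submitted == stored:  # raw hash accepted without re-hashing
        return True
    return hashlib.sha256(submitted.encode()).hexdigest() == stored


# --- Hardened scheme: scrypt with a random per-record salt (parameters chosen for this example) ---

SCRYPT_N, SCRYPT_R, SCRYPT_P, DKLEN = 2 ** 14, 8, 1, 32


def scrypt_store(password: str) -> str:
    """Derive a salted scrypt digest and pack the parameters with it for later verification."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P, dklen=DKLEN)
    return "$".join(["scrypt", str(SCRYPT_N), str(SCRYPT_R), str(SCRYPT_P),
                     salt.hex(), digest.hex()])


def scrypt_verify(stored: str, submitted: str) -> bool:
    """Always re-derives from what was submitted; a stolen record is not itself a credential."""
    try:
        scheme, n, r, p, salt_hex, digest_hex = stored.split("$")
    except ValueError:
        return False
    if scheme != "scrypt":
        return False
    expected = bytes.fromhex(digest_hex)
    derived = hashlib.scrypt(submitted.encode(), salt=bytes.fromhex(salt_hex),
                             n=int(n), r=int(r), p=int(p), dklen=len(expected))
    return hmac.compare_digest(derived, expected)


def migrate_on_login(legacy_stored: str, submitted: str) -> Optional[str]:
    """Upgrade a legacy record the first time the real password (not its hash) is presented."""
    candidate = hashlib.sha256(submitted.encode()).hexdigest()
    if not hmac.compare_digest(candidate, legacy_stored):
        return None
    return scrypt_store(submitted)


def demo_hash_reuse(password: str = "constructed-demo-password") -> Dict[str, bool]:
    """Show which scheme lets a leaked record log in, and that migration needs the password."""
    legacy = legacy_store(password)
    modern = scrypt_store(password)
    return {
        "legacy_accepts_raw_hash": legacy_verify(legacy, legacy),
        "scrypt_accepts_raw_record": scrypt_verify(modern, modern),
        "scrypt_accepts_password": scrypt_verify(modern, password),
        "scrypt_records_differ_per_store": scrypt_store(password) != modern,
        "migration_rejects_hash": migrate_on_login(legacy, legacy) is None,
        "migration_accepts_password": (migrate_on_login(legacy, password) or "").startswith("scrypt$"),
    }


if __name__ == "__main__":
    for check, outcome in demo_hash_reuse().items():
        print(f"{check}: {outcome}")
```

Migrating on login means legacy SHA-256 records linger until each user next authenticates; the hashes that were already exposed through the API stay usable against those records until then, which is why the rotation advice below matters even after upgrading.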
LiteLLM has also launched a bug bounty program, offering $1,500–$3,000 for critical vulnerabilities. Given the past month, they’re going to need it.
Griptape: Another Agent Framework, Another Path Traversal
CVE-2026-5595 dropped on April 5, disclosing a path traversal vulnerability in Griptape 0.19.4, another AI agent framework. The FileManagerTool doesn’t sanitize file paths provided by the LLM — it directly concatenates LLM-supplied paths with the working directory without filtering ../ sequences.
The attack vector: prompt injection tells the LLM to request a file like ../../../../etc/passwd, and the FileManagerTool happily reads it. The same technique works for writing files to arbitrary locations and listing arbitrary directories.
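The defence is to resolve the LLM-supplied path against the working directory and refuse anything that lands outside it, rather than trying to strip ../ substrings. The following is an added example, not Griptape's FileManagerTool: unsafe_resolve reproduces the concatenation pattern described above, safe_resolve canonicalises both sides and checks containment, and demo runs constructed paths (including the one from the write-up, an absolute path, and a symlink planted inside the directory) through both. The directory layout and sample paths are constructed for the demo.

```python
import os
import tempfile
from pathlib import Path
from typing import Dict


def unsafe_resolve(workdir: str, llm_path: str) -> str:
    """The pattern the advisory describes: concatenate and trust.

    ../ walks out of the working directory, and an absolute path makes
    os.path.join discard the working directory entirely.
    """
    return os.path.join(workdir, llm_path)


class PathEscapesWorkdir(ValueError):
    """Raised when an LLM-supplied path would land outside the agent's working directory."""


def safe_resolve(workdir: str, llm_path: str) -> str:
    """Resolve against the working directory and refuse anything that escapes it.

    Canonicalising (which also follows symlinks) and checking containment is
    stronger than grepping for '../': it catches absolute paths, redundant
    separators, and symlinks inside the directory that point out of it.
    """
    root = Path(workdir).resolve()
    if Path(llm_path).is_absolute():
        raise PathEscapesWorkdir(llm_path)
    candidate = (root / llm_path).resolve()
    try:
        candidate.relative_to(root)
    except ValueError:
        raise PathEscapesWorkdir(llm_path) from None
    return str(candidate)


def demo(workdir: str = "") -> Dict[str, Dict[str, str]]:
    """Run constructed LLM-supplied paths through both resolvers.

    Returns, per input, what the unsafe version would open (normalised) and
    whether the safe version allowed it or rejected it.
    """
    workdir = workdir or tempfile.mkdtemp(prefix="agent-workdir-")
    (Path(workdir) / "notes").mkdir(exist_ok=True)
    # A symlink inside the working directory that points outside it (constructed).
    link = Path(workdir) / "escape"
    if not link.exists():
        link.symlink_to(Path(workdir).resolve().parent)
    samples = [
        "notes/today.txt",          # benign relative path
        "notes/../summary.txt",     # .. that stays inside the directory
        "../../../../etc/passwd",   # the traversal from the write-up
        "/etc/passwd",              # absolute path: join() drops the working directory
        "escape/secrets.txt",       # symlink that leaves the working directory
    ]
    results: Dict[str, Dict[str, str]] = {}
    for p in samples:
        unsafe = os.path.normpath(unsafe_resolve(workdir, p))
        try:
            safe_resolve(workdir, p)
            verdict = "allowed"
        except PathEscapesWorkdir:
            verdict = "rejected"
        results[p] = {"unsafe_opens": unsafe, "safe_verdict": verdict}
    return results


if __name__ == "__main__":
    for path, outcome in demo().items():
        print(f"{path!r}: unsafe opens {outcome['unsafe_opens']}, safe {outcome['safe_verdict']}")
```

Containment checking belongs in the tool, not the prompt: an instruction to the model not to use ../ is itself just more text for an injected document to override.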
CVSS score: 6.3 (Medium). But “medium” is misleading when the vulnerability lets an attacker read credentials files, write malicious code to disk, or enumerate the entire filesystem through an AI agent.
This is the same class of bug we saw in CrewAI’s file read vulnerability (CVE-2026-2285) two weeks ago. Agent frameworks keep making the same mistake: they trust LLM output like it’s coming from a verified internal system, when it’s actually attacker-controllable input.
The Pattern Cisco Identified
Cisco’s State of AI Security 2026 report, released this week, puts numbers to what these incidents illustrate. The report finds that AI vulnerabilities that were once theoretical lab exercises have “materialized” into real-world exploits and confirmed compromises. Supply chain attacks targeting AI infrastructure are accelerating, and the expansion of agentic AI systems is creating attack surfaces that defenders can’t keep up with.
The data tracks: indirect prompt injection now accounts for over 80% of documented attack attempts in enterprise deployments. Direct injection — users typing malicious prompts — makes up less than 20%. The attacks are coming through documents, emails, web pages, and database content that agents process automatically.
What This Means
Every vulnerability this week shares the same root cause: AI infrastructure that wasn’t designed to operate under adversarial conditions.
Azure AI Foundry didn’t validate authorization. Langflow let unauthenticated users execute arbitrary code in public flow endpoints. LiteLLM cached authentication tokens using the first 20 characters. Griptape concatenated user-controlled paths without sanitization. These aren’t exotic attack techniques — they’re the OWASP Top 10 wearing different hats.
The difference is the blast radius. When a traditional web app has a path traversal bug, an attacker reads some files. When an AI agent framework has the same bug, every deployment running that framework becomes a potential compromise — and the attacker can chain prompt injection to reach the vulnerability without ever touching the target directly.
What You Can Do
If you use Azure AI Foundry: Microsoft says no customer action is required for CVE-2026-32213. Verify that your instances show the fix applied. Monitor your deployment logs for unusual privilege escalation patterns.
If you run Langflow: Upgrade to version 1.9.0 immediately. If you can’t upgrade right now, restrict network access to any Langflow instances. The vulnerability requires only a single HTTP request to exploit. CISA’s deadline is April 8.
If you use LiteLLM: Upgrade to version 1.83.0. If you had JWT auth enabled, rotate all user credentials. Check whether you installed versions 1.82.7 or 1.82.8 on March 24 — if so, treat the environment as compromised and follow LiteLLM’s incident response guidance.
If you use Griptape: Update past version 0.19.4 when a fix ships. In the meantime, restrict what directories your agents can access and monitor file operations for ../ patterns in paths.
For everyone building AI agents: Stop trusting LLM output as sanitized input. Every string an LLM generates — file paths, URLs, shell commands, SQL queries — should be validated with the same rigor you’d apply to user input from a web form. Because that’s exactly what it is: user input, one layer of indirection removed.