Langflow Under Attack: Critical RCE Exploited Within 20 Hours of Disclosure

A single HTTP request can own your AI workflow server. CVE-2026-33017 shows why authentication shouldn't be optional.


A critical vulnerability in Langflow, the popular low-code AI workflow builder, went from public advisory to active exploitation in just 20 hours. The attackers didn’t need a proof-of-concept exploit—they built one from the advisory itself.

CVE-2026-33017 carries a CVSS score of 9.3 and affects all versions through 1.8.1. The flaw is embarrassingly simple: an API endpoint that executes arbitrary Python code with no authentication required.

The Vulnerability

Langflow is an open-source framework for building AI agents and RAG (retrieval-augmented generation) pipelines. Its visual drag-and-drop interface has made it popular for prototyping and deploying AI workflows. According to the project’s documentation, it’s used for chatbots, document analysis, content generators, and agentic applications.

The vulnerability lives in the /api/v1/build_public_tmp/{flow_id}/flow endpoint, which lets users build “public flows” without authentication. When you supply the optional data parameter, the endpoint processes your flow definition instead of pulling stored data from the database. That flow definition can contain arbitrary Python code in node definitions—code that gets passed straight to exec() with zero sandboxing.

One HTTP POST request. That’s all it takes. No credentials, no session cookies, no authentication headers. Send malicious JSON, get remote code execution.
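The failure mode is the classic exec()-on-user-input pattern. A hypothetical sketch (not Langflow's actual code) of what such a handler looks like:

```python
# Hypothetical sketch of the vulnerable pattern. NOT Langflow's actual code.
# A handler that trusts a caller-supplied flow definition and exec()s the
# Python embedded in each node runs that code with the server's privileges.
def build_flow(flow_definition: dict, env: dict) -> dict:
    for node in flow_definition.get("nodes", []):
        # No authentication check, no sandbox: whatever the caller put in
        # the "code" field executes here.
        exec(node.get("code", ""), env)
    return env
```

Any sandboxing would have to happen before this point; once exec() runs, the caller controls the process.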

Security researcher Aviral Srivastava reported the flaw to Langflow on February 26, 2026. The advisory went public on March 17.

20 Hours to Exploitation

Cloud security firm Sysdig was watching. According to their analysis, the first exploitation attempts appeared at 16:04 UTC on March 18—exactly 20 hours after the advisory dropped. No public proof-of-concept existed. Attackers had reverse-engineered working exploits directly from the technical description.

The attacks came in waves. Sysdig tracked six unique source IPs over 48 hours, each with different objectives:

Phase 1: Automated scanning (hours 20-21)

Four IPs used the nuclei scanning tool to find vulnerable instances. Their payloads executed system commands and exfiltrated results to callback servers. All requests included the telltale header Cookie: client_id=nuclei-scanner.

Phase 2: Manual exploitation (hours 21-24)

An attacker from a Netherlands IP worked methodically—enumerating directories, checking for credential files, fingerprinting systems via the id command, then attempting to download stage-2 payloads via bash.

Phase 3: Credential harvesting (hours 24-30)

The most sophisticated attacker focused on secrets. They dumped environment variables to capture API keys and database credentials, searched for configuration files, and targeted .env files containing production secrets.
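If you have web-server access logs for that window, the Phase 1 traffic is easy to spot. A minimal sketch (log path and format are assumptions; adjust for your setup) that flags lines carrying the scanner's cookie or touching the vulnerable endpoint:

```python
import re
from pathlib import Path

# Indicators taken from the observed traffic: the nuclei cookie value and
# the vulnerable endpoint path. Log location and format vary per deployment.
SUSPECT = re.compile(r"client_id=nuclei-scanner|/api/v1/build_public_tmp/")

def scan_access_log(path: str) -> list[str]:
    """Return log lines matching either indicator."""
    return [line for line in Path(path).read_text().splitlines()
            if SUSPECT.search(line)]
```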

What Attackers Were After

This wasn’t random scanning. Langflow instances often contain valuable targets:

  • API keys for OpenAI, Anthropic, and other AI providers
  • Database credentials for vector stores and application backends
  • Cloud credentials in environment variables
  • Business data flowing through AI workflows

An attacker who compromises a Langflow server potentially gains access to every external service its workflows connect to. In practice, that can mean the organization's entire AI infrastructure.

Indicators of Compromise

Sysdig published the attacker infrastructure they observed:

Source IPs:

  • 77.110.106.154 (Frankfurt, Germany)
  • 209.97.165.247 (Singapore)
  • 188.166.209.86 (Singapore)
  • 205.237.106.117 (Paris, France)
  • 83.98.164.238 (Netherlands)
  • 173.212.205.251 (France)

Command-and-control servers:

  • 143.110.183.86:8080
  • 173.212.205.251:8443 (hosting stage-2 payloads)

Callback domains:

Twelve unique interactsh subdomains across the .oast.live, .oast.me, .oast.pro, and .oast.fun TLDs, used for exfiltration.

Check your logs for connections to these addresses. If you find matches, assume compromise.
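A quick way to sweep logs against the published infrastructure. This is a sketch: the IP list and interactsh domain patterns below are copied from the IOCs above, and the matching is deliberately loose.

```python
import re

# IOC list copied from Sysdig's published attacker infrastructure above.
IOC_IPS = [
    "77.110.106.154", "209.97.165.247", "188.166.209.86",
    "205.237.106.117", "83.98.164.238", "173.212.205.251",
    "143.110.183.86",
]
# interactsh callback TLDs used for exfiltration
IOC_DOMAINS = [r"\.oast\.(live|me|pro|fun)"]

IOC_RE = re.compile("|".join([re.escape(ip) for ip in IOC_IPS] + IOC_DOMAINS))

def ioc_hits(log_lines):
    """Return the subset of lines mentioning any known attacker IP or domain."""
    return [line for line in log_lines if IOC_RE.search(line)]
```

Any hit is grounds to treat the host as compromised and start incident response.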

What You Should Do

If you run Langflow:

  1. Update immediately. The fix is in development version 1.9.0.dev8. If you can’t update, take the instance offline.

  2. Assume compromise if your instance was publicly accessible before March 17. Audit your environment variables and secrets.

  3. Rotate everything. API keys, database passwords, cloud credentials—anything that was accessible to the Langflow process.

  4. Check for persistence. Attackers may have installed backdoors, cron jobs, or SSH keys.

  5. Put it behind authentication. Never expose Langflow directly to the internet. Use a reverse proxy with proper authentication.
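For steps 2 and 3, it helps to enumerate what the process could see. A small audit sketch (the name patterns are assumptions; extend them for your stack) that lists environment variable names resembling credentials without printing their values:

```python
import os
import re

# Heuristic name patterns only; the values themselves are never printed.
SECRET_HINT = re.compile(r"KEY|TOKEN|SECRET|PASSWORD|PASSWD|CREDENTIAL", re.I)

def suspect_env_names(environ=None):
    """Return env var names that look credential-like; rotate everything listed."""
    environ = os.environ if environ is None else environ
    return sorted(name for name in environ if SECRET_HINT.search(name))
```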

If you’re evaluating AI tools:

This is what happens when “move fast and break things” meets production AI deployments. Before adopting any AI framework:

  • Check if the API endpoints require authentication
  • Look for proper input validation and sandboxing
  • Review the project’s security track record
  • Don’t expose development tools to the public internet
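The first checklist item is easy to verify empirically. A sketch that probes one of your own deployments without credentials and checks that the server rejects the request (the path is an example, not a universal convention):

```python
import urllib.error
import urllib.request

def endpoint_requires_auth(base_url: str, path: str, timeout: float = 5.0) -> bool:
    """Return True if an unauthenticated request is rejected with 401/403.

    Only probe services you own. A 2xx answer means the endpoint is
    reachable without credentials.
    """
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + path,
                                    timeout=timeout) as resp:
            return not (200 <= resp.status < 300)
    except urllib.error.HTTPError as exc:
        return exc.code in (401, 403)
```

One caveat: urlopen follows redirects, so an endpoint that bounces to a login page can come back 200. Treat a False result as a prompt to investigate, not proof of exposure.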

The Bigger Picture

Langflow isn’t unique. The rush to deploy AI tools has left security as an afterthought across the industry. We’ve seen similar issues in vLLM, AnythingLLM, MS-Agent, and now Langflow—all within the past few weeks.

The pattern is consistent: endpoints that accept arbitrary code without authentication, rendering engines that don’t sanitize input, APIs that trust user-supplied data. These are mistakes the web development world largely solved a decade ago. The AI tool ecosystem is learning the same lessons the hard way.

Meanwhile, attackers have noticed. The 20-hour turnaround from advisory to exploitation shows they’re watching AI security disclosures closely. If your AI infrastructure has a public advisory, expect probes within a day.

The HiddenLayer 2026 AI Threat Landscape Report found that 1 in 8 companies reported AI breaches linked to agentic systems. As these tools become more central to business operations, the targets will only get more valuable.

Patch your Langflow instances. Better yet, put them behind authentication in the first place. The attackers are already looking.