Anthropic Wins: Judge Calls Pentagon's AI Blacklist 'Orwellian' in Landmark First Amendment Ruling

Federal judge blocks Trump administration's supply chain risk designation against Anthropic, calling it 'classic illegal First Amendment retaliation.' The ruling sets a precedent for AI companies that refuse unrestricted military use of their models.


Judge Rita Lin didn’t mince words.

In a 43-page ruling issued Thursday, the federal judge blocked the Trump administration from designating Anthropic as a “supply chain risk” — a label normally reserved for foreign adversaries that the Pentagon had, for the first time, applied to an American AI company.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin wrote.

The ruling is a significant victory for Anthropic and establishes precedent that could shape how AI companies navigate military contracts for years to come.

What the Judge Found

Lin granted Anthropic’s request for a preliminary injunction, halting both the supply chain risk designation and President Trump’s directive ordering all federal agencies to stop using Claude.

Her reasoning centered on the First Amendment. The judge found that the government’s actions appeared designed to punish Anthropic for its public stance on AI safety — specifically, its refusal to allow Claude to be used for fully autonomous weapons or mass surveillance of Americans.

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin wrote.

The ruling also addressed what Lin called a “troubling” gap in the government’s logic: “If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic.”

The “Very Close” Problem

The government’s case suffered from a timing problem that Lin found difficult to ignore.

Court documents revealed that on March 4 — the day after the Pentagon finalized its supply chain risk designation — Under Secretary Michael emailed Anthropic CEO Dario Amodei to say the two sides were “very close” on the issues the government now cites as security threats.

Those issues: Anthropic’s positions on autonomous weapons and mass surveillance.

If the Pentagon and Anthropic were nearly aligned on March 4, how could the company have been branded an “unacceptable risk to national security” just one day earlier? Lin’s ruling suggests the designation was political retaliation, not a genuine risk assessment.

Timeline of the Dispute

The conflict began last July, when Anthropic signed a $200 million contract with the Pentagon — becoming the first AI lab to deploy its technology across the department’s classified networks.

Problems emerged in September during negotiations over Claude’s deployment on the DOD’s GenAI.mil platform. The Pentagon wanted the right to use Claude “for all lawful purposes,” without restriction. Anthropic wanted assurances that Claude wouldn’t be used for fully autonomous lethal systems or domestic surveillance.

Defense Secretary Pete Hegseth issued an ultimatum in late February: drop the restrictions or lose the contract. Anthropic refused. Within days, Trump posted on Truth Social ordering federal agencies to “immediately cease” using Anthropic’s technology, and the Pentagon designated the company a supply chain risk.

The blacklisting triggered a cascade of consequences. Defense contractors dropped Claude. Federal agencies scrambled to transition to alternatives. OpenAI signed a Pentagon deal days later — reportedly without the ethical restrictions Anthropic had demanded.

Who Backed Anthropic

The coalition supporting Anthropic grew remarkably broad during the legal fight:

Microsoft filed a brief urging the judge to halt the Pentagon’s actions, arguing that the supply chain risk designation sets a dangerous precedent for any tech company that places ethical limits on government use of its products.

Twenty-two former high-ranking military officials — including retired generals and admirals — signed on to a brief supporting Anthropic. They argued the Pentagon’s approach undermines rather than strengthens national security.

More than 30 Google DeepMind and OpenAI employees filed an amicus brief in their personal capacities supporting Anthropic’s position on AI safety.

The government stood largely alone, defending its position with arguments about Anthropic’s potential “future conduct” and the theoretical risk that the company could “disable or modify” its AI during wartime.

What Happens Next

Lin stayed her order for seven days to give the government time to appeal. The Justice Department hasn’t commented on whether it will appeal to the Ninth Circuit.

The injunction doesn’t require the Pentagon to use Anthropic’s products. It simply bars the government from enforcing the supply chain risk designation and the presidential directive while the case proceeds.

A separate, narrower case remains pending before the federal appeals court in Washington, D.C.

If the government appeals and loses, this ruling could establish lasting precedent: AI companies can maintain ethical limits on their technology without being treated as foreign adversaries.

What This Means for AI Companies

The implications extend far beyond Anthropic.

Before this ruling, the message to AI companies was clear: set ethical restrictions on government use, and risk being branded a national security threat. OpenAI’s quick Pentagon deal — signed days after Anthropic was blacklisted — suggested the industry would fold rather than fight.

Lin’s ruling changes that calculus. Her finding that the blacklisting likely constituted First Amendment retaliation means AI companies can publicly advocate for ethical limits without fear of being designated supply chain risks.

That doesn’t mean the government can’t choose different vendors. The Pentagon remains free to work with AI companies that offer unrestricted access. But it cannot weaponize national security designations to punish companies that refuse.

The “Orwellian” language in Lin’s ruling is particularly notable. By explicitly calling out the government’s framing, she’s signaling that courts will scrutinize attempts to label American AI companies as adversaries simply for having ethical positions.

The Broader Stakes

This case was never just about one contract. It’s about whether AI companies can set any limits at all.

Anthropic’s red lines — no fully autonomous weapons, no mass surveillance of Americans — are positions shared by many in the AI safety community. If the government could punish companies for holding these positions, it would create enormous pressure to abandon safety commitments.

The ruling suggests courts won’t allow that. Public advocacy for AI safety is protected speech. Refusing certain applications is a legitimate business decision. Treating these as threats to national security crosses a constitutional line.

None of this means the fight is over. The government may appeal. The full case still needs to be decided. But for now, AI companies that want to maintain ethical limits have a precedent on their side.

Lin’s ruling establishes that being blacklisted for safety advocacy isn’t just bad policy — it’s unconstitutional.