Tomorrow at 9am Pacific, Judge Rita Lin will hear arguments in what has become the defining legal battle over AI ethics and military use. Anthropic is asking the court to halt the Trump administration’s designation of the company as a “supply chain risk” — a label that effectively bars it from all federal contracts.
The hearing comes after a week of explosive filings that raise serious questions about the government’s rationale for the blacklisting.
The “Nearly Aligned” Email
The most damaging revelation emerged March 20. Court documents show that on March 4 — the day after the Pentagon formally finalized its supply chain risk designation — Under Secretary Michael emailed Anthropic CEO Dario Amodei to say the two sides were “very close” on the two issues the government now cites as threats to national security.
Those issues: Anthropic’s refusal to allow its models to be used for autonomous weapons or for mass surveillance of Americans.
This timing matters. If the Pentagon and Anthropic were nearly aligned on March 4, how was the company already an “unacceptable risk” the day before, when the designation was finalized? Anthropic’s lawyers argue the email proves the blacklisting was political retaliation, not a genuine security assessment.
Who’s Taking Sides
The coalition backing Anthropic has grown remarkably broad:
Microsoft filed a brief urging the judge to halt the Pentagon’s actions. The company argued that the supply chain risk designation sets a dangerous precedent for any tech company that places ethical limits on government use of its products.
Twenty-two former high-ranking U.S. military officials — including retired generals and admirals — signed on to a brief supporting Anthropic. They argue the Pentagon’s approach undermines rather than strengthens national security by alienating American AI companies.
More than 30 OpenAI and Google DeepMind employees filed an amicus brief in their personal capacities, supporting Anthropic’s position on AI safety and ethical use.
The government stands largely alone, with its defense resting on the argument that Anthropic’s refusal to grant unrestricted military access is “conduct, not speech” — and therefore not protected by the First Amendment.
The Legal Arguments
Anthropic’s case centers on two claims:
First Amendment retaliation: The company argues it was punished for its publicly stated views on AI safety. The supply chain risk designation — normally reserved for foreign adversaries — was applied to an American company for the first time, in retaliation for speech.
Arbitrary and capricious action: Even if the government had legitimate concerns, Anthropic argues the process was fundamentally flawed. The “nearly aligned” email suggests the designation had nothing to do with actual risk assessment.
The government counters that Anthropic’s refusal to drop ethical guardrails constitutes business conduct, not protected speech. It also raises a national security argument: Anthropic’s AI could be “disabled or modified” to serve the company’s interests over America’s during wartime.
What Happens Tomorrow
Judge Lin will rule on Anthropic’s request for a preliminary injunction. If granted, the supply chain risk designation would be paused while the full case proceeds — potentially for months or years.
A denial would leave the blacklisting in place, cementing a precedent: AI companies that refuse unrestricted military use of their products risk being labeled national security threats.
The Bigger Picture
Whatever Lin decides tomorrow, this case has already exposed a fundamental tension in American AI policy. The government wants AI systems it can deploy without restrictions. AI companies increasingly want guardrails they can point to.
The Pentagon’s argument — that it cannot trust an AI company that won’t agree to “all lawful purposes” — suggests any ethical limits could become grounds for exclusion. OpenAI, which signed a Pentagon deal days after Anthropic was blacklisted, apparently agreed to those terms.
If the government wins, the message to AI companies is clear: ethics are a liability in the defense market.
If Anthropic wins, the ruling establishes that AI companies can maintain safety commitments without being treated as foreign adversaries.
The hearing begins tomorrow at 9am Pacific in San Francisco federal court.