Yesterday, the U.S. Department of Defense signed classified network AI agreements with eight companies: OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, SpaceX, Reflection, and Oracle.
One name is conspicuously absent: Anthropic. The company that was the first to deploy AI on Pentagon classified networks — before any of those eight — is now excluded.
Here's what happened.
TL;DR
- 2025: Claude became the first AI deployed on Pentagon classified networks via Palantir's Maven toolkit; Anthropic signed a $200M DoD contract
- Feb 27, 2026: Pentagon demanded Anthropic remove safety guardrails covering autonomous weapons and mass surveillance — or lose the contract
- Anthropic refused → Pentagon designated it a "supply chain risk"; Trump ordered federal agencies to cut ties
- March 26: Federal judge blocked the designation, ruling it unconstitutional retaliation
- May 1, 2026: Pentagon signed deals with eight other companies. Anthropic is still out.
Photo by Unsplash | The Pentagon's AI partnerships now include OpenAI, Google, Microsoft, and five others, but not Anthropic
What the Pentagon Demanded
The DoD's requirement was framed simply: all AI models used by the military must be available for "any lawful purpose." Every company on the classified network contract agreed to this condition.
Anthropic refused to budge on two specific points:
1. Autonomous weapons systems — AI controlling weapons without human oversight. Anthropic's position: current AI is not reliable enough to operate weapons systems.
2. Mass domestic surveillance — using Claude for large-scale surveillance of American citizens. No adequate legal framework governs this use, per Anthropic.
All eight companies that signed, Oracle included, accepted the condition. Anthropic did not.
"Supply Chain Risk" — A Label Reserved for Foreign Adversaries
Timeline
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Early 2025 → Claude first deployed on DoD classified networks
Late 2025 → Anthropic signs $200M Pentagon contract
Feb 27 2026 → Pentagon issues ultimatum: remove safety guardrails
Feb 27 2026 → Anthropic refuses → "supply chain risk" designation
→ Trump orders federal agencies to cut ties
Mar 26 2026 → Federal judge blocks designation (unconstitutional)
Apr 2026 → White House reopens talks after Anthropic reveals Mythos
May 1 2026 → Pentagon signs deals with 8 companies — Anthropic excluded
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
On February 27, Defense Secretary Pete Hegseth formally designated Anthropic a "supply chain risk" — a label previously reserved for entities like Chinese and Russian state-linked firms. This was its first application to a U.S. company.
The same day, President Trump ordered federal agencies to cease using Anthropic products.
Anthropic sued the federal government, arguing the designation was unconstitutional retaliation for exercising its right to set usage terms.
On March 26, a federal judge in California granted an injunction blocking the supply chain risk designation. The ruling: the Pentagon's action violated Anthropic's constitutional rights.
Anthropic won legally. But a court win and a DoD contract are different things.
Photo by Unsplash | AI safety guardrails have moved from theoretical discussion to billion-dollar business decisions
The Court Win That Didn't Solve Everything
The Pentagon proceeded to negotiate with other providers. Yesterday's announcement made it official: classified network AI access goes to eight companies — none of which is Anthropic.
The White House reportedly reopened discussions with Anthropic in recent weeks, partly after Anthropic unveiled Mythos, a cybersecurity threat identification tool. Whether that leads to a classified network agreement remains to be seen.
The business math is stark. The Pentagon is one of the world's largest single buyers of AI services. Ceding that market — along with the data pipelines, model feedback loops, and deployment infrastructure that come with it — to OpenAI, Google, and Microsoft is not a trivial cost.
Anthropic ran that calculation. It chose to hold the line anyway.
What This Means for Developers
A few things worth considering if you're building with any of these models:
The "safety AI company" identity has real costs. Anthropic has consistently positioned itself differently from OpenAI on safety. This is the bill for that positioning — not a theoretical trade-off, but a concrete one measured in hundreds of millions of dollars and an entire government market.
Every AI tool reflects a set of values. If you're building with Claude, this week made those values more legible. If you're building with OpenAI or Gemini, those companies made a different choice. Neither fact is hidden anymore.
Military AI governance is now a developer-adjacent issue. Autonomous weapons, surveillance AI, and "lawful use" policies are no longer abstract policy questions. They're supply chain decisions that affect which models are available, how they're deployed, and what they're trained on going forward.
The Anthropic IPO planned for October adds another layer. How investors price this principled stance — versus the revenue it cost — will be worth watching.
The story isn't over. Talks with the White House have resumed, and the court sided with Anthropic. Whether the DoD eventually reopens the door, or this exclusion becomes permanent, is still an open question.
What's clear is that Anthropic made a choice — and that choice is now on the record.
Sources
- Pentagon strikes deals with 7 Big Tech companies after shunning Anthropic — CNN Business, May 1, 2026
- Pentagon clears 8 tech firms to deploy their AI on its classified networks — Breaking Defense, May 1, 2026
- Judge blocks Pentagon's effort to punish Anthropic — CNN Business, March 26, 2026
- Pentagon formally designates Anthropic a supply chain risk — CBS News, 2026
Related reading:
- Anthropic IPO 2026: Google's $40B Investment, $30B Revenue, the $1T Valuation — Anthropic's growth trajectory and what the IPO means
- Anthropic Hits $19B Revenue + Google TPU Deal — The commercial side of Anthropic's expansion