Anthropic Vows to Sue Pentagon After Supply Chain Risk Designation

AI company Anthropic is taking legal action after the US Department of Defense officially designated it a supply chain risk, effectively banning military contractors from using Claude — a move that has drawn bipartisan criticism and upended the AI industry's relationship with Washington.

Anthropic has vowed to sue the Pentagon after the US Department of Defense officially designated the AI company a supply chain risk — a move that, in practice, prohibits any contractor working with the military from doing business with the firm.

How It Escalated

The designation came swiftly and without warning. According to Anthropic CEO Dario Amodei, the company received a letter from the Department of Defense just a day before the designation became "effective immediately." No advance notice was given, and Anthropic says it received no communication from the White House or Pentagon ahead of the public announcements.

The roots of the dispute lie in a disagreement over the terms under which the military can use AI models. Anthropic reportedly refused to allow the Department of Defense to use Claude for certain purposes it considered ethically out of bounds — and that refusal, rather than any security concern, appears to be at the heart of the matter. A Pentagon official framed it differently: "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes."

The Political Dimension

What makes this story unusual is how openly political it has become. President Donald Trump posted on Truth Social directing all federal agencies to stop using Anthropic entirely. Defense Secretary Pete Hegseth followed with his own announcement on X, saying Anthropic would be "immediately" designated a supply chain risk. Neither post came with prior notice to the company.

Those familiar with Anthropic suggest the animus stems partly from the fact that CEO Dario Amodei, unlike many of his Silicon Valley peers, has not donated large sums to Trump or publicly praised the administration. In the current political climate, that neutrality appears to have been read as opposition.

Senator Kirsten Gillibrand was blunt: "Designating Anthropic as a supply chain risk was shortsighted, self-destructive, and a gift to our adversaries. The government openly attacking an American company for refusing to compromise its own safety measures is something we expect from China, not the United States."

The Fallout — and the Opportunity

As Anthropic's relationship with the US military collapsed, OpenAI moved in. CEO Sam Altman announced a new Department of Defense contract with what he described as "more guardrails than any previous agreement for classified AI deployments, including Anthropic's" — a pointed comparison.

Meanwhile, Microsoft confirmed it will continue embedding Claude in products for non-defense clients, saying: "Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers" for non-defense work.

Anthropic itself remains a significant force. Its chief product officer confirmed that "more than a million people" are signing up for Claude every day, and Claude remains the most downloaded AI app in several countries. The legal challenge is expected to centre on whether the designation exceeded the Secretary's authority under the requirement to use "the least restrictive means necessary" to protect the supply chain.

Source: BBC News
