AI Firm Fights Pentagon's 'Security Risk' Label
9 Apr
Summary
- US appeals court expedited Anthropic's lawsuit against the Pentagon.
- Pentagon designated Anthropic a national security supply chain risk.
- Dispute arose after Anthropic refused use for surveillance/autonomous weapons.

A US appeals court has ordered an expedited legal process for Anthropic's dispute with the Department of War. The AI company, creator of the Claude model, is challenging the Pentagon's designation of it as a national security supply chain risk. The label is unusual, as it is typically reserved for entities linked to foreign adversaries.
The appellate panel denied Anthropic's immediate request to halt the designation but acknowledged the seriousness of the AI startup's legal arguments. The court noted that imposing judicial oversight on the Department of War's procurement of vital AI technology during active conflict would place a substantial burden on military operations.
The conflict originated in February, when Anthropic raised concerns that the Pentagon might use its technology for mass surveillance or fully autonomous weapons systems. That stance reportedly angered Pentagon chief Pete Hegseth. Federal Judge Rita Lin had previously issued a temporary freeze on the designation, deeming the government's blacklisting likely unlawful and arbitrary.
Anthropic expressed gratitude for the expedited review and confidence that it will ultimately prove the designation unlawful. The company affirmed its commitment to working productively with the government while ensuring the responsible development and deployment of AI for all Americans.