Pentagon Threatens AI Firm Anthropic Over Military Contracts
25 Feb
Summary
- Pentagon warned Anthropic of contract termination if terms aren't met.
- Dispute centers on Anthropic's guardrails for its Claude AI tool.
- Up to $200 million in military work is at risk for the AI startup.

The Pentagon has issued a stern warning to AI startup Anthropic PBC, threatening to terminate existing military contracts if the company does not comply with government terms for the use of its technology. This ultimatum comes after a meeting between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth.
The core of the conflict lies in Anthropic's demand for specific guardrails on its Claude AI tool. These include restrictions against using its products for autonomous targeting of enemy combatants or mass surveillance of US citizens. The potential termination of contracts puts up to $200 million in military work at risk.
Anthropic, which is valued at approximately $380 billion, was the first AI company cleared to handle classified materials within the US government. Its Claude Gov tool is popular among Pentagon officials, though it faces competition from rivals like OpenAI and Google's DeepMind.
The Pentagon's concern was heightened by questions surrounding Anthropic's AI use during a special forces operation in early January. Anthropic has said it neither discussed specific operations nor raised concerns outside of routine technical discussions. The dispute emerges as the Pentagon aims to become an "AI-first" force, urging reduced bureaucratic barriers to AI adoption.