Pentagon AI Deal: OpenAI Wins, Anthropic Loses
2 Mar
Summary
- Pentagon terminated Anthropic talks, citing supply chain risk.
- OpenAI secured a deal to provide AI for classified Pentagon systems.
- A disagreement over surveillance of Americans contributed to the breakdown of the Anthropic talks.

The Department of Defense has terminated negotiations with AI company Anthropic for a $200 million contract, labeling the firm a "supply chain risk." The decision stemmed from disagreements over the ethical use of AI, specifically Anthropic's refusal to allow its technology to be used for surveillance of Americans or in autonomous weapons lacking human oversight.
In contrast, the Pentagon swiftly reached a framework agreement with OpenAI. The deal allows OpenAI's AI to be used in classified systems, with OpenAI negotiating technical guardrails to preserve its safety principles. During the tense negotiations, the Pentagon's chief technology officer, Emil Michael, publicly accused Anthropic's CEO, Dario Amodei, of being "a liar."
Anthropic plans to sue the Pentagon over the "supply chain risk" designation, which has never before been applied to an American company. Officials within U.S. intelligence agencies, including the C.I.A., have privately urged a resolution, with some hoping the two sides will eventually reach a peace agreement.