Pentagon Bans Anthropic AI Over Safeguards
28 Feb
Summary
- Pentagon labeled Anthropic a supply-chain risk.
- OpenAI will deploy AI on Pentagon's classified network.
- Anthropic refused to change stance on surveillance/weapons.

The Pentagon has declared Anthropic a significant supply-chain risk, giving the AI company a six-month transition period to move its services to another provider. The decision stems from an escalating disagreement between Anthropic and defense officials over safeguards on AI technologies, specifically those concerning surveillance and autonomous weapons.
In parallel, OpenAI has secured an agreement to deploy its artificial intelligence models on the Defense Department's classified network. OpenAI CEO Sam Altman said the arrangement upholds the company's principles against domestic mass surveillance and ensures human responsibility for the use of force, including autonomous weapons.
Anthropic has held its position, asserting that no amount of pressure will alter its stance against mass domestic surveillance or fully autonomous weapons. The conflict highlights a broader tension between major tech companies and government agencies over the ethical deployment of advanced AI, particularly in military applications. Both Anthropic and OpenAI are also pursuing profitability and potential IPOs, underscoring the increasing commercialization of AI development.
