AI Safety Race Shifts: Competitors' Actions Now Matter
26 Feb
Summary
- Anthropic now considers competitors' actions before pausing model development.
- US Defense Department seeks broad AI tool usage from Anthropic.
- Anthropic previously prioritized absolute risk reduction regardless of others.

Anthropic, which originally championed an industry-wide "race to the top" in AI safety, announced Tuesday a significant shift in its safety practices. The company will now calibrate pauses in model development against competitors' actions, rather than against absolute risk alone. The change reflects a perceived shift in the federal policy environment toward prioritizing AI competitiveness and economic growth over safety-focused regulation.
Simultaneously, Anthropic is reportedly facing intense pressure from the U.S. Defense Department, which wants the company to permit use of its AI tools for wide-ranging purposes, including mass surveillance and autonomous weapons deployment without human oversight. Contract negotiations are ongoing, with reports that the military has threatened to sever the relationship if Anthropic does not concede.