AI Labs Fail Safety Test: A Failing Grade for the Industry
4 Dec
Summary
- Leading AI developers received low grades for safety policies.
- Existential safety received particularly dismal scores.
- Experts warn of a widening gap between AI capability and safety.

A recent assessment by the Future of Life Institute has revealed that major AI development companies are underperforming in crucial safety areas. Prominent labs such as Google DeepMind, Anthropic, and OpenAI received grades hovering around a C, indicating significant deficiencies in their safety policies and practices.
The study evaluated the companies on six criteria, including governance and accountability, with a particularly concerning outcome in "existential safety." This category, which assesses preparedness for extreme risks from advanced AI, saw dismal scores across every lab evaluated, underscoring a critical gap between AI's rapidly advancing capabilities and the safety measures meant to contain them.
Experts warn that this lack of robust safeguards and independent oversight leaves the industry structurally unprepared for the risks it is actively creating. The report urges companies to move beyond lip service and implement concrete, evidence-based safeguards against worst-case scenarios.