AI Safety Staff: A Flight Crew?
18 Mar
Summary
- Few safety staff at major AI labs.
- Investment heavily favors capabilities over safety.
- Lack of transparency from AI companies.

The development of advanced artificial intelligence is marked by a stark imbalance: far fewer people work on AI safety than on advancing AI capabilities. An investigation by Glass.ai identified only 373 individuals across OpenAI, Google DeepMind, Anthropic, and xAI working full-time on AI safety and trustworthiness.
That figure contrasts sharply with the more than 11,000 employees estimated to work at these four leading AI labs. The safety roles identified encompass aligning AI with human values, securing systems against misuse, and preventing adverse psychological effects on users. Despite pushes for transparency, AI firms have largely kept their internal staffing and development processes secret.
Companies like OpenAI and Anthropic have defended their safety efforts, saying that safety is embedded across their operations rather than siloed in a single team. However, past initiatives, such as OpenAI's Superalignment team, have dissolved, and former employees have voiced concerns that the companies prioritize engagement over user welfare and tolerate a culture that overlooks safety in favor of progress.
This rapid deployment of AI with what appears to be modest safety infrastructure stands in contrast to industries such as nuclear energy and aviation, whose safety regimes took decades to build and often emerged only after catastrophic failures. The current lack of public accountability and transparency around AI safety metrics is a notable departure from those established industry practices.
