AI Safety Team Tackles Chemical Weapons Risk
21 Mar
Summary
- Anthropic seeks an expert to lead AI policy on chemical weapons and high-yield explosives.
- Role aims to prevent catastrophic misuse of AI technology.
- Company prohibits AI use for developing weapons.

Anthropic has posted a job listing for a Policy Manager, Chemical Weapons and High-Yield Explosives, drawing attention online. The role is designed to shape how AI systems handle sensitive information related to dangerous materials, working alongside safety researchers to prevent catastrophic misuse.
The New York-based manager will be responsible for building and enforcing safeguards, since Anthropic's usage policies strictly prohibit using its products to develop or design weapons. The company is seeking experts to ensure its AI technology remains secure and beneficial.
This recruitment comes amid a public disagreement between Anthropic and the Department of Defense over the use of AI in autonomous weapons and mass surveillance. The Pentagon, citing national security concerns, has banned its use of Anthropic's technology following a six-month phase-out.
Anthropic previously updated its Responsible Scaling Policy in February, citing factors such as the federal government's prioritization of economic growth over safety regulations. The policy manager will navigate this complex and evolving landscape of AI safety and national security.