AI Safety Leader Exits OpenAI Amid User Distress Concerns
24 Nov
Summary
- OpenAI's head of model policy research is leaving at year-end.
- Her departure follows lawsuits alleging ChatGPT harms mental health.
- The company aims to improve AI responses to distressed users.

Andrea Vallone, a key safety research leader at OpenAI, is set to leave the company by the end of 2025. She headed the model policy team, which focuses on how AI should respond to users experiencing mental health crises. Her departure comes as OpenAI faces mounting scrutiny and multiple lawsuits alleging that ChatGPT has contributed to users' mental health problems and fostered unhealthy emotional attachments.
Vallone's team was central to a recent report detailing consultations with more than 170 mental health experts. The report examined how many users may be experiencing mental health crises in a given week and outlined OpenAI's efforts to reduce harmful AI responses. The company has said that updates to its models have markedly improved how the system handles sensitive user interactions.
The leadership change follows the recent reorganization of another safety-focused team and reflects OpenAI's broader effort to balance user engagement with responsible AI behavior. As its user base grows, the company faces the difficult task of making its AI engaging without compromising safety, particularly for vulnerable users.