OpenAI Boosts AI Safety Amid Mental Health Lawsuits
12 Dec
Summary
- New GPT-5.2 model shows improved responses to users in mental health distress.
- OpenAI faces lawsuits alleging AI exacerbated users' mental health issues.
- New safeguards aim to protect minors and reduce harmful AI interactions.

OpenAI has released its latest AI model, GPT-5.2, emphasizing significant advancements in mental health safety features. The company states this iteration offers stronger responses to sensitive conversations, particularly concerning signs of suicide, self-harm, and emotional reliance on the AI.
The release comes amid mounting legal scrutiny and public criticism: OpenAI faces lawsuits alleging that its AI contributed to users' mental health crises, including instances of suicide. OpenAI has disputed these claims, asserting that its AI directed users to seek help and was misused.
Beyond mental health, GPT-5.2 is less likely to refuse requests for mature content, though OpenAI maintains that its age-specific safeguards for minors remain effective and are being further strengthened with new tools and parental controls.