OpenAI Launches 'Trusted Contact' for Mental Health
7 May
Summary
- New feature allows users to designate a contact for self-harm concerns.
- It aims to connect AI users with vital real-world care and support.
- This follows lawsuits and investigations into ChatGPT's responses to distress.

OpenAI has introduced a new safety feature called 'Trusted Contact' for its AI product, ChatGPT. Launched on May 7, 2026, the optional tool lets users designate an adult contact who will be notified if the user expresses serious concerns about self-harm or suicide.
The company said its goal is to ensure AI systems connect individuals with crucial real-world care and relationships. The initiative comes amid mounting legal and public pressure: OpenAI faces multiple lawsuits alleging that ChatGPT gave harmful responses to users in psychological distress, in some cases preceding their deaths.
In addition, the state of Florida is investigating allegations that ChatGPT was involved in criminal behavior, including the promotion of suicide and self-harm. OpenAI said the 'Trusted Contact' feature was developed with input from experts, including the American Psychological Association.