ChatGPT Adds Trusted Contact for Mental Health Support
8 May
Summary
- Adult ChatGPT users can designate a trusted contact for self-harm alerts.
- If the AI detects concerning discussions of self-harm, it can notify the chosen contact.
- This aims to encourage connection and support for users in distress.

OpenAI has introduced a new mental health safety feature for adult ChatGPT users. The "trusted contact" system allows users to designate an adult who may be notified if the AI detects discussions about self-harm.
This feature, developed with guidance from mental health experts, aims to foster connections with people users already trust. It's designed as an additional layer of support, not a substitute for professional help or crisis services.
When the AI flags a potential self-harm discussion, users are first prompted to reach out to their designated contact themselves. Trained reviewers then assess the situation before any alert is sent. To protect user privacy, notifications omit specific chat details.