AI Chatbots Fail Crisis Tests
10 Dec
Summary
- Many AI chatbots failed to provide accurate crisis resources.
- ChatGPT and Gemini provided correct crisis information.
- Experts warn of potential harm from flawed AI safety features.

Several popular AI chatbots have demonstrated significant shortcomings in providing mental health crisis support. When prompted with scenarios involving distress and self-harm, many failed to offer accurate or locally relevant crisis hotline numbers, a critical failure identified in testing over the past week. These lapses occurred despite claims from companies such as OpenAI, Meta, and Character.AI that safety features are in place.
While flagship models such as ChatGPT and Gemini successfully provided accurate crisis resources for the user's location, other prominent chatbots, including Meta AI and Replika, exhibited concerning failures. These ranged from refusing to respond at all to providing geographically irrelevant information; Meta AI's errors were attributed to a reported technical glitch requiring a fix. Specialized mental health AI apps also struggled, often defaulting to US-specific resources.
Experts emphasize that such failures can be dangerous, potentially increasing feelings of hopelessness for users in acute distress. They highlight the need for more nuanced and active AI responses, including better location-based resource provision and crisis escalation plans, rather than passive or unhelpful interactions. The current approach risks introducing critical friction during moments when users are most vulnerable.
