AI's Dangerous Health Advice Sparks Mental Health Inquiry
20 Feb
Summary
- Mind charity launches global inquiry into AI and mental health risks.
- Google AI Overviews provided dangerous medical advice to users.
- Inquiry aims to create safer digital mental health ecosystem.

The charity Mind has launched a major global inquiry into artificial intelligence and mental health. The year-long commission will investigate the risks AI poses to people with mental health conditions worldwide, and the safeguards needed as the technology becomes more deeply embedded in their lives.
The inquiry was initiated after a Guardian investigation revealed that Google's AI Overviews, seen by 2 billion people each month, had presented "very dangerous" and inaccurate medical advice. Despite Google's claims of reliability, experts warned that the AI-generated summaries could cause harm, deter people from seeking treatment, reinforce stigma and, in severe cases, endanger lives.
Mind CEO Dr. Sarah Hughes stated that AI holds potential for improving mental health support but stressed the need for responsible development and deployment with appropriate safeguards. The commission aims to ensure innovation does not compromise well-being, prioritizing the input of individuals with lived experience in shaping digital mental health support.
The initiative seeks to create a safer digital mental health ecosystem through strong regulation and standards. Experts noted that while online searches for mental health information have always had limitations, AI Overviews replace nuanced results with a single, seemingly definitive yet untrustworthy summary, offering brevity at the cost of credibility.