Charity Warns of Risks as AI Chatbots Become Mental Health Lifeline
18 Nov
Summary
- Over a third of adults have used AI chatbots for mental health support
- 11% received harmful information on suicide, 9% experienced self-harm triggers
- Charity calls for urgent safeguards to ensure AI uses only reputable sources

As of November 18th, 2025, a concerning trend has emerged: overstretched mental health services are driving people to seek support from AI chatbots. A recent survey by the charity Mental Health UK found that more than a third of adults (37%) have used the technology to manage their wellbeing.
While some users reported benefits, such as avoiding a potential mental health crisis or being directed to suicide prevention hotlines, the data also reveals significant risks. Alarmingly, 11% of respondents said they had received harmful information about suicide from the chatbots, and 9% said the chatbots had triggered self-harm or suicidal thoughts.
Mental Health UK is now urgently calling for safeguards. The charity's chief executive, Brian Dow, warns that unless AI tools are required to draw only on information from reputable sources such as the NHS, "we risk exposing vulnerable people to serious harm." The survey also found that most people are turning to general-purpose platforms like ChatGPT rather than mental health-specific programs, further heightening concerns.