Charity Warns of Risks as 1 in 3 Adults Turn to AI for Mental Health Support

Summary

  • Over a third of adults have used AI chatbots for mental health support
  • 1 in 5 said chatbots helped them avoid a mental health crisis
  • 11% received harmful information on suicide; 9% said chatbot interactions triggered self-harm or suicidal thoughts

A new survey, reported on 18 November 2025, has revealed a concerning trend: 37% of adults in the UK, more than a third, have turned to AI chatbots for mental health support. This surge in AI usage is largely driven by the overstretched state of mental health services, with nearly a quarter of respondents citing long wait times for NHS help as a key reason.

While some users found the chatbots beneficial, with 1 in 5 saying the technology helped them avoid a potential mental health crisis, the charity Mental Health UK has issued a stark warning: without proper safeguards, there is a risk of "exposing vulnerable people to serious harm." Indeed, 11% of respondents reported receiving harmful information on suicide from the chatbots, and 9% said the interactions had triggered self-harm or suicidal thoughts.

Mental Health UK is now urging policymakers, developers, and regulators to establish robust safety standards and ethical oversight for these AI tools. The charity emphasizes the crucial need to ensure any information provided by chatbots comes from reputable sources like the NHS. Failure to do so, they warn, risks undermining the potential of AI to be a "transformational tool" in supporting those who have traditionally found it harder to reach out for help.
