AI Chatbots Pose Serious Risks to Mental Health, Experts Warn
14 Nov
Summary
- APA warns against over-reliance on AI chatbots for mental health support
- Several lawsuits filed against AI companies after incidents of mishandling mental health crises
- APA recommends companies prioritize user privacy, prevent misinformation, and create safeguards

In a recent advisory, the American Psychological Association (APA) outlines the dangers of consumer-facing AI chatbots and offers recommendations to address the growing reliance on these technologies for mental health support.
The APA's advisory highlights how AI chatbots, while readily available and free, are poorly equipped to handle users' mental health needs. The report cites several high-profile incidents, including a lawsuit filed against OpenAI after a teenage boy died by suicide following conversations with ChatGPT about his feelings and suicidal ideation.
The APA warns that by validating and amplifying unhealthy ideas or behaviors, some AI chatbots can actually aggravate a person's mental illness. The advisory also underscores the risk that these chatbots create a false sense of therapeutic alliance, despite being trained on clinically unvalidated information from across the internet.
To address these concerns, the APA places the onus on the companies developing these chatbots to prevent unhealthy relationships with users, protect user data, prioritize privacy, curb misinformation, and build safeguards for vulnerable populations. The association also calls on policymakers and stakeholders to encourage AI and digital literacy education and to prioritize funding for scientific research on the impact of generative AI chatbots and wellness apps.
Ultimately, the APA urges the deprioritization of AI as a solution to the mental health crisis, emphasizing the urgent need to fix the foundational systems of care.
