AI Health Fails Critical Safety Tests
26 Feb
Summary
- AI health tool under-triaged over half of emergency cases.
- System failed to detect suicidal ideation in some scenarios.
- Experts warn of potential harm and loss of life.

A recent independent safety evaluation of ChatGPT Health has revealed significant shortcomings, with the AI platform frequently failing to identify the need for urgent medical care and missing instances of suicidal ideation.
The study, published in Nature Medicine, simulated nearly 1,000 patient scenarios. It found that ChatGPT Health under-triaged 51.6% of cases requiring immediate hospital attention, advising users to stay home or book routine appointments instead.
Researchers were particularly alarmed by the AI's failure to detect suicidal ideation when certain contextual information, such as normal lab results, was included alongside a user's statements. Experts warned that these deficiencies could lead to preventable harm and loss of life, and stressed the need for robust safety standards and regulatory oversight before such tools are used for medical guidance.