AI Health Advice: 50% Wrong, Study Finds
15 Apr
Summary
- Half of AI chatbot medical advice is problematic.
- Nearly 20% of AI health responses were highly problematic.
- Chatbots confidently gave flawed answers without accurate references.

A new study indicates that artificial intelligence chatbots provide users with incorrect medical advice about half the time, posing considerable health risks. Researchers evaluated five popular AI platforms by asking each 10 health-related questions across five categories.
The findings, published in BMJ Open, showed that approximately 50% of the responses were problematic, with nearly 20% deemed highly problematic. The chatbots performed better on straightforward questions and topics like vaccines and cancer, but struggled with open-ended prompts and areas such as nutrition and stem cells.
These AI systems often delivered answers with a high degree of confidence, even when the information was inaccurate. Notably, no chatbot provided a complete and accurate reference list for any prompt. The study authors emphasized the behavioral limitations of these systems and the necessity of re-evaluating their deployment in public-facing health communication.
Concerns are mounting over the widespread use of generative AI for health guidance, as these platforms are not licensed medical advisors. OpenAI reports that over 200 million users ask ChatGPT health questions weekly, and companies like Anthropic are also expanding their health care offerings.