AI Medical Advice: Source Matters for Accuracy
10 Feb
Summary
- AI tools fail more often when medical misinformation comes from an authoritative-seeming source.
- Simulated doctors' discharge notes, not social media posts, were more likely to trick AI.
- AI believed false hospital notes nearly 47% of the time.

A recent study indicates that artificial intelligence tools are more likely to provide erroneous medical advice when the misinformation stems from a source the AI deems authoritative.
Testing revealed that AI software was more readily misled by errors in simulated doctors' discharge notes than by inaccuracies found in social media discussions. This suggests that current AI systems may prioritize confident presentation over factual accuracy in medical contexts.
The research exposed AI to various content types, including fabricated hospital discharge summaries and common health myths. The AI tools accepted false information from authoritative-looking sources nearly 47% of the time, well above the 32% rate at which they believed misinformation overall.
Conversely, AI displayed greater skepticism towards social media content: misinformation propagation dropped to 9% when the false claims came from platforms like Reddit. The study also noted that the tone of user prompts influenced susceptibility, with authoritatively phrased queries making the AI more likely to accept false claims.
While AI holds potential for assisting clinicians and patients, the study emphasizes the need for built-in safeguards to verify medical claims before they are presented as fact. This research underscores critical areas for strengthening AI systems before their integration into healthcare.