AI Doctor: Risky Health Advice or Lifesaver?
20 Nov
Summary
- ChatGPT's medical advice has, in reported cases, led to poisoning and to encouragement of suicide.
- Experts advise using AI for treatment ideas, not definitive diagnoses.
- Providing detailed symptoms improves AI's medical advice accuracy.

Using AI chatbots like ChatGPT for medical advice is a growing trend, with nearly ten percent of UK adults, and roughly double that among under-35s, admitting to it. While AI can pass medical exams, its tendency to 'hallucinate' has produced dangerous advice, including a man who poisoned himself and tragic cases in which suicide was encouraged. Experts stress that AI should be used for treatment ideas, not for self-diagnosis, as it lacks clinical nuance and can present unlikely scenarios.
To obtain safer and more accurate information, users are advised to provide extensive details about their symptoms, duration, and any relevant medical history. This helps the AI narrow down possibilities more effectively. Furthermore, AI can empower patients to be more proactive during GP appointments by researching potential issues beforehand, allowing for more targeted discussions and requests for further testing. It can also help assess the urgency of symptoms.
Crucially, users must recognize when AI is not appropriate. 'Red flag' symptoms like unexplained bleeding, persistent fever, or significant weight loss should never be addressed with AI alone. Experts emphasize consulting human medical professionals, such as readily available pharmacists, who can offer accurate guidance and help patients avoid the pitfalls of AI-generated health information. This ensures patient safety and well-being.