feedzop-word-mark-logo
searchLogin
Feedzop
homeFor YouUnited StatesUnited States
You
bookmarksYour BookmarkshashtagYour Topics
Trending
Terms of UsePrivacy PolicyAboutJobsPartner With Us

© 2026 Advergame Technologies Pvt. Ltd. ("ATPL"). Gamezop ® & Quizzop ® are registered trademarks of ATPL.

Gamezop is a plug-and-play gaming platform that any app or website can integrate to bring casual gaming for its users. Gamezop also operates Quizzop, a quizzing platform, that digital products can add as a trivia section.

Over 5,000 products from more than 70 countries have integrated Gamezop and Quizzop. These include Amazon, Samsung Internet, Snap, Tata Play, AccuWeather, Paytm, Gulf News, and Branch.

Games and trivia increase user engagement significantly within all kinds of apps and websites, besides opening a new stream of advertising revenue. Gamezop and Quizzop take 30 minutes to integrate and can be used for free: both by the products integrating them and end users

Increase ad revenue and engagement on your app / website with games, quizzes, astrology, and cricket content. Visit: business.gamezop.com

Property Code: 5571

Home / Technology / AI Trusts Fake Health Claims: A Study Reveals Vulnerability

AI Trusts Fake Health Claims: A Study Reveals Vulnerability

10 Feb

•

Summary

  • Large language models accept fake medical claims 32% of the time.
  • Model performance varies, with weaker systems falling for claims more often.
  • Medical fine-tuned models underperform compared to general AI systems.
AI Trusts Fake Health Claims: A Study Reveals Vulnerability

Large language models (LLMs) demonstrate a significant vulnerability to medical misinformation, a new study published in The Lancet Digital Health reveals. These advanced AI systems can mistakenly repeat false health information when it is presented in realistic medical language. Researchers analyzed over a million prompts across 20 leading AI models, including those from OpenAI, Meta, and Google.

The study found that LLMs accepted made-up health claims approximately 32% of the time. However, the susceptibility varied greatly among models. Less advanced systems believed false claims over 60% of the time, while more robust models like ChatGPT-4o accepted them only about 10% of the time. Notably, AI models specifically fine-tuned for medical applications consistently underperformed compared to general-purpose models.

This research underscores that LLMs often prioritize the confident presentation of a medical claim over its factual accuracy. Examples of accepted misinformation include claims that Tylenol causes autism, rectal garlic boosts immunity, or tomatoes are as effective as blood thinners. The findings highlight an urgent need for robust safeguards within AI systems to verify medical claims before they are integrated into healthcare, ensuring patient safety and maintaining trust in AI applications.

trending

Salesforce lays off 1000

trending

India US trade tariffs slashed

trending

Margot Robbie's Wuthering Heights panned

trending

CBSE board exams: key details

trending

Jana Nayagan movie court case

trending

Dhakshineswar Suresh Davis Cup hero

trending

Deepika Padukone wears Gaurav Gupta

trending

NZ vs UAE match prediction

trending

iPhone 17 Croma Valentine's sale

Disclaimer: This story has been auto-aggregated and auto-summarised by a computer program. This story has not been edited or created by the Feedzop team.
Large language models accepted made-up health claims approximately 32% of the time across various models tested in a recent study.
No, the study found that medical fine-tuned models consistently underperformed compared with general AI systems when presented with fake medical claims.
The study indicates that AI models are more influenced by how confidently a medical claim is written rather than its factual accuracy.

Read more news on

Technologyside-arrowOpenAIside-arrowGoogleside-arrow

You may also like

ChatGPT Integrates Ads: AI's New Revenue Frontier?

17 Jan • 198 reads

article image

OpenAI Launches ChatGPT Translate, Challenging Google

15 Jan • 29 reads

article image

AI Exposed: Hackers Exploit Proxy Flaws

12 Jan • 86 reads

article image

Microsoft's Copilot Stumbles as AI Rivals Surge Ahead

12 Jan • 183 reads

article image

AI Memory Hacked: Chatbots Now Deceiving Users

1 Jan • 203 reads

article image