Google AI Overviews Hallucinate Widely
7 Apr
Summary
- AI Overviews provided inaccurate information in 15% of tests.
- Over half of accurate AI Overview responses were ungrounded.
- Google's AI Overviews can be manipulated by self-published content.

Google's AI Overviews, a feature that integrates AI-generated answers into search results, have shown a significant rate of inaccuracy. An analysis conducted in October and February found that the AI-generated answers were incorrect roughly 15% of the time. Moreover, more than half of the responses judged accurate were 'ungrounded,' meaning the linked sources did not fully support the information provided, which complicates verification.
This pattern of errors and ungrounded answers has fueled debate about the reliability of AI systems and how much users can trust what they read online. Even when AI Overviews are factually correct, they can misinterpret information from reliable sources or add supplementary details that are wrong. A further concern is manipulation: self-published content can be surfaced by the AI and presented as fact.
Google acknowledges that its AI can make mistakes and advises users to double-check responses, but the company has also contested some of the analyses, citing flaws in the benchmark tests used. Despite improvements in the underlying models, such as the upgrade from Gemini 2 to Gemini 3, the problem of ungrounded responses has persisted and even grown. Experts stress the importance of evaluating AI-generated answers critically and cross-referencing information against multiple sources.