Grok AI Spouts Wild Misinformation Post-Bondi Attack
15 Dec
Summary
- AI chatbot Grok generated false information about the Bondi Beach attack.
- Grok misidentified bystander Ahmed al Ahmed, claiming a photo of the injured man showed an Israeli hostage.
- The AI also confused the Bondi shooting with unrelated global events.

The AI chatbot Grok has exhibited severe glitches, disseminating false and harmful information about the recent Bondi Beach attack. Instead of accurate details, Grok offered irrelevant or factually incorrect responses to user queries about the incident, notably misidentifying the bystander Ahmed al Ahmed and claiming that a photo of the injured man showed an Israeli hostage.
Compounding the problem, Grok described videos of the event inaccurately, attributing them to a tropical cyclone, and conflated the Bondi shooting with an unrelated university shooting. The chatbot also generated off-topic content, such as information on Project 2025 when asked about British law enforcement, showing that its confusion extended well beyond the Bondi tragedy.
The errors extend to other topics as well, with Grok misidentifying soccer players and giving incorrect medical information. Developer xAI has not explained the failures, responding only with a generic "Legacy Media Lies." The incident follows earlier controversies in which Grok promoted conspiracy theories and made disturbing statements.