AI's Fake Blood: Bondi Attack Disinformation Exposed
17 Dec
Summary
- AI-generated image falsely depicts a victim applying fake blood.
- Deepfake fooled several AI detection tools and chatbots.
- Disinformation spread rapidly on social media platforms.

Social media platforms have been inundated with false information following the recent terror attack at Bondi Beach, Australia, which claimed 15 lives and injured numerous others. A particularly viral AI-generated image falsely suggests one of the victims was applying fake blood before the incident. This sophisticated deepfake, designed to resemble a film set photograph, has misled both the public and some AI detection tools, exacerbating the spread of harmful disinformation.
Close examination of the image reveals numerous tell-tale signs of AI generation, including distorted backgrounds, deformed hands, and inconsistent bloodstains. While many AI image checkers failed to flag the picture, Google's SynthID watermark technology, accessible via Gemini, correctly identified it as fake. Chatbots such as ChatGPT and Grok, by contrast, incorrectly vouched for the image's authenticity, underscoring how unreliable detection tools remain a significant obstacle in combating misinformation.
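The Gemini-based SynthID check the article describes can also be run programmatically. The sketch below is a minimal illustration, assuming the google-genai Python SDK, a `GEMINI_API_KEY` set in the environment, a hypothetical local file `suspect_image.jpg`, and an assumed model name; it simply asks a Gemini model whether an uploaded image carries a SynthID watermark, and stands in for the workflow rather than Google's dedicated SynthID Detector service.

```python
from google import genai
from google.genai import types

# The client reads GEMINI_API_KEY from the environment.
client = genai.Client()

# Hypothetical filename standing in for the image under scrutiny.
with open("suspect_image.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model; any Gemini vision model would do
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Was this image created or edited with Google AI? "
        "Check it for a SynthID watermark.",
    ],
)

# Gemini's reply states whether it detects a SynthID watermark in the image.
print(response.text)
```

Note that this only detects watermarks embedded by Google's own image generators; a deepfake produced with other tools would carry no SynthID mark, which is one reason no single checker is sufficient.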