Deepfakes Power Evolving AI Investment Scams
7 May
Summary
- Cybercriminals exploit cloaking for AI investment scams.
- Commercial tracking software enables large-scale operations.
- Deepfake content adds credibility to fraudulent schemes.

A sharp rise in AI-themed investment scams is being driven by the combined use of deepfake technology and cloaking techniques. A cybersecurity analysis identified roughly 15,500 domains actively delivering these fraudulent schemes.
Cloaking, a method that conceals harmful content from security scanners and displays it only to intended victims based on attributes like location and device, has become a central element of cybercriminal operations. Threat actors leverage commercial tracking software to manage these campaigns at scale.
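The selection logic behind cloaking can be sketched in a few lines. The rule set below is entirely hypothetical and deliberately simplified; real campaigns rely on commercial tracking software with much richer fingerprinting (IP reputation, referrer chains, device telemetry), but the core idea is the same: scanners and out-of-profile visitors get a benign decoy page, while only targeted victims see the fraudulent content.

```python
# Illustrative sketch of cloaking gateway logic (hypothetical rules,
# not taken from any specific campaign).

SCANNER_AGENTS = ("curl", "python-requests", "googlebot", "headless")
TARGET_COUNTRIES = {"DE", "FR", "GB"}  # hypothetical campaign targets

def select_page(user_agent: str, country: str, is_datacenter_ip: bool) -> str:
    """Decide which page a cloaking gateway would serve.

    Security scanners, crawlers, and datacenter IPs receive a harmless
    decoy; only visitors matching the victim profile see the scam page.
    """
    ua = user_agent.lower()
    if is_datacenter_ip or any(sig in ua for sig in SCANNER_AGENTS):
        return "decoy"   # likely an automated scanner, hide the payload
    if country not in TARGET_COUNTRIES:
        return "decoy"   # visitor is outside the targeted regions
    return "scam"        # cloaked fraudulent content for intended victims

# A scanner fetching the URL never sees the payload:
print(select_page("python-requests/2.31", "DE", True))       # decoy
# A targeted visitor on a residential connection does:
print(select_page("Mozilla/5.0 (iPhone)", "DE", False))      # scam
```

This asymmetry is why URL-scanning defenses see nothing malicious: the same address returns different content depending on who asks.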
These scams frequently promote automated trading platforms using deceptive AI narratives and often employ deepfake imagery to enhance their perceived legitimacy and create urgency. Generative AI tools are also used to produce vast amounts of campaign material programmatically, facilitating rapid expansion across different languages and regions.
Despite ongoing takedown efforts by researchers, these campaigns are resilient: operators quickly rotate domains and reuse infrastructure. Standard endpoint protection and firewall controls struggle to detect cloaked content, because the harm typically occurs only after a victim has been steered through the malicious pathway, leaving these scams a persistent, high-severity threat.