AI Voice Scams Surge, Exposing Vulnerabilities in Fraud Detection
10 Nov
Summary
- Real-time AI models enable convincing voice conversations
- UK firm defrauded of $25M in deepfake scam last year
- Vishing attack on Cisco extracted customer data

In the past year, the threat of AI-enabled voice phishing, or vishing, has become a reality. Advances in real-time, speech-native AI models now make it possible for anyone to create a synthetic voice that converses fluently, improvises naturally, and sustains a dialogue in a convincingly human manner.
Last year, the British engineering firm Arup was defrauded of $25 million in a deepfake scam, while a vishing attack on Cisco succeeded in extracting customer profile data from a third-party cloud-based customer relationship management system used by the company. These incidents demonstrate that what was once a theoretical possibility is now a growing concern, as the technology has become more accessible and easier to exploit.
Experts warn that the increasing realism and low cost of voice cloning platforms, such as ElevenLabs and Cartesia, have further compounded the threat. Public officials have already been impersonated in such attacks, according to the FBI, which has advised the public not to assume that messages claiming to be from senior US officials are authentic.
