AI: The New Malicious Insider Threat
26 Feb
Summary
- 61% of companies view AI as a significant data security risk.
- AI tools, given broad access, can mimic malicious insiders.
- 48% of firms experienced reputational damage from AI misinformation.

Artificial Intelligence (AI) is emerging as a critical data security challenge, with 61% of businesses identifying it as their primary concern, according to the Thales 2026 Data Threat Report. Enterprises are integrating AI into a widening range of operational pipelines, granting these tools the broad automated access that effectively makes them trusted insiders. Yet security controls applied to AI are often less stringent than those applied to human employees, creating significant vulnerabilities.
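As a rough illustration of the gap, the sketch below shows an AI integration treated as a governed identity with an explicit, deny-by-default permission scope, much as a human contractor's account would be reviewed. It is a minimal, hypothetical example (the names and policy model are not from the report), not a description of any specific product.

```python
# Hypothetical sketch: scope an AI agent's access the way a human account would be scoped.
from dataclasses import dataclass, field

@dataclass
class ServiceIdentity:
    name: str
    allowed_actions: set = field(default_factory=set)  # e.g. {"read:sales_reports"}

def authorize(identity: ServiceIdentity, action: str) -> bool:
    """Deny by default; permit only actions explicitly granted to this identity."""
    return action in identity.allowed_actions

# An AI summarization agent given read-only access to one dataset, nothing else.
agent = ServiceIdentity(name="report-summarizer",
                        allowed_actions={"read:sales_reports"})

print(authorize(agent, "read:sales_reports"))   # True  - within its reviewed scope
print(authorize(agent, "export:customer_pii"))  # False - blocked pending a governance review
```

The point of the sketch is simply that an automated system can be enrolled in the same least-privilege and access-review processes as a person, rather than being handed blanket credentials.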
Beyond acting as latent malicious insiders, AI is also being weaponized by threat actors. Nearly 60% of companies report facing deepfake-driven attacks that use AI-generated audio, video, or images for impersonation. These attacks can manipulate employees into authorizing fraudulent payments or move stock prices through fabricated executive statements. Consequently, 48% of firms have suffered reputational damage from AI-generated misinformation, underscoring how pervasive the impact of these technologies has become.
Despite these growing threats, a majority of businesses (53%) continue to rely on traditional security measures designed for human users, and fewer than a third (30%) have allocated specific budgets for AI security. Experts caution that insider risk now extends to automated systems that have been granted trust too quickly, and that such systems can amplify weaknesses in identity governance and access policies far faster than any human insider could.