AI Creates Child Abuse Material: Schools Face Blackmail
8 May
Summary
- Criminals use AI to manipulate children's photos for blackmail.
- Schools are urged to remove identifiable pupil images.
- UK advisory body issues guidance to protect against AI sextortion.

Criminals are increasingly manipulating images of children found on school websites and social media to create child sexual abuse material (CSAM) using AI tools. These bad actors then attempt to blackmail schools for money, threatening to release the fabricated explicit images. Experts and the UK's National Crime Agency (NCA) are urging educational institutions to remove identifiable pictures of pupils from their online platforms.
The Internet Watch Foundation (IWF) recently reported an incident in which a UK secondary school was targeted. Criminals took publicly available photos, transformed them into CSAM using AI, and sent them to the school with a demand for payment. The IWF classified 150 of the images as CSAM under UK law and flagged them to tech platforms to prevent further distribution.
In response, an advisory body on tackling online harms has issued guidance to schools. Recommendations include removing face-on student images in favour of blurred or distant shots, avoiding publishing identifiable information such as names or faces, and considering whether pupil photos are necessary at all.
This misuse of generative AI fuels a growing problem of sextortion, a crime that has tragically been linked to teen suicides in the UK. The NCA has identified criminal gangs in West Africa, notably Nigeria, as hubs for such activity. While these attacks on schools are not yet widespread, authorities warn that more will be targeted if protective measures are not strengthened.