As child abuse image crimes increase, we’re calling on Ofcom and tech companies to take action

There was a 25% rise in child abuse image offences recorded by UK police in 2022/23. We're urging tech companies and Ofcom to do more to keep children safe.

Police data that we requested1 shows that, in 2022/23, police logged more than 33,000 offences where child abuse images were collected and distributed.2

When we compared data from forces who had supplied figures this year as well as in previous years, we found there was a 25% increase in offences between 2021/22 and 2022/23, and a 79% increase on figures from five years ago (2022/23 compared with 2017/18).3
 
Our investigation into child abuse images on social media shows:
  • Snapchat was involved in almost half (44%) of instances where the online platform was identified by police.4
  • Meta-owned platforms Instagram, Facebook and WhatsApp were used in a quarter of offences where the online platform was identified by police.4

The new data shows the widespread use of social media and messaging apps in child sexual abuse image crimes. This results largely from a failure to design child safety features into products.
 
It comes as insight from Childline shows young people being targeted by adults to share child abuse images via social media, and adults making calculated use of end-to-end encrypted private messaging apps to find and share child abuse images.
 
We’re calling on tech companies to take swift and ambitious action to address what is currently happening on their platforms. And we’re urging Ofcom to significantly strengthen its approach to tackling child sexual abuse through effective enforcement of the Online Safety Act. 

A 14-year-old girl told Childline: “One night I got chatting with this guy online who I’d never met and he made me feel so good about myself. He told me he was 15, even though deep down I didn’t believe him.

"I sent him a couple of semi-nudes on Snap(chat), but then instantly regretted it. I asked him to delete the pics, but he just kept on making me do stuff for him not to post them – like getting me to strip live on camera. I just want to block him, but if I block him, he will just post the pictures.5

Online Safety Act implementation

Disrupting the increasing levels of online child sexual abuse offences will require regulated tech platforms to introduce systemic changes to their products to stop them being used to organise, commit, and share child abuse.

Ofcom has already run a consultation on its proposals for how internet services should approach their new duties relating to illegal content.

We want these measures to be introduced without delay and urge Ofcom to begin work on a second version of the codes that will require companies to go much further.

Companies should be required to use technology that can help identify and tackle grooming, sextortion and new child abuse images.

We also want tougher measures for private messaging services to make child protection a priority, including in end-to-end encrypted environments.

Meta’s encryption plans

Facebook and Instagram, both owned by Meta, were used in more than a fifth of abuse image instances where a platform was recorded. Meta’s roll-out of end-to-end encryption on these sites will prevent authorities from identifying offenders and safeguarding victims.

We want Meta to pause its plans until it can prove child safety will not be compromised. A balance needs to be found between the safety and privacy of all users, including children. Further rollout should be delayed until Ofcom can study Meta’s risk assessment as part of the new regulatory regime.

Sir Peter Wanless, NSPCC Chief Executive, said: “It’s alarming to see online child abuse continue to rise, especially when tech companies should be acting to make their sites safe by design ahead of incoming regulation.

“Behind these crimes are children who have been targeted by adults who are able to organise and share sexual abuse with other offenders seamlessly across social media and messaging apps.

“The Online Safety Act sets out robust measures to make children fundamentally safer on the sites and apps they use so they can enjoy the benefits of a healthy online experience.

“Ofcom has been quick off the blocks but must act with greater ambition to ensure companies prioritise child safety in the comprehensive way that is so desperately needed.”


References

  1. The NSPCC sent Freedom of Information requests to all 43 police forces in England and Wales and received data from PSNI and Police Scotland relating to recorded offences of indecent images of children.

  2. 35 forces provided usable data for 2022/23, with a total of 33,226 offences recorded.

  3. We compared data from forces who provided figures for 2017/18 (17,817 offences), 2021/22 (25,582 offences) and 2022/23 (31,892 offences).

  4. Many police forces have previously refused to share information about the social media sites used in child abuse image offences on time and cost grounds. This year's data is therefore based on a keyword search provided by the NSPCC and does not capture every method used in child abuse image offences.

    We asked police forces to search for instances where the following were used: Snapchat, Facebook/Messenger, Instagram, WhatsApp, X/Twitter, Kik, TikTok, YouTube, Discord, Skype, Facetime, Roblox, Oculus, VR, Metaverse, Only Fans, Signal, iCloud, iMessage, Dropbox, Mega, Patreon.

    Police disclosed the platform involved in 9,876 instances. Of these, Snapchat was flagged 4,312 times (44%), Instagram 1,217 times (12%), Facebook 852 times (9%) and WhatsApp 522 times (5%). Other notable platforms included Kik 494 (5%), TikTok 427 (4%), Twitter/X 294 (3%) and YouTube 330 (3%).

  5. Snapshots are based on real Childline service users but are not necessarily direct quotes.