UK Technology Firms and Child Protection Agencies to Test AI's Capability to Generate Exploitation Content

Tech firms and child protection organizations will receive permission to assess whether artificial intelligence systems can produce child abuse material under new UK legislation.

Significant Rise in AI-Generated Harmful Content

The announcement came alongside revelations from a safety watchdog that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow approved AI developers and child safety organizations to inspect AI systems – the foundational models behind conversational and image-generation tools – and verify that they have sufficient protective measures to stop them from creating images of child sexual abuse.

The measures are "ultimately about preventing abuse before it happens," stated Kanishka Narayan, noting: "Experts, under rigorous conditions, can now detect the risk in AI systems promptly."

Tackling Regulatory Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such content as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before dealing with it.

This legislation is aimed at averting that issue by helping to stop the production of those materials at source.

Legislative Framework

The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a prohibition on owning, creating or distributing AI systems developed to generate child sexual abuse material.

Practical Consequences

Recently, the minister visited the London headquarters of a children's helpline and heard a simulated conversation with counsellors involving a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed using an explicit AI-generated image of themselves.

"When I learn about young people facing blackmail online, it causes extreme frustration in me and rightful anger amongst families," he stated.

Concerning Statistics

A leading internet monitoring organization stated that instances of AI-generated exploitation content – counted as web pages, each of which may contain numerous images – had more than doubled so far this year.

Instances of category A content – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

  • Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of infants and toddlers increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "constitute a crucial step to ensure AI products are secure before they are launched," commented the chief executive of the internet monitoring organization.

"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few clicks, giving criminals the ability to create potentially limitless amounts of advanced, lifelike child sexual abuse material," she continued. "Material which further commodifies survivors' suffering, and renders children, especially female children, more vulnerable both online and offline."

Support Interaction Data

The children's helpline also released details of support interactions where AI has been referenced. AI-related harms mentioned in the conversations include:

  • Using AI to rate weight, physique and appearance
  • Chatbots discouraging young people from talking to trusted adults about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 support sessions where AI, conversational AI and associated topics were mentioned, significantly more than in the same period last year.

Fifty percent of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapeutic applications.

Kevin Watson