UK Tech Firms and Child Protection Officials to Test AI's Capability to Generate Exploitation Content

Tech firms and child protection agencies will receive authority to assess whether artificial intelligence tools can generate child abuse images under new UK laws.

Significant Increase in AI-Generated Harmful Content

The announcement came alongside revelations from a safety watchdog that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will allow approved AI developers and child safety organizations to inspect AI models – the foundational technology for chatbots and image generators – and verify they have sufficient protective measures to prevent them from creating depictions of child sexual abuse.

"Ultimately about preventing abuse before it happens," stated the minister for AI and online safety, noting: "Experts, under rigorous conditions, can now identify the risk in AI models early."

Tackling Legal Obstacles

The amendments have been introduced because producing and possessing child sexual abuse material (CSAM) is against the law, meaning that AI developers and other parties cannot create such images as part of a testing regime. Previously, officials could act only after AI-generated CSAM had been published online.

The legislation aims to close that gap by stopping the creation of such images at the source.

Legislative Structure

The government is introducing the amendments as revisions to the Crime and Policing Bill, which also establishes a prohibition on possessing, creating or sharing AI models developed to generate exploitative content.

Real-World Impact

This week, the official visited the London headquarters of Childline and heard a simulated call with counsellors featuring an account of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about children experiencing extortion online, it is a cause of intense frustration in me and justified anger amongst parents," he said.

Alarming Statistics

A leading online safety organization reported that cases of AI-generated exploitation material – such as online pages that may contain multiple images – had more than doubled so far this year.

Instances of category A material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants and toddlers increased from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a vital step to guarantee AI products are safe before they are released," commented the chief executive of the internet monitoring organization.

"AI tools have made it so survivors can be targeted all over again with just a simple actions, providing criminals the capability to create possibly endless amounts of advanced, lifelike child sexual abuse material," she added. "Content which further commodifies victims' suffering, and renders young people, especially female children, less safe on and off line."

Counseling Session Data

Childline also published details of support sessions in which AI was mentioned. AI-related risks discussed in the conversations included:

  • Using AI to assess their weight, body and appearance
  • AI assistants discouraging children from consulting trusted adults about harm
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related terms were discussed, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Dr. Deborah Hill