British Tech Firms and Child Protection Officials to Test AI's Capability to Create Abuse Content
Tech firms and child protection organizations will receive permission to evaluate whether artificial intelligence tools can produce child exploitation material under recently introduced UK legislation.
Substantial Increase in AI-Generated Illegal Material
The announcement coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the changes, designated AI developers and child protection organizations will be permitted to examine AI models – the technology underpinning chatbots and image generators – and verify that they have adequate safeguards to stop them producing depictions of child sexual abuse.
"Fundamentally about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the danger in AI systems early."
Tackling Legal Challenges
The changes address the fact that it is illegal to create and possess child sexual abuse material (CSAM), which has meant that AI developers and others could not generate such content as part of a testing process. Until now, authorities could act against AI-generated CSAM only after it had been published online.
The law is designed to prevent that problem by enabling the creation of such material to be halted at source.
Legislative Structure
The government is adding the changes as amendments to the crime and policing bill, which also introduces a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Practical Impact
Recently, the minister visited the London base of Childline and listened to a mock-up of a call to counsellors featuring a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I learn about children experiencing extortion online, it is a cause of intense frustration in me and justified concern amongst families," he said.
Concerning Data
A prominent internet monitoring foundation reported that instances of AI-generated exploitation material – each of which can be a webpage containing multiple files – have risen significantly so far this year.
Instances of the most extreme material – depicting the gravest form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were the predominant victims, accounting for 94% of prohibited AI depictions in 2025
- Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a vital step to guarantee AI tools are secure before they are launched," commented the head of the online safety organization.
"Artificial intelligence systems have made it so victims can be victimised all over again with just a few clicks, providing offenders the capability to create potentially limitless amounts of advanced, photorealistic exploitative content," she added. "Material which additionally exploits survivors' suffering, and makes children, particularly girls, more vulnerable on and off line."
Counselling Session Details
Childline has also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the conversations included:
- Using AI to assess body size and appearance
- AI assistants dissuading young people from consulting safe adults about abuse
- Facing harassment online with AI-generated content
- Online extortion using AI-manipulated images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and turning to AI therapy apps.