UK Tech Firms and Child Protection Agencies to Examine AI's Ability to Create Abuse Content
Technology companies and child safety organizations will be permitted to test whether artificial intelligence systems can generate child sexual abuse material under newly introduced UK laws.
Significant Increase in AI-Generated Harmful Content
The announcement came as a safety monitoring body published findings showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, authorities will permit designated AI companies and child protection groups to inspect AI models – the underlying technology behind chatbots and image generators – and verify that they have sufficient safeguards in place to prevent them from creating images of child sexual abuse.
"This is fundamentally about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI models promptly."
Tackling Regulatory Challenges
The amendments were introduced because creating and possessing CSAM is illegal, meaning that AI developers and others could not generate such content even as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM appeared online before taking action.
The legislation aims to avert that problem by enabling authorised testers to stop the production of such material at source.
Legislative Framework
The government is adding the amendments as revisions to the crime and policing bill, which also establishes a ban on possessing, creating or sharing AI systems designed to generate exploitative content.
Practical Impact
This week, the minister visited the London base of Childline and listened to a mock-up of a call to advisers involving a report of AI-enabled exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of himself.
"When I learn about children facing extortion online, it is a source of extreme frustration to me and of justified concern amongst parents," he said.
Concerning Statistics
A leading online safety foundation reported that cases of AI-generated exploitation material – where a single case can be a webpage containing numerous images – have significantly increased so far this year.
Instances of the most severe category of content rose from 2,621 visual files to 3,086.
- Girls were overwhelmingly victimized, making up 94% of illegal AI depictions in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a vital step to guarantee AI tools are safe before they are released," stated the head of the online safety organization.
"AI tools have made it so victims can be targeted all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of advanced, photorealistic child sexual abuse material," she continued. "Content which additionally commodifies survivors' trauma, and renders children, especially girls, less safe both online and offline."
Support Interaction Information
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in those conversations include:
- Using AI to evaluate body size, physique and appearance
- Chatbots dissuading young people from talking to trusted guardians about harm
- Being bullied online with AI-generated material
- Digital blackmail using AI-faked pictures
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated topics were discussed, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for emotional support and AI therapy apps.