UK leads the world in legalizing proactive AI testing to combat child abuse material

UK tackles surging AI-generated CSAM with a world-first law allowing proactive pre-release testing to embed safety.

November 12, 2025

The United Kingdom is pioneering a new legislative approach to combat the proliferation of AI-generated child sexual abuse material (CSAM) by enabling pre-release testing of artificial intelligence models.[1] This world-first initiative will empower designated technology companies and child protection organizations to scrutinize AI systems for their potential to create illegal and harmful content before those systems are made publicly available. The move comes in response to a sharp and alarming increase in the volume of synthetic CSAM, a trend that poses new and complex challenges for law enforcement and child safety advocates. By shifting the focus from reactive content removal to proactive vulnerability assessment, the UK government aims to ensure safety is a fundamental component of AI development, not an afterthought.[2]
The new legislation, introduced as an amendment to the Crime and Policing Bill, addresses a critical legal barrier that has previously hampered safety research.[3][4] Under existing UK law, the creation and possession of CSAM is illegal, which has prevented AI developers and safety researchers from legally testing whether their models could be manipulated to generate such material.[5][6] This legal restriction meant that vulnerabilities could often only be identified after an AI tool was released and exploited by malicious actors. The new law will grant the Technology Secretary and Home Secretary the authority to designate trusted organizations, such as AI developers and charities like the Internet Watch Foundation (IWF), as "authorised testers."[5][7] These approved bodies will be legally permitted to conduct rigorous "red teaming" exercises, a form of ethical hacking designed to identify and mitigate safety risks by actively attempting to provoke the AI into generating harmful content under controlled conditions.[4][8][9]
This proactive testing framework is a direct response to harrowing statistics released by the IWF, which revealed that reports of AI-generated CSAM more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.[10][7] The data also highlighted a disturbing surge in the creation of material depicting infants, with images of children aged zero to two years old increasing from five in 2024 to 92 in 2025.[2] Furthermore, the severity of the content is intensifying, with a notable rise in the most serious categories of abuse imagery.[4] This technology not only enables the creation of limitless amounts of sophisticated, photorealistic material but also revictimizes survivors, whose real images can be used to train these AI models.[10][2] The legislation aims to tackle the problem at its source by preventing the generation of such content in the first place.[10]
The implications of this new policy for the artificial intelligence industry are substantial, marking a significant step towards embedding a "safety by design" philosophy into the development lifecycle.[11] AI developers will now have a legal framework within which to conduct essential safety evaluations, a process that major tech companies already claim to undertake in some form.[8] The government's plan involves creating an expert group on AI and child safety to establish clear safeguards for the testing process, ensuring the protection of sensitive data and the wellbeing of the researchers involved.[2][4] While the legislation has been welcomed by child protection organizations such as the IWF and the NSPCC, some have called for testing to be made mandatory for all AI models released in the UK.[11] The law also extends beyond CSAM, enabling testers to check models for protections against the generation of extreme pornography and non-consensual intimate images.[2] This signals a broader regulatory interest in holding AI creators accountable for the potential misuse of their powerful technologies.
In conclusion, the United Kingdom's initiative to legalize pre-release testing of AI models for their capacity to generate child abuse material represents a landmark shift in the global approach to AI safety and online child protection. By removing legal hurdles for responsible research and fostering collaboration between the government, the tech industry, and child safety experts, the policy aims to proactively address a rapidly escalating threat.[5] This measure is part of a wider government strategy to make the UK the safest place to be online, which also includes making it illegal to possess or distribute AI models specifically designed to create CSAM.[4][12] The success of this pioneering legislation will likely be closely watched internationally as other nations grapple with the complex legal and ethical challenges posed by the rapid advancement of generative AI technologies.
