April 15, 2025 / Reading Time: 3 minutes

Action Needed to Close Legal Gaps on AI-Generated Child Sexual Abuse Material

New research has uncovered legal gaps in tackling child sexual abuse material (CSAM) created by generative artificial intelligence (gen-AI) across the Five Eyes nations.

The research has prompted calls for lawmakers to strengthen legislation to ensure children are protected as gen-AI evolves rapidly.  

The findings are part of an investigation into the robustness of regulations across the United Kingdom, United States of America, Australia, New Zealand and Canada. Known as the Five Eyes nations, the countries work closely on issues such as cybersecurity and the global problem of technology-facilitated child sexual exploitation and abuse (TF-CSEA).  

The study is part of a new report investigating who benefits from the multi-billion-dollar industry of child sexual exploitation and abuse, carried out by Childlight – Global Child Safety Institute, which is hosted by the University of Edinburgh.  

The report flags legal gaps across the five countries, including:

United Kingdom:  

UK-wide legislation is silent on whether indecent pseudo-photographs depicting fictitious children fall within its scope; this could be addressed through case law interpretation or legislative updates. Scotland specifically has gaps in areas such as the criminalisation of paedophile manuals (i.e. guides on how to groom, sexually abuse and exploit children) and of non-photographic indecent images of children (e.g. cartoons, manga, hentai).

  • Action: The UK should update its laws to clearly criminalise all acts involving pseudo-photographs that depict fictitious children. Scotland should update existing laws and create more targeted ones criminalising all aspects of such activities to make the protection of children more robust.

United States of America: 

Some US states have outdated legislation that fails to address newer forms of TF-CSEA. Civil remedies are often inadequate, leaving gaps in accountability.

  • Action: Amend existing CSAM laws, particularly at the state level, to explicitly cover AI-generated content.

Canada:  

In Canada, the federal Criminal Code does not specifically ban AI-generated CSAM. This has been mitigated to an extent by the Supreme Court of Canada's broad interpretation of existing legislation. Victims also face a patchwork of protections that varies depending on where they live.

  • Action: Amend CSAM laws to regulate the misuse of AI.  

Australia and New Zealand:  

In both Australia and New Zealand, existing definitions of CSAM (or similar terminology) in criminal legislation are broad enough to capture AI-generated CSAM, and the first sentencing decisions have emerged in Australia against users abusing gen-AI to create CSAM. However, the research could not identify any prosecutions in either country in which AI software creators, or holders of datasets used to train AI, have been held criminally liable for the production of CSAM on their platforms.

  • Action: Policymakers should assess whether existing regulations cover criminal liability for AI software creators and dataset holders who fail to install proper guardrails to safeguard children before bringing their products to market.

Childlight Research Fellow Dr Konstantinos Gaitis said: “While we found generally laws across the Five Eyes countries are broad enough already to cover the advent of AI or are adapting to it through legislative updates and case law, there are still some gaps and work to be done. These gaps should be addressed to fully provide the protections and accountability needed to keep children safe. AI is rapidly developing, with increased levels of autonomy, so it is essential laws keep pace.” 

The study is among the first to examine the regulatory context of the five closely interconnected countries in terms of accountability around CSAM created via gen-AI. Researchers examined hundreds of pieces of legislation, cases and statutes using the ‘black-letter law’ approach, which focuses on the letter of the law.

The review also found many examples of good practice across the five countries that are helping to ensure those who create gen-AI child abuse material are held accountable, including the Online Safety Acts in the UK and Australia, robust federal laws in the US and the proposed Online Harms Act in Canada. Keeping legislation broad, or ‘tech-agnostic’, can help ensure it keeps pace with technological advancements, with targeted modifications made as gen-AI develops.

