Mama Fatima Singhateh, United Nations Special Rapporteur on the Sale, Sexual Exploitation and Sexual Abuse of Children

The Global Landscape of Child Sexual Exploitation and the Role of AI in Addressing It

Last year I presented my annual report to the Third Committee of the United Nations General Assembly on existing and emerging sexually exploitative practices against children in the digital environment,[1] in which I highlighted how the misuse of existing and emerging technologies exacerbates and amplifies children’s exposure to risks, harms, and various forms of sexual exploitation and sexual abuse. Unless otherwise indicated, the issues raised herein are derived from that report.

We know that an estimated one in three internet users worldwide is a child, with a child going online for the first time every half a second.[2] Children today are spending more time in the digital environment than ever before. Almost 80 per cent of young people aged 15 to 24 use the internet, driving connectivity globally, compared with 65 per cent of the rest of the population.[3] In this context, numerous studies and reports have revealed the sustained and intensifying threat of child sexual abuse and exploitation in the digital environment, in both scale and method.[4]

One must acknowledge that information and communication technologies offer a wide array of opportunities to protect and uphold children’s rights, yet their rapid, evolving, and unprecedented capabilities also present significant risks to children and expose them to harms caused by the misuse of technologies. Research bears this out: more than 300 million children a year are estimated to be victims of online sexual abuse and exploitation.[5] The same research revealed that 12.6 per cent of the world’s children have been victims of non-consensual communication and of the non-consensual sharing of and exposure to sexual images and videos, while another 12.5 per cent have been subjected to online solicitation.[6]

We see every day how existing technology is misused to sexually harass children, take and share non-consensual images and videos, generate child sexual abuse materials, sexually extort children, and livestream child sexual abuse. These harms are amplified by developments in Artificial Intelligence (‘AI’), which not only replicates the safety concerns that already plague the digital ecosystem and technological landscape but also exacerbates and facilitates more severe threats against children and gives rise to new trends. The misuse of AI has not only significantly facilitated the multiplication of child sexual abuse materials but has also driven the emergence of entirely new forms of large-scale, technology-facilitated abuse, such as grooming, sexual extortion of children and high-definition live-streaming of abuse.

We see how Generative AI, through deepfakes and synthetic images, is increasingly being weaponized to create realistic child sexual abuse content with little to no interaction with real children. These AI models generate materials from text or image prompts, manipulate real images, or create entirely artificial content that appears indistinguishable from real-life abuse. Perpetrators adapt and customize original child sexual abuse materials into new content, manipulate accessible and benign content of known or unknown children into sexually abusive material, or create fully AI-generated child sexual abuse materials from scratch. These materials are easily accessible and can be downloaded for offline use.

This technology enables perpetrators to expand their methods of exploitation, making detection and law enforcement efforts more difficult, while also facilitating anonymity and the evasion of oversight. It is increasingly difficult for law enforcement to triage cases, distinguish and identify victims, and determine whether a real child is in danger. This results in situations where investigators spend time and resources pursuing the rescue of children who may turn out to be virtual characters, or waiting for further verification of generated content, thereby diverting much-needed resources and time away from the rescue of real children.

Despite the foregoing, we have increasingly seen how AI is leveraged to address the risks posed by AI itself and other technologies in the online sexual exploitation of children. Through image-matching technologies and advanced algorithms, AI is used to analyse images, videos, and text to detect explicit or exploitative content involving children, including content that traditional detection tools usually miss. It is also used to identify known child sexual abuse materials and flag new content for review. This can dramatically speed up the identification and removal of child sexual abuse materials and reduce human moderators’ sustained exposure to extreme content, which can have lasting health and wellbeing implications.[7] Through AI, online platforms can be monitored for suspicious behaviour patterns, such as grooming tactics, inappropriate interactions, or the sharing of sexually exploitative content. AI can also assist in identifying victims and perpetrators through the analysis of metadata, facial recognition, and other digital footprints in child sexual abuse materials.

The synthetic nature of AI-generated child sexual abuse materials, however, complicates detection, as traditional tools that rely on hashing, such as PhotoDNA, to identify known child sexual abuse materials may not recognize new AI-generated materials.[8] Nevertheless, through capacity building and resource allocation, law enforcement and platforms can adapt their detection systems to identify and block synthetic content and, with human expertise, ensure that AI systems are accurate, reducing false positives, wrongful accusations and censorship.
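To illustrate the limitation described above, the following is a minimal, purely illustrative sketch of hash-based matching. It uses a toy "average hash" on synthetic numeric grids, not PhotoDNA's actual (proprietary and far more robust) algorithm: a near-duplicate of a known image matches the hash list, while entirely novel content, such as newly generated synthetic material, does not.

```python
# Toy sketch of perceptual-hash matching. This is NOT PhotoDNA; it only
# illustrates why hash lists catch known/near-duplicate images but miss
# novel content. All "images" here are small synthetic brightness grids.

def average_hash(pixels):
    """Bit-string hash: 1 where a pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A "known" image and a slightly altered copy (e.g. re-compressed).
known = [[10, 200], [30, 220]]
altered = [[12, 198], [28, 225]]
novel = [[220, 15], [210, 25]]  # entirely new content

known_hashes = {average_hash(known)}

def is_known(img, threshold=1):
    """Match if any known hash is within the Hamming-distance threshold."""
    return any(hamming(average_hash(img), h) <= threshold
               for h in known_hashes)

print(is_known(altered))  # True: near-duplicate of known content matches
print(is_known(novel))    # False: novel content evades the hash list
```

The design point is the one made in the text: a hash list can only ever flag what has already been catalogued, which is why newly generated synthetic material requires complementary classifier-based detection.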

Additionally, the sheer scale and complexity of this digital landscape underscore the urgent need for stricter and more robust legal and regulatory frameworks. Laws must be updated to address the creation, distribution and possession of AI-generated child sexual abuse materials, ensuring that both real and synthetic abuse materials are criminalized equally under the law.

Collaboration amongst technology companies, law enforcement, and governments is essential to combat child sexual exploitation online, track perpetrators and remove harmful content quickly, especially as such offences are often cross-border in nature.

It is also imperative that the rights of children be mainstreamed within business models, rather than retrofitting safeguards after harm has occurred. Privacy- and safety-by-design approaches would ensure that the security and integrity of technological infrastructure are not compromised. Furthermore, strengthening due process safeguards, transparency and accountability in the industry makes it possible not only to pinpoint weaknesses in safety practices and identify discrepancies in the extent of companies’ action, performance and presence across jurisdictions, but also to determine what innovative and preventive technologies are needed and where law enforcement is required.[9]

Instead of passively relying on warrants, notices and takedown procedures, the burden to report abuse should be placed squarely on technology companies and online service providers. Businesses should be mandated to put in place, or improve, systems that proactively and rapidly deploy concrete automated and human detection, removal and moderation tools, and to promptly report to the relevant authorities any harmful, exploitative or abusive content involving children occurring on or amplified through their products and services, including by identifying real-time abuse and the accounts used by perpetrators.

When designing and engineering technologies, greater choices over content moderation and user-controlled moderation settings should be provided. This should include robust age verification and restriction; parental controls; regular reminders and notifications of dangers to users; restricting interactions and making private the follower lists of children; and implementing warning tools and deterrence campaigns to suspected perpetrators.

The challenges posed by the borderless nature of digital crimes can hinder investigative and judicial processes. It is therefore important to create and strengthen specialized multidisciplinary units with adequate human and financial resources for effective investigations, as well as to encourage international cooperation among regulatory bodies, law enforcement, and industry stakeholders.

I want to conclude by emphasising that the evolution of technology and developments in AI bring many benefits, including for the protection of children in the digital environment, yet rapid technological development and change also present opportunities for organized criminal groups and perpetrators to adapt their approach to targeting their victims.

This raises questions about the effectiveness of current statutory and enforcement regimes, especially given the ease with which criminals can reach children and with which child sexual abuse materials can be found in the digital environment. Without immediate action, including effective regulation, product safety, effective content moderation, education, and strengthened law enforcement, this phenomenon will only grow.

Children have the right to feel safe in all spaces, wherever they are. A concerted effort is therefore required from all stakeholders to respond to this problem, and technology companies have an important role to play in this regard.

 

[1] Available at https://docs.un.org/en/A/79/122

[2] International Telecommunication Union (ITU), Guidelines for Industry on Child Online Protection (Geneva, 2020).

[3] ITU, Measuring Digital Development: Facts and Figures 2023 (Geneva, 2023).

[4] See A/79/122, available at https://docs.un.org/en/A/79/122

[5] Childlight – Global Child Safety Institute, Into the Light Index of Child Sexual Exploitation and Abuse Globally: 2024 Report (Edinburgh, 2024).

[6] Ibid.

[7] See https://www.weprotect.org/thematic/artificial-intelligence-and-gen-ai/

[8] Ibid.

[9] A/HRC/52/61, available at https://docs.un.org/en/A/HRC/52/61