Susanne Drakborg, World Childhood Foundation

Bridging Worlds to Protect Children: Reflections on AI and Cross-Sectoral Collaboration

Introduction: More Than a Technical Problem

At World Childhood Foundation, we have worked for twenty-six years to prevent child sexual abuse. Since 2002, our focus has increasingly turned toward the digital sphere, where abuse has proliferated at unprecedented speed and scale, and where technical opportunities to protect children remain underexplored. Since launching the Stella Polaris initiative in 2021, we have been immersed in the complex task of connecting artificial intelligence (AI) expertise with the deeply human realities of child protection.

The scale of the problem we face demands urgent innovation. Yet technical innovation alone is not enough. True progress depends on something far more difficult: genuine cross-sectoral collaboration. AI developers, law enforcement officers, researchers, and child protection professionals must not only come together – they must learn to understand one another. This requires humility, trust, and sustained dialogue.

Cross-Sector Collaboration: A Non-Negotiable Foundation

There is a consensus that collaboration across sectors is necessary, but we often underestimate how difficult such collaboration truly is. Within World Childhood Foundation, we regard it not as an optional ideal, but as a non-negotiable precondition for progress.

The challenges are many. Professionals from different fields do not always share terminology, assumptions, or timelines. Sometimes confusion is immediate and obvious; other times, it is masked by agreement on shared terms that in fact conceal deep misunderstandings. Without people who can bridge these divides – who are fluent in both the languages of AI and of child rights – progress stalls.

One area where this is especially evident is data. There can be no AI without data, yet in the context of child sexual abuse material (CSAM), questions of data legality, quality, access, and bias are fraught.

The development of effective AI tools – whether for detection, classification, or early prevention – requires training on real and representative data. The legal and ethical boundaries around such data are complex and vary by jurisdiction, often limiting who can access it and under what conditions. Even when access is permitted, issues of consent, storage, and sharing must be navigated with extreme care. Working with child sexual abuse material – even in the service of protecting others – can take a significant emotional toll on law enforcement officers, forensic analysts, and technical experts, who may experience secondary trauma through repeated exposure. For survivors whose abuse has been recorded and distributed, the knowledge that these images may continue to circulate – even in restricted settings such as law enforcement investigations or AI model training – can be deeply distressing. This underscores the importance of involving survivors’ perspectives in discussions about data use and privacy safeguards.

What is clear, however, is that AI is necessary to combat child sexual abuse. At present, the few tools in use are largely reactive – employed only after a child has already been abused. While these retrospective tools are important in identifying victims, removing harmful material, and supporting law enforcement investigations, they do not prevent abuse from occurring in the first place. There remains an unconscionable lack of AI applications aimed at proactive, preventative intervention. This must change.

The current AI landscape is shaped by the dominance of a few large technology companies. These actors possess the technical capacity, data access, and infrastructure needed to develop advanced AI tools – but their efforts are often driven by internal priorities, commercial considerations, and reputational concerns rather than by the needs of children or the child protection sector.

Despite the high stakes, the development of AI solutions to combat child sexual abuse remains fragmented and uncoordinated. Many tools are built in isolation by smaller actors, with limited transparency or interoperability. In some cases, different actors work in parallel on similar problems without meaningful collaboration, leading to duplication of effort and missed opportunities for synergy.

Moreover, the very nature of child sexual abuse material as illegal and highly sensitive material limits the ability to develop effective AI solutions. Restrictions on handling such content – while essential for safeguarding victims and upholding legal norms – also create major barriers to innovation.

As new AI tools to combat child sexual abuse are being developed, it is essential to ensure they are designed with global applicability in mind. If future AI solutions are built without considering the resource constraints, legal frameworks, and linguistic diversity of low-income countries, they risk becoming tools of exclusion rather than protection. Creating effective AI for child protection must include a commitment to accessibility, adaptability, and fairness across geographies, so that children everywhere – not just in high-resource settings – benefit from technological advancements.

Finally, there is the concern about bias in AI tools used to detect abuse. Many of these systems have been trained primarily on datasets that reflect limited demographic diversity – often featuring images of Caucasian children and adults. This raises serious questions about how well these tools perform in detecting abuse involving individuals of different ethnicities or cultural backgrounds. If left unaddressed, such bias could result in deeply unequal protection for children across the globe.

Scaling Prevention: AI and the Perpetration Prevention Frontier

AI has a critical role to play not only in investigative work but also in perpetration prevention. It is increasingly clear that societies cannot arrest their way out of this problem. Moreover, the true ambition should not be to respond to abuse after the fact, but to prevent it from happening in the first place. This includes reaching individuals who are actively moving along pathways toward offending – such as those seeking out darknet forums specializing in child sexual abuse material, searching for guides on how to abuse children, or engaging in grooming-like behaviors online. These are not hypothetical risks; they are identifiable patterns of high-risk behavior that can be acted on – ethically and within legal boundaries – before harm is done.

AI can support this kind of prevention by helping law enforcement, clinical experts, and civil society actors detect such patterns more efficiently, and respond with timely, targeted interventions. In some cases, that may mean flagging users for investigation; in others, it may involve redirecting individuals to anonymous help platforms or therapeutic resources.

In the area of mental health and behavioral intervention, AI offers the potential to support both front-end and back-end efforts: from triage systems and anonymous self-help platforms to therapist-guided digital interventions. AI can help scale access, personalize care, and optimize limited resources – all while supporting those who may otherwise remain invisible in current systems.

One example of such innovation is the Prevent It initiative, supported by World Childhood Foundation. This online, therapist-guided program uses cognitive behavioral therapy to reach individuals at risk of consuming child sexual abuse material. It has been rigorously evaluated through randomized controlled trials, showing measurable reductions in child sexual abuse material viewing time among participants. As highlighted in a recent systematic review,[1] Prevent It represents one of the most promising approaches in this space, and its ongoing adaptation into multiple languages further expands its potential reach.

However, this field remains vastly under-researched and underfunded. The integration of AI in perpetration prevention requires not only technical innovation, but also robust safeguards, international legal clarity, and ethical frameworks grounded in child rights and trauma-informed care. It is a frontier that demands attention.

Elodie: Modernizing the Fight Against Child Sexual Abuse Through AI

A compelling example of cross-sectoral collaboration is the project called Elodie. This AI tool is being developed to assist law enforcement by generating textual summaries of child sexual abuse material, thereby reducing the emotional toll on investigators while also improving the efficiency, consistency, and quality of documentation.

The project has evolved through a series of collaborative hackathons, in part with support from World Childhood Foundation, bringing together experts from the private sector, law enforcement, and civil society. In a recent phase, the team succeeded in refining models to generate text summaries of child sexual abuse material – reducing the time investigators must spend viewing the material itself. In one powerful illustration of its capabilities, the AI tool was able to identify that an image depicted an infant boy, an adult male, and male genitalia. However, it could not describe the specific act taking place – because it had not been trained sufficiently on the full reality of such material. The extreme sensitivity, illegality, and emotional gravity surrounding this imagery mean that very few datasets are available for properly training AI models. The model's silence not only revealed its technical limitations; it also carried profound, if unintentional, symbolic weight: the tools we build struggle to confront what we, as humans, often cannot bear to fully acknowledge.

Even we, as humans, rarely speak openly about what truly happens in these images. We use terms like "child sexual abuse" to shield ourselves and others from the unbearable specifics. And now we have created machines that cannot find the words either. What appears poetic at first is, in fact, a serious problem: the inability of current AI systems to describe child sexual abuse material in sufficient detail has real-world consequences. Investigators are left without the technological support they desperately need to process vast amounts of evidence, slowing the pursuit of justice and leaving children at continued risk. Our reluctance to face the full horror of these crimes has shaped not only our language but our technology – and in doing so, has compromised our ability to act.

If we are serious about protecting children, we must be equally serious about equipping those on the frontlines with better tools. Artificial intelligence has the potential to dramatically improve the speed, accuracy, and emotional sustainability of investigations into child sexual abuse. But realizing that potential will require investment, collaboration, and the moral courage to engage directly with the hardest realities. Funding for the development of AI tools trained specifically – and ethically – on real-world material is a necessity.

The Stella Polaris Initiative: Creating an Ecosystem for Innovation

If we are to build tools that match the scale and complexity of the problem, we must also build the conditions that make such tools possible. Stella Polaris was launched by World Childhood Foundation as a hub to accelerate the use of AI in combating child sexual abuse, particularly in Sweden but with international reach. Its core mission is to foster cross-sectoral innovation, to bridge gaps between technology and child rights, and to translate global momentum into local and scalable action.

To achieve this, Stella Polaris has focused on cultivating shared understanding and practical collaboration. Through curated workshops, matchmaking efforts, thematic summits, and sustained dialogue, we have brought together actors from across sectors to explore both the risks and the potential of AI in this field. At the heart of this work is a commitment to move beyond conversation – to initiate and support real-world pilots and test cases that surface ethical dilemmas, uncover technical challenges, and explore new solutions.

Much of this work has involved identifying and addressing critical knowledge gaps – including those related to perpetration prevention, the development of scalable AI tools, and the mapping of available datasets for responsible AI training.

Through these efforts, it has become increasingly evident that no single discipline or sector can navigate these challenges alone. Progress depends on building a shared fluency – where those working on the frontlines of child protection and those designing the systems that could support them can understand one another.

A Call for Commitment and Courage

The talent, ideas, and technical solutions needed to protect children already exist. What is needed now is long-term commitment – financial, political, and institutional. Those working in this field bring expertise, resilience, and a strong commitment to protecting children. To support their efforts, there must be sustained investment and the development of reliable infrastructure. Without these, promising initiatives like Elodie will remain prototypes rather than achieving the scale required for meaningful impact.

There is no shortage of technical innovation. What remains is the collective will to sustain, scale, and responsibly deploy these efforts. The use of AI in child protection has already shown remarkable promise, while also exposing critical gaps. It is a field that demands bravery as much as brilliance, humility alongside innovation. In an era where malicious actors increasingly leverage AI for harm, we must harness AI itself as part of the solution. While concerns about the broader implications of AI are valid and must be addressed, they cannot be allowed to delay action on urgent challenges. Thoughtful and responsible deployment of AI is necessary. The fight against child sexual abuse is not only technological or legal; it is fundamentally a human responsibility. And in that recognition lies a shared conviction: every child deserves a life free from harm.


[1] Seto, M.C., Roche, K., Rodrigues, N.C., Curry, S., & Letourneau, E. (2024). Evaluating Child Sexual Abuse Perpetration Prevention Efforts: A Systematic Review. Journal of Child Sexual Abuse, 33(7), 847–868. https://doi.org/10.1080/10538712.2024.2356194