DOI: 10.65398/ALZN2959
Christoffer Rahm, MD, PhD, Docent, Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet & Stockholm Health Care Services, Region Stockholm
The Double-Edged Sword of AI: Insights from the Frontlines of Child Sexual Abuse Perpetration Prevention
Your Royal Highnesses, esteemed members of the Academy, colleagues, and friends,
Before delving into the risks and potential of artificial intelligence (AI) in child protection, I wish to share insights from my fieldwork, hoping to convey the urgency I feel regarding the need for immediate action.
As a medical doctor specializing in psychiatry and leading a research group at Karolinska Institutet in Stockholm, Sweden, my team focuses on perpetration prevention in the realm of child sexual abuse (CSA) and exploitation. Collaborating internationally, we develop and scientifically evaluate novel medical and psychotherapeutic interventions – both online and offline – for individuals with a sexual interest in children who are at risk of offending.
Over the past decade, we have developed five interventions. Three have demonstrated significant effectiveness in controlled clinical studies and are currently aiding in the prevention of harm to children daily. A fourth intervention is undergoing evaluation in a randomized controlled trial within Swedish prisons, and we are preparing to launch a program tailored for survivors. We are also collaborating on several others. It is encouraging to witness the rapid global dissemination of our methods.
Notably, we have observed that the interval between initial abusive fantasies and the first hands-on offense typically spans 5–10 years. This window presents a critical opportunity for societal intervention. Evidence suggests that successfully intervening with a single individual can prevent, on average, 20–25 acts of abuse if their interest is in girls, and over 200 acts if their interest is in boys. This does not account for the extensive consumption of child sexual abuse material (CSAM). At the onset of our program, participants report viewing CSAM for 5–7 hours weekly. Post-treatment, around two thirds cease consumption entirely, while the remainder significantly reduce both the quantity and severity of the material.
This shows that prevention is possible.
When we have the funding to keep our programs open, clients come in the hundreds from all over the world. Almost none of them are known to law enforcement.
As a sidenote, before moving on to my observations regarding AI, I would like to comment on yesterday’s sessions. While I value the emphasis on future policy development, I urge attention to the current, large-scale challenges we face. Just as nations must bolster defenses against potential future conflicts, they must also allocate resources to ongoing battles. Those of us working directly with potential perpetrators to deter them from offending require substantial support to manage the pressures and advance our efforts. Our work protects children from all kinds of homes. Our interventions are evidence-based, cost-effective, and scalable; we need support to maximize their reach.
Back to the focus of this panel:
Through my work and patient interactions, I have come to understand that AI is already deeply integrated into perpetrators’ methods of child sexual exploitation. The era of hoping that AI would not influence this field has passed. Clients report:
- AI-generated CSAM is employed to create abusive scenarios, incorporating real individuals into synthetic content, thereby tailoring materials to specific preferences.
- AI facilitates scaled grooming behaviors, enabling a single offender to target hundreds of children simultaneously.
- AI is weaponized against law enforcement, inundating detection systems with synthetic material to obstruct investigations and overwhelm investigators.
AI, it seems, is here to stay as an everyday tool in the offender world. Was it worth it, I ask the developers of AI technology.
On our side, we strive to harness AI for protective purposes:
- Machine learning assists in identifying brain activation patterns, shedding light on the mechanisms through which risk-reducing medications operate, thereby informing the development of new treatments.
- AI-powered motivational interviewing interventions are being developed to automate and scale early intervention efforts for at-risk individuals, offering 24/7 availability. By integrating deterrence messaging on platforms hosting CSAM and deploying AI-based chatbots, we encourage individuals seeking such material to connect with clinics within our international network, contributing to abuse prevention.
- At Karolinska, we are developing federated learning methods within secure, sandbox-like environments. These allow private companies to develop and test AI solutions using healthcare data, with the aim of adapting these methods for other sensitive datasets not only from health care but also from law enforcement or social services, all within ethical and legal frameworks.
These examples illustrate the potential of AI for good in our field. However, my primary message is that we are lagging behind.
While offenders have seamlessly incorporated AI into their daily activities, researchers, law enforcement, and public health professionals grapple with bottlenecks – ethical and legal restrictions, compliance with data protection regulations, research ethics hurdles, and limited data access. Too often, discussions about AI’s potential in child protection – be it developing defensive tools to detect and disrupt abuse or tailoring treatments – conclude with resignation over these obstacles. To transform AI into a daily tool for good, we must address and overcome these challenges.
I want to conclude on a different note, going back to basics. Standing on the front line, surrounded by all these technologies, both for offending and for reaching out with preventive care, I often think that we must never forget that, at the center of it all, we are still humans, mostly the same despite millions of years on this planet.
AI mirrors us – it reflects our desires, our values, and our choices. It mimics human behavior, amplifying both our worst instincts and our best intentions.
Technology is not in control – we are. It’s our thoughts and actions that shape the future.
The question is not what AI will become, but rather what we, as a society, choose to make of it. The choice is ours.
Let us choose wisely.
Thank you.