Jacqueline Beauchere MBE, Global Head of Platform Safety, Snap Inc.

AI-generated child sexual exploitation and abuse imagery: Research and actions by Snap Inc.

1 Introduction

The sexual exploitation and abuse of children is illegal, vile, and, as a topic of polite conversation, often taboo. But these horrific crimes can’t be ignored. They need to be discussed in the halls of government, at boardroom tables, and at kitchen tables. Young people need to be attuned to online sexual risks, and adults need to understand the issues so they can assist youth in crisis. As a collective, we need to get comfortable being uncomfortable.

These issues are repeatedly discussed and actioned at Snap Inc., the company behind the popular messaging app Snapchat. Indeed, Snap is committed to making its service a hostile environment for illegal or violating activity – including the various forms of child sexual exploitation and abuse and its newest dimension: artificial intelligence-generated sexual imagery of minors.

2 Overview of Snapchat

Snap’s mission — to empower people to express themselves, live in the moment, learn about the world, and have fun together — informed Snapchat’s fundamental architecture. Adhering to this mission has enabled us to create a platform that reflects human nature and fosters and enhances real friendships. It continues to influence our design processes and principles, our policies and practices, and the resources and tools we provide to our community. Moreover, it undergirds our constant efforts to improve how we address the inherent risks and challenges associated with serving a large online community.

A considerable part of living up to our mission has been building and maintaining trust with our community and partners, as well as child online protection experts, law- and policymakers, and safety organizations around the world. These relationships have been built through our deliberate, consistent decisions to put privacy and safety at the heart of our product design.

Snap has always prioritized the well-being of our community, embracing privacy- and safety-by-design from the app’s inception. For us, safety means our community members can maximize their desirable in-app experiences, while minimizing those associated with illegal or potentially harmful Content, Contact, Conduct, and Commercial Activity — what we and others call the “Four Cs.” Every action that we take toward safety at Snap is in furtherance of one or more of our four Safety Goals:

  • We PROTECT members of our community from the outset, by design;
  • We PREVENT potential harm from coming to them;
  • We RESPOND to their needs and concerns; and
  • We LEARN and ADAPT continually, because we know that offenders are doing the same.

2.1 Safety @ Snap

From the production and distribution of contraband imagery to the grooming of children for sexual purposes and financially motivated “sextortion,” Snap has been fighting back against the online sexual exploitation and abuse of minors since Snapchat was created. We have adopted and implemented a number of proactive and reactive measures to help mitigate digital risk and reduce potential online harm. Specifically:

●      Research. Each year, we conduct research into Generation Z teens’ and young adults’ exposure to a range of online risks, including various sexual harms. Although Snap commissions this research, the study covers 13-to-24-year-olds’ experiences across applications, platforms, and services in six countries,[1] with no particular focus on Snapchat. The research is our annual contribution to the evidentiary base for the technology ecosystem; it helps to inform our own policy and product decisions and, we hope, benefits others. The latest results, including findings of low awareness of the laws surrounding sexual imagery of minors, are the focus of this paper.

●      Policies. Snap has in place aggressive policies that we aim to enforce consistently, prohibiting “any activity that involves sexual exploitation or abuse of a minor, including sharing child sexual exploitation or abuse imagery, grooming, or sexual extortion (sextortion), or the sexualization of children.” When we become aware of such material or activity, we take action at the content, account, and, in some instances, the device level. We also report all identified instances of child sexual exploitation and abuse, including attempts to engage in such conduct, to the U.S. National Center for Missing and Exploited Children (NCMEC), as required by federal law. NCMEC then liaises with domestic and international law enforcement as needed.

●      Detection. Snap uses established and innovative proactive detection methods to identify illegal imagery and other forms of child sexual exploitation and abuse before they are reported to us. For example, we use PhotoDNA’s robust hash-matching technology to identify duplicates of known, illegal photos, and we leverage Google’s CSAI Match technology to find duplicates of known, illegal videos (see the illustrative sketch following this list). We also employ “signal-based” detection to identify and remove bad actors before they have the opportunity to target and victimize others. We participate in all major industry and cross-sector hash- and signal-sharing programs, including the Tech Coalition’s Lantern initiative,[2] NCMEC’s Take It Down program,[3] and Report Remove[4] from the Internet Watch Foundation and the UK National Society for the Prevention of Cruelty to Children.

●      In-App Tools. We know that for teens in particular, connecting with both known and unknown individuals, whether in real life or online, presents risk. This is why we have intentionally made it difficult for strangers to identify, much less contact, younger users on Snapchat. To guard against unwanted contact, we have always offered vital tools for blocking accounts and reporting violating content and behaviors to us. We have tightened the criteria that must be met before a teen’s account will surface for other users on our platform, and we have added a series of in-app warnings that appear when teens are contacted by someone who could be suspicious – for example, someone with whom the teen shares no friends; someone who has been blocked or reported by others; or someone from a region where the teen’s social circle isn’t typically located. In addition, our Family Center[5] suite of tools offers parents, caregivers, and other trusted adults insight into who their teens are friends with on Snapchat and who they have communicated with recently, without revealing the content of those communications. This way, our younger users can preserve their privacy and growing independence at a critical stage in their development, and parents can feel empowered to engage in crucial conversations with their teens about staying safe online. At its core, Family Center is about sparking meaningful dialogue between parents and teens about safety and well-being, both on and off Snapchat.

●      Collaboration. Because no one entity or organization can solve these novel and nuanced issues alone, we collaborate with others in industry and across the technology ecosystem. We hold seats on the boards, advisory groups, and steering committees of influential global safety organizations; we partner with industry peers to fight cross-platform abuse; we engage with scores of civil society organizations to share best practices and advocate for safer online spaces for youth; and we support law enforcement agencies in their work to bring offenders to justice. Indeed, Snap is eager to work with individuals and groups that want to make a constructive and productive difference in the lives of young people – on Snapchat and across the technology sphere. In addition, based on our strong advocacy work, we are often called upon to participate in academic research studies, task forces, and special-project working groups. We join these forums with the intent to share, learn, and continually improve.

●      Awareness-Raising and Education. When it comes to online sexual exploitation and many related issues, sustained awareness-raising and education are indispensable. We offer resources in-app, on our website,[6] at our Support Site,[7] and via our partners and collaborators. Our in-app offerings include four episodes covering sexual risks and potential harms.[8] We also support others’ awareness-raising efforts, the U.S. Department of Homeland Security’s “Know2Protect” campaign[9] being just one recent example.
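The hash matching referenced in the Detection item above can be illustrated with a minimal sketch. This is not PhotoDNA or CSAI Match, whose algorithms and interfaces are proprietary; the 64-bit hash values, the Hamming-distance threshold, and the function names below are all assumptions chosen for illustration. The core idea is that robust perceptual hashes allow re-encoded or slightly altered copies of known, illegal images to still match entries on an industry-shared hash list.

```python
# Illustrative sketch only: NOT PhotoDNA or CSAI Match, whose algorithms
# are proprietary. Hash values and threshold below are hypothetical.

from typing import Iterable

MATCH_THRESHOLD = 10  # assumed: max differing bits still counted as a match

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

def matches_known_imagery(candidate: int, known_hashes: Iterable[int]) -> bool:
    """True if the candidate hash is within MATCH_THRESHOLD bits of any
    entry on a shared list of hashes of known, illegal imagery."""
    return any(hamming_distance(candidate, h) <= MATCH_THRESHOLD
               for h in known_hashes)

# A near-duplicate (two bits flipped, e.g., after re-encoding) still matches;
# this robustness is what distinguishes perceptual from cryptographic hashes.
known = {0x9C3A55F00D124B7E}
assert matches_known_imagery(0x9C3A55F00D124B7E ^ 0b11, known)
```

In production, a match would trigger human review, content and account enforcement, and the NCMEC reporting described in the Policies item; this sketch covers only the matching step.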

2.2 Snap and AI-Generated Sexual Exploitation + Abuse Imagery

As the operator of the My AI conversational chatbot and other generative AI features,[10] Snap remains vigilant against both the creation and the re-sharing or distribution of AI-generated child sexual exploitation and abuse imagery (CSEAI). Snap prohibits use of its generative AI tools to create content that violates our Terms of Service and our Community Guidelines,[11] including CSEAI.

My AI and Snap’s other generative AI features are built on large language models, such as OpenAI’s GPT models and Google’s Gemini models. The features leverage those models’ underlying safety measures and add a number of Snap-bespoke safeguards to help ensure that content complies with our Community Guidelines. In addition, we conduct safety testing, including regular analysis of My AI’s responses. That testing has found a very small percentage of responses to be “non-conforming,” meaning they did not adhere to Snap’s policies; a common pattern stems from users repeatedly trying to trick My AI into replying with inappropriate content.
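Snap has not published the implementation of these bespoke safeguards, but the general pattern – screening both the user’s prompt and the model’s draft reply against policy before anything reaches the user – can be sketched as follows. The function names and the keyword-based classifier are hypothetical stand-ins; a production system would rely on trained moderation models.

```python
# Hypothetical sketch of a post-generation safeguard layered over a model
# provider's built-in safety systems. Snap has not published its actual
# implementation; every name here is an illustrative stand-in.

REFUSAL = "Sorry, I can't help with that."

def violates_guidelines(text: str) -> bool:
    """Stand-in policy classifier. A real system would use a trained
    moderation model, not a keyword list."""
    blocked_phrases = ("example-blocked-topic",)  # placeholder policy terms
    return any(phrase in text.lower() for phrase in blocked_phrases)

def safeguarded_reply(user_message: str, model_generate) -> str:
    """Screen both the prompt and the draft reply, so that a response the
    classifier flags as non-conforming never reaches the user."""
    if violates_guidelines(user_message):
        return REFUSAL  # block attempts to elicit violating content
    draft = model_generate(user_message)
    if violates_guidelines(draft):
        # Flagged drafts would also feed ongoing safety testing and review.
        return REFUSAL
    return draft
```

Layering checks this way means that even when repeated adversarial prompting slips past the model’s own safety training, the outer screen still has a chance to catch the non-conforming reply.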

Importantly, when Snap identifies suspected AI-generated CSEAI – including material created on another platform that someone attempts to re-share or distribute via Snapchat – we treat it just as we do “authentic” CSEAI. This means we take action at the content, account, and, in many instances, the device level, and we report the material and the account to NCMEC, as required by U.S. federal law.

3 Snap’s Digital Well-Being Index

Since 2022, Snap has been conducting research into the digital well-being of teens and young adults in six countries: Australia, France, Germany, India, the UK, and the U.S. The research polls 13-to-17-year-olds, 18-to-24-year-olds, and parents of 13-to-19-year-olds about young people’s online risk exposure; the relationships they are forging in the digital world; and their level of agreement with 20 sentiment statements. The statements, which span five categories – Positive Emotion, Negative Emotion, Engagement, Relationships, and Achievement[12] – make up the Digital Well-Being Index (DWBI), a measure of young people’s online psychological well-being.

Respondents are asked to register their level of agreement with each of the 20 statements, taking into account all of their online experiences over the preceding three months, on any device or app – not just Snapchat. Examples include “Generally felt what I did online was valuable and worthwhile,” in the Positive Emotion category, and “Have friends who really listen to me when I have something to say online,” under Relationships.[13] The DWBI assigns each respondent a score between 0 and 100 based on their agreement with the statements; individual scores then generate DWBI readings per country, as well as a six-country average.
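Snap has not published the DWBI’s exact scoring formula, but the mechanics described above can be illustrated with a minimal sketch under stated assumptions: a 1-to-5 agreement scale, reverse-scoring of Negative Emotion items, and simple averaging are all choices made here for illustration, and the second example statement is invented.

```python
# Hypothetical sketch of how an index like the DWBI could be computed.
# The Likert scale, reverse-scoring, and averaging are assumptions.

from statistics import mean

def respondent_score(ratings: dict[str, int], negative_items: set[str]) -> float:
    """Map agreement ratings (1 = strongly disagree .. 5 = strongly agree)
    to a 0-100 score, reverse-scoring Negative Emotion items."""
    normalized = []
    for statement, rating in ratings.items():
        if statement in negative_items:
            rating = 6 - rating                    # assumed reverse-scoring
        normalized.append((rating - 1) / 4 * 100)  # rescale 1..5 -> 0..100
    return mean(normalized)

def country_reading(scores: list[float]) -> float:
    """A per-country DWBI reading as the average of respondent scores."""
    return mean(scores)

# Two statements only, for brevity; the real index uses all 20.
ratings = {"Generally felt what I did online was valuable and worthwhile": 4,
           "Felt anxious about things I saw online": 2}  # invented negative item
print(respondent_score(ratings, {"Felt anxious about things I saw online"}))
# -> 75.0 (each item scores 75 after reverse-scoring the negative item)
```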

In the first two years (2022, 2023), the DWBI across all six countries stood at 62 – a fairly average reading, neither particularly impressive nor especially worrisome. In Year 3 (2024), the DWBI ticked one point higher, to 63. This was noteworthy because the index inched up even though young people’s reported risk exposure also increased – a suggestion of growing resilience among both teens and young adults in 2024.

Below are some additional high-level findings from Year 3:[14]

  • Eighty percent of 13-to-24-year-olds in the six countries said they experienced at least one online risk in 2024, up nearly five percentage points from the first survey in 2022.
  • Deception was common in these risk scenarios, with 59% of Generation Z respondents noting they had engaged with someone online who lied about their identity.
  • In 2024, more Gen Zers (compared with previous years) said they spoke with someone or sought help after experiencing an online risk. Nearly six in 10 (59%) 13-to-24-year-olds reported seeking help, up nine percentage points from 2023.
  • Just over half (51%) of parents of 13-to-19-year-olds said they actively checked in with their teens about online life, up nine percentage points from Year 2.
  • Slightly more parents (45%, vs. 43% in Year 2) said they trust their teens to act responsibly online and don’t feel the need to actively monitor them.
  • In Year 3, as with the two prior studies, results show that young people with a greater number of “support assets” available to them enjoy stronger digital well-being. Support assets are defined as people at home, at school/work, and in the community to whom the young person can turn for encouragement and support.

3.1 DWBI Deeper-Dive: Sexual Risks

In addition to the annual DWBI research, for the past two years Snap has delved deeper into specific risk areas, polling teens, young adults, and parents of teens about young people’s experiences with certain sexual risks. In 2023[15] and 2024,[16] we asked Generation Z about exposure to sextortion – scams that deceive primarily teens and young adults into sharing intimate imagery, which can quickly turn into blackmail. And in 2024, we asked about their involvement with AI-generated imagery, including fake sexual photos and videos.

In 2024, 23% of the 6,004 13-to-24-year-olds surveyed in the six countries said they were victims of sextortion. About half (51%) reported having been lured into certain online situations or having engaged in risky digital behaviors that could have led to sextortion. These included being “groomed” (37%), being “catfished” (30%), being hacked (26%), or sharing intimate imagery online (17%).

The online grooming of minors for sexual purposes typically involves an adult befriending a young person, winning their trust through flattery and attention, and then sexualizing the relationship and continuing the abuse. This might include taking photos or videos and even meeting in person. Online catfishing occurs when criminals pretend to be someone they’re not to entice a target into sharing personal information or producing sexual imagery. Hacking usually involves an offender gaining unauthorized access to a target’s devices or online accounts to steal intimate photos or personal information. In both the catfishing and hacking scenarios, the videos, photos, or other private information acquired are then typically used to blackmail the victim into acceding to the perpetrator’s demands, in supposed exchange for not releasing the compromising material to the victim’s family and friends.

Voluntarily sharing intimate digital imagery is widely regarded as 21st-century sexual exploration among young people, and that characterization is backed by research.[17] But the practice remains a key risk vector for sextortion and other potential harms stemming from misrepresentation and falsehoods. In 2024, of the 17% of respondents who admitted to sharing or distributing intimate imagery online, 63% said they were lied to by the perpetrator, and 58% reported losing control of the material once it was sent. Those under age 18 who admitted to sharing intimate imagery were particularly vulnerable: 76% said they were lied to by the abuser, and 66% said they lost control of the imagery.

Meanwhile, Generation Z’s overall involvement with intimate imagery online remained a blind spot for parents in 2024. One in five (21%) parents of teens thought their teen had ever been involved with sexual imagery online when, in fact, more than a third (36%) of teens had admitted to such involvement in just the prior three months. That 15-percentage-point gap is a gross departure from the accuracy with which parents were able to predict their teens’ exposure to, or experiences with, other online risks, like unwanted contact, bullying, harassment, and hate speech. In those other instances, parents said their teens came forward to tell them what happened; they heard about the incident from someone else, like a friend or peer; or they figured it out for themselves. Sexual risks may be a weak spot for parents in no small part because of the nature of the risks themselves: parents may be disinclined to accept that their children are interested and engaged in sexual exploration and discovery, regardless of their age or maturity level. Whatever the reasons behind the apparent disconnect, the findings represent an opportunity both to educate parents and to encourage teens to seek guidance from adults about sexual risks, as young people appear to be doing with other online issues.

3.2 DWBI Deeper-Dive: AI-Generated Imagery

Finally, the 2024 study also examined Generation Z’s involvement with AI-generated imagery, including photos and videos of a sexual nature. Here are some top-level findings about that portion of the research:

  • Across teens, young adults, and parents of teenagers aged 13 to 19 (N=9,007) in the six countries, eight in 10 respondents (81%) said they had ever encountered AI-generated imagery of any sort.
  • Nearly two-thirds (62%) said they had seen AI-generated imagery in the prior three months.
  • Of those who had seen AI-generated imagery of any type, most (54%) had encountered it on social media; smaller percentages had seen the material while browsing the web (22%) or reviewing news articles (19%). Fewer still had received AI-generated imagery of any kind from someone else (9%).
  • AI-generated photos (as opposed to videos) were the most common media format seen (83%), and this was the case across all three age demographics. Meanwhile, jokes and memes (53%) and AI-generated photos of celebrities (50%) were the most common AI images seen, while 15% of respondents reported viewing AI-generated imagery of someone they knew in real life.
  • As for the use of AI image-generation tools, nearly three in 10 (29%) said they had used such tools, with use most common among 18-to-24-year-olds (36%).

Additionally, results show that one-quarter (24%) of those who said they had seen AI-generated photos and videos reported seeing imagery of a sexual nature. Of those, 2% said the imagery included someone they believed to be under the age of 18.

Encouragingly, when people saw this type of content, nine out of 10 took some action, ranging from blocking or deleting the content (54%) to speaking with trusted friends or family members (52%). Just 42%, however, said they reported the content to the relevant platform or service, or to a hotline or helpline. (Low reporting rates follow a broader trend across digital safety issues, one also evidenced in our research.[18])

Even more alarming, more than 40% of respondents were unclear about platforms’ and services’ legal obligation to report sexual images of minors, even when such images are intended as jokes or memes. And while a larger share (70%+) recognized that it is illegal to use AI technology to create fake sexual content of another person, or to retain, view, or share sexual images of minors, the findings indicate considerable work remains to ensure the global public is aware of the laws surrounding this content. (See Figure 1 for details.)

[Figure 1]

In the U.S., for example, nearly 40% of respondents said they believe it is legal to use AI technology to create fake sexual images of a person. (See country details in Figure 2 below.) In addition, we have heard anecdotally of a concerning trend from industry colleagues: with the proliferation of this type of content, some teenage girls in particular are experiencing “FOMO” and feeling “left out” if they are not featured in the AI-manipulated sexual imagery their peers are inappropriately creating and sharing. This disturbing point further underscores the need to raise awareness of this specific online risk among all stakeholders, and for trusted adults and informed peers to play an active role in discouraging this type of behavior.

4 Conclusion

Child sexual exploitation and abuse online is a reprehensible crime that demands a concerted effort from all sectors of society to eradicate. This pervasive issue necessitates a comprehensive, collaborative approach if we hope to find viable solutions. At Snap, we are dedicated to playing our part in minimizing risks and reducing potential harm on our platform and across the broader technology ecosystem. We employ a multifaceted strategy to combat child sexual exploitation and abuse, including annual research conducted in six countries. That research helps to inform our own policies and product decisions and contributes to the broader knowledge base as we collectively strive to learn, improve, and adapt in our ongoing mission to safeguard young people – and indeed, all individuals – across today’s digital landscape.

[Figure 2]

[1] The six countries included in the research are Australia, France, Germany, India, the UK, and the U.S.

[2] See https://www.technologycoalition.org/newsroom/announcing-lantern

[3] See https://takeitdown.ncmec.org/

[4] See https://www.iwf.org.uk/our-technology/report-remove/

[5] See https://parents.snapchat.com/parental-controls

[6] See https://values.snap.com/safety/safety-center

[7] See https://help.snapchat.com/hc/en-us/articles/7012304746644-How-do-I-stay-safe-on-Snapchat

[8] See https://www.snapchat.com/p/9e8a90a8-897f-474b-8d65-e29797b3aa40/760067458648064?lang=en-US

[9] See https://values.snap.com/news/k2p-launch and https://www.dhs.gov/know2protect

[10] See https://help.snapchat.com/hc/en-us/articles/13266788358932-What-is-My-AI-on-Snapchat-and-how-do-I-use-it

[11] See Snapchat’s Community Guidelines: https://values.snap.com/policy/policy-community-guidelines

[12] The index leverages the PERNA model, a variation on an established well-being theory [Seligman, M.E.P. (2011) PERMA–A Well-Being Theory], comprising 20 sentiment statements across five categories: Positive Emotion, Engagement, Relationships, Negative Emotion, and Achievement. In PERNA, Negative Emotion replaces Meaning, and Achievement replaces Accomplishment.

[13] See this link for a list of all 20 DWBI statements: https://images.ctfassets.net/kw9k15zxztrs/4L5iG6MCaykySxyNyOtyLm/cf7102b4d41f93612f8c2d35645a332c/img2.png

[14] For full results for years 1 through 3, see this link: https://values.snap.com/safety/dwbi

[15] See this blog for details: https://www.weprotect.org/blog/two-thirds-of-gen-z-targeted-for-online-sextortion-new-snap-research/

[16] See this blog for details: https://values.snap.com/news/new-sextortion-research-gen-z

[17] See https://www.thorn.org/blog/thorn-research-understanding-sexually-explicit-images-self-produced-by-children/

[18] Reporting appears to have gotten a bit of a “bad rap” over the years, as young people have come to normalize exposure to problematic content and conduct online, or to equate reporting with snitching or tattling. For more, see this post and related research: https://values.snap.com/news/back-to-school-safety