DOI: 10.65398/FOGH2829
Joachim von Braun (PAS) and Britta Holmberg (World Childhood Foundation)
Introduction and concept
The Pontifical Academy of Sciences, in collaboration with the World Childhood Foundation, chaired by Queen Silvia of Sweden, and the Institute of Anthropology of the Pontifical Gregorian University, convened a high-level workshop to address the opportunities and risks AI presents for children. The conference brought together political leaders, leading AI scientists, youth representatives, the United Nations and other international organizations, civil society, corporate leaders, survivors of abuse, and church officials.
The Impact of AI on the Safety and Dignity of Children
Artificial Intelligence (AI) is profoundly transforming the digital experiences of children, presenting both opportunities and risks. The surge in AI innovation brings hope for solving complex problems and for enhancing children’s digital safety and dignity. However, the rapid development of AI, combined with the pandemic-induced surge in digital usage, has also exposed children to increased online risks such as deepfakes, cyberbullying and sexual exploitation.
While AI-driven tools for analyzing and managing text, images and large datasets can enhance online safety, they also raise significant concerns regarding misuse, privacy violations and algorithmic bias. Ensuring the intentional and ethical deployment of AI is essential to safeguarding children’s well-being in an increasingly digital environment, including within social media ecosystems. AI is now deeply embedded in digital infrastructure and integral to the functioning of social platforms. At the same time, AI technologies can be exploited to generate and disseminate child sexual abuse material, facilitate online grooming, and enable human trafficking.
The deployment of AI in surveillance and monitoring, while often intended to enhance safety, can produce unintended consequences. Excessive monitoring risks infringing upon children’s fundamental right to privacy, fostering a culture of surveillance that undermines their sense of autonomy and dignity. Moreover, algorithmic bias within AI systems can perpetuate, and in some cases intensify, existing social and structural inequalities.
Despite these challenges, AI has the potential to significantly enhance the opportunities and safety of children in digital environments. One of these opportunities lies in AI’s real-time detection and prevention of online threats. Advanced AI algorithms can monitor behavioural patterns across digital platforms, identify potentially harmful interactions, and remove inappropriate or illegal content.
It is imperative to address the associated risks through robust regulatory frameworks, ethical AI development practices, and sustained dialogue among stakeholders. Ensuring that AI technologies are designed and implemented with the best interests of children at their core is essential to harnessing their benefits while mitigating potential harms. As AI continues to evolve, a balanced approach that prioritizes both safety and dignity will be essential for safeguarding children’s well-being in the digital age.
In the Global South, where access to traditional education and child protection services may be limited, AI offers the potential to deliver innovative solutions that increase the reach and quality of educational opportunities and essential child protection services.
The Conference Organizers
The conference organizers and their invited experts combined related scientific insights with long-standing experience in child protection.
The Pontifical Academy of Sciences (PAS) has held various conferences on AI and humanity over the last few years.[1] The goals of PAS include “stimulating an interdisciplinary approach to scientific knowledge; encouraging international interaction; furthering participation in the benefits of science and technology by the greatest number of people and peoples; ensuring that science works to the advancement of the human and moral dimension of people; achieving a role for science which involves the promotion of justice, development, solidarity, peace, and the resolution of conflict; fostering interaction between faith and reason and encouraging dialogue between science and spiritual, cultural, philosophical and religious values…”.
With 25 years of pioneering efforts, the World Childhood Foundation, founded and spearheaded by Her Majesty Queen Silvia of Sweden, has played a crucial role in preventing child sexual abuse and exploitation. Throughout its history, Childhood has consistently championed projects that prioritize children’s safety and dignity, ranging from groundbreaking AI initiatives to innovative community-based programs.
The Institute of Anthropology: Interdisciplinary Studies on Human Dignity and Care (IADC) of the Pontifical Gregorian University conducts research and provides education on issues related to dignity and care through an intercultural and interdisciplinary approach, grounded in an anthropological understanding of human life informed by a Christian perspective. Established in 2012 as the Centre for Child Protection, the Institute contributes to the development of a common model for creating a safer world, and raises awareness of the urgent need to protect minors and vulnerable adults from harm.
Outcomes of the Workshop
The meeting emphasized the shared responsibility of all stakeholders to collaborate effectively within their respective spheres of influence to protect children. Its primary aim was to foster a common understanding of the risks and opportunities that AI presents for child safety and development, and to outline the responsibilities of each stakeholder in advancing a joint commitment to safeguarding children. The key outcomes of the conference include:
- Shared Understanding: An enhanced collective awareness of the current state of AI development, including risks and opportunities related to the safety and dignity of children.
- Defined Responsibilities: A clear delineation of the roles and responsibilities of science, the tech industry, religious communities (including the Catholic Church), researchers, civil society, children and young people, and survivors of abuse.
- Joint Commitment: A joint statement outlining specific, actionable steps each stakeholder may consider to enhance child wellbeing through AI, expand educational opportunities for children and youth, and combat child sexual abuse and exploitation using AI, including regular monitoring and evaluation of progress.
The outcome statement (see below), drawing on insights shared during the workshop, emphasizes five key actions:
- Strengthening international governance and regulatory frameworks to protect children from AI-driven harm.
- Developing AI systems that prioritize children’s safety, privacy, and dignity.
- Promoting the benefits of AI while ensuring transparency, accountability, and fairness.
- Elevating youth voices in AI governance discussions, and in the design and development of AI systems and products.
- Fostering global collaboration to ensure that AI serves, rather than endangers, the intellectual potential, dignity and well-being of every child in the digital age.
AI must be developed responsibly, prioritizing children’s well-being. Therefore, ethical regulations must be further developed and rigorously enforced. In light of the rapid pace of AI innovation, the co-organizers reaffirm their commitment to keeping the theme of AI and child welfare high on their agendas.
[1] Power and Limits of Artificial Intelligence, A. Battro, S. Dehaene (Eds.), Proceedings of the Workshop 2016, Scripta Varia 132, Vatican City, 2017; Robotics, AI and Humanity: Science, Ethics and Policy, J. von Braun, M.S. Archer, G.M. Reichberg, M. Sánchez Sorondo (Eds.), Springer Nature, 2021, Open Access; Children and Sustainable Development: Ecological Education in a Globalized World (Workshop 2015), Springer, 2017; Science for Sustainability and Wellbeing in the Anthropocene: Opportunities, Challenges, and AI, Plenary Session, 23-25 September 2024.