DOI: 10.65398/FHVB7362
Josiah Kiarie Kimani AI for Science MSc Student, African Institute for Mathematical Sciences (AIMS), South Africa
Protect, Provide, Participate: Children First in the Artificial Intelligence Era
Artificial Intelligence (AI) is rapidly transforming different sectors and landscapes of our lives across the globe (Elliott, 2019), offering both unprecedented opportunities and profound risks, especially for children. This paper, drawing on my experiences as a youth and an AI for Science MSc student, explores the insights and ethical imperatives of protecting, providing for, and enabling the participation of children in the AI era. I examine the duality of AI – its capacity to enhance safety, education, creativity, and innovation, as well as its potential to cause harm and exacerbate risks such as privacy violation, exploitation, and manipulation. I propose actionable recommendations for a children-first approach, emphasizing the need for robust safeguards, equitable access, and youth involvement in AI governance and policy-making. This perspective aligns with the outcomes of the 2025 Vatican conference Risks and Opportunities of AI for Children: A Common Commitment for Safeguarding Children, held at Casina Pio IV in Vatican City in March 2025, which advocated for collaborative, ethical, and inclusive AI development that prioritizes the dignity and well-being of every child.
1. Introduction
It is important to be cognisant of how AI is reshaping and influencing childhood, providing opportunities for growth as well as risks that, if not addressed, could prove disastrous. Childhood in the current AI era is different in ways unimaginable a decade ago (Zhou, 2025). AI’s impact on children is multifaceted, with effects that are age-group specific, spanning early childhood development to adolescence. From personalized learning platforms to AI-powered safety tools, children are increasingly interacting with intelligent systems that influence their education, socialization, and well-being. However, these technological advances also introduce complex risks – ranging from privacy breaches to psychological harm – that disproportionately affect young users.
This paper articulates a youth perspective on how we can protect, provide for, and empower children in the AI era, drawing on the conference outcomes and current research and development.
2. Protect: Designing AI to Safeguard Children
AI must be designed with proactive, robust safeguards to ensure the safety and dignity of children. As a young African researcher, I observe that children across Kenya, the continent, and the world increasingly interact with AI-driven systems – particularly through social media, educational platforms, and digital games. These systems, while offering benefits, expose children to threats such as cyberbullying, online grooming, data exploitation, and manipulative content.
Recommendation algorithms, for instance, often prioritize engagement and commercial metrics over user well-being, inadvertently exposing young users to harmful material or addictive behaviours (Montag & Diefenbach, 2023). Social media platforms must be held accountable for the unintended consequences of AI-driven personalization that prioritizes profits over protection.
Protective strategies must include:
- Transparent algorithms: Developers should publish the objectives and consequences of recommendation engines, especially those interacting with minors.
- Safety-by-design principles: Platforms must integrate child safety features as core functionalities, not as optional add-ons.
- Real-time harm detection: AI can be leveraged to detect predatory behaviours or cyberbullying patterns and alert guardians or moderators promptly. Griffeye Analyze DI Pro and Project Arachnid are examples of how AI is used in law enforcement to distinguish real from AI-generated CSAM (UNICRI, 2024) and to detect new CSAM variants (PAS, 2025).
- Privacy by Design: Children’s data must be protected through stringent privacy measures, minimizing data collection and ensuring informed consent. Regulatory frameworks should mandate data minimization and clear parental controls.
- Strict assessments for educational AI tools: The EU AI Act (2024) requires watermarking of AI-generated content to combat deepfakes and promote transparency and accountability.
- Regular monitoring and evaluation of AI systems: Ethical frameworks that guide AI development should mandate regular monitoring and evaluation of systems for potential harm.
Globally, and particularly across the Global South where regulatory frameworks may be weaker, these protections are vital to prevent disproportionate harm to vulnerable populations (Pontifical Academy of Sciences, 2025). As AI evolves, continuous adaptation of protective frameworks remains crucial to balance innovation with child safety.
3. Provide: Equitable Access and Inclusive Innovation
Beyond protection, AI must provide opportunities for all children to thrive. In regions like Africa, AI holds the potential to democratize access to quality education, healthcare, and social services. Adaptive learning systems can personalize education for children in remote or under-resourced communities, and assistive technologies can empower children with disabilities (Innocenti, 2023).
At AIMS, we see firsthand how AI can transform education for African students, enabling access to world-class resources and personalized learning experiences. However, the promise of AI can only be realized if intentional efforts are made to bridge the digital divide. Without deliberate investment, AI risks exacerbating inequalities between children with abundant resources and those without. Reliable internet access, affordable devices, and culturally relevant content remain scarce for millions of African children.
Proposals for ensuring equitable provision include:
- Development of low-bandwidth AI educational tools: Tailored for rural areas with limited connectivity.
- Public-private partnerships: Governments and technology companies must collaborate to subsidize digital infrastructure in underserved regions.
- Open-access resources: Sharing of educational AI models under open licenses to promote inclusivity.
If designed and deployed equitably, AI can be a force for good, expanding children’s access to knowledge, health services, and creative expression regardless of geography or socio-economic status.
4. Participate: Empowering Children and Youth in Shaping AI
Children and young people are not passive users of technology – they are active agents who experience, adapt, and shape digital realities. However, they are often excluded from decision-making spaces where technologies that impact their lives are designed and regulated.
Youth participation must move beyond tokenism. As digital natives, young people bring invaluable insights into how AI systems affect mental health, social relationships, and learning. Involving youth in AI ethics committees, governance bodies, and product design processes ensures more relevant, ethical, and user-centred outcomes (5Rights Foundation, 2024).
Key strategies to foster meaningful youth participation include:
- Institutionalizing youth participation in advisory boards within AI research institutes, technology companies, and policy frameworks.
- Integrating digital and AI literacy programs in education curricula that empower children to critically engage with AI. UNICEF’s AI Literacy Toolkit teaches 12–17-year-olds to identify deepfakes through interactive simulations (UNICEF, 2020).
- Funding youth-led innovation projects that explore ethical, inclusive AI development. A notable example is the AI for Good Scholarship 2025, offered by The Brandtech Group in partnership with One Young World, which selects and funds five exceptional young innovators each year who are using AI to address social equity, environmental sustainability, and the responsible use of technology (One Young World, 2025). Similar efforts, such as the Commonwealth AI Innovation Fund proposal and UNESCO’s Global Youth Grant Scheme (UNESCO, 2023), also provide grants, mentorship, and capacity-building for youth-led projects, emphasizing the importance of empowering young people to create AI solutions that are ethical, inclusive, and aligned with sustainable development goals.
My own academic journey at AIMS – a pan-African network nurturing STEM excellence – has demonstrated the transformative power of empowering young people. Extending such opportunities globally can ensure that children’s voices are not an afterthought, but a driving force in shaping ethical AI.
5. Conclusion: A Common Commitment for a Child-First Digital Future
The rapid evolution of AI demands that we act decisively to centre children’s rights and well-being at the heart of technological innovation. Protecting children requires proactive safeguards; providing for them demands equitable and inclusive deployment; enabling their participation ensures that AI serves rather than dominates humanity’s future.
Echoing the Vatican conference’s final call to action, I emphasize that protecting, providing, and promoting participation are not optional luxuries – they are urgent ethical imperatives. As a young scientist, I commit to contributing to an AI future that upholds human dignity, fosters opportunity, and empowers every child to flourish in the digital age.
As we move forward, let us remember that the true measure of technological progress is not in what it makes possible, but in how it serves the most vulnerable among us – especially children.
AI must serve children – not the other way around.
References
UNICEF. (2020). Virtual consultation workshop report: AI and children in East Asia and the Pacific. United Nations Children’s Fund, Office of Global Insight and Policy.
UNICRI. (2024). AI for Safer Children. United Nations Interregional Crime and Justice Research Institute. https://unicri.org/topics/AI-for-Safer-Children
5Rights Foundation. (2024). The Digital World of Children: Policy Recommendations for AI Development. https://5rightsfoundation.com/
Elliott, A. (2019). The culture of AI: Everyday life and the digital revolution. London: Routledge.
UNICEF Innocenti. (2023). AI and children: Opportunities, risks and policy recommendations. Florence, Italy: UNICEF Office of Research – Innocenti. https://www.unicef-irc.org/
Malgieri, G. (2024). Vulnerability in the EU AI Act: building an interpretation. Available at SSRN 5058591.
Pontifical Academy of Sciences (PAS). (2025). AI enhances detection of child abuse cases. Neuroscience News. https://neurosciencenews.com/ai-child-abuse-detection-28741/
UNESCO. (2023). Financing youth innovation for green solutions. https://www.unesco.org/en/articles/financing-youth-innovation-green-solutions
One Young World & The Brandtech Group. (2025). AI for Good Scholarship 2025: Empowering young innovators to address global challenges. https://opportunitiesforyouth.org/2025/04/28/ai-for-good-scholarship-2025-empowering-young-innovators-to-address-global-challenges/
Zhou, X. (2025). Innovation and reconstruction of early childhood education models driven by artificial intelligence technology. Journal of Artificial Intelligence Practice, 8(1). ISSN 2371-8412.