Prof. Dr. Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University (Department of Computing Science) and Director of the AI Policy Lab. Member of the United Nations High-Level Advisory Body on AI.

Rethinking AI for Children: From Technological Potential to Human Responsibility

As artificial intelligence systems increasingly shape the environments in which children grow, learn, and interact, the central question is no longer what AI can do, but whether and when it should be used at all. Drawing on global literature, policy frameworks, and the author’s work with UNICEF, this paper explores the ethical, social, and developmental implications of AI in children’s lives. It argues for a rights-based, inclusive approach to AI development, grounded in children’s best interests. By first surveying the current state of AI applications in childhood and their attendant risks, we build toward a framework for responsible design and governance. We conclude with actionable policy and design recommendations that ask a fundamental question: Is AI needed here?

1. Introduction

Artificial Intelligence (AI) technologies are now deeply embedded in children’s daily lives, through voice assistants, learning apps, health chatbots, and more. Yet children are rarely part of AI policy debates or design processes, as evidenced by major AI guidelines such as those of the OECD,[1] the European Union,[2] UNESCO,[3] or the now-revoked U.S. Blueprint for an AI Bill of Rights,[4] which frequently reference general human rights but rarely mention children explicitly or consider their specific needs in AI governance.

These realities demand not only inclusion but urgent ethical scrutiny: what does it mean for children to grow up under the influence of AI systems they neither helped design nor can fully understand – systems that may not only exclude them, but also misrepresent, profile, or actively harm them through biased classifications, opaque decisions, or intrusive interventions? This paper seeks to move beyond the binary of AI as either innovation or intrusion and instead frame it as a matter of human responsibility. We begin by surveying existing research on the opportunities and risks AI poses to children across development, equity, education, and health. Building on this foundation, we examine the ethical and design challenges that AI introduces into childhood contexts and propose a values-led approach to AI governance. Ultimately, we argue that AI should be deployed only when it demonstrably upholds the rights and dignity of children.

2. Background

Artificial intelligence is reshaping childhood. From algorithmically curated content to AI-driven learning systems and health diagnostics, children are increasingly subject to digital systems they cannot fully comprehend or question [9, 12, 24]. Children are not miniature adults; their developmental trajectories are shaped by distinct cognitive, emotional, and social processes that unfold over time and are highly sensitive to environmental inputs. Because children are still forming critical faculties such as judgment, skepticism, and self-regulation, they are particularly vulnerable to the affordances and limitations of AI systems, which are often presented as authoritative, responsive, or even relational entities. Recent literature emphasizes that AI is not a passive tool that merely supports childhood development; it is an active agent that shapes how children think, learn, interact, and understand the world [21, 27]. AI-powered recommendation systems and conversational agents, for example, can influence what children see, how they reason, and even whom they trust. One pressing concern is that children may develop overtrust in machines, especially when AI systems present themselves as friendly, reliable, or omniscient [1]. Such overtrust can blur the distinction between human and artificial agents, undermining children’s understanding of social relationships, accountability, and the difference between opinion and algorithmic inference.
Additionally, when AI systems are trained on biased or commercialized data, they may subtly reinforce harmful stereotypes, materialistic values, or culturally narrow perspectives, shaping children’s worldviews in ways that are neither transparent nor easily reversible.

More worryingly, this technological transformation and its risks are not equally distributed. As UNICEF and ITU document,[5] nearly two-thirds of the world’s school-age children lack home internet access. AI-enhanced educational tools, mental health support systems, and child welfare analytics are mostly deployed in digitally privileged settings, leaving behind vast populations without access or influence. This means that the benefits of AI, whether in education, healthcare, or social support, are accessible only to a minority, and that the design and data driving these tools rarely reflect the needs or realities of the global majority. This digital divide introduces a compounding inequity: not only are marginalized children excluded from potentially helpful technologies, but the systems themselves may embed assumptions that render these children invisible, misrepresented, or misjudged. Worse still, in some cases these systems do not simply ignore children; they may actively harm them by misclassifying behaviors, reinforcing bias, or triggering inappropriate interventions, as seen in predictive child welfare algorithms or automated content moderation systems [8]. Researchers have argued that if equity and representation are not proactively embedded, AI could amplify rather than reduce social disparities [3, 16]. Training data and benchmarks remain predominantly Western, urban, and English-speaking, leading to a systemic lack of cultural and contextual relevance in how AI classifies, supports, or evaluates children globally.

In education, adaptive learning systems and intelligent tutoring technologies offer customization and scalability. Yet these systems raise questions about pedagogical oversight, transparency, and teacher displacement [20]. Studies suggest that while some AI systems can enhance learning outcomes, their long-term impact on motivation, social-emotional learning, and classroom dynamics is still under-researched. Similarly, in healthcare, AI shows promise in pediatric diagnostics, developmental screening, and mental health triage [14, 25]. But lack of clinical validation for diverse populations and concerns over data misuse remain critical barriers. Taken together, the literature paints a picture of both promise and peril. If AI is to genuinely enhance children’s lives, its development must be grounded in empirical evidence, child development theory, and a commitment to justice. This requires not just technical safeguards but a participatory, child-centered paradigm for digital innovation.

3. Child-Centered Approaches to AI Governance

Ensuring that AI respects children’s rights and developmental needs requires more than well-intentioned guidelines; it demands critical reflection on whose values are being encoded into these systems. As the push for child-centered AI governance grows, we must confront a foundational issue: the value assumptions embedded in AI models often reflect the priorities of a narrow segment of the global population. As illustrated by Mozilla’s 2022 Internet Health Report,[6] datasets used to train AI systems overwhelmingly originate from a handful of countries – most notably the United States. This concentration distorts the digital landscape: entire regions, particularly in the Global South, are effectively invisible in the data that shapes AI. As a result, the values, norms, and assumptions embedded in these systems reflect the perspectives of a narrow, Western-centric segment of the world. This “WEIRD” bias (Western, Educated, Industrialized, Rich, Democratic) [10] risks reinforcing algorithmic colonialism and marginalizing the lived experiences of most of the world’s children. If AI is to support diverse childhoods equitably, value pluralism must be a foundational design and governance principle, not an afterthought.

In particular, the development of AI systems that interact with or affect children demands frameworks that are not only technically robust but also ethically grounded in child rights and developmental theory. A growing body of research emphasizes that children must be treated not merely as passive recipients of AI systems but as active stakeholders in their design and governance [11, 17]. Across domains, a common thread is the recognition of children’s agency and digital rights. Research and international policy increasingly underscore that children have a right to participate in decisions affecting their digital lives and to access technologies that enhance rather than compromise their well-being [23].

The design of AI-powered social agents, such as the increasing use of educational or therapeutic assistants or robots, illustrates the need for socially embedded frameworks. These must account for children’s cognitive and emotional development, emphasizing empathy, transparency, and safe interaction models [26, 6, 4]. Ensuring that these systems respect children’s autonomy while supporting healthy socialization requires careful attention to design features such as embodiment, interactivity, and context awareness.

Commitment to participatory design has emerged as a best practice in child-computer interaction research. Involving children directly in co-design processes has been shown to improve relevance, inclusivity, and transparency in AI systems [19, 11]. Rather than tokenistic consultation, this approach seeks to center children’s perspectives, enabling them to shape technologies in ways that align with their evolving capacities and lived experiences.

An equally promising mechanism is the use of regulatory sandboxes: controlled, real-world environments in which AI technologies can be tested and evaluated for their impacts on children’s rights, safety, and well-being prior to widespread deployment. These sandboxes enable iterative development with stakeholder feedback, allowing AI developers, educators, caregivers, and children to collaboratively assess appropriateness and potential harm [18, 5].

Together, these contributions underscore a shift toward AI governance models that are child-centric by design – integrating empirical insights, participatory ethics, and rights-based safeguards. Rather than simply adapting adult frameworks to child contexts, these approaches demand innovation in how we think about risk, responsibility, and agency in the digital age.

3.1  Participatory Design in Practice

Participatory design, where children are not merely consulted but actively involved as co-designers, has emerged as a cornerstone of ethical AI development for children. Moving beyond adult-centric assumptions, this approach recognizes children’s evolving capacities, contextual knowledge, and lived experience as critical inputs to technology design [11, 19]. Research in child-computer interaction has demonstrated that participatory methods such as cooperative inquiry, storytelling, and iterative prototyping enhance both the usability and relevance of AI systems while fostering digital agency and critical reflection among children [6]. For example, projects like UNICEF’s Innovation Fund[7] pilots include child-led design workshops to shape technologies such as digital identity and safety tools, demonstrating that when supported appropriately, children can offer insightful perspectives on fairness, transparency, and ethical concerns in AI. However, true participation demands more than occasional consultation. It requires developmental adaptation, sustained engagement, and capacity-building for both children and adult facilitators to ensure that children’s contributions are taken seriously and visibly influence outcomes [17].

Institutionalizing participatory design involves integrating it into AI development pipelines, including ethical review processes, funding criteria, and design team structures. As children are increasingly affected by AI systems, ensuring they co-create these systems is not just good practice – it is a matter of justice and rights. Moreover, inclusivity must be a guiding principle. Children from marginalized or underrepresented groups are often excluded due to linguistic, cultural, or infrastructural barriers. A rights-based participatory approach must therefore embed accessibility, equity, and power-sharing into the design process from the outset.

4.  Toward Child-Centered AI: Design and Policy Recommendations

Building on the ethical reflections and developmental concerns raised throughout this paper, we identify a set of interlocking priorities to guide the responsible deployment of AI in children’s lives. These recommendations are grounded in the recognition that AI in childhood presents a double ethical challenge: on one hand, the obligation to protect children from surveillance, misclassification, and harm; on the other, the responsibility to design systems that genuinely support and reflect their diverse needs, rights, and lived experiences. At the heart of this approach lies a foundational ethical checkpoint: not simply whether AI can be used, but whether it should be used at all. This question must precede technical considerations and frame the entire development process. It demands that we shift from a mindset of solutionism to one of reflective necessity. Before reaching for the AI “hammer,” we must ask whether the issue is truly a “nail”. Not all problems in children’s lives require algorithmic solutions, and the presence of powerful technology should never substitute for thoughtful judgment about its appropriateness, impact, and proportionality.

Concerns about cultural bias and representational exclusion are not merely academic but must translate into concrete design and policy responses. If AI systems are to serve children equitably, they must be built on inclusive processes that proactively account for global diversity rather than retrofitting fairness as an afterthought. This means embedding value pluralism as a design principle from the outset, ensuring that training data, development teams, and governance frameworks reflect the multiplicity of childhoods across different cultural, social, and developmental contexts. Inclusive AI for children is not just about avoiding harm; it is about enabling relevance, agency, and justice in the systems we create. Policy must therefore mandate not only transparency and oversight, but also meaningful participation from communities and children themselves in shaping technologies that affect their lives.

4.1  Mitigating Dual Risks: Observation and Estimation

Children’s interaction with AI systems introduces a distinctive and deeply consequential set of dual risks: the risk of being constantly observed, and the risk of being estimated. These are not simply technical design flaws – they represent fundamental social, developmental, and ethical challenges that must be addressed through both governance and system design.

The first of these, observation, refers to the pervasive surveillance embedded in many AI-driven applications and platforms. These systems routinely track children’s behaviors, preferences, locations, and interactions, often under the justification of personalization, safety, or enhanced service delivery. However, such ongoing data collection can compromise children’s privacy and undermine their autonomy. More insidiously, it can normalize surveillance from an early age, habituating children to being constantly watched and disempowering them in their relationship to technology. This risks creating a digital environment in which children learn to self-censor, internalize external scrutiny, and accept asymmetrical power dynamics as a default condition of participation [15, 22].

The second, estimation, emerges from the use of predictive systems, where AI systems are used to infer a child’s behavior, needs, or future outcomes based on their data footprint. These estimations, sometimes called “data doubles”, can shape how children are perceived and treated by educational, welfare, or health systems. When based on incomplete, biased, or context-insensitive data, such profiling can misrepresent a child’s identity or potential, leading to inappropriate categorization, limited opportunities, or unwarranted interventions. Estimation risks are especially pronounced for children from underrepresented or marginalized populations. When children’s identities, behaviors, or contexts are not adequately reflected in training data, AI systems cannot “see” them directly. Instead, such children are more likely to be estimated, subject to algorithmic assumptions drawn from other groups. This increases the likelihood of misclassification and exclusion, particularly in systems that aim to allocate support or assess risk [2]. In effect, the less visible a child is in the data, the more they become a projection of someone else’s profile, shaped by generalized norms that may not apply to their lived reality.

These risks intersect with the broader themes of this paper: the developmental vulnerability of children, the cultural bias in AI training data, and the limited transparency in automated systems. Importantly, they reveal that AI does not merely record or reflect children’s realities, it actively constructs and constrains them through mechanisms that are often opaque and unaccountable. Crucially, these risks are not isolated but interconnected: surveillance feeds estimation, and estimation justifies further surveillance. Together, they reinforce a feedback loop in which children’s realities are interpreted and constructed through opaque and potentially harmful algorithmic logics.

Protecting children from the harms of AI requires more than technical fixes; it calls for a reassertion of human judgment, dignity, and responsibility at every level of the digital ecosystem.

To mitigate the risks above, AI systems involving children must be subject to stringent ethical and developmental standards, as demanded by organizations such as UNICEF. Data collection should be strictly minimized, limited to what is necessary and appropriate for the child’s context. Predictive analytics in sensitive domains such as education, child protection, or mental health must be used with extreme caution, and only when their fairness, relevance, and accuracy can be empirically validated. Systems must be designed with mechanisms for explainability and contestability, allowing decisions to be questioned, reviewed, and reversed. Above all, human oversight must be preserved at key decision points, ensuring that algorithmic judgments do not override contextual understanding or professional responsibility. Key safeguards should include:

  • Minimize data collection by default, ensuring that only data strictly necessary for a given function is used, and that children’s digital traces are not repurposed beyond their original context.
  • Enable explainability and contestability in systems that affect life outcomes, especially in education and child protection, where misinference can have lasting effects.
  • Embed human oversight and intervention points, particularly where AI outputs may lead to decisions about care, assessment, or risk.
  • Restrict use of profiling and predictive analytics in sensitive domains unless their validity, fairness, and developmental appropriateness can be demonstrated for the child populations they target.
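To make these safeguards concrete, they can be treated as a pre-deployment gate that a proposed child-facing system must pass. The following is a purely illustrative sketch; the class, field names, and checks are our own assumptions for exposition, not part of UNICEF’s guidance or any regulatory standard:

```python
from dataclasses import dataclass, field

@dataclass
class ChildAISystem:
    """Hypothetical record describing a proposed child-facing AI system."""
    name: str
    data_fields_collected: set = field(default_factory=set)
    data_fields_required: set = field(default_factory=set)
    provides_explanations: bool = False
    decisions_contestable: bool = False
    human_review_points: list = field(default_factory=list)
    uses_profiling: bool = False
    profiling_validated_for_population: bool = False

def safeguard_violations(system: ChildAISystem) -> list:
    """Return which of the four safeguards above the system fails."""
    violations = []
    # Data minimization: collect nothing beyond what the function needs.
    if system.data_fields_collected - system.data_fields_required:
        violations.append("data minimization")
    # Explainability and contestability of consequential outputs.
    if not (system.provides_explanations and system.decisions_contestable):
        violations.append("explainability/contestability")
    # Human oversight at key decision points.
    if not system.human_review_points:
        violations.append("human oversight")
    # Profiling only when validated for the target child population.
    if system.uses_profiling and not system.profiling_validated_for_population:
        violations.append("unvalidated profiling")
    return violations

# A hypothetical reading tutor that also tracks location would be flagged:
tutor = ChildAISystem(
    name="reading-tutor",
    data_fields_collected={"reading_level", "location"},
    data_fields_required={"reading_level"},
)
```

Such a gate does not replace ethical judgment; it merely makes each safeguard an explicit, auditable condition rather than an aspiration.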

Ultimately, reducing the risks of observation and estimation requires acknowledging children not merely as data subjects but as rights-holding individuals whose evolving capacities demand protection, respect, and voice in digital systems. The responsibility lies with designers, regulators, and institutions to ensure AI serves children – not monitors or molds them into predefined algorithmic categories.

4.2 Centering “Question Zero”: Is AI Needed?

Too often, AI is introduced based on what technology can do, rather than what is genuinely needed. In the context of childhood, this mindset risks prioritizing efficiency, scale, or innovation over human connection, developmental appropriateness, and ethical responsibility. Before any AI system is developed or deployed for children, we must ask the foundational question, also known as “Question Zero”: Is AI needed here?

This question goes beyond technical feasibility; it demands a shift from solutionism to reflective necessity [13]. It challenges designers, policymakers, and educators to consider whether a human-led, relational approach might better serve children’s needs. In many domains, from education to caregiving to mental health, peer-supported and community-driven models often outperform algorithmic solutions in promoting trust, resilience, and well-being.

“Question Zero” is not a barrier to innovation; it is a safeguard for dignity [7]. It reframes AI not as a default response but as a deliberate, well-justified choice, used only when it clearly advances children’s rights, agency, and development without introducing disproportionate risk. As emphasized throughout this paper, the ethical deployment of AI in childhood must begin not with the tools we have, but with the values we uphold. In this light, AI should not replace human judgment, but complement it, carefully, transparently, and only when it aligns with the best interests of the child. This perspective reflects a deeper ethical stance: that the presence of technology should never obscure the primacy of the human experience.

4.3  Design and Policy Recommendations

As argued in this paper, building AI systems that serve children justly and safely requires more than abstract ethics or voluntary codes; it demands structural commitments across both governance and design. This section outlines concrete policy and design actions that together form a roadmap for child-centered AI. These recommendations respond to the dual imperative identified throughout this paper: to protect children from harm and to ensure that AI technologies genuinely enhance their rights, well-being, and opportunities. Crucially, these proposals operationalize the principle that ethical reflection must precede technical deployment – beginning with “Question Zero”, and extending through every stage of AI development, regulation, and use.

Policy Recommendations

Effective governance of AI in childhood contexts begins with legally binding safeguards that reflect the unique vulnerabilities and rights of children. Policies must be proactive, not reactive, shaping how AI systems are funded, deployed, and evaluated before harm occurs. This means integrating child-specific criteria into regulatory frameworks, data protection laws, and national AI strategies. It also means investing in the infrastructure and equity mechanisms necessary to ensure that all children – not only those in digitally privileged settings – can benefit from technology in ways that are safe, just, and developmentally appropriate. Concrete measures include:

  • Mandate child rights impact assessments in all AI-related policies and funding programs.
  • Expand data protection legislation with child-specific provisions, including developmental sensitivity and meaningful consent.
  • Invest in equitable digital infrastructure to close access gaps, particularly in underserved regions.
  • Ensure transparency and human oversight for all high-stakes AI applications affecting children.
  • Align national AI strategies with international frameworks (e.g., UNICEF, OECD, UNESCO).

Design Principles

Design is not a neutral process; it encodes values, assumptions, and priorities into the systems that increasingly shape children’s lives. Responsible AI design for children must begin from a recognition of their evolving capacities, diversity, and need for protection and participation. This section outlines principles for embedding children’s rights into the very architecture of AI systems, from inclusive datasets to explainable interfaces, and from participatory co-design to developmentally appropriate safeguards. These are not optional add-ons but essential features for AI that is trustworthy, empowering, and aligned with the lived realities of children globally.

  • Implement “Children’s Rights by Design” as a foundational approach in AI development.
  • Institutionalize participatory design practices involving children, educators, and caregivers from diverse contexts.
  • Prioritize diversity in training data to reflect global childhoods and avoid systemic exclusion.
  • Embed explainability, safety defaults, and opt-out mechanisms in all AI systems used by or for children.
  • Provide AI literacy education for children and communities to foster informed agency and resilience.
  • Establish formal mechanisms for ongoing child consultation and feedback in the AI lifecycle, beyond initial design, through advisory panels, school-based labs, or digital participation platforms.

5.  Operationalizing Child-Centered AI

Translating child-centered AI principles into real-world practice requires structured, institutional, and iterative approaches. While the previous sections established the ethical foundations, the following framework outlines how these principles can be systematically implemented, monitored, and continuously improved across diverse contexts. Towards this, we outline practical steps for governments, developers, educators, and communities to embed children’s rights into every stage of AI development and deployment. As a preliminary proposal, it is intended as a starting point for further discussion, adaptation, and validation across different policy, cultural, and institutional settings. We welcome critique, collaboration, and refinement to ensure its relevance and effectiveness in practice.

  • Implementation pathways: Governments and international bodies should integrate child-specific safeguards into existing AI governance mechanisms. This includes:
    • Requiring Child Rights Impact Assessments (CRIA) as a condition for funding or regulatory approval.
    • Embedding “Children’s Rights by Design” in national AI strategies, modeled after existing frameworks like UNICEF’s policy guidance.
    • Mandating AI audits for education, health, and welfare systems that affect children.
  • Phased adoption model: Recognizing that capacities vary, institutions can adopt a four-phase model for implementing child-centered AI:
    • Phase 1: Awareness and capacity building: Train stakeholders on children’s rights in AI contexts.
    • Phase 2: Ethical and legal alignment: Integrate rights-based criteria into procurement, standards, and governance.
    • Phase 3: Participatory Development: Institutionalize child co-design and stakeholder consultation throughout AI development.
    • Phase 4: Auditing and Iteration: Establish continuous feedback, monitoring, and course correction mechanisms.
  • Institutional embedding: To prevent child participation and oversight from becoming tokenistic, dedicated mechanisms are essential:
    • Youth advisory councils within regulatory and policy bodies.
    • Ethics and co-design panels embedded in EdTech and pediatric AI initiatives.
    • Public registers of child-facing AI systems subject to oversight and transparency.
  • Monitoring and evaluation: Operational success requires clear metrics and feedback loops. Suggested indicators include:
    • Percentage of AI projects with documented CRIA.
    • Representation of marginalized children in design processes.
    • Rate of contestation or reversal of AI-driven decisions involving children.
    • Measured impact on child well-being, trust, and understanding.
  • Global coordination and benchmarking: To ensure progress is shared and accountable, international bodies such as UNICEF, UNESCO, and OECD should:
    • Facilitate peer learning across countries.
    • Maintain open benchmarking databases of child-related AI initiatives.
    • Support under-resourced regions in capacity building and implementation.
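The monitoring indicators suggested above are, at their simplest, ratios over project and decision records. As a purely illustrative sketch (the record fields `cria_documented`, `contested`, and `reversed` are assumed names, not part of any established reporting standard), two of them could be computed as:

```python
def cria_coverage(projects: list) -> float:
    """Share of AI projects with a documented Child Rights
    Impact Assessment (CRIA)."""
    documented = sum(1 for p in projects if p.get("cria_documented"))
    return documented / len(projects)

def contestation_rate(decisions: list) -> float:
    """Share of AI-driven decisions involving children that were
    contested or reversed."""
    contested = sum(1 for d in decisions if d.get("contested") or d.get("reversed"))
    return contested / len(decisions)
```

Publishing such ratios per institution or per country would give the peer-learning and benchmarking efforts above a shared, comparable baseline.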

6.  Conclusion

The integration of AI into children’s lives is not merely a technological shift; it is a societal turning point that demands deep ethical, developmental, and policy reflection. As this paper has argued, the central challenge is not how to make AI more powerful or pervasive, but how to ensure it is justified, rights-respecting, and child-centered in both design and deployment.

Protecting children from the potential harms of AI cannot be addressed through technical fixes alone. It requires a broader reorientation toward human judgment, dignity, and responsibility – from the earliest design stages to final implementation. Crucially, it demands that we ask “Question Zero”: not whether AI can be used, but whether it should be used in the first place.

Responsible AI for children is not a constraint on innovation; it is its ethical foundation. By embedding children’s rights, voices, and diverse developmental needs into the core of AI governance and design, we can shape technologies that truly serve the best interests of every child – across cultures, contexts, and generations.

Upholding children’s rights includes not only shielding them from harm but also amplifying their voices as co-creators of their digital futures.

References

[1] Valentina Andries and Judy Robertson. “Alexa doesn’t have that many feelings”: Children’s understanding of AI through interactions with smart speakers in their homes. Computers and Education: Artificial Intelligence, 4:100100, 2023.

[2] Ryan S Baker and Aaron Hawn. Algorithmic bias in education. International Journal of Artificial Intelligence in Education, pages 1–41, 2022.

[3] Abeba Birhane. Algorithmic injustice: a relational ethics approach. Patterns, 2(2):100205, 2021.

[4] Cynthia Breazeal, Kerstin Dautenhahn, and Takayuki Kanda. Social robots that teach. Science Robotics, 1(1), 2016.

[5] Vicky Charisi and Virginia Dignum. Operationalizing AI regulatory sandboxes for children’s rights and well-being. In Responsible AI in Practice. Taylor & Francis, 2023.

[6] Vicky Charisi, Laura Malinverni, Marie-Monique Schaper, and Elisa Rubegni. Creating opportunities for children’s critical reflections on AI, robotics and other intelligent technologies. In Proceedings of the 2020 ACM Interaction Design and Children Conference: Extended Abstracts, pages 89–95, 2020.

[7] Virginia Dignum. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer, Cham, 2019.

[8] Virginia Eubanks. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, New York, 2018.

[9] Dagmar Mercedes Heeg and Lucy Avraamidou. Young children’s understanding of AI. Education and Information Technologies, pages 1–24, 2024.

[10] Joseph Henrich. The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous. Farrar, Straus and Giroux, New York, 2020.

[11] Ole Sejer Iversen, Rachel Charlotte Smith, and Christian Dindler. Child as protagonist: Expanding the role of children in participatory design. In Interaction Design and Children 2017, pages 27–37. Association for Computing Machinery, 2017.

[12] Philip Hui Li and John Chi-Kin Lee. AI, brain, and child: Navigating the intersection of artificial intelligence, neuroscience, and child development, 2025.

[13] Simon Lindgren and Virginia Dignum. Beyond AI solutionism: Toward a multidisciplinary approach to artificial intelligence in society. In Handbook of Critical Studies of Artificial Intelligence, pages 163–172. Edward Elgar Publishing, 2023.

[14] Hannah Lonsdale, Ali Jalali, Luis Ahumada, and Clyde Matava. Machine learning and artificial intelligence in pediatric research: current state, future prospects, and examples in perioperative and critical care. The Journal of Pediatrics, 221:S3–S10, 2020.

[15] Deborah Lupton and Ben Williamson. The datafied child: The dataveillance of children and implications for their rights. New Media & Society, 19(5):780–794, 2017.

[16] Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, New York, 2018.

[17] Rahmonova Sanoat Shuhrat Qizi, Toshpulatova Shakhnoza Shukhratovna, and Musayeva Amina Karamatovna. Implementation of education and protection of children’s rights in the age of technology. SPAST Reports, 1(7), 2024.

[18] Inioluwa Deborah Raji, Andrew Smart, Rebecca White, et al. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 33–44, 2020.

[19] Janet C. Read and Tilde Bekker. The nature of child computer interaction. In Proceedings of the International Conference on Interaction Design and Children, pages 1–8, 2011.

[20] Neil Selwyn. The future of AI and education: Some cautionary notes. European Journal of Education, 57(4):542–547, 2022.

[21] Gazal Shekhawat and Sonia Livingstone. AI and children’s rights: A guide to the transnational guidance. Parenting for a Digital Future, 2023.

[22] Emmeline Taylor. The rise of the surveillance school. In Routledge Handbook of Surveillance Studies, pages 225–231. Routledge, 2012.

[23] The Alan Turing Institute. Mapping study on the rights of the child and artificial intelligence: Legal frameworks that address AI in the context of children’s rights. Technical report, November 2024. Approved by the Steering Committee for the Rights of the Child (CDENF) during its 9th plenary meeting, Strasbourg, 28–30 May 2024.

[24] Simone van der Hof and Eva Lievens. The child’s right to informational self-determination in the digital age: Challenges and perspectives. Human Rights and International Legal Discourse, 12:26–44, 2018.

[25] Gerrit van Schalkwyk. Artificial intelligence in pediatric behavioral health. Child and Adolescent Psychiatry and Mental Health, 17(1):38, 2023.

[26] Joachim von Braun, Margaret S Archer, Gregory M Reichberg, and Marcelo Sánchez Sorondo. AI, robotics, and humanity: Opportunities, risks, and implications for ethics and policy. In Robotics, AI, and Humanity: Science, Ethics, and Policy, pages 1–13, 2021.

[27] Ying Xu, Yenda Prado, Rachel L Severson, Silvia Lovato, and Justine Cassell. Growing up with artificial intelligence: Implications for child development. In Handbook of Children and Screens: Digital Media, Development, and Well-Being from Birth Through Adolescence, pages 611–617. Springer Nature Switzerland, Cham, 2024.


[1] https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

[2] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

[3] https://unesdoc.unesco.org/ark:/48223/pf0000381137

[4] https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/

[5] https://www.unicef.org/reports/two-thirds-world-school-age-children-have-no-internet-access-home-2020

[6] https://www.mozillafoundation.org/en/research/library/internet-health-report-2022/

[7] https://www.unicef.org/innovation/FundCallDataAI