DOI: 10.65398/CCFQ1845
Sagar Vishnoi, Parishrut Jassal & Alisha Butala, Future Shift Labs
Panchatantra 2.0: A Culturally Grounded Approach for Children’s Digital Rights and Online Safety
Enhancing children’s safety within the evolving digital landscape is paramount for fostering future-ready education. This paper addresses that imperative by proposing an AI-enhanced digital literacy framework aimed at cultivating critical engagement and ethical understanding among children, thereby contributing to their safety and preparedness for the future. The primary approach detailed herein is a comprehensive framework that emphasises data rights and safeguarding in the digital age. It is complemented by a culturally grounded implementation strategy that leverages Indian ancestral stories, notably narratives from the Panchatantra, to make complex concepts relatable and instill ethical values. Key objectives of the framework include developing a nuanced understanding of data and data trails, promoting awareness of the personal implications of data collection, building knowledge of pertinent legal data rights (such as India’s DPDP Act, 2023, the GDPR, and the UNCRC), and empowering children with ownership and agency over their personal data. The proposed framework is analysed in relation to existing frameworks, including those by the Digital Education Council and UNESCO. Furthermore, the paper explores the dual role of AI, both as a potent educational tool for digital literacy and as a technology whose ethical deployment in educational settings requires stringent safeguards, particularly concerning data protection and bias mitigation, crucial for children’s safety. Concluding with policy recommendations, strategies for curriculum development based on this framework, and future research directions, this work aims to enhance children’s digital safety and equip them as responsible, future-ready citizens.
1. Introduction
The pervasive integration of digital technologies and Artificial Intelligence (AI) into daily life has fundamentally reshaped how children learn, interact, and perceive the world, bringing forth unprecedented opportunities alongside significant challenges to their safety and well-being online (UNICEF, 2021). Ensuring children are future-ready necessitates a robust approach focused on enhancing their safety in these digital environments, particularly concerning data privacy, algorithmic influence, and the cultivation of critical engagement skills. Children are increasingly navigating complex online environments, often without a comprehensive understanding of their digital footprint or the implications of their data being collected and utilized, which directly impacts their safety (Stoilova et al., 2021). To address this critical need for enhanced safety and future-readiness, this paper proposes a new AI-enhanced digital literacy framework. The framework moves beyond mere technical proficiency to foster deep, critical, and ethically informed engagement with digital technologies among children. A core component of this approach is a culturally grounded implementation strategy, which leverages ancestral stories, such as those from the Panchatantra, to make the tenets of digital literacy and safety more accessible and impactful.
This report details the objectives of this proposed framework, with a particular emphasis on data rights and safeguarding as foundational elements for children’s online safety. It further explores the culturally grounded implementation strategy, focusing on the Indian context. The report also examines existing digital literacy frameworks, such as those developed by the Digital Education Council and UNESCO, to identify synergies and distinctions with our proposed framework. Finally, it discusses the role of AI both as an educational tool for teaching these concepts and as a technology whose own deployment in educational settings requires careful ethical consideration and safeguarding to protect children. The ultimate aim of this framework is to equip children not only with the skills to use technology but also with the wisdom to navigate its complexities with agency, critical awareness, and a strong sense of digital safety.
2. Core Objectives of the AI-Enhanced Digital Literacy Framework
The proposed AI-enhanced digital literacy framework is designed around key objectives that transcend technical skills, aiming to cultivate a profound and critical understanding of digital technologies among children, thereby enhancing their online safety and agency. The framework prioritizes the following:
● 2.1. Understanding Data Trails
A primary objective is to enable students to comprehend how their personal data is captured, used, and stored across various digital platforms, as understanding this is crucial for their safety. This involves making the invisible visible, helping children understand that every online action, from website visits to social media posts and app usage, contributes to a “data trail”. This concept can be effectively taught by comparing digital trails to footprints left in the sand, a tangible analogy for younger learners. A practical exercise to achieve this is the “daily digital traceback”, where students reflect on their app usage and identify the types of data (e.g., location, images, behavioral patterns) these platforms collect. The “Data Diary” exercise, for example, requires students to log their digital activities over 24 hours, noting instances where sensors or electronic surveillance record their actions or information about them. This exercise helps students realize the frequency and extent of data collection in their daily lives. It is crucial to differentiate between general observation and actual data recording; for instance, “a camera recorded me entering the store” is valid data for this exercise, whereas “someone saw me going into the store” is not.
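For classrooms that wish to run the Data Diary digitally, the exercise’s central rule, that only machine-recorded events count as data, can be made concrete with a short script. The sketch below is illustrative only: the entry fields, the keyword check, and the sample entries are assumptions for demonstration, not part of the framework itself.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a "Data Diary" entry for a classroom exercise.
# Field names and the recording check are hypothetical, chosen to mirror
# the exercise's rule that only machine-recorded events count as data.
@dataclass
class DiaryEntry:
    time: str                       # e.g. "08:15"
    activity: str                   # what the student did
    recorder: str                   # who or what captured it
    data_types: list = field(default_factory=list)  # e.g. ["video", "location"]

    def is_valid_data_event(self) -> bool:
        # Counts only if a device or service, not a person, recorded the event.
        machine_recorders = ("camera", "app", "sensor", "website", "cctv")
        return any(word in self.recorder.lower() for word in machine_recorders)

entries = [
    DiaryEntry("08:15", "entered the store", "CCTV camera", ["video"]),
    DiaryEntry("08:15", "entered the store", "a passer-by"),
]
for e in entries:
    print(e.recorder, "->",
          "counts as data" if e.is_valid_data_event() else "not data")
```

Working through why the second entry is rejected reinforces the exercise’s distinction between being observed and being recorded.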
● 2.2. Data Awareness and Implications
Building on the understanding of data trails, the framework encourages children to explore the personal implications of data collection for their choices, autonomy, privacy, and safety. This involves fostering an awareness of how collected data affects their exposure to algorithmic influence. AI systems, integral to many digital platforms, require vast amounts of data to learn and improve, incentivising mass data collection, including sensitive information about children. This data can be shared or sold and may follow children throughout their lives. AI functions like surveillance, profiling, and automated decision-making are increasingly common in children’s lives, with applications that can range from “low-stakes” areas like targeted ads to “high-stakes” areas such as university admissions or child protective services, which can have significant safety implications. The complexity of AI can make it difficult to understand or contest these algorithmic decisions, potentially leading to unfair or discriminatory outcomes. The framework aims to unpack the concept of “autonomy” in this algorithmic age and how it is affected when data is constantly being tracked and monetized, potentially undermining a child’s ability to make independent choices.
● 2.3. Knowledge of Legal Rights
A critical component of the framework is introducing age-appropriate content about national and international legal frameworks governing data protection, which are essential for their safety and rights. This includes familiarizing students with their rights around consent, access to their data, and mechanisms for redress. In the Indian context, this involves understanding the Digital Personal Data Protection Act, 2023 (DPDP Act). The DPDP Act defines a child as an individual under 18 and mandates verifiable parental consent for processing their data. It also prohibits tracking, behavioral monitoring, or targeted advertising aimed at children, and emphasizes that data processing must not be detrimental to a child’s well-being or safety (Ministry of Law and Justice, 2023). Data Fiduciaries have obligations to ensure data is erased when no longer needed and upon withdrawal of consent. Parents or guardians, acting as data principals for children, have the right to access details about processing activities, and request correction, updating, and erasure of personal data. The Act also establishes the Data Protection Board of India for grievance redressal (Ministry of Law and Justice, 2023). Internationally, frameworks like the General Data Protection Regulation (GDPR) in Europe and the UN Convention on the Rights of the Child (UNCRC) provide broader context. GDPR-K, the sections of GDPR pertaining to children, affords special protections due to children’s lesser awareness of risks and establishes rules around the age of consent and the need for verifiable parental consent. UNCRC Article 16 specifically upholds a child’s right to privacy, covering their personal space, conversations, and data, emphasizing the need to balance protection with respect for growing independence in the digital landscape for their overall safety. General Comment No. 
25 of the UNCRC further elaborates on states’ obligations to protect child rights in the digital environment (United Nations Committee on the Rights of the Child, 2021). Teaching these rights involves using age-appropriate language and methods, such as simple explanations for younger children and more in-depth discussions for older ones. Child-centric consent mechanisms, like using comics and videos, are particularly relevant in India, considering varying literacy rates.
● 2.4. Ownership and Agency over Data
The framework aims to empower children to identify and assert ownership of their data, which is key to their digital safety. This involves teaching them to recognise manipulative digital practices and make informed choices about sharing or withholding personal information. Fostering digital awareness and agency is crucial, and this can be achieved by co-creating privacy and tech-use policies with students, families, and educators. Children need to understand that control over their digital footprint equates to control over their future opportunities, reputation, and safety. Practical exercises, like playing an open-source intelligence (OSINT) game, can vividly illustrate how scattered digital crumbs form a complete personal portrait, making the concept of data ownership tangible. The goal is to shift children from mindless sharing to deliberate and thoughtful online engagement, respecting both their own and others’ privacy. This includes teaching them to manage privacy settings on various platforms and to think critically before posting. Ultimately, this objective seeks to cultivate a sense of empowerment, enabling children to navigate the digital world not as passive consumers but as active, informed agents who can protect their data, assert their rights, and ensure their safety.
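The core insight of the OSINT game, that individually harmless fragments combine into an identifying profile, can also be demonstrated in a few lines of code. The platform names and fields below are entirely hypothetical, chosen only to illustrate the aggregation effect the exercise teaches.

```python
# Illustrative sketch of the OSINT-game idea: small, separately harmless
# "digital crumbs" from different platforms merge into a surprisingly
# complete profile. All sources and values here are hypothetical.
crumbs = [
    {"source": "photo app", "home_city": "Jaipur"},
    {"source": "gaming forum", "first_name": "Asha", "age": 12},
    {"source": "fitness app", "school_route": "Park Lane"},
]

profile = {}
for crumb in crumbs:
    for key, value in crumb.items():
        if key != "source":
            profile[key] = value  # each platform contributes one more piece

print(profile)
# Each crumb alone reveals little; the merged profile identifies a child.
```

Seeing the merged dictionary assemble itself makes the abstract claim about “scattered digital crumbs” tangible for older students.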
3. Glance at Pivotal Digital Literacy Frameworks
To refine the proposed AI-enhanced digital literacy framework, particularly its focus on data rights, safeguarding, and enhancing children’s safety, an analysis of existing prominent frameworks is essential. The Digital Education Council’s AI Literacy Framework and UNESCO’s AI Competency Framework for Students offer valuable perspectives.
● 3.1. Digital Education Council (DEC) AI Literacy Framework
The Digital Education Council (DEC) AI Literacy Framework adopts a human-centred approach, emphasising human skills like critical thinking, creativity, and emotional intelligence alongside AI competencies. It is designed to provide higher education institutions with structured guidance for developing AI literacy. The framework outlines five key dimensions of AI literacy: Understanding AI and Data; Critical Thinking and Judgement; Ethical and Responsible AI Use; Human-Centricity, Emotional Intelligence, and Creativity; and Domain Expertise (Digital Education Council, 2023). Its “Ethical and Responsible AI Use” dimension directly addresses aspects of data rights and safeguarding by emphasising privacy principles and risk recognition. This aligns with the proposed framework’s objectives concerning data awareness, legal rights, agency, and overall digital safety. However, the DEC framework, being targeted at higher education, may require adaptation for younger learners. The proposed framework’s explicit focus on exercises like the “daily digital traceback” and on understanding specific data trails from a child’s perspective offers a more granular approach for younger age groups than the broader principles outlined by DEC.
● 3.2. UNESCO’s AI Competency Framework for Students
UNESCO’s AI Competency Framework for Students aims to empower students to be skilled and responsible users of AI, as well as active contributors to inclusive and sustainable AI systems, fostering their safety and positive contribution. It emphasises critical judgment of AI solutions, awareness of citizenship responsibilities in the AI era, foundational AI knowledge, and inclusive, sustainable AI design, grounded in human rights and fundamental freedoms (UNESCO, 2022). By highlighting inclusivity, human agency, non-discrimination, and cultural sensitivity, the UNESCO framework aims to equip students with skills and values for responsible citizenship in an AI-driven world (UNESCO, 2022). This resonates strongly with the proposed framework’s emphasis on data awareness, legal rights, agency over data, and ensuring children’s safety. While the UNESCO framework mentions “Ethics of AI” as a core dimension, the proposed framework provides more specific objectives like “Understanding Data Trails” and “Ownership and Agency over Data”, which offer concrete areas for curriculum development concerning data rights and digital safety.
4. Panchatantra 2.0: Culturally Grounded Digital Literacy Through Storytelling
To ensure children’s understanding of digital rights and privacy is both engaging and deeply internalised, our framework integrates Panchatantra 2.0, a reimagined storytelling approach that draws on India’s rich narrative heritage to make abstract digital concepts relatable. Building on the centuries-old Panchatantra tradition, which used fables to teach moral reasoning (nīti) and practical life skills (Olivelle, 1997), this strategy translates modern digital challenges into familiar, culturally resonant story formats. This narrative-based pedagogy functions on multiple levels: it simplifies complex ideas, fosters emotional and ethical engagement, and enables active learning. Educational research confirms that embedding concepts in character-driven stories helps learners grasp abstract principles and improves knowledge retention (Haven, 2007). This culturally grounded method also roots digital safety in a context that feels familiar and meaningful, bridging the cognitive gap between a child’s offline moral universe and their emerging online life to cultivate informed, ethical, and resilient digital citizens.
5. Role of AI in Digital Literacy and Safeguarding
AI serves a dual role in this context: it is both a powerful tool for teaching digital literacy and a complex technology that itself requires stringent ethical oversight. On the educational side, AI can personalise learning pathways, gamify complex exercises, and provide real-time feedback. However, the deployment of AI in classrooms necessitates robust safeguards around algorithmic bias, student surveillance, and data protection to ensure children’s safety is never compromised (Schiff, 2021). This duality is central to the framework, which advocates for using AI to teach about AI, while simultaneously demanding ethical transparency in its application.
6. Policy Recommendations and Curriculum Development
● Policy Recommendations: Mandate age-appropriate digital safety and data literacy education as a core component of national school curricula. Require digital platforms, services, and educational technologies that process children’s data to incorporate “Privacy by Design” principles, ensuring that data protection is the default setting, not an afterthought (Cavoukian, 2009). Establish clear regulatory oversight for the ethical use of AI systems in educational settings, with specific guidelines on data governance, transparency, and accountability.
● Curriculum Strategies: Develop tiered learning modules tailored to different age groups. Integrate practical, hands-on exercises like the “Data Diary” and collaborative OSINT games. Use culturally resonant storytelling, such as the proposed Panchatantra 2.0, to explain abstract legal rights and ethical principles. Provide comprehensive training for teachers and engagement resources for parents to create a supportive ecosystem for children’s digital safety.
7. Conclusion and Future Directions
The proposed AI-enhanced digital literacy framework offers a comprehensive, ethically grounded, and culturally resonant approach to enhancing children’s digital safety. This paper has argued that traditional digital literacy, which often prioritizes technical skill, is no longer sufficient to prepare children for the complex realities of a data-driven world. In response, our framework establishes a new paradigm centered on the indispensable pillars of understanding personal data trails, developing a critical awareness of algorithmic implications, acquiring knowledge of essential legal rights under frameworks like India’s DPDP Act and GDPR, and fostering a true sense of data ownership and agency.
A cornerstone of our approach is ‘Panchatantra 2.0’, an innovative pedagogical strategy that demonstrates how the rich heritage of storytelling can be reimagined to make abstract digital concepts accessible and ethically resonant. This culturally grounded method bridges the gap between timeless moral reasoning and contemporary digital challenges, offering a more granular and child-centric approach than existing high-level frameworks. The implications of this work extend far beyond the classroom; it is a blueprint for cultivating a new generation of empowered digital citizens who can navigate misinformation, protect their autonomy, and assert their rights. This represents a crucial shift from a strategy of mere protectionism to one of proactive empowerment, fostering critical consciousness over passive consumption of technology.
This research serves as a call to action for educators, policymakers, technology developers, and families to collaborate in building a world where children are equipped not just to use technology, but to command it with awareness and responsibility.
References
Cavoukian, A. (2009). Privacy by Design: The 7 Foundational Principles. Information and Privacy Commissioner of Ontario. https://www.datatilsynet.no/globalassets/global/bilder/rettigheter-og-plikter/innebygd-personvern/7foundationalprinciples_anncavoukian2.pdf
Digital Education Council. (2023). AI Literacy Framework. https://www.digitaleducationcouncil.com/ai-literacy-framework
Stoilova, M., Livingstone, S., & Khazbak, R. (2021). Investigating the risks and opportunities for children in a digital world: A rapid review of the evidence. UNICEF Office of Research – Innocenti. https://www.unicef.org/innocenti/media/5621/file/UNICEF-Investigating-Risks-Opportunities-Children-Digital-World-2021.pdf
Haven, K. (2007). Story proof: The science behind the startling power of story. Libraries Unlimited. https://doi.org/10.5040/9798216019312
Ministry of Law and Justice, Government of India. (2023). The Digital Personal Data Protection Act, 2023 (No. 22 of 2023). The Gazette of India.
Olivelle, P. (1997). The Pañcatantra: The book of India’s folk wisdom. Oxford University Press. https://global.oup.com/academic/product/pacatantra-9780199555758?cc=in&lang=en&
Schiff, D. (2021). Out of the laboratory and into the classroom: The future of artificial intelligence in education. AI & SOCIETY, 36(1), 331–348. https://doi.org/10.1007/s00146-020-01033-8
UNESCO. (2022). K-12 AI curricula: A mapping of government-endorsed AI curricula. UNESCO. https://doi.org/10.54675/ELYF6010
UNICEF. (2021). The state of the world’s children 2021: On my mind – Promoting, protecting and caring for children’s mental health. UNICEF. https://www.unicef.org/reports/state-worlds-children-2021
United Nations Committee on the Rights of the Child. (2021). General comment No. 25 (2021) on children’s rights in relation to the digital environment. CRC/C/GC/25. https://www.unicef.org/bulgaria/en/media/10596/file