Prof. Dr. Octaviana Trujillo, Chair and Professor Emerita in the Department of Applied Indigenous Studies at Northern Arizona University

Artificial Intelligence, Traditional Ecological Knowledge and Indigenous Peoples

The world is changing rapidly, yet seemingly not as fast as the advancements in artificial intelligence (AI). AI’s proliferation has material impacts on nearly all aspects of society, both positive and negative. On the one hand, AI has the potential to improve the monitoring and prediction of climate change impacts, such as wildfires, droughts and floods. On the other, it is increasingly used to automate decision-making, including environmental activities and decisions. On March 14 of this year, the United States led a motion called “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development,” backed by 120 other Member States, which calls upon Member States to work with various stakeholders to develop “safe, secure and trustworthy” AI systems. Although Indigenous communities were not explicitly mentioned in that call to action, my message here today is that Indigenous Traditional Ecological Knowledge (TEK) and stewardship offer unique and indeed crucial perspectives on these issues. As AI continues to advance, there is a need to ensure that Indigenous Knowledge Systems, including TEK, are incorporated into AI’s design and use in ways that respect Indigenous peoples’ rights to self-determination, knowledge protection and data sovereignty. Building on the themes discussed in the Pontifical Academy of Sciences’ Workshop on Indigenous Peoples’ Knowledge and the Sciences, which also took place on March 14th this year, I will share some of the current challenges and opportunities presented by AI for TEK and Indigenous peoples.

Before meeting with you today, I explored past published research in Scopus, one of the largest databases of peer-reviewed research. The first published paper connecting Indigenous knowledge and AI, from a 2012 conference, included the Nganyi clan of Western Kenya’s perspectives on developing a seasonal climate forecast with artificial intelligence. To date, 133 papers have been published, with the number published in 2023 double that of 2022, and 2024 on track to double that of 2023. Conferences and conversations like those of today are vital to ensuring Indigenous voices are included in scientific discussions. A key theme of our conversation in March was that of “braiding”: a unique form of cooperation in which the knowledge of Indigenous Peoples and scientific insights intertwine, each preserving its distinct identity.

AI systems are changing the way science and Indigenous Peoples interact, and the way other actors like governments inform the policies and decisions that impact all of us. Of the existing Scopus-indexed academic literature connecting Indigenous communities and AI, many papers focus on important issues like native language preservation, access to government services, and data sovereignty. However, there is potential to expand the focus to grand challenges like the nature-based solutions for climate change impacts and resilience discussed today. Next, I will highlight how this conversation around AI and TEK relates to the conference themes of Recognition and Dialogue; Collaborative Policy and Decision-Making Involving Indigenous Peoples and Scientific Communities; and Critical Action Areas for Collaboration in Biodiversity, Food, Climate and Health, from a North American context.

Recognition and Dialogue

The relationship between colonialism and Indigenous knowledge systems has long been fraught with tension and marginalization. Colonial powers systematically suppressed Indigenous cultures, languages, and knowledge, often deeming them inferior to Western scientific paradigms. This historical context casts a long shadow over contemporary efforts to integrate Indigenous Traditional Ecological Knowledge (TEK) into modern technologies like Artificial Intelligence (AI). As AI rapidly evolves, it presents both risks and opportunities for Indigenous communities. AI’s reliance on data-driven models, algorithms, and automated decision-making processes could inadvertently perpetuate the same colonial patterns that have long excluded Indigenous voices. However, with deliberate and thoughtful engagement, AI could also serve as a tool to challenge these patterns, amplifying and respecting the distinctiveness of TEK.

Indigenous Traditional Ecological Knowledge should not be viewed as a static set of practices but as a dynamic, complex system, deeply rooted in the relationships between Indigenous peoples and their environments and highly local in context. TEK encompasses a holistic understanding of ecosystems, developed over millennia through direct interaction with the land. This knowledge is inherently place-based, contextual, and often transmitted orally across generations. The challenge with AI is that it tends to abstract and generalize, potentially overlooking the nuances and specificities that are central to TEK. AI models, by their nature, are designed to find patterns in large datasets, which can lead to the oversimplification of the rich, contextual knowledge that TEK embodies. As such, there is a real concern that AI could inadvertently reduce TEK to mere data points that are generalizable across contexts, stripping away its cultural and spiritual significance.

Despite these challenges, there are emerging platforms and initiatives aimed at fostering continuous, respectful dialogue between Indigenous communities and AI developers or other stakeholders. One such initiative is a series of roundtable discussions soon to be launched by the Commission for Environmental Cooperation (CEC). These roundtables bring together Indigenous leaders, AI experts, policymakers, and environmentalists to explore how AI can be harnessed in ways that respect and incorporate TEK. The CEC roundtable initiative is a promising step towards ensuring that AI development is not just inclusive but also genuinely collaborative. These discussions emphasize the need for Indigenous communities to have a seat at the table, not just as participants but as equal partners in the decision-making process.

From an environmental perspective, AI presents both significant opportunities and pressing concerns for Indigenous peoples and TEK. For example, the increased reliance on AI-driven modeling, drones, and sensors for monitoring wildlife, environmental conditions, and pollution could have profound implications. On the one hand, these technologies offer unprecedented capabilities for tracking and predicting environmental changes, which could enhance conservation efforts. On the other hand, there is a risk that these technologies could marginalize Indigenous peoples’ traditional roles in environmental stewardship. The data-driven nature of these technologies could lead to a decrease in direct engagement with Indigenous communities, as decision-makers might rely more on AI models than on the lived experiences and insights of Indigenous peoples. This shift could undermine the participatory approaches that are crucial for effective and equitable environmental governance.

One of the key opportunities for Indigenous communities lies in the potential for AI to support the preservation and revitalization of TEK. AI can be used to document and archive TEK, ensuring that this knowledge is not lost to future generations. Moreover, AI-driven tools can help Indigenous communities manage and protect their lands more effectively, by providing real-time data on environmental changes and threats. However, for these tools to be truly effective, they must be developed in close collaboration with Indigenous communities, ensuring that they align with the values and priorities of those they are intended to serve.

In this context, it is important to highlight the role of Canada and Mexico as members of the Digital Nations group. This coalition of countries is committed to using digital technology to improve public services, and it has established guidelines for the ethical use of AI in decision-making. Both Canada and Mexico require that federal governments clearly communicate how AI tools are used in decision-making contexts, setting a standard for transparency and accountability. However, these requirements should not be limited to federal governments alone. There is a pressing need for other levels of government, as well as private organizations, to adopt similar practices. This would help ensure that the use of AI in environmental management is transparent and that Indigenous communities are fully informed and involved in the processes that affect their lands and livelihoods.

While there are legitimate concerns about the potential for AI to perpetuate colonial patterns, there are also significant opportunities to use AI as a tool for preserving and revitalizing TEK. To realize these opportunities, it is essential to foster continuous, respectful dialogue between Indigenous communities and AI developers, ensuring that AI is developed and deployed in ways that respect Indigenous rights and knowledge systems. Initiatives like the CEC roundtables are a promising start, but much work remains to be done to ensure that AI serves as a force for good in the ongoing relationship between Indigenous peoples and the environment.

Collaborative Policy and Decision-Making Involving Indigenous Peoples and Scientific Communities

Indigenous data sovereignty is increasingly crucial in discussions surrounding the ethical use of AI and the inclusion of Indigenous peoples in policy and decision-making processes. Indigenous data sovereignty asserts the rights of Indigenous communities to govern the collection, management, and use of data concerning their people, lands, and knowledge systems. This principle is particularly important in the context of AI, where data is often the foundation upon which technologies are built. Despite its significance, current AI policies often fall short in addressing the unique needs and rights of Indigenous communities.

One of the most pressing concerns is that many AI policies and initiatives have been developed without meaningful input from Indigenous communities, leading to ethical concerns about how Indigenous data is used and protected. For instance, a review of AI-related initiatives in Canada shows that while there are efforts to incorporate Indigenous perspectives, these efforts are not always consistent or sufficient. Some projects have inadvertently or deliberately bypassed Indigenous data sovereignty principles, resulting in tensions and mistrust between Indigenous communities and AI developers. In response, advocacy groups like the First Nations Information Governance Centre (FNIGC) have been instrumental in advocating for the principles of Ownership, Control, Access, and Possession (OCAP) in the context of data sovereignty. As described by their founder, advocacy on pressing issues like AI needs to move beyond project-based work toward a broader vision. A major concern of Canadian Indigenous scholars like Dr. David Gaertner of the University of British Columbia is the classification of contemporary AI data collection as extractivist AI. Extractivism has strong environmental and colonialist connotations for Indigenous communities: vital resources were not valued the same way by settlers and First Nations people, resulting in the pillaging of environments rather than co-creation through a systems-thinking approach. If data is collected and harvested without clear OCAP principles, this cycle can continue into the digital space.

Similarly, in the United States, the Native American Rights Fund (NARF) has been involved in shaping AI policies that respect Indigenous data sovereignty. NARF has collaborated with various stakeholders to develop guidelines that ensure AI technologies do not infringe upon the rights of Indigenous communities. These guidelines emphasize the importance of free, prior, and informed consent (FPIC) and the protection of Indigenous intellectual property rights. The collaboration between NARF and AI developers has led to more ethical AI applications, particularly in areas such as environmental monitoring and natural resource management (NARF, 2021). It is essential to state that consulting AI tools for Indigenous perspectives on environmental issues, or any issues, does not and cannot replace true consultations with real communities.

Despite these positive examples, challenges remain. In Mexico, for example, the use of AI in environmental management has sometimes overlooked the rights of Indigenous communities. With the support of the Google.org charity, the World Wildlife Fund launched the ManglarIA (“AI for Mangroves” in Spanish) project. This project involves installing multiple cameras, remote sensors, and autonomous drones as AI-powered monitoring equipment. Historically, WWF worked directly with local Mexican Indigenous communities in this effort, and the current project continues to work with the same communities, who help install, maintain, and collect data from the project sensors. Fortunately, this project was developed with guidance from the Global Indigenous Data Alliance to ensure data sovereignty best practices for local communities. As these AI projects continue to scale in scope and complexity, it is vital to recognize that AI cannot and should not replace the ongoing input and inclusion of Indigenous communities in environmental stewardship and consultations.

These examples highlight the importance of co-creation in AI policy and development. Co-creation involves Indigenous communities working alongside AI developers, scientists, and policymakers from the outset to ensure that their knowledge, values, and rights are integral to the development process. However, co-creation is not without its challenges. One of the primary obstacles is the need for capacity building within Indigenous communities to engage with AI technologies. This includes providing education and training on AI, as well as ensuring that Indigenous communities have access to the necessary resources and infrastructure. Additionally, there is a need for greater awareness and understanding among AI developers and policymakers of the unique needs and rights of Indigenous peoples. Without this understanding, there is a risk that AI initiatives will continue to perpetuate the exclusion and marginalization of Indigenous communities.

The ongoing discussions and actions around Indigenous data sovereignty and AI highlight the importance of collaborative policy and decision-making involving Indigenous peoples and scientific communities. While there are promising examples of successful collaborations in North America, much work remains to be done to ensure that AI policies are developed in ways that respect and uphold Indigenous rights. By continuing to foster dialogue and co-creation, we can work towards a future where AI can be used to support and empower Indigenous communities in the spirit of braiding.