Prof. Virginia Dignum, Professor of Responsible AI and Director of the AI Policy Lab, Umeå University, Sweden

Sustainability and Wellbeing in the Context of AI: A Paradox

My upcoming book, The AI Paradox, explores the complexities and challenges of artificial intelligence (AI), emphasising the need for thoughtful reflection on its role in society. In this paper, I will focus on the two aspects relevant to this conference: sustainability and well-being.

Discussions of sustainability and AI often focus on the environmental costs of AI. AI systems, especially those powered by large datasets and complex algorithms, require significant computational resources. This results in substantial energy consumption and environmental costs, particularly in terms of electricity and water use for cooling large data centres. The paradox lies in the fact that while AI is seen as a tool to solve many societal challenges, including those related to environmental sustainability, its own development and deployment contribute to environmental degradation.

However, sustainability is not only an environmental issue, but first and foremost a societal issue. Societal sustainability refers to the ability of a society to maintain social structures, equity, and justice in the face of technological and economic changes. AI’s rapid advancement has the potential to disrupt labour markets, exacerbate inequalities, and concentrate power in the hands of a few technology corporations. Of particular concern here is the risk of unequal distribution of AI’s benefits. As AI systems become integrated into various sectors, some groups may be more vulnerable to replacement by AI, while others, particularly those who control AI development and deployment, stand to gain disproportionately. This disparity raises ethical questions about fairness and justice in an AI-driven future. Who benefits from AI, and who might be left behind? Unchecked AI development will lead to increased societal imbalances, in which the most vulnerable are marginalised further. Ensuring societal sustainability in an AI-driven world requires addressing these imbalances. Policies must be developed to ensure that AI’s benefits are distributed equitably and that no group is disproportionately harmed by its deployment. This involves not only regulating AI’s impact on the labour market but also fostering inclusive development processes that bring diverse perspectives into AI design and implementation.

AI’s reliance on data-driven decision-making creates significant challenges, as data collection and processing are resource-intensive activities, in both the human and the environmental sense. AI systems require constant updates, continuous learning, and large-scale computation, all of which increase their environmental and societal footprints. This extensive use of resources not only contributes to environmental degradation but also risks deepening societal inequalities, as access to and control over data and AI technologies are often concentrated among a few powerful entities. Addressing these challenges requires innovations that reduce the energy and material demands of AI, alongside policies that promote equitable access to and use of AI technologies. These policies must ensure that AI is developed and deployed in ways that support both environmental sustainability and social equity, without exacerbating the very issues it aims to resolve.

Moreover, AI, despite its ability to process vast amounts of data and perform complex tasks, has inherent limitations when it comes to replicating human intelligence. In critical areas such as emotional understanding, ethical reasoning, and creative problem-solving, AI falls short. While AI can efficiently analyze patterns and offer solutions, it lacks the empathy, moral judgment, and contextual awareness that humans bring to decision-making processes. This underscores the importance of keeping humans at the forefront of AI development and use, ensuring that AI serves as a tool to complement and augment human intelligence, rather than replace it.

As AI systems become more advanced, the balance between technological progress and human control becomes increasingly important. While automation and efficiency improve, there are growing concerns about the potential loss of human autonomy. AI systems operate based on patterns and statistical correlations, often without a deep understanding of context or causality. This limitation makes it difficult for AI to make truly informed and empathetic decisions, which are crucial in many fields. To safeguard human well-being, it is vital to maintain human oversight and control over AI systems. In areas like healthcare, for example, AI can assist with diagnostics, but it cannot replace the nuanced understanding and holistic care that human doctors provide. As AI continues to advance, it is essential to ensure that its development does not undermine the very human qualities that are fundamental to well-being – empathy, ethical judgment, and adaptability to complex social contexts.

Rather than following the mainstream direction, we need thought-provoking reflection on the future role of AI in fostering a sustainable and equitable society. As AI continues to evolve, it is crucial to develop systems that are mindful of their environmental impact. AI must be designed and deployed to address real-world problems without creating new environmental challenges. This requires a deliberate focus on minimising the energy and resource consumption associated with AI technologies, ensuring that they become part of the solution to global sustainability challenges rather than exacerbating them. Equally important is the impact of AI on human well-being. The core challenge lies in harnessing AI to enhance human capabilities while preserving the unique value of human intelligence and empathy. AI’s ability to process vast amounts of data and optimise efficiency should be used to support human decision-making, not replace it. Empathy, ethical reasoning, and the capacity to adapt to complex social contexts are irreplaceable human traits that must remain central in AI’s development and use.

Various paradoxes emerge as AI advances. These paradoxes serve as reminders that as AI grows more powerful, it becomes even more important to safeguard the irreplaceable aspects of human judgment and control. AI should not be seen as a replacement for human insight but as a tool that complements and enhances human abilities. For AI to contribute meaningfully to a sustainable and equitable future, it must be integrated with a deep respect for the essential qualities that define humanity.