DOI: 10.65398/EZWL9307
Dr. Brian Patrick Green, Director of Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University, USA
Ethical Resources for Helping Artificial Intelligence Benefit the Common Good
In her speech to the Academy, Mariana Rozo-Paz mentioned the importance of hope, and of not giving in to hopelessness. This caught my attention, because the day before, she had asked me: “What gives you hope?” After pondering a moment, I replied: “What gives me hope is that all of humanity’s problems can be solved.” And all of humanity’s problems can be solved – we have the technology, the skill, the power, and more. But there is just one problem: we have to want to solve them.
Therefore, our problem is not one of capacity, but of motivation. Motivation requires believing that one is in a hopeful situation, not a hopeless one. We have to believe that we are in a situation that can actually be resolved in a good way, because if we are hopeless we become demotivated and inactive, and then we have already lost – lost before we have even tried.
Indeed, Pope Francis chose hope as the theme of the 2025 Jubilee Year. In 2015 the Jubilee theme was mercy, but this time he diagnosed hope as what humanity needs most.
What does hope have to do with ethics? I assert that ethics gives hope.
Ethics provides hope in several ways, for example, by building social trust, by building social community and fellow-feeling, and also in a particular way that I try to promote: by taking seemingly intractable ethical problems and making them tractable through the “tools” of applied ethical thinking.
Here is a thought experiment. Imagine a different world, one where everyone just does the right thing all the time, with no prompting. Free individuals would simply always choose to do the right thing, with no external regulation of behavior necessary. With a few other changes, this world might seem heaven-like.
However, we do not live in such a world. In our world, people make mistakes and even intentionally do wrong, and so we do need external forms of regulation to govern behavior. Brought into the very specific context we are examining here, this means that the technology industry cannot be perfect on its own. In fact, many leaders in the industry are not really trying. Self-governance – just choosing to do the right thing – has failed. And if self-governance fails, then higher levels of regulation should step in.
However, following the principle of subsidiarity, self-regulation is still something of an ideal that ought to be pursued, because the people closest to a problem are often the ones best placed to solve it. Some or even much of the regulation can come from within the tech industry itself, from among those who are willing, whether individual employees, leaders, or institutionalized practices. Government can even mandate self-regulation, as it does for certain professions, such as medical doctors, lawyers, and some engineers.
But if self-regulation of behavior – or any regulation of behavior – is ever to have any hope of succeeding, people need to have the right tools at their disposal for considering and solving their own ethical dilemmas, and for providing guidance to others who may ask them for help. My work at the Markkula Center for Applied Ethics at Santa Clara University is to give people the practical, useful resources that they need in order to make better decisions in whatever context they find themselves in – but particularly in the technology industry, and even more particularly with AI.
For example, the Markkula Center Framework for Ethical Decision Making is a process for taking any ethical issue and making it tractable, and hopefully resolvable in a way that not only makes sense to many people, but also is actually good.[1] It has five steps:
- Recognize the ethical problem – what values are at risk?
- Get the facts about the case – like in a courtroom, the facts matter
- Evaluate options through six ethical lenses: human rights, justice, utilitarianism, common good, virtue ethics, and the ethics of care
- Mentally test the option you think is best – can it respond to challenges?
- Implement your decision in the best way that you can.
The Framework is an easy tool for almost any ethical situation. But if you are working in technology, there are some other more specific tools that can help too.
For those actually developing technologies, the Ethics in Technology Practice resources provide best practices, case studies, and a seven-part toolkit for thinking about ethics when designing technology. The seven tools are ethical risk sweeping, ethical post-mortems and pre-mortems, expanding the ethical circle, case-based analysis, remembering the benefits of creative work, thinking about the terrible people, and closing the ethical loop.[2] This toolkit was integrated at some Alphabet companies in 2018.[3]
Next, diving deeper into the corporate context, we have three World Economic Forum case studies on ethical practices at Microsoft,[4] IBM,[5] and Salesforce.[6] These cases, from 2021 and 2022, are snapshots in time, describing how these companies practiced corporate tech ethics at that moment. Indeed, in the years since these cases were written, much has changed in the tech industry, and not all of it for the better, as competition, employee layoffs and disempowerment, and shifting political winds have altered the corporate landscape.
Building on these cases, as well as many other resources, in 2023 the Markkula Center published Ethics in the Age of Disruptive Technologies: An Operational Roadmap.[7] Known more briefly as The ITEC Handbook, this is a resource for companies wanting to integrate ethical thinking about technology from top to bottom in their organization. Like the Framework for Ethical Decision Making, The ITEC Handbook has five steps, which we characterize as going on a road trip:
First, leadership must choose to go on this journey: to integrate ethics into their corporation. Without this step, without leadership on board, ethics efforts are doomed to failure.
Second, you need to figure out where you are: do a corporate ethical culture assessment. This assessment gives you a measure of what employees think about ethics and culture in your organization, as well as a baseline for beginning work.
Third, you must decide where you want to go: establish your goals/destination and what ethical success looks like. This might involve principles, trainings, applications, product goals, etc.
Fourth, determine your route and start driving: implement and institutionalize changes in technology design and human resources. Anyone can say ethical words, but actually operationalizing those good intentions is an entire additional layer of very hard work.
Fifth, remain vigilant that you are making progress: get measurable metrics for improvement and move towards them.
The ITEC Handbook was written with the blessing of the Dicastery for Culture and Education, but its argument is entirely secular, as befits its audience in the technology industry.
As a last point, I would return to the Vatican’s work on AI ethics and mention again what Cardinal Parolin mentioned previously. Antiqua et Nova is a major next step for the Vatican’s dialogue on AI.[8] Built upon years of reflection by Pope Francis and others, the document lays out some basics for thinking about right and wrong when it comes to AI, as well as the deeper anthropological and theological issues raised by AI. And if you read Antiqua et Nova and want even more, then I would suggest Encountering Artificial Intelligence: Ethical and Anthropological Investigations, written by the AI Research Group of the Centre for Digital Culture of the Dicastery for Culture and Education of the Holy See.[9]
These books and tools might seem like slim grounds for hope, but these tools for making ethics tractable have themselves gained traction with those developing new technologies. Some of these tools have been scaled up at companies totaling 160,000 employees, and others have been read by US congressmen, so I think there might be at least some grounds for hope. And I hope that you might find the tools useful – and even hopeful – too.
Acknowledgements
The author would like to thank the Pontifical Academy of Sciences and its wonderful leadership and staff, its supporters and patrons, and all of the attendees for their support and interest in ethics.
References
[1] Markkula Center for Applied Ethics, “A Framework for Ethical Decision Making,” Markkula Center website, November 8, 2021, https://www.scu.edu/ethics/ethics-resources/a-framework-for-ethical-decision-making/
[2] Shannon Vallor, Brian Green, and Irina Raicu, “Ethics in Technology Practice,” Markkula Center website, June 22, 2018, https://www.scu.edu/ethics-in-technology-practice/
[3] Kent Walker, “Google AI Principles updates, six months in,” Google Blog, December 18, 2018, https://www.blog.google/technology/ai/google-ai-principles-updates-six-months/
[4] World Economic Forum and Markkula Center for Applied Ethics at Santa Clara University. “Responsible Use of Technology: The Microsoft Case Study,” The World Economic Forum website, February 2021, https://www.weforum.org/whitepapers/responsible-use-of-technology-the-microsoft-case-study
[5] World Economic Forum and Markkula Center for Applied Ethics at Santa Clara University. “Responsible Use of Technology: The IBM Case Study,” The World Economic Forum website, September 2021, https://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology_The_IBM_Case_Study_2021.pdf
[6] World Economic Forum and Markkula Center for Applied Ethics at Santa Clara University. “Responsible Use of Technology: The Salesforce Case Study,” The World Economic Forum website, September 2022, https://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology_Salesforce_Case_Study_2022.pdf
[7] Jose Roger Flahaux, Brian Patrick Green, and Ann Gregg Skeet, Ethics in the Age of Disruptive Technologies: An Operational Roadmap (The ITEC Handbook), Markkula Center for Applied Ethics, 2023, www.scu.edu/institute-for-technology-ethics-and-culture/itec-book-pdf
[8] Dicastery for the Doctrine of the Faith and Dicastery for Culture and Education, “Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence,” The Holy See website, January 28, 2025, https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html
[9] Matthew J. Gaudet, Noreen Herzfeld, Paul Scherz, Jordan J. Wales, eds., and the AI Research Group of the Centre for Digital Culture of the Dicastery for Culture and Education, Encountering Artificial Intelligence: Ethical and Anthropological Investigations, Theological Investigations of Artificial Intelligence Book Series, Vol. 1, Pickwick / Wipf & Stock, Eugene, Oregon, 14 December 2023, https://jmt.scholasticahq.com/article/91230-encountering-artificial-intelligence-ethical-and-anthropological-investigations