How can an interreligious and interdisciplinary focus on compassion address concerns about aligning artificial intelligence (AI) with human values and flourishing? Five experts at AI & Faith examine this issue in a recently published editorial in the journal Theology and Science. They describe how religious wisdom around compassion can inform future developments of AI and machine learning (ML) to align them with human values and flourishing. Mark Graves, Jane Compson, Ali-Reza Bhojani, Cyrus Olsen, and Thomas Arnold hope to broaden current ethical attention to responsible AI by emphasizing compassionate responses to human suffering in a global context from a pluralist foundation.
In the article, they argue that compassionate AI would address many issues raised by the “alignment problem.” Like many revolutionary technologies, AI raises the question of how to ensure that technological advances follow ethical guidelines and move in a beneficial direction. In addition, the growing ability of AI to create more AI systems, and even more advanced versions of itself, raises a relatively new question: how can we ensure that, after numerous generations of AI building AI, the resulting systems still align with human ethical values? The AI & Faith authors argue for building upon the collected wisdom of world religions to emphasize compassion as a core value, which they explore from Christian, Islamic, Buddhist, and scientific perspectives. AI with a core value of compassion would directly address fears that future AI will increase human suffering, supply a hopeful direction for AI to promote flourishing, and ground an ethical framework for AI in the universality of human suffering.
Although reason and rationality are typically privileged in deliberations on AI and AI ethics, incorporating compassion as a fundamental value for AI points beyond a narrow interpretation of reason, one that overlooks the central role values play in how people reason and in what people mean by reasonable and rational. The world’s religions generally define ethical guardrails for human behavior (take the Ten Commandments, for example), but they also point to some meaning beyond human suffering that can guide aspirational ethical behavior. Although even current AI can be designed or trained on a range of human values, a focus on efficiency and speed to market often excludes other human values from consideration. The religious traditions offer richer accounts of compassion. For instance, Christian Thomistic moral psychology views compassion as a virtuous act of charity. In Islam, compassion (rahma) is a divine quality emphasizing giving and selflessness, crucial to spiritual ethics. Buddhism differentiates compassion from pity and cruelty and sees it as integral to the Buddhist path. From a scientific standpoint, compassion is analyzed through cognitive, affective, and motivational dimensions. All of these have the potential to inform AI development.
The Theology and Science article posits that integrating an understanding of compassion into AI is a necessary first step toward developing technology that is truly aligned with human values. This approach could lead to AI systems that not only adhere to human preferences and social values but also act responsibly to address human suffering. For people, compassion can be an aspirational journey, one that may require religious and ethical guidance about what compassionate action makes possible. If we withhold that knowledge from AI’s design and training, how can we hope AI systems would alleviate rather than cause suffering? By widening the inclusivity of AI alignment to encompass religious and ethical insights into compassion, we can guide AI development in a more holistic and beneficial direction.
Please read the full editorial article at https://www.tandfonline.com/doi/full/10.1080/14746700.2023.2292921