
Ethical Artificial Intelligence: Religious Leaders Discuss the Impact of AI on Human Flourishing

Podcast by the Templeton World Charity Staff

Nine leaders from diverse faith traditions share their insights into the societal implications of AI and the role of religious and moral leaders in shaping the ethical framework for its development and use, for the future of human flourishing.

The interviewees include:

Dr. David Zvi Kalman, Scholar in Residence and Director of New Media at the Shalom Hartman Institute of North America and an expert at AI and Faith.

Revd. Dr. Harriet Harris, University Chaplain, University of Edinburgh.

Dr. Junaid Qadir, Professor of Computer Engineering, Qatar University.

Rt. Revd. Dr. Steven Croft, Bishop of Oxford and Founding Board Member, Centre for Data Ethics and Innovation.

Rabbi Geoffrey A. Mitelman, Founding Director, Sinai and Synapses, who elaborates on the issue of misinformation.

Chokyi Nyima Rinpoche, Abbot, Ka Nying Shedrub Ling Monastery, Kathmandu, Nepal.

Dr. Muhammad Aurangzeb Ahmad, Professor of Computer Science, University of Washington.

Father Philip Larrey, Chair of Logic and Epistemology, Pontifical Lateran University in the Vatican.

Father Paolo Benanti, Professor of Ethics and Moral Theology, Pontifical Gregorian University in the Vatican, Scientific Director at the renAIssance Foundation, and advisor to Pope Francis.

The latter three interviewees are all part of the renAIssance Foundation, a group of organizations launched in 2020, led by the Vatican. The Foundation launched The Rome Call for AI Ethics, a multi-faith, global coalition seeking to promote a sense of responsibility around the ethical development of AI technologies. So far, the call’s signatories include major tech companies, world governments, universities, and representatives from three of the world’s major religions.

Dr. Ahmad, opening the podcast, warns against delegating human responsibilities to machines. He sees the advent of AI as an opportunity for people to “come together” and “learn from each other.”

Father Philip Larrey argues that it is our responsibility to calculate existential risks and unintended consequences. He wants to see the human person and human dignity at the center of AI design, rather than seeing the human being exploited as something “merely material.” He sees the three Abrahamic faiths as an answer to the timeless fundamental of moral life: “do good and avoid evil.”

He shares insights into the genesis of the Rome Call for AI Ethics. The initiative stems from the mutual need of tech experts and religious leaders to “understand the moral ramifications of the technology which we are creating.”

Father Benanti says that “Every technological artifact is a displacement of power,” comparing it to how the invention of the printing press in the 15th century reshaped global powers. He highlights six considerations of the Rome Call for AI Ethics in the “impact areas” of ethics, education, and human rights:

  • Transparency (understandable by all)
  • Inclusion (systems should not discriminate)
  • Accountability (someone has to be responsible for what the machine does)
  • Impartiality (the absence of bias)
  • Reliability
  • Security and Privacy

He warns about the dangers of AI that can shape people’s behavior: it can become “the most powerful instrument of control,” one that people can use as a propaganda instrument, a source of authority, or even as a new religious leader or an oracle. While AI can provide “powerful tools in the pocket of everyone,” he argues that “we haven’t built the culture to handle that kind of tools” because of the fast pace of technological advances. Recalling how religions are instrumentalized in wars to fuel conflicts, and how AI can worsen this problem, he stresses that it is important for all stakeholders to contribute to the conversation around this global problem.

Reverend Dr. Harriet Harris agrees that although new technologies bring “great benefits,” they also bring “great destruction.” It is therefore important for “religious leaders, scientists of faith, and anyone with a concern for the future” to call for the “positive usages” of AI in order to create things “that can bring benefits.” Linking creativity with divine qualities, and thus raising questions similar to those that have “come up in faith and religious arenas for centuries,” she deplores that we don’t “know what to make of it.”

Bishop Croft advocates for a “greater” and “more general understanding” of these technologies. He recalls meeting with scientists saying “please do not leave the key ethical decisions about how these technologies are deployed and governed to the scientists (…) We do not feel qualified as scientists alone to be making these huge and enormous decisions. There need to be other voices at the table, because the issues at stake are so enormous for the future of work and family and institutions and good governance and communication.”

He sees AI as an opportunity for us to reflect on the question, “What is it to be human?” He foresees a 30-year-long “identity crisis” to which the Church can contribute, as the question has been at the center of its conversation for “thousands of years.” He sees faith-based leaders’ perspectives as a perfect complement to scientists’ in ethical decision-making and in offering deeper insights into human life.

Stressing how data has started to encroach on and shape various aspects of human life, such as behavior, the economy, and voting patterns, he fears a future where democracies will be driven by “the marketplace, multinational tech companies” or “totalitarian states.” He thus calls for more transparency: “Often the issue is not that people’s data has been taken away from them. It’s that we are being incited to give it up without knowing the full consequences of that.”

Dr. Kalman sees religions as “moral forces” that help “people understand and incorporate into their lives moral ideas,” which is even more important in the face of the new issues the world is “throwing at us.” He highlights a paradox: while “global big tech holds in their hands one of the most powerful tools humanity has ever known, its creators are simultaneously working to make sense of it.” Due to “the urge to get products out as fast as possible,” regulations are not always adequate and need to catch up with unexpected consequences. With technological advances occurring at a pace unprecedented in human history, people are forced to make novel ethical decisions uncomfortably fast, which is where religious leaders, communities, and institutions can help, as they already offer adaptable moral frameworks.

Chokyi Nyima Rinpoche fears that humans are giving their power away to technology and that the combination of nihilism with technology would be disastrous. He sees religion as an antidote and advocates for slowing down the development of AI.

Rabbi Mitelman similarly agrees with slowing down the current “exponential” progress of AI, citing the example of “hallucinations” (incorrect or false facts generated by the software), which could spread like “wildfire.” He stresses the importance of words: “In Judaism, God creates the universe using words. The words that we use have impact,” and the difficulty of correcting some of these hallucinations once people have accepted them as facts.

Dr. Qadir joins this view, noting that foundation models such as ChatGPT are used as “search engines” rather than chatbots: “They’re accepting the text ChatGPT generates as a trusted source of information.” He warns that people are “not systemic in the way we analyze how we use technology,” which has the potential for collateral damage, especially as “our own attention and our own thoughts are now fragmented.” He argues that people need to be “very deliberate” about how they use technology. He also stresses that “technology tends to concentrate wealth and power” and that there is therefore a need for “perfect equality in terms of where the technology is developed, who is developing the technology, how the technology is being designed, for what purpose the technology is designed.”

At the end of the podcast, Dr. Ahmad and Father Larrey share optimistic views about the control and regulation of AI, comparing it to how gunpowder and nuclear bombs “haven’t destroyed the world,” and, more specifically, about “human nature”: “we’ll eventually find a way to coexist, regulate these technologies for human betterment.”

William Barylo

is a sociologist, photographer, filmmaker, and postdoctoral researcher in Sociology at the University of Warwick, UK, where he studies how young Muslims in Europe and North America navigate race, class, and gender barriers from a decolonial and restorative perspective. William also produces documentary and fiction films on social issues to disseminate knowledge to wider audiences, equip students and community organizers with practical tools for navigating the world, and help businesses nurture better work cultures. He holds a PhD from the School for Advanced Studies in the Social Sciences (EHESS) in Paris, France.
