Interview

Gleanings from Union Seminary’s January AI Conference by Our Founding Experts Who Were There

On January 30, Union Theological Seminary, in conjunction with Riverside Church, the Jewish Theological Seminary of America, and The Greater Good Initiative, presented an all-day program in New York City on Artificial Intelligence: Implications for Ethics and Religion. Remarkably, five of our Founding Experts ended up as panelists and speakers with no effort on our part – an encouraging sign that we are achieving networking effects! Four of them share their takeaways below; the fifth Founding Expert panelist was Vikram Modgil. A sixth panelist, Rev. Dr. Ted Peters, has just joined us as a Founding Expert (see News in this issue) and also contributes his takeaways.

One of our key goals at AI and Faith is “plussing up” the discussion in faith circles around AI ethics. Toward that end, here are takeaways from five of our Founding Experts who spoke at the conference. They are:

  • Jason Thacker (JT) is Associate Research Fellow and Creative Director at the Ethics and Religious Liberty Commission of the Southern Baptist Convention, and author of The Age of AI, forthcoming from Zondervan on March 3 (see review in this Newsletter).
  • Brian Green (BG) is Director of Technology Ethics at the Markkula Center for Applied Ethics, Santa Clara University.
  • Michael Quinn (MQ) is Dean of the College of Science and Engineering at Seattle University, a longtime computer science professor, and author of the college text Ethics for the Information Age, now in its 8th edition with Pearson Publishing.
  • Thomas Arnold (TA) is a Research Associate at the Human-Robot Interaction Lab and lecturer in the Computer Science Department at Tufts University, and wrapping up his PhD dissertation at Harvard University’s Committee on the Study of Religion.
  • Ted Peters (TFP) is the Distinguished Research Professor of Systematic Theology and Ethics at Pacific Lutheran Theological Seminary and the Graduate Theological Union in Berkeley, California, and co-founder and co-editor of the journal Theology and Science.

 

1. This was a maiden effort by Union and its co-sponsors. As more faith institutions speak into this ethics space, what worked or did not work in this program that can serve as a helpful guide going forward?

JT: The event was an encouraging first step for interfaith dialogue on some of the most pressing and complex ethical questions that arise out of our interactions with AI in society. I thoroughly enjoyed the various faiths and perspectives represented and the chance to dialogue on the panels. My only wish was for more time to engage with one another and with the audience throughout the day.

BG: Two pieces of advice: 1. Get the logistics right – never underestimate the importance of logistics! Get a good schedule, invite a big audience, get a great room, provide tasty food, etc. Union did a great job with this! 2. Get the right speakers. Union and its organizers pulled in the best folks from across the country. For other places wanting to put on events like this, my advice is to make sure you are getting speakers who know what they are talking about – research them, read their work, watch their videos, listen to their podcast interviews, get recommendations from people you trust, and so on.

MQ: I found the engagement of people from different disciplines to be exciting. It was good for me, as someone with a Ph.D. in computer science, to learn more about how advances in AI technology are perceived by theologians and the implications they see for contemporary religion.  What I found difficult at times was understanding the points made by some of the theologians because they were using terminology that was unfamiliar to me. At those times I felt the theologians were talking to each other and not to the general audience.

TA: We need many different models for this kind of effort, ideally without repeating any too often. I applaud Union/JTS for this venture into the territory, and it was rewarding to see such a large audience. The format gave panelists a chance to air out their work and perspective, and the moderators also offered some substantive comments. As with many of these events, the eyes of interest can be bigger than the stomachs of stamina – it was wonderfully ambitious, but it was a great deal to take in and digest. Still, it was a very important effort of outreach and scholarly responsibility.

 

2. What are some key gateway questions that can promote a strong discussion from the outset in bringing together faith leaders and AI professionals around AI issues?

JT: I think that we need to be more honest about the presuppositions we bring to the table and seek to dialogue about these differences. Asking why someone believes what they do, or what leads them to certain principles and applications, will only strengthen our ability to have a rich dialogue. For example, many AI ethical guidelines and principle documents claim that we must seek fairness for all, but that concept is fairly nebulous and is applied differently based on what you believe to be fair. Getting to the heart of what people mean when they say things like this can foster richer dialogue amongst faith leaders and AI professionals, which will inevitably create a more diverse and open environment.

BG: Some questions that I think always deserve more discussion are: 1. What are some good things that AI can do for people? How do we encourage those good uses of AI? 2. What are some bad things that AI can do? How can we discourage those bad uses of AI? AI is a dual-use technology, and if we don’t work right now to encourage its good uses and discourage its bad uses, we will quickly come to live in a terrible world.

MQ:  If intelligence is what elevates humans above other animals in the eyes of God, how will our status change if we are able to create intelligent machines?  What role do our bodies play in defining who we are? Are our brains simply the hardware on which our minds (software) are operating? If it were possible for me to download (upload?) my mind from my brain into a computer, would I still be me?

TA: To me it was evident we need to work on including multiple faith perspectives and communities. The Riverside Church has a very interesting tech education program called Wellbotics, and that is the type of engaged effort these discussions need to be hearing more about and thinking through in terms of AI’s disparate impact across society. While the informal discussions allowed for learning about such efforts, the panels did not feature them. I would say that the panels were honestly more white and male than the cutting-edge work in AI ethics is, and I think the cutting-edge work in AI and faith will depend no less on such diversity. Theologically, I was struck by the scope of analysis and topics, from Hanna Reichel’s work on drones and omniscience to the intimate witness of Jason Thacker on AI and ordinary (and transcendent) dignity.

TFP: Gateway questions come in three categories: technical, theological, and ethical.

First, technical questions: 1. What technological advances can we foresee coming over the next decade? 2. Is Artificial Intelligence actually intelligent in the human sense of the word, or is the public getting hoodwinked by this ambiguous term? 3. Theoretically, if a machine were to become intelligent in the human sense, would it also develop a sense of self?

Second, theological questions: 4. Recognizing that reason has been considered a component of the imago Dei in the history of Christian thought, does the prospect of Artificial Intelligence impinge on theological anthropology? 5. If a future AI robot were to develop a sense of self, should that self be invited to Baptism? 6. Because the promises of transhumanists for a Utopian posthuman future replete with disembodied intelligence are so extravagant, should we become alert to false Messianism?

Third, ethical questions: 7. How do AI innovators at present think about their moral responsibility when engineering the next gadget? 8. Should we encourage national governments, if not the United Nations, to establish moral protocols to guide AI technology toward peace rather than war? 9. How can the public theology of the churches provide helpful guidance as moral protocols are developed in business, government, and education?

 

3. What specific AI issues figured most in your interaction here?

JT: A few of the AI-focused issues of particular interest were the ethical use of surveillance technology in policing, as well as questions about autonomous weapons systems and algorithmic bias. Seeing the various faiths and perspectives represented was an encouraging sign for the potential of future dialogues across these varying worldviews.

BG: As an ethicist deeply interested in a broad range of issues with AI, I can’t pick just one issue that I was most interested in while at Union. Union’s students, however, were clearly very concerned about the possible oppressive use of AI for surveillance; in particular, how surveillance data will be used. It could be used to protect people, control corrupt government agencies and individuals, etc., but more likely it will be used to further historical oppression. AI is a powerful tool that is likely to be deployed very unequally throughout society, and everyone has a right to demand that it be used in the right way, a way that provides shared benefits to everyone.

MQ:  During the conference there was a lot of attention paid to the Singularity as well as the hopes of some that they will be able to achieve immortality by downloading their minds into machines. One speaker called the latter idea “Do-it-yourself salvation.”

TA: I would like to see some more specific engagements on technical issues and application domains. I am what I would call, borrowing from Roberto Unger, an anti-necessitarian when it comes to AI. The use of data-driven AI systems, especially deep learning, is only one, limited avenue down which AI can be developed – some of the conversation ceded way too much ground in terms of what machine learning is and what (sometimes said with an ominous tone of voice) “is coming, like it or not.” People of faith can join others in saying we don’t (or do) like it! Some things don’t have to come!
