Today we feature an interview with Dr. Robert Geraci. Dr. Geraci is a founding member of AI&Faith as well as a member of our research team. He is a professor of religious studies at Manhattan College and has written four books covering topics including religious analyses of AI, robotics, and transhumanism. His most recent book, Futures of Artificial Intelligence: Perspectives from India and the U.S. (Oxford 2022), describes the arrival and reconfiguration of transhumanist ideas in India and the United States and reveals how the nexus of religion and technology contributes to public life and our modern self-understanding.
How would you describe your experience with AI?
I have only minimal experience on the actual scientific and engineering side of AI, but I have spent the past twenty years as an observer of these fields and have also studied the public interface with those technologies. I was a visiting researcher at Carnegie Mellon University’s Robotics Institute for one summer and have interviewed roboticists and AI scientists from industry and academia in India, including the Indian Institute of Science, which is India’s premier institute of higher education. My primary interest is in the narratives we tell about AI: what we think it is, what we think it is becoming, and what our relationship is to the technology. In my books and essays, I’ve argued that these questions (and their answers) are fundamentally tied to religious practice.
How would you describe your faith background?
I’m a committed Jew, practicing within the Reform tradition (though I lean toward what American Jews call Conservative practice). That said, I am agnostic on the question of whether any gods exist. My Judaism is absolutely critical to my sense of who I am and to my commitment to justice in the world, but I think these are independent of whether there is a god. I do not have any way to verify the existence of gods (mine or anyone else’s), but as a Jew I am called to bring righteousness into the world, and I use our traditions, scriptures, and ritual practices to help me do so.
What led to your interest in the intersection of AI and faith?
My initial interests were entirely academic. My field of specialization is the study of religion, science, and technology. When I began to study robotics and AI for my PhD dissertation about 20 years ago, I picked up books by roboticists and AI folks (e.g. Moravec, Kurzweil, Crevier). When I read what they were saying about robotics and AI, I could see something deeply religious about what they were doing! More recently, I have been directly interested in the question of AI ethics: how can we develop AI technologies so they lead to human flourishing (and, if AI ever actually equals human capacity, robot flourishing…but that’s a big ‘if’)? I believe that we can use our religious traditions as an important tool in the development of ethical AI technologies. There are cultural and religious values that could and should be implemented in our industrial and government policies for AI. Right now we seem to be letting consumerism, shareholder growth, fear/international conflict, and economic efficiency operate by themselves, in conjunction with a faith in technological determinism (that is, the belief that technologies determine their own futures). While those values certainly have some merit in economic development, they are not sufficient to determine beneficial outcomes. We need to start getting a clear view of what values will do so!
Why are you involved with AI&F?
I was fortunate to meet David Brenner via Academia.edu, and he invited me to start talking with him about what he was initiating. Those conversations seemed to show a strong convergence in our desired goals. My own work on international policy with the G20 Interfaith Forum’s working group on science and technology led to conversations with experts around the world and policy guidance intended for G20 nations and beyond. David and I decided that our networks and interests had much to offer one another, so when he asked that I join AI&F as a Founding Member (and later a part of the research team) I was delighted to do so.
How does AI&F affect your work outside the organization?
I think it is too early to say. Right now, it mostly just gives me more interesting things to engage!
What open problems in AI are you most interested in?
Right now the two most pressing problems for me are: (1) how do we get more people engaged in conversations about what values and narratives are already baked into our ways of thinking about AI, and (2) how do we leverage those conversations to commit ourselves to using AI for a more just future? If we try to understand what we are building and why, we have opportunities to intervene in positive directions. So much of what is happening in AI could be valuable in improving access to information and enhancing almost every facet of our personal and business lives. But if we do that without regard for climate change, without considering the impact of the technologies (everything from mining minerals to building components to deploying finished products), and without a real commitment to a just future, then we will succeed only in making the world worse. My experience with AI folks is that most of them want to make the world a better place through those technologies, so I hope that AI&F can help engineers, executives, policymakers, and the public get to exactly that place.
A big thanks to Dr. Robert Geraci for taking the time to do this interview. Thanks to Emily Wenger for proofreading, editing, and publishing this work.
Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Harvard University Press, 1988.
Moravec, Hans. Robot: Mere Machine to Transcendent Mind. Oxford University Press, 2000.
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Penguin, 2005.
Kurzweil, Ray. The Age of Intelligent Machines. MIT Press, 1990.
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. Basic Books, 1993.