The Inescapable Entanglement of AI and Religion
Dr. Beth Singler’s survey of AI and Religion provides a broad discussion of the intersection of the two topics. At the same time, the case studies throughout the book allow us to dig into specific illustrations of how AI and religion influence one another. A recurrent theme of the book is the entanglement of AI and religious practice and how the two concepts can form a feedback loop. A related motif is how certain elements of the AI conversation, though intended to be wholly secular, show implicit religiosity.
The first half of the book focuses on the paradigm of rejection-adoption-adaptation, exploring how religious groups have interacted with AI in each of those ways. Singler largely frames religious adaptation to AI as part of a larger enmeshment with technology, partially accelerated by the Covid-19 pandemic, during which many organizations leveraged social and digital media to continue some semblance of normal operations. Likewise, she points to interactive prayer apps and sermons created with generative AI as salient examples of how religion has adopted AI. Perhaps a more striking example is the recent incorporation of animatronic temple elephants into Hindu rituals, replacing acts previously performed by live elephants. The section on rejection is particularly compelling, with Singler analyzing how the renunciation runs in both directions. On one hand, certain religious voices strongly reject AI, associating it with larger secular technological trends that lead away from God and toward the end of humanity. On the other hand, influential anti-theists in the AI space dismiss religion as illogical, even as a human-created problem to be eliminated. Despite that stance, Singler points out that explicit rejection of religion does not mean its influence is absent.
The latter half of Singler’s work centers on the entanglement among AI, religion, transhumanism, posthumanism, and new religious movements (NRMs). An important case study from this section is Roko’s Basilisk, a thought experiment positing that a future AI overlord will punish anyone who, despite being aware of its inevitable future existence, did not aid in its creation. Though intended as a purely secular thought experiment, it exhibits religious undertones. For instance, one commenter on the site LessWrong noted that they had learned the same concept in Sunday school: God would punish anyone who heard His Word yet disbelieved. This observation points to a deeper theme: the future envisioned by the transhumanist movement, a future that would be heavily enabled by AI, carries implicit religious themes of its own. The movement’s goal is to vastly improve the human condition and extend life indefinitely, aims traditionally found in the realm of religion. Nor is the religiosity always implicit; some discussions actively explore AI creating religion or being worthy of dedicated praise. Perhaps most notably, a movement called Theta Noir has a stated goal of celebrating a future “superorganism born of code, destined to recreate us”. Singler also explicitly addresses posthumanism, the idea that humans will, and even should, be replaced, which inevitably collides with religion. Does this ideal cast us as gods, creating an improved successor? Or does it mean we are a regrettable blip in the evolution of the universe? Either way, the dialogue raises questions that have historically been religious or philosophical in nature.
The theme of entanglement between AI and religion, whether purposeful or unintentional, recurs throughout the work. This strikes me as an honest reflection of the realities of the subject. In addition to providing a full overview of this intersection, the book serves as a springboard for discussion. In that spirit, I will respond to a few of the discussion questions Singler poses in the work. My background is in the Christian faith, so that is the angle from which I will approach them.
Is ‘sin’ a useful category in discussions of science and technology?
I found this question particularly deep, with no way to tie up an answer in a nice bow. From a Christian perspective, sin is doing wrong by God. A more secular definition is that sin is doing wrong to our fellow humans (which is also the result of turning against God). Scientific and technological discoveries have always been controversial, sometimes even labeled sinful, though often myopically (e.g., Galileo’s persecution for promoting heliocentrism). Science is fundamentally the discovery of how the universe works, which is not in conflict with following the Creator of that universe. Discovery and explanation are not the problem; selfish applications are, the result of flawed humans living in a Fallen world. As Christians, we should see certain misapplications of technology for what they are: sinful (e.g., addictive social media algorithms, deepfake explicit content, illegal surveillance). From a secular perspective, the concept of using science and technology as tools against fellow humans should still resonate. History tells us, however, to be thoughtful and to distinguish knowledge from how that knowledge is applied. Science and technology help us understand the “how” of the universe, while religion and philosophy can help us explicate the “why”, along with the moral implications of how knowledge is put to use.
Is death a pernicious problem to be fixed, or is that transhumanist goal a mistaken response to it?
As a Christian, I believe transhumanists hold a correct fundamental belief: this world is not how it should be. Death creates devastation and leaves us feeling empty, but we are not the rulers of death. Humans can play a role in fixing this world’s problems, but we are not God; we are limited. I do not necessarily believe the transhumanist goal of fixing death is mistaken; rather, I do not think humans can achieve it in a lasting, meaningful way. “Living forever” in the metaverse is certainly not the same as embodied immortality. What would happen if natural disasters destroyed all computer infrastructure? How would a “digital upload” preserve the essence of what it is like to be me? A digital afterlife is a solution, but a hollow one. Only God has the power to create everlasting fulfillment. Paradoxically, in Christianity that fulfillment comes through sacrificial death. There is no life without death, no free will without consequence (John 12:24). There is always an equal exchange, because evil must be overcome by good; it does not retreat on its own (1 John 4:10).
Is Yuval Harari right in worrying about the ability of AI to manipulate people through religion?
Yuval Harari, the Israeli historian, has raised this concern, and I believe it is valid. If a future system labeled as artificial general intelligence (AGI) provides religious commentary, some people would certainly take it at face value. Today’s leading AI models can be easily manipulated, which suggests a theoretical AGI may not be impervious to the same influences. Even with the best guardrails, we cannot guarantee that an AGI would never make influential religious statements. However, my main concern is not with a hypothetical future but with what is happening right now with AI and religion, and it has nothing to do with the output of large language models. In many ways, AI is being treated as a religion today. Many tech leaders promise a glorious future; we simply need to have faith and believe what they say. Their followers preach the good news repeatedly and strive to grow the flock. How much influence is being wielded simply to perpetuate the hype cycle? That strikes me as a fair question to ask.
The religious background of AI developers and scientists sometimes goes unremarked upon, while at other times – as in the case of Lemoine – it can become a source of cynicism. Is it helpful to understand the cultural context within which scientists work, or is science a neutral project, if such a thing is possible?
The question refers to Blake Lemoine, the Google employee who made headlines in 2022 for claiming that the LaMDA language model was sentient. His training as a priest further complicated how people viewed his ability to make neutral assertions about language models. To address the topic broadly, a distinction between the hard and social sciences is important. In physics, for instance, we can achieve comparatively more neutrality because data can often be collected in an unbiased way. In the social sciences, data collection is influenced by existing and changing social conditions, so interpreting such data is bound to be influenced by culture and personal preferences. We can certainly be aware of and work to combat such biases, but they are embedded in any social science, including the evaluation of technology. Understanding the cultural context of social science work is, consequently, instructive. In the specific case of language-model sentience, the neuroscience community has no consensus definition of consciousness or sentience. This cultural context is vital when assessing a claim about AI and consciousness: the person is making an assertion that lacks consensus scientific backing and will therefore likely be influenced by other factors. (Neuroscience may be considered a “hard” science, but it is certainly not parallel to physics in this respect.)
In conclusion, Dr. Singler’s work is a great read for anyone interested in AI, religion, or the entwinement of the two, and it offers an excellent framework for further study and questions. From an AI perspective, we can better anticipate the consequences of tech-inspired movements by detecting both their implicit and explicit religiosity. From a religious perspective, the AI movement is creating fundamental shifts, both in how religious groups operate and in the messages they need to share with the world.
Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.