Interview with Beth Singler, Cambridge-based researcher on AI and Religion

Beth Singler is the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge, where she explores the social and religious implications of advances in AI and robotics. Dr. Singler is also an associate research fellow at the Leverhulme Centre for the Future of Intelligence and has collaborated on its AI Narratives and Justice programme bringing in social and digital anthropological perspectives.  Her background is as a social anthropologist of New Religious Movements, and her PhD thesis was the first in-depth ethnography of the Indigo Children – a New Age re-conception of both children and adults using the language of both evolution and spirituality.

We are delighted to learn more about how Dr. Singler has come to focus on the “entanglements of religion and technology”, how her work engages with the ethical deployment of AI, and about her own network of AI professionals, ethicists, and theologians.

Q:  You have produced an extraordinary body of work over the past dozen years, and it seems your pace is picking up!  What sparked your initial interest in the intersection of religion and technology, and what continues to drive you forward around issues related to artificial intelligence?

A: Well, first off, I am just a massive geek! My love of science fiction, technology, and both offline and online fan cultures has always informed my research interests and fueled my productivity. I really don’t think I would be exploring the themes and ideas I am researching now if I hadn’t saturated my mind as a child with Star Trek: TNG episodes like “The Measure of a Man”, where Commander Data’s personhood is put on trial!

Of course, for many people, the relationship between that genre – and science and technology more broadly – and religion seems fraught. But it always seemed clear to me that you can’t extract one from the other. One way of thinking about religion is to think of it as how we tell stories about the world. And so when we tell stories about technology, progress, and the kind of future we are creating, we naturally draw on the cultural context we, as a society, have been steeped in. Even the story about how religion and technology are two very different things emerges out of particular cultural contexts – the narrative of the Enlightenment as the beginning of a process of secularization that drew us out of the ‘irrationalities’ of religion. Some of that story is about ‘Western’ culture Othering places where this distinction is not so strongly made. We point to examples of anthropomorphism and contemporary animism as signs of a lack of rationality and advancement, while both actually abound in our ‘Western’ context. I draw on Weber and his theories of disenchantment and routinization a fair deal in my work, but I also describe continuities of enchantment that have admittedly ebbed and flowed with time, but which do not indicate a wholly disenchanted world in which AI, and the stories around it, have arisen.

What also drives me forward in my work is that our stories about AI and robotics tell us so much about what we think the human is. We are constantly working out our anthropologies of the human and the person. But also, such stories, when improperly critiqued or accepted at face value, can be implemented to push those anthropologies in dangerous directions or can distract us from harms already being done in the name of AI progress. AI ethics is a complex field, in part because of the role of charisma and authority – who gets to be an expert and why? In many cases, AI ethics is presented as an ‘add-on’ to the conversation by corporations who have seen the field as a necessary response to concerns without actually embedding those values in their corporate culture.

While I love science fiction, to return to my immediate response, geeking out over a new bit of AI finesse shouldn’t lead us to blindly accepting the narratives about the future we’re being told. We shouldn’t accept representations of AI as being all-knowing or all-capable while ignoring the human behind the curtain pulling strings. This is definitely one of my concerns and drivers for my work – my next book in particular!


Q: Let’s talk a little about how technology is transforming the field of anthropology.  You refer in your recent articles to digital anthropology. What are the characteristics and specific purposes of that field?  We at AI&F have been especially interested in the work of Data and Society here in the US as occasionally focusing on religion and AI.  Do you interact with danah boyd and others there?  Who else is doing important work in this area? 

A:  A definition of digital anthropology would be something like ‘the study of the relationship between the human and the digital’. Under that banner, you can have things like ethnographic research at online field sites, considerations of digital artifacts, digitally-enabled kinships, human perceptions of the digital, and the study of ethnographic moments platformed and enhanced by digital locations.

While this work has been going on for as long as community has been possible online – I’m thinking of my first forays into the online chat communities of the late 20th Century – recognition from more established academic spaces that this is valid research has taken some time. I have been asked on occasion to justify why being in these spaces is ‘Anthropology’ at all. Presumably, the image of the Anthropologist in the field jars with the image of me sitting at my laptop all day! Both kinds of Anthropologist can exist and do fruitful work, of course. They can be the same person as well.

There are also often questions about whether religion online is ‘genuine’ at all, with the presumption that anyone can say anything online about themselves – as though that isn’t true of people IRL (in real life)! Some of my work looks at online comments that use parody when making religious statements about AI, and that humour leads some to think such statements are unimportant. But the very fact that those users have reached out for religious tropes and narratives says something about those continuities I mentioned.

I haven’t personally connected with Data and Society yet, but I know danah boyd’s excellent work. Other fantastic people in this digital anthropological and religion space who constantly amaze me with their insights are Damien P. Williams, Jacob Boss, Juli Gittinger, Robert Geraci, and Heidi Campbell – as are the fantastic contributors to the Cambridge Companion to Religion and AI that I am co-editing with Fraser Watts. There are so many more that I could name!


Q:  Your article, An Introduction to Artificial Intelligence and Religion for the Religious Studies Scholar, in the journal Implicit Religion, offers three arguments for researching AI and religion, roughly: first, that the potential disruption wrought by AI on society will necessarily also have implications for religion; second, that AI is reinvigorating contemporary religion and may lead to the creation of new religious movements; and third, that AI raises questions about personhood to which traditional religions have applied their theological understandings. Do these arguments still frame your work, and do you attach relative importance to them in terms of potential for impact?

A: Yes, I’m still using these three arguments, but a few others have come on board since then. Specifically, I have two publications coming that relate to a fourth: that AI will be entangled in the response of strong or ‘new’ atheism to religion. The study of atheism under the religious studies banner has been promoted by some excellent scholars in the UK and elsewhere. Still, in the context of AI and religion, there’s perhaps a presumption that atheism isn’t also a part of the story. I think in part that comes out of the same narrative of the Enlightenment that presumes a steady decline in religion; atheism is apprehended by some as a neutral position and therefore as having no impact on the development of technology or stories about it.

By contrast, I’ve been exploring how some of the stronger atheistic responses to religion have adopted AI narratives, such as assumptions about the direction of human progress towards greater and greater intelligence through rational machines, seeing this as logically leading to the end of religion. I’m exploring this idea in two forthcoming book chapters for edited volumes, one of which focusses on Dan Brown’s Origin, and the other on Charles Stross’ and Cory Doctorow’s The Rapture of the Nerds, both of which play with ideas around the AI Singularity and the future of humanity while also having a view of religion as a remnant of the messy process of human evolution. I think Stross and Doctorow are doing something a little bit smarter than Brown, if we compare the two books, but both feed into larger public discourse around AI and the ‘End of Religion’.

On the first three arguments from my article introducing AI and religion as an area of study to the Religious Studies scholar, I think the first might be the most important for the near future. But, in my experience of running workshops on AI with faith leaders, the third is the one that really gets the conversation going. On the second argument, that AI will inspire new religious movements, much of the response again focuses on the ‘seriousness’ of the adherents – something that I’ve seen in many conversations about new religious movements, no matter their origin.

Q:  Origin stories and creation seem to play a prominent role in your recent work, e.g., your study of the Creation Meme adapted from Michelangelo’s divine/human spark on the Sistine Ceiling in your article, The AI Creation Meme: A Case Study of the New Visibility of Religion in Artificial Intelligence Discourse, and your jump-off from Dan Brown’s novel in ‘Origin’ and the End: Artificial Intelligence, Atheism, and Imaginaries of the Future of Religion. Do you see a fundamental connection between traditional religions’ 4,000-year-old creation stories and our apparent fascination with attempts to replicate ourselves through advanced technology and our tendency to see ourselves in our robotic creations?

A:  Yes, there is a connection. It arises from how we need to use our existing dominant narratives about our creation to give shape to this attempt at self-replication. For me, another part of why creation, and the creation of AI in particular, is such an interesting concept is that we have no one definitive creation moment for ourselves but a plethora of creation stories (which, of course, to some people are definitive). But if we achieve that aspiration of creating artificial life, then we will have a provable moment of creation and proof of a Creator – us. Of course, we’ll still debate what that means and whether it’s a form of co-creation with any presumed creator of the universe.

But for now, in the present, we play out that moment of creation in our imaginations time and time again, thinking about both positive and negative outcomes. I’m generally agnostic about whether it could happen – which perhaps reflects my agnosticism about some first creation by a creator. But being agnostic as an anthropologist is quite useful!

But isn’t it interesting that we return to this idea of making other versions of ourselves time and time again? I’ve also made the connection with the parent/child relationship in some of my work on AI narratives, and many already use the child metaphor to describe a future civilisation of AI that can leave their human parents and take elements of our society and consciousness out into the apparently consciousness-less space of the universe – our ‘Mind Children’. And as with individual, human parents, we imagine the future of our offspring and what they’ll be like – both what they’ll be like as people, and how they will treat us, their parents.

Q: I’m intrigued by the way you jump off from cultural artifacts, tropes and popular stories to examine deeper questions they raise or reflect, and your work with the Leverhulme Centre is based on the power of narratives. From listening to thousands of sermons over the years, I know that the illustrations often linger longer than the theological points they are meant to illustrate. Do you see the artifacts, tropes and narratives you use as bridges for engagement, proof of relevance, or something else? What can faith leaders seeking to relate AI issues to the lives and beliefs of their congregants learn from your approach?

A:  I think they are both bridges for engagement and proof of relevance. To the first point, it is certainly easier to draw on cultural symbols to highlight a technical point about AI. In my field, it’s all too easy to assume everyone knows about and thinks about AI as much as you do! The truth is that there is a varied AI discourse, both on and offline, in which some people’s knowledge is only of particular popular images of dangerous robots. Pulling on those images and critiquing them where necessary is a way into a conversation with the public that can be informative and entertaining.

As proof of relevance, these cultural artefacts, tropes, and popular stories have an impact because they are familiar, if still disturbing. They’re shaped – including being remixed and updated – by years of sharing and resharing. I wrote about how attention-grabbing or counter-intuitive narratives are more likely to be transmitted (based on work on anthropomorphism by Pascal Boyer) in my chapter in the edited volume on AI narratives. Dystopic narratives certainly have that attention-grabbing aspect. I also think you can relate our interest in them to Stephen King’s assessment of the horror genre as a place where we can safely explore the big scary themes of life… and death… before we experience a moment of what he calls ‘reintegration’. That is the feeling at the end of the rollercoaster, the return to normality at the end of the book or movie. However, demonstrating what themes and ideas these fictional narratives pull upon is not an attempt to yank the sheet off the ghost and reveal that it’s just a man in a costume. It’s about noting that AI might be thought of as just another of these liminal entities that we’ve always told stories about around the campfire. If those narratives are employed in the real world to generate affective responses – the man behind the curtain – then that’s of more concern.

Similarly, if faith leaders wanted to employ similar pedagogical approaches in their sermons, I hope that they would also realise the difference between noting cultural trends and using them as material to scare their audiences into a particular perspective. Early in my research career, as a postgrad, I wrote a paper on Evangelical Christians’ creation of their own film industry and the use of horror tropes in Rapture films, and some of the ethnographic research I did highlighted the genuine fear some congregants felt upon being shown films like ‘Left Behind’. In the presentation of AI and robotics right now, there are already figures using our AI narratives to evoke particular responses and motivate audiences to accept that the future of AI is one particular thing, and that can be harmful. In both cases, care is needed.


Q:  You are exploring religion and technology in a time many technology leaders see as post-belief, transhumanist, or even post-humanist. You write not only about origins but also about eschatology and apocalypse. And you write about new forms of religion, some of which seem to be based on a purely materialist conception of reality, but which seek legitimacy in some form of tradition or social “blessing”. In this disrupted setting, do you see value for the “ancient wisdom” of traditional faiths?

A:   First, I am among those scholars who disagree that we are post-belief. Again, I see that perspective, as I’ve said, as coming from a particular narrative about ‘Western’ history. Similarly, the view that religious beliefs around AI are materialist relies on a division born of that same history. Such religions and ‘materialisms’ sometimes present themselves as an alternative to the evidence-lacking, irrational, traditional faiths (not my words!). Still, I think that they are far closer in nature than their secularised view of the world will allow them to admit. So I don’t put the “ancient wisdom” and the “new rationality” at as great odds as some might.

Conversely, though, I also don’t immediately accept that history equals wisdom. Some of my work on new religious movements was around how we perceive authority and legitimacy. And while some accounts rely on quantifiables like years of existence, or membership numbers, others leave space for new charismatic authorities or rationalities. But again, I come down on the agnostic side when it comes to truth claims! What is valuable is to pay attention to how society is shaped by and shapes religion – whether ‘new’ or ‘old’. With AI being also a product of and producer of society, we can’t help but realise that people will call upon traditional faiths to help them understand or survive changes (my first argument in the ‘Introduction to AI and religion’ article, as mentioned above).

Q:  AI&F is claiming a seat at the discussion table around AI for human flourishing and not destruction, by assembling a cross-faith and multidisciplinary community of AI experts, related professions, ethicists, theologians and philosophers.  You also have organized a community around AI ethics, with which we have several overlapping scholars.  How did your network come together, and what might we at AI&F learn from your experience for effective engagement intra-network and with the broader AI ethics discussion?

A: The Faith and AI project arose out of a conjunction of people and aims. I was organising workshops on AI for religious thinkers and faith leaders during my first post-doc at the Faraday Institute for Science and Religion, and I connected with people at the Leverhulme Centre for the Future of Intelligence and the Religion Media Centre who were already planning the same kinds of events. Together, we ran workshops that drew together individuals from many faiths who shared an interest in AI as an ethical, social, and philosophical issue. We also wanted them to bring accurate and pertinent information back to their communities, so we held talks on the reality of AI technology and shared links to useful and accessible material on the subject.

Connecting with those people and keeping in touch with them has resulted in many collaborations and invitations for our attendees to speak at various panels and to share their knowledge and experience. One of the outcomes for me has been the plan for the Cambridge Companion to Religion and AI, which I’ll discuss next. I think loosely affiliated groups and networks can suffer from attrition, so focusing on bringing people together around a specific project – a workshop, an edited volume, a shared piece of research – is vital.

Q: We’re looking forward to publication of the Cambridge Companion to Religion and AI in 2023.  What is your vision for this anthology and what do you believe will be its distinctives and contribution to the field?

A:  I’m working on this Cambridge Companion with Dr Fraser Watts, who, funnily enough, was one of my undergraduate supervisors sometime back in the last millennium! We come at the subject of religion and AI with slightly different approaches. Although we are both social scientists, Fraser has a more theological and confessional background than I do. But that has been useful for approaching contributors who also have a variety of backgrounds and approaches. We contacted not only people from the Faith and AI Project’s network, who often came from within specific traditions, but also theologians, religious studies scholars, social scientists, historians, and computer scientists. So our Cambridge Companion aims to be broad in disciplinary perspectives and to consider religion as a multivalent thing. We want to note that while some traditions are more instantly recognisable, and there is much to gain in considering them separately, there are other formations of religion interwoven with the ‘Big Five’ of the World Religions paradigm that Western, predominantly Protestant, religious studies scholars have granted us (for more on this see Cotter and Robertson 2016). We have excellent scholars who will bring various methodological lenses to the interactions between AI and religion. A Cambridge Companion is a more descriptive or even textbook-like approach than an edited volume, so it should be of interest and informative to audiences of different levels of knowledge. I can’t wait for you to read it!

Thanks very much, Dr. Singler! 
