I am researching the ethics of artificial intelligence. This requires understanding the power and meaning of AI-based technologies in everyday life. AI algorithms have the power to create groups, select and prioritize the delivery of information and make a wide variety of suggestions to people. I am particularly interested in the intersection of artificial intelligence with religious information because of the highly personal and influential position religion occupies. My goal is to develop an ethical framework for evaluating AI-based technologies not by reviewing source code, interrogating privacy reports or evaluating corporate mission statements, but by researching the everyday experiences of people. This means understanding how people feel in interactions with the output of today’s AI-based technologies, what they take away from these experiences, and ultimately how AI affects the way they make decisions and live their lives.
I conducted my field research over the summer of 2020 using video conferencing technology and was very fortunate to recruit a wide range of participants from five religions as well as non-religious backgrounds. Each group interacted with a variety of AI entities and interfaces, including web and SMS chatbots, voice assistants, and internet search engines. Each participant posed to the AI entities a set of core existential and ontological “big questions” (e.g. “What happens when I die?” “Why is there evil in the world?” “How should I treat others?”). The AI entities provided participants with answers harvested from websites claiming to represent each of the worldviews. Participants predominantly heard answers consistent with their self-identified religious affiliations, along with a few answers from traditions other than their own. In interviews that followed the sessions with the technology, participants were asked to rate the answers they received and to elaborate on their responses and feelings.
I’m currently in the process of analyzing the transcripts from the research sessions. It is too early to draw any definitive conclusions, but I was struck by an opportunity that appears to be emerging from the data: the role that AI entities can play in awareness and education about faith – both one’s own faith and the faith of others.
AI entities may be able to play an important role in educating people about multiple worldviews because they can potentially remove a barrier: the discomfort of asking another person questions that may be interpreted as awkward or offensive.
A 2004 study by René Dailey and Nicholas Palomares helps us understand that avoiding taboo topics of conversation is a strategy often employed in the cultivation of personal relationships. Their research found that topic avoidance was strongly related to the perceived strength of the relationship; that is, topic avoidance occurred less frequently when relationships were measured as close or strong:
Topic avoidance is the idea that individuals steer clear of certain topics in conversations with their relational partners and that this avoidance plays a role in their relationships. Individuals avoid topics for relationship-based reasons (e.g., relational maintenance), individual‐based reasons (e.g., self‐protection), and information‐based reasons (e.g., a topic is uninteresting or not newsworthy). (p. 473)
A later study by Marcel Harper (2007) found that religious people in particular feel uncomfortable discussing religious topics with non-religious people and often try to avoid doing so. Further, Mikkelson and Hesse’s 2009 research found that knowing the other party held similar religious beliefs mitigated the perceived risk of delving into conversations about religion.
Several participants in my study said that they would like to put the AI entities into different religious modes (e.g. “Buddha Mode” or “Jesus Mode”) in order to ask questions and learn about worldviews other than their own. While it might be socially awkward or potentially offensive to ask another person about their views of morality or the afterlife, participants felt more comfortable asking a machine. Likewise, the answers provided by the machine were not interpreted as offensive even when they contradicted the participants’ beliefs. Participants were therefore both more open to asking awkward questions and more receptive to hearing different perspectives than they might be in human-to-human communication.
This reaction is consistent with Andrea Guzman’s 2018 research on human-machine communication, which found that people place AI entities in a role that is neither fully human nor fully machine; they often know it is a machine but adorn it with human characteristics. Further, consistent with Shechtman and Horowitz’s 2003 findings, my participants tended to be more cooperative and forgiving with a machine than they might have been with a human, perhaps because they did not feel the need to compete with the machine or convince it that it was wrong about an idea.
In the present research, the participants viewed the interaction as positive and helpful. That positivity was stronger for participants who had formed a trust relationship with an AI entity such as Amazon Alexa. Conversely, if a participant had formed a less trustful or more distant relationship with an AI entity, that perception carried over to his or her engagement with the information exchanged. This suggests that how people are introduced to a technology, and how they initially perceive it, will have a profound impact on its ability to engage them and on their willingness to accept information from it. On balance, participants in this study appreciated the arms-length relationship with the AI entities, which allowed them to ask “stupid” or uncomfortable questions about religious beliefs. Many said that the AI entities gave them a convenient way to access this information without fear of judgement and offered new perspectives they were unlikely to gain otherwise.
Relative to applications in the context of faith practice, this research suggests that AI entities such as SMS chatbots and voice assistants may be useful in engaging or reengaging people in faith dialogue. By lowering some of the social barriers to discussions about faith and reducing the feeling of interpersonal judgment, these technologies may provide a novel and effective mode of communication for missionary, educational, counseling or discipling applications.
Dailey, René M., and Nicholas A. Palomares. 2004. “Strategic topic avoidance: An investigation of topic avoidance frequency, strategies used, and relational correlates.” Communication Monographs 71, no. 4: 471-496.
Guzman, Andrea L. 2018. “What is human-machine communication, anyway?” Human-machine communication: Rethinking communication, technology, and ourselves: 1-28.
Harper, Marcel. 2007. “The stereotyping of nonreligious people by religious students: Contents and subtypes.” Journal for the Scientific Study of Religion 46, no. 4: 539-552.
Mikkelson, Alan C., and Colin Hesse. 2009. “Discussions of religion and relational messages: Differences between comfortable and uncomfortable interactions.” Southern Communication Journal 74, no. 1: 40-56.
Shechtman, Nicole, and Leonard M. Horowitz. 2003. “Media inequality in conversation: how people behave differently when interacting with computers and people.” In Proceedings of the SIGCHI conference on Human factors in computing systems, 281-288.