
The Evangelical Imagination: An Interview with Dr. Karen Swallow Prior


Today we feature an interview with Dr. Karen Swallow Prior. Dr. Prior completed her PhD at SUNY Buffalo, with an academic focus on eighteenth-century British literature. She has authored eleven books, and her writing has appeared in Christianity Today, The New York Times, The Atlantic, and various other places. She is a contributing editor for Comment, a founding member of The Pelican Project, and a Senior Fellow at the Trinity Forum. In this piece we discuss her most recent book, The Evangelical Imagination: How Stories, Images, and Metaphors Created a Culture in Crisis (Brazos, 2023), as well as her experience with and views on AI technology.

The Power of Imagination

R. Rex: What inspired you to write The Evangelical Imagination?

K. S. Prior: I am an evangelical, and my research, starting with my PhD, explored a period in England when Evangelicals were rising in influence across all of culture, including literary culture. I have carried that knowledge with me. All these years I have taught in evangelical institutions, and then in the past several years Evangelicals have become sort of the subject of headlines. The term has become controversial and politicized. At the same time, my own Evangelical students were starting to share some of the negative and harmful experiences they had growing up in Evangelical subculture. I wanted to sit back, take a long view, and consider the things that have formed the imagination of this movement and these people for 300 years. What has been good and bad? And how can we recover some of the good things?

R. Rex: If you could share some main takeaways from your book, The Evangelical Imagination, with our audience, what would they be?

K. S. Prior: Well, I focus on stories for several reasons. Storytelling is my area of teaching and academic expertise. But even in focusing on stories, I am focusing on the imagination more broadly, which includes art and artistic creation as well. So the first takeaway would be simply that the human imagination is much bigger, and much more a part of our everyday decision making, than we tend to realize. Furthermore, our imagination is not always rational or readily perceived. Imagination can influence our lives and society far more than we may think.

The other aspect I speak to in this book is that imagination is an individual capacity that we all have, but we also exist in what the philosopher Charles Taylor calls a “social imaginary”: a communal imagination full of concepts, narratives, stories, images, myths, legends, and expectations that we inherit from our cultural and social traditions. These are often unexamined assumptions that influence us without our being aware of it. But we need to be aware of their presence and the way they influence us, in both positive and negative ways.

R. Rex: Have you received any notable feedback or responses from readers that surprised or resonated with you?

K. S. Prior: This is a book where I am testing out some ideas and using a method that I have not seen before. There are a lot of recent publications about Evangelicalism that examine the movement from a sociological, historical, or political perspective. As far as I know, none of these works consider the phenomenon through its imagination and the products of that imagination. One of the main premises of the book is that all language is metaphorical. Metaphors are deeply embedded in our language and our stories. I have heard from people who read my book and tell me that they now see metaphors everywhere, or have identified numerous examples of what I wrote about in the book. That is really what I wanted: to model for people how to think about the imagination and our social imaginary, so they can see for themselves far more than I am able to convey in the book. That has been wonderful.

ChatGPT, Imagination, and Education

R. Rex: In The Evangelical Imagination, you discuss how improvement is not always good. Can you share your thoughts on recent advancements in AI technologies, such as ChatGPT, and some of the positive or negative consequences associated with these improvements?

K. S. Prior: I will be completely transparent in saying that I am not well versed in the technology of AI. It is as new to me as it is to most people who were not studying it. We are certainly all fascinated by it. I have been greatly influenced by the thinking of Marshall McLuhan, a noted communications theorist, who wrote about technology specifically in the context of media. I suppose AI is also a medium, and McLuhan’s main contribution was that it is all too easy for us to focus on what a technology gives us without considering what it takes away from us. All technology gives us something, but also takes something away. McLuhan uses old-fashioned examples, like the invention of the automobile, which extends our feet but takes away our closeness to the earth. His principle can be applied to all technology. When it comes to AI, I would consider what it can offer us. It can certainly offer easy access to a lot of data, and quicker access to volumes of information that would be hard to obtain otherwise. But it could take away some of the human elements of creativity.

Let me share my first experience with ChatGPT. It was late last year when, in higher education circles, a lot of articles were coming out about the use of these technologies by students as a form of plagiarism or cheating. This was before the story hit the news in the United States. I had an online student from outside the United States who submitted a paper that appeared perfect in every way. I knew nothing about these technologies at that point, and the paper seemed inhuman to me, but I had no idea why. I spent a lot of time investigating. I started with plagiarism detection software, but this raised no alarm. I tried searching for portions of the paper on the Internet but could not find anything like it. I gave the student the perfect grade he seemed to deserve; just a few weeks later I learned about the technology. I had recognized the inhumanity in the paper, but did not know how the student could have achieved it. My concern is that any technology that can enrich what it means to be human can also take away from it. Reading this paper that was technically perfect but somehow lacking in spirit and humanity, in a way I could recognize but could not put my finger on, is an experience I recall anytime I am thinking about these technologies.

R. Rex: What do you mean when you say you recognized inhumanity in the paper?

K. S. Prior: All the terms, both historical and theoretical, were correct. It seemed like a lot of synonyms were being used, and the grammar and syntax were correct. My first thought was that the student had copied something from the Internet and replaced some of the words with synonyms, which is an established way of cheating. But that search did not turn up anything. The sheer perfection of the paper, without any errors whatsoever, but also without any “life” in a way that I can’t quantify, did not make sense to me.

As an author, I have had a couple of other experiences with AI. There is an AI-generated workbook being sold on a website for my book, The Evangelical Imagination. As someone who has been teaching students for years, I know how to find plagiarism. But I realized that the description of my book was not plagiarism in any form; it had ingeniously represented everything in my book while avoiding any words or sentence structures that could be described as plagiarism.

R. Rex: In The Evangelical Imagination, you warn against sentimentality and against conversion stories or testimonies that are not sincere, genuine, or authentic. In the context of ChatGPT and other generative technologies, how can creators maintain sincerity in their communication while leveraging the power of these new technologies?

K. S. Prior: I think there are ways this technology can be used that maintain and even increase our humanity. I think that creators are currently learning the basics and creating art that looks very artificial or very weird in some cases. If I might make an analogy, I remember when Wikipedia came out and professors would tell students not to use it. Wikipedia was the bane of our academic existence. Now Wikipedia has become a tool that, if we know how to use it well, can be helpful. It compiles a lot of information and provides the sources. Wikipedia is like a trailhead: someone can follow a set of sources and go down various paths. When the tool first came out, students wanted to simply copy and paste from the main page. Wikipedia has become something much more useful because we are using it differently. I can see how AI could develop in ways that allow people to become more creative. But if it is a shortcut to creativity, or even a shortcut to our humanity; that is, if it becomes an end in itself rather than a means, then I think we will continue to see these intangibly but undeniably inhuman characteristics.

The Power of Sentimentality

R. Rex: Can you define what you mean by sentimentality? How can readers avoid sentimental deceptions created by artificial stories with this technology?

K. S. Prior: In the chapter on sentimentality, I explain that sentimentalism is when we indulge in emotion, or excessive emotion, for the sake of the emotion. Emotion is wonderful and beautiful as part of being human. I am not anti-emotion, but when emotion is carved out and exploited or manipulated just for the experience of the emotion, or used to sell us something, that is when it becomes dangerous. I can see people generating content, as with television commercials or other forms of advertisement, that tries to manipulate our emotions to sell us something, or to exploit us in some way. Human beings can do that anyway, but if AI is used to do it, we will fall prey to the same problem.

I think it is something we should think about, because it is going to be even easier to exploit our emotions and exploit our natural tendency towards sentimentality when there are so many easy ways to create tools that can do that. In terms of a creator avoiding such exploitation, the whole argument of my book is the old truism that knowing the problem or identifying the problem is half of the solution. If we know sentimentality is a vulnerability for us as human beings, and it is not something that should be exploited, then we should not be producing art (via an AI or any other means) that is a cheap shortcut for an interpretation or conclusion. Those who are creating with AI should be extra cautious to avoid these kinds of emotional shortcuts.

R. Rex: Can you share a few examples of good ways to be sentimental?

K. S. Prior: One of the complicated examples that I talk about in the book is the American novel Uncle Tom’s Cabin by Harriet Beecher Stowe. Rightly or wrongly, this work is often credited with starting the Civil War that ended slavery, and that is a good thing. The novel did so by moving people’s emotions and helping them to be empathetic and sympathetic to African Americans and enslaved people. That is an example of sentimental art in which emotions helped to bring about an end to something that was evil. The complication is that it did so by trafficking in racist stereotypes and tropes that persist to this day. If we use those kinds of shortcuts, and use sentimentality to evoke sympathy and empathy that is not true to reality, it can come back to haunt us. When real people do not live up to the stereotypes that we have trafficked in, we might reject them or oppress them. So it can be complicated when we use emotions to accomplish what should be done simply because it is true and just.

The Role of AI in Religion

R. Rex: In your book, you mention A Christmas Carol by Charles Dickens several times and note that the conversion story is not real, but can help promote faith in others. Do you think AI technology can help create meaningful conversion narratives that help others have conversion experiences in the real world?

K. S. Prior: I think AI certainly could, if that were the intention. If AI can create stories that follow any narrative arc, which it can, conversion stories can be created too. But I think we will run into some of the same issues found with Uncle Tom’s Cabin: if the stories are just not human enough, they may cause us to think too broadly or too myopically, in ways that do not match reality. We must be cautious, because the pattern such stories set can be helpful only if it is also habituated to real life, which is always messy and lacks conclusions with neatly tied bows.

R. Rex: What role do you see AI technologies playing in religious communities in the future?

K. S. Prior: I am skeptical of any shortcuts that strip us of any of our humanity. I read an article in Christianity Today recently that considered the aggregation of theological or Biblical texts in databases such that an AI could formulate responses that would be useful for people. I think that is a possibility, and I remember that one of the takeaways from that article was transparency. This is a principle that the Church needs always, even apart from AI: citing the sources, even citing the bias or the slant. One of the early AI stories that I read in the realm of higher education was about a university using AI counselors when students would call for help or therapy. However, students did not know they were getting answers from a bot. The ethical issue at that point is not whether processing feelings with a bot may or may not be helpful, because maybe it is helpful, but the lack of transparency. This is an age in which the Church and its institutions are crumbling because of a uniform and pervasive lack of transparency in several areas not even related to AI. I would say the same principle of transparency needs to apply here. When we are using a tool, there needs to be transparency about what the tool is and is not supposed to do.

R. Rex: Do you have any advice for AI developers?

K. S. Prior: Because I am an English professor, and because I teach eighteenth- and nineteenth-century British literature, I cannot let this opportunity go by without encouraging everyone to read Frankenstein by Mary Shelley, which is considered the first work of science fiction. For anyone who has not read it, or has only seen the bad film adaptations, the novel is entirely different from the film versions. It is a deeply philosophical, theological, sociological, and anthropological reflection on what it means to be human, and on what it means to play the part of God in developing a technology that can escape our control and harm humanity. It raises the important question of whether we should do something just because we are capable of it. With AI the genie is out of the bottle; there is no going back. But we can, and must, still ask important questions, like “what are we using this for?” It is important not to confuse an end for the means, or the means for an end. I know that question will resonate, especially among people of faith. “Telos,” or purpose, what we are here for as human beings, is important to keep in mind. In answering that question, we can better answer questions about the technology we might use. We can consider how these technologies can help us, as human beings, to ultimately fulfill our purpose for being on this earth.


A big thanks to Dr. Karen Swallow Prior for her insights throughout this interview. To watch a video of our conversation, see this link.


[[1]] Prior, Karen Swallow. The Evangelical Imagination: How Stories, Images, and Metaphors Created a Culture in Crisis. Baker Books, 2023.

[[2]] Taylor, Charles. “Modern Social Imaginaries.” Public Culture 14.1 (2002): 91-124.

[[3]] McLuhan, Marshall. Understanding Media: The Extensions of Man. MIT Press, 1994.

[[4]] Stowe, Harriet Beecher. Uncle Tom’s Cabin. Ingram, Cooke, 1852.

[[5]] Graber, Adam. “Robot ‘Church Fathers’ Might Curate New Canons.” Christianity Today, 2023.

[[6]] Reardon, Sara. “AI Chatbots Could Help Provide Therapy, but Caution Is Needed.” Scientific American, 2023.

[[7]] Shelley, Mary. Frankenstein. Penguin Classics, 2012.

Robert Rex

Robert Rex is the Technology Tools Supervisor at the Online Teaching Center of the Missionary Training Center in Provo, Utah, where he works on global onboarding of technology tools, builds systems to evaluate performance, and leads research and development. He is also the president of student success at the Charlie Life and Leadership Academy, a 501(c)(3) non-profit committed to providing world-class leadership education to young adults. Robert graduated from Brigham Young University with a Bachelor of Science degree in Economics.
