
The Leveraging Power of Wise and Hopeful AI Stories

The word “storytelling” reminds us of dinner parties and campfires. But in fact, we cannot not tell stories – it’s who we are. Right now, meta-stories about AI as a savior or a monster are locked in fierce competition. My open question is this: how can we harness the power of story within our AI ethics discussions to share the wisdom and hope fundamental to our faith traditions, especially (for me) Christianity?

Storytelling Is Not Optional

In his 2012 book The Storytelling Animal: How Stories Make Us Human, Jonathan Gottschall suggests that storytelling is not optional. Consider the stories we tell only to ourselves – daydreams and night dreams. Gottschall points out that scientific studies “suggest that an average daydream is about 14 seconds long and that we have about 2,000 of them per day. In other words, we spend about half of our waking hours – one-third of our lives on earth – spinning fantasies.” 1 At night, estimated conservatively, “we dream in a vivid and storylike way for about two hours per night, which comes to 51,000 hours over an ordinary life span, or about six solid years of non-stop dreaming.” 2

Add in many more hours per day of stories told by others in magazines, books, movies, advertisements, podcasts, online features, and videos, and it becomes apparent that stories permeate every aspect of our life. Why are stories so pervasive? Gottschall argues that “stories give us pleasure and instruction. They simulate worlds so we can live better in this one. They help bind us into communities and define us as cultures.”

Our Most Common Stories Revolve Around Terror and Hope

If “fiction is an ancient virtual reality technology that specializes in simulating human problems” so we can practice navigating them, as Gottschall suggests, 3 it is not surprising that what we choose to watch or read reflects our aspirations and fears. Most of us prefer a happy ending to a tragedy.

By contrast, the stories where we have no choice – our dreams – are heavily weighted toward threats over pleasures. Again from The Storytelling Animal, researchers estimate we “experience about 1,700 threatening REM dream episodes per year, or almost 5 per night.” Dreams seem mostly to be “the stage on which bad things are auditioned.” 4

Christopher Booker’s monumental life work, The Seven Basic Plots: Why We Tell Stories, bears this out. Booker’s exhaustive analysis of hundreds of stories told over thousands of years supports the claim of Jungian psychology that we only tell variations of a few deeply rooted stories, most of which involve epic challenges and opportunities.

Booker’s oldest and deepest plot line is “Overcoming the Monster”. Examples include the ancient Mesopotamian Epic of Gilgamesh, the Old Testament story of David and Goliath, and Harry Potter and Voldemort. In this plot line, the hero glimpses the monster from a distance and experiences a “call” to confront it; moves through a comfortable “dream” stage of preparation and confidence into a stage of great fear and frustration as the monster’s power and the hero’s relative helplessness are revealed; and enters a nightmare stage of a climactic battle in which all seems lost until a dramatic reversal leads to the death of the monster and the reaping of rich rewards for the hero. 5

Booker’s second archetypal plot is the Quest. Learning of some priceless goal, the hero must embark on a long, hazardous journey, overcoming perils and diversions in an overriding imperative to achieve the goal. The plot resolves with the treasure triumphantly secured. 6

If we see AI as a monster to be overcome, advances in Generative AI in the last six months may place us somewhere between the confident “dream stage” and the fearful “frustration stage”. If we see AI as a treasure, we are just beginning our Quest. Underlying both plot lines is the now-ubiquitous question: will this story end happily, tragically, or somewhere in between?

The AI Story War

For a dramatic face-off between competing epic AI stories, consider the back-to-back essays of renowned venture capitalist Marc Andreessen and environmental-activist-turned-AI-prophet Paul Kingsnorth, published in July on The Free Press website.

Andreessen’s AI Will Save the World argues that if only we leave AI’s creators alone, AI will create unprecedented advancements in quality of life and opportunities in every sphere of human endeavor. Andreessen draws a metaphor from alcohol prohibition in the United States (1920-1933), likening anyone pessimistic about AI to the naïve “Baptists” who only created business opportunities for unprincipled “bootleggers” to seize upon.

By contrast, Kingsnorth’s essay Rage Against the Machine contends that AI poses both a material and a spiritual crisis for mankind. Materialist ethicists like Tristan Harris follow a similar trajectory, but Kingsnorth says, “I find I can understand this story better by stepping outside the limiting prism of modern materialism and reverting to premodern (sometimes called “religious” or even “superstitious”) patterns of thinking. Once we do that—once we start to think like our ancestors—we begin to see what those dimensions may be, and why our ancestors told so many stories about them.”

Kingsnorth’s plot line is “Overcoming the Monster,” while Andreessen’s embodies a Quest. The heroes in Andreessen’s Quest are not the ethicists but the “legions of engineers . . . involved in the creation of the ideas behind AI” who “are working to make AI a reality, against a wall of fear-mongering and doomerism that is attempting to paint them as reckless villains.” To the extent that problems arise, Andreessen argues that no new laws are needed, since the only feasible solutions will be technological ones.

Kingsnorth’s partial solution for overcoming the AI monster is practicing the spiritual response of technological ascesis – self-discipline or self-denial – in our own choices: “of drawing a line, and saying: no further. It is necessary to pass any technologies you do use through a sieve of critical judgment. To ask the right questions. What—or who—do they ultimately serve? Humanity or the Machine? Nature or the technium? God or His adversary? Everything you touch should be interrogated in this way.”

I find Kingsnorth’s story more compelling than Andreessen’s, but also too limited in its response. I would expand it with a third story, told by our own AI&F Adviser Michael Paulus in his short but wide-ranging new book, Artificial Intelligence and the Apocalyptic Imagination: Artificial Agency and Human Hope. 7 Michael’s story draws from the rich pool of Jewish and early Christian apocalyptic literature, culminating in the Book of Revelation.

Far from an end-times disaster epic, Paulus writes – citing the theologian N.T. Wright – that the Christian apocalyptic imagination “opens up a vision of new creation which precisely overlaps with, and radically transforms, the present creation.” 8 This vision can transform our thinking about and use of technology in three ways:

  • affirming an “ethical minimum” for assessing the impacts of technologies, especially concerning political, economic, and social justice;
  • pointing to “strategies and structures for resisting and reforming unjust systems and technologies”; and
  • helping us “imagine a better world that is not only a future promise but an emerging present actuality.” 9

Kingsnorth’s story calls for a wise interrogation of AI technologies and self-discipline in our engagement with them. Paulus’s story adds an element of strong hope grounded in systematic analysis and imagination – not just for a future heaven, but for a restored present creation.

Beyond these foundational stories in books and online essays, faith leaders are leveraging the power of story in other ways – from high-level institutional statements like the Vatican’s Rome Call for AI Ethics and the Southern Baptist Convention’s recent AI Resolution, to sermons across a wide array of churches, synagogues, and other places of worship.

I recently discussed plans for preaching on AI with several Christian mainline denominational pastors and a rabbi who are Advisors for AI and Faith. All agreed that the principal through-line for their sermons ought to be hope in the face of this uncertain opportunity and threat. Here is how Episcopal Rector Doyt Conn put it in the last of his four-sermon series in May:

“It is because of the emergence of artificial intelligence that we have been provoked to consider what it means to be human…which has caused us to examine our humanity, physical and metaphysical, including heart and soul. And in doing so we remember who we are, how we were made, and why we were made–to be priests and sovereigns upon the earth.”

“Priests and sovereigns” draws on St. Peter’s imagery in 1 Peter 2, as well as evoking heroic figures from works such as the Chronicles of Narnia by C.S. Lewis and The Lord of the Rings by J.R.R. Tolkien.

Finally, this leads to our own personal choices about how we will engage with AI. Whether we see our journey as primarily a battle to overcome the AI monster, a quest to channel AI for the restoration of creation, or a mix of both, the Apostle Paul assures us that while we may endure hardship we will find a happy ending: “Affliction produces endurance, and endurance produces character, and character produces hope, and hope does not put us to shame, because God’s love has been poured into our hearts through the Holy Spirit that has been given to us” (Romans 5:3-5).

Since we cannot help but tell stories, and because we have some choice in the stories we tell, let us tell wise stories about AI, grounded in the hope embodied in the ancient wisdom of our faith – and so make our voices heard together in this epochal moment in which we are privileged to live.

  1. Jonathan Gottschall, The Storytelling Animal: How Stories Make Us Human, (Houghton Mifflin Harcourt 2012) at 11.

  2. Id. at 83.

  3. Id. at 59.

  4. Id. at 82.

  5. Christopher Booker, The Seven Basic Plots: Why We Tell Stories (Bloomsbury Books 2005) at 48.

  6. Id. at 69.

  7. Michael J. Paulus, Jr., Artificial Intelligence and the Apocalyptic Imagination: Artificial Agency and Human Hope (Cascade Books 2023).

  8. Id. at 61.

  9. Id. at 72.

David Brenner

David currently serves as the board chair of AI and Faith. For 35 years he practiced law in Seattle and Washington DC, primarily counseling clients and litigating claims related to technology, risk management and insurance. He is a graduate of Stanford University and UC Berkeley’s Law School.
