Interview

An Interview with Wired’s Spiritual Advisor Meghan O’Gieblyn on Technology, Transhumanism, and the Afterlife

Meghan O’Gieblyn writes essays, features, and criticism for Harper’s Magazine, The New Yorker, Bookforum, n+1, The Point, The Believer, The Guardian, The New York Times, Paris Review Daily, and other publications. She is the recipient of three Pushcart Prizes. One of her essays was included in The Best American Essays 2017; another was a finalist for a 2019 National Magazine Award. Her first book, Interior States, won the 2018 Believer Book Award for nonfiction. She also writes an advice column for Wired. Her book God, Human, Animal, Machine was published by Doubleday in August 2021. Ms. O’Gieblyn is interviewed by AI and Faith Contributing Fellow Emily Wenger.

 

E: In your essay “The End” in Interior States, you write that “some of us hope that Silicon Valley visionaries will engineer an earthly utopia.” Some people think that tech innovations coming from Silicon Valley will lead us closer to utopia, yet reactions to the recent WSJ “Facebook Files” and other current events (e.g., election misinformation, COVID misinformation) indicate growing discontent with the side effects of tech innovation. What do you think the current outcry against Facebook and other tech giants means for this vision of a “technology-enabled utopia”? Will that vision change, end, or emerge unscathed?

M: There has definitely been a growing public awareness about how flawed, or even pernicious, some of these technologies can be. I’m thinking especially about social platforms and the spread of misinformation, as you mentioned, as well as how our data is being used. I think the aftermath of the 2016 election was a loss of innocence for many people, when revelations about fake news, election interference, and the Cambridge Analytica scandal really brought home just how high the stakes are. For several years leading up to that, I’d been teaching some essays, in my college writing course, about how Facebook used data analytics and conducted social experiments on its users. And my students each semester were largely blasé about it. They would say, “So what? I have nothing to hide.” People today seem much better informed about these problems, and there’s far more public awareness of the potential for misuse.

At the same time, I’m hesitant to say that we’ve become disillusioned or discontented with technology per se, given that so many of the proposed solutions to these problems are also technological, or involve simply improving the structure of these platforms. On one hand, it’s great that we’re demanding more accountability and transparency from these tech corporations. On the other hand, these demands often strengthen the belief that all our social and political problems as a nation are merely glitches in the system. If our divisions come down to how information is disseminated, then fixing democracy is just a matter of finding a better way to control the flow of information. This subtly reinforces the techno-utopian ambitions that led to some of these problems in the first place.

 

E: Our current obsession with technology as salvific is nothing new — humanity seems always to think the new thing will save us (cf. the Enlightenment, 19th-century progressive politics, etc.). Why do you think that is? What fuels our “myth of progress”?

M: I do think the larger modern idea of progress—as a linear narrative, moving forward in time—owes a lot to the Judeo-Christian view of history, particularly to the Christian anticipation of the eschaton. It seems like Western modernity has vacillated between believing, on one hand, that things are going to get better and better, and on the other hand that they’re going to get worse and worse, which really amounts to two different interpretations of the biblical prophecies, which include both apocalypse and the Millennial Kingdom. If you look at other cultures, like ancient Greece, there’s a far more pessimistic view of history, one that is largely circular—civilizations rise and civilizations fall. So it seems as though this linear, progressive understanding of time is not an innate part of human nature. It’s a story we’re telling ourselves.

Technology, over the last two centuries especially, has contributed to the major improvements we’ve experienced, and it’s also been the force behind a great deal of violence and destruction. So I think it’s natural that so many of these narratives about the future hinge on the role of new technologies. One thing that made Kurzweil’s prophecies so convincing to me when I first encountered them, and which made them seem scientific rather than merely speculative, was that they were rooted in principles like Moore’s Law, which holds that the processing capacities of computers increase at an exponential rate. What this means is that technological progress is not only inevitable, it’s accelerating. And once you believe that, essentially anything becomes possible. At some point, the arc of progress is going to extend into a vertical line, and this is when the so-called Singularity is supposed to happen, when computational evolution gives way to an intelligence explosion. I think that’s a compelling story, in part because it is extrapolated from supposedly empirical evidence (Moore’s law has a very official ring to it, as though it were a law of physics, but it’s actually been contested in recent years) and in part because it’s familiar to us from these older narratives.
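A minimal sketch of that arithmetic, assuming the commonly cited two-year doubling period (the exact period varies by formulation, and, as noted above, the law’s continued validity is contested):

$$C(t) = C_0 \cdot 2^{t/2}$$

Here $C_0$ stands for processing capacity today and $t$ for years from now. Under this assumption capacity grows roughly 32-fold per decade, since $2^{10/2} = 2^5 = 32$, so any fixed computational threshold, however large, is crossed in finite time. That is what makes the extrapolation feel like simple arithmetic rather than speculation.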

 

E: A lot of your work centers on the idea of transhumanism and the role it might play in bringing about a technologically-enabled utopia. What does such a technologically-enabled utopia look like? Is the end goal of transhumanism the perfection of the human or the replacement of it?

M: I think most transhumanists would say that the goal is to perfect the human, or perhaps to transcend the human—to use technology to help us evolve into a new species. One influence on the movement was the Renaissance philosopher Giovanni Pico della Mirandola, who believed that humans can ascend the chain of being, can climb higher than the angels, a little closer to God. Transhumanists themselves have long insisted that their movement is a natural outgrowth of the humanist tradition. But if you read them more closely when they write about things like digital immortality, mind uploading, or resurrecting the dead, what they’re talking about is essentially duplicating the patterns of human consciousness in software. And many of them admit that there’s no guarantee that this will produce anything like subjective consciousness. If you copy a person’s brain perfectly in digital form, you might have an artificial intelligence that talks and acts exactly like that person, but there will be nothing going on between the ears. And the person whose consciousness was duplicated will be dead. So in that sense, what they’re working toward is more a replacement of humanity—or the seeding of some future technological race.

What’s interesting is that this question about the preservation of identity is precisely what preoccupied the early Church Fathers in their treatises on the Resurrection. One problem that Tertullian and Origen and others kept returning to is: if you reassemble all the parts of a corpse into a glorified body, will it be the same person—with her memories, her sense of self? Or will it be a new person? This question led to all sorts of crazy thought experiments about where, exactly, selfhood begins and ends. Would God have to resurrect the fingernails and hair we’d lost across the course of our lives in order to keep our identity intact? These theologians were basically trying to figure out how a person’s subjectivity could be preserved across that great leap into eternity.

 

E: Being rather unfamiliar with transhumanism myself, I have a few more questions for you on the subject. Is transhumanism a religion? What roles can/should faith communities play in discussions of transhumanism?

M: Most transhumanists are not religious. A good portion of them are atheists, though there are some budding transhumanist religious groups, like the Christian Transhumanist Association and the Mormon Transhumanist Association. I’ve spent some time talking to Christian transhumanists and it’s an intriguing movement. They’re basically trying to start a conversation about whether things like AI or genetic engineering can have a role in the biblical call to transform and renew creation. I don’t always agree with their ideas, but I admire people who are thinking through these questions and considering how to reconcile them with religious belief.

I suppose I’m more skeptical of those utopian thinkers who insist that they are stone-cold materialists and yet persist in believing that they are going to live forever in the cloud, or that their mind is going to merge with the universe in some kind of digital Parousia. Max Weber, the German sociologist, wrote in a 1917 lecture about the modern impulse to look to science and technology to fulfill those transcendent longings that religion once satisfied in our culture. He predicted the rise of “academic prophets,” visionaries who would present scientific ideas as a new form of revelation. To me, that was a prescient observation, maybe even a premonition of the Silicon Valley luminary who promises that new technologies will solve our moral and ethical problems through social engineering. Weber was very skeptical of this tendency and thought that people were better off turning to traditional religious communities to satisfy those desires. In other words, if you want a religious experience, go to church. Don’t look for transcendence in the lab or the lecture hall—or on the stages of TED Talks, for that matter.

 

E: I have just one more question for you. What questions do you think tech users should be asking themselves as AI tools become more integral to everyday life? What questions should technologists (i.e., those making the tech) be asking as they bring AI into everyday life?

M: I’d hope we could eventually have a more far-sighted discussion about what constitutes human flourishing and what specifically we want technology to do for us going forward, as opposed to the more short-sighted focus on convenience and profitability that currently dominates conversations about new technologies. Maybe that will depend on establishing a healthier dialogue between technologists and laypeople, or encouraging the everyday user to do the research on the products they’re using: where is your data going? Which technologies are tracking you, and how? This is becoming increasingly difficult because the technologies are so complex, and it’s very hard for the average person to understand how they function. In fact, given the rise of black box technologies like deep learning algorithms, many of the people making the technologies can’t even explain how they work, or what kinds of inferences they’re making from the data they’re fed. It seems like we’re at a crucial juncture where we have to decide whether we want to continue down this path of creating technologies we don’t fully understand and can’t fully control. Which comes back to that question of what we ultimately want from technology. Do we want tools, or are we looking for some kind of oracle, or a form of digital omniscience?

It’s clear that the technologists themselves are thinking through these questions more thoroughly, but it’s hard at times to know whether the larger organizations and corporate interests they work for have our best interests in mind, or whether they’re just thinking about their bottom line. This isn’t a question of motives so much as it is a structural issue. Even idealistic or altruistic impulses get subsumed into this drive for competition and profit. I’m thinking about what happened with OpenAI, which started out as a nonprofit research lab that was devoted to creating a safe path to Artificial General Intelligence that “benefits all of humanity,” but then privatized in order to remain competitive—I think that was the official explanation. It’s great that these corporations pay lip service to ethics and higher ideals. But the evidence suggests that when those ethics come into conflict with profits or threaten their competitive edge, these latter motivations win out.


Emily Wenger is pursuing a PhD in computer science at the University of Chicago with a particular emphasis on machine learning and privacy. Her research explores the limitations, vulnerabilities, and privacy implications of neural networks. Emily worked for two years as a mathematician at the US Department of Defense before beginning her PhD studies in 2018. She graduated from Wheaton College in 2016 with a degree in mathematics and physics.
