Today, we feature an interview with Andy Crouch, an AI & Faith advisor and well-known thought leader on topics regarding technology and culture. Andy is the author of The Life We’re Looking For (2022), The Tech-Wise Family (2017), Strong and Weak (2016), Playing God (2013) and Culture Making (2008). Andy was executive editor of Christianity Today from 2012 to 2016 and served the John Templeton Foundation as senior strategist for communication in 2017. He serves on the governing board of InterVarsity Christian Fellowship. His work and writing have been featured in the New York Times, the Wall Street Journal and Time—and, most importantly, he received a shout-out in Lecrae’s 2014 single “Non-Fiction.” Andy studied classics at Cornell University and received an M.Div. summa cum laude from Boston University School of Theology.
AI&Faith: Andy, thanks for taking the time to chat with us. Let’s talk about AI, faith, and culture, as you’ve written extensively on these subjects in your books The Life We’re Looking For and The Tech-Wise Family. What’s your take on recent advances in AI like Chat-GPT and Stable Diffusion? How do you think they will affect our culture long-term?
Andy Crouch: Here’s my take: we as a species are on a quest to do magic. We have this hunch that the world could be made to do things with effortless power. Modern technology represents, in some ways, the most successful attempt we have yet seen at getting the world to do magic on our behalf.
This is a very ancient and perennial quest. Over and over, we find ourselves on the cusp of getting the world to do the magic we think it can do. There’s a phase of elation as it feels like we’re just about to unlock the secret. And then it turns out – every single time so far – that magic, as we conceive it in our imaginations, actually doesn’t exist.
In The Life We’re Looking For, I trace this back in the Western world to alchemy. Alchemy is intertwined with the investigation of the natural world that led to today’s “natural sciences”. Empirically speaking, of course, alchemy is a dead end. There is no philosopher’s stone that will turn every metal into gold or give you immortal, disembodied life. Those were the two ideals that animated the alchemists’ quests. But while the science doesn’t work out, I would say those dreams are actually very much alive.
Now, alchemy led us to chemistry, which produces real usable information about the properties of the natural world. But chemistry is not magic. And we still want to do magic.
AI&Faith: How do you see this “quest for magic” in the development of AI tools today?
AC: The initial reaction is that this is magic. We need to remind ourselves that humans have had this same reaction to every technology so far, from flight to electricity. Long term, I think what we’re going to discover is that AI is, in fact, not magic, but rather that it is useful in certain limited domains. It is not the transformative singularity we dream of, the one that fundamentally alters human existence.
I think the rapid adoption has led to people very quickly discovering that these models essentially just regurgitate the text they have ingested. People are quickly running up against AI’s limits and realizing it is not a general-purpose smart machine, let alone a wisdom machine. I call this the “boring robots” phenomenon. Robots are always exciting before they arrive. When I first ordered a robot vacuum cleaner, it seemed like it was going to change my life! But after three weeks with the robot vacuum cleaner, I realized that all I have is a marginally more effective way to clean my house, not a magical transformation. I think this particular round of AI innovation is going to have the same result.
AI&Faith: Technology never seems to give us the magic we want, but we as a species keep hoping it will. Do you think we humans will ever stop thinking that we can make magic through new technology?
AC: I doubt we’re going to stop our human quest for magic anytime soon. We are always going to think that the magic is just around the corner beyond the next breakthrough.
AI&Faith: All right, last question. One interesting characteristic of AI tools is that they are becoming, in many ways, compulsory. For example, if I choose to be on a social media platform, I have to engage with that platform’s AI recommender system. What is your perspective on the compulsory nature of AI?
AC: I’ve been playing around with calling this the “innovation bargain.” I believe that every technological advancement has four components. The first two are the “good news”: (1) “now you can” and (2) “you’ll no longer have to”. These represent the expanded capability and reduced burdens offered by these tools.
But the tools always come with components three and four as well, which tend not to be mentioned in the marketing materials: (3) “you’ll no longer be able to” and (4) “now you’ll have to”. I think these are the compulsory qualities you have identified. These represent the enforced behavior and diminished capability that result from the use of these tools.
Now, I think that every single human invention is a package of all four components to some extent. But certain inventions come with a very heavy compulsory component, or come at the cost of a dramatic diminishment of human capabilities. Not all things are equally disabling or coercive, but some of them are profoundly so, particularly when they get scaled up into huge systems (like Facebook). If Facebook is your way to stay in touch with your friends, and Facebook decides to tweak their algorithms, you’re stuck with it because you bought into that platform.
We should absolutely be alert to the ways algorithms can deplete our freedom. We have to. But this can be very hard to gauge, because the four components of “the innovation bargain” are not disclosed at the beginning – it’s all one and two, and we never talk about three and four.
AI&Faith: And sometimes three and four are not known when the technology is deployed! These technologies are rolled out at such a speed that they’re often unvetted. And then two years later, we realize, “Oh, this is really bad.”
AC: Exactly. A big question for me is: “why do these things get deployed, especially at scale?” When some very large-scale entity holds out the promises “now you’ll be able to” and “you’ll no longer have to”, it’s easy to sign up for because it sounds so good. But why do they deploy it? Craig Gay, a philosopher at Regent College, is really helpful here. He argues that very little technology is deployed for the primary purpose of assisting ordinary, embodied human existence, which is the life all of us are in fact living. Instead technology is deployed largely on the basis of generating economic profit. But there’s not always an economic profit to be found in assisting people to love God and neighbor with heart, soul, mind, and strength.
AI&Faith: Wow, that is thought-provoking. Any final points you’d like to make?
AC: What I’m trying to encourage, at least within my own faith community, is discernment about what bargains we are making and then a movement of resistance against the Mammon-driven deployment of technologies that unduly diminish human capabilities. We have to take a stand and say, I’m not going to let you dictate the terms on which I’m human. That, to me, is the great challenge of our moment.
AI&Faith: Thank you for sharing your time and wisdom with us.