Tracy Kidder’s Pulitzer Prize–winning book, The Soul of a New Machine, came to mind recently as I read an article by Olivia Goldhill provocatively titled “Can Machines Be Spiritual?” Implicitly in Kidder’s book, and explicitly in Goldhill’s piece, the issue squarely on the table is whether there is any real difference between humans and (very smart) machines.
Of course, deep down this is the fundamental issue lurking around the edges of almost every piece on AI — whether the breathless ‘full-speed-ahead’ pieces, the ‘we’re all doomed’ pieces, or the ‘just right’ Goldilocks pieces in the middle. In fact, part of the extraordinary appeal and controversy of AI is how much it brings to the fore the question of what it means to be human and, more pointedly, whether there is anything truly special about humankind.
Christianity, Judaism, and parts of Islam have always had a ‘high view’ of the human race — a conviction that humans are, uniquely, made in ‘the image and likeness of God’ (imago Dei). These traditions believe that humans reflect the character and abilities of God (sort of) like the moon reflects the sun. After the Fall, that divine reflection was compromised. Nevertheless, humans continue to manifest a divine spark that makes them — unlike all the rest of created nature — unique, and uniquely valuable. In fact, the entire arena of human rights, which has played such a significant role in the forward progress of humankind, derives from this imago Dei understanding of the intrinsic significance of each and every human being.
AI seems to pose real challenges to this view. In particular, AI development continues to make machines proficient, even superior, at more and more of what had previously been considered specific to human beings. Projecting that trajectory forward raises the question of whether machines will eventually become a higher-grade (more-proficient) version of humans.
Plenty of AI experts are materialists and, therefore, believe such an outcome is inevitable. In their view, physical beings and physical machines are not meaningfully distinct. Both are programmed combinations of hardware and software — ‘thinking machines,’ if you will, nothing more. Daniel Dennett of Tufts University, one of the pre-eminent professors of mind, wrote in a paper on AI consciousness:
“The best reason for believing that robots might someday become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection, and operating according to the same well-understood principles that govern all the other physical processes in living things: digestive and metabolic processes, self-repair and reproductive processes, for instance (emphasis in the original).”
Robert Geraci, a religious studies professor at Manhattan College and a speaker at Stanford University’s recent conference on Apocalyptic AI, similarly believes that humans are formed entirely of physical matter — making it entirely plausible, in his view, to build robots with every human capability. Even so, he acknowledges that “there’s a lot of unprovable guesswork in there,” and concedes that human consciousness may consist of something non-physical and, therefore, something that cannot be (re)created.
In her summary, Goldhill concludes: “That, in turn, again raises the possibility that humans are the only things in existence capable of spirituality. But, without definitive proof either way, a materialist philosophy is just as valid.” In other words, neither the spiritual nor the materialist view of life and humans can be proved; therefore, they are equally valid, and there are no good reasons to prefer one view over the other.
Not so fast, Olivia. Both the spiritual and the materialist views may be unprovable. Nevertheless, very good reasons argue for one over the other, especially this one: no one actually lives as a materialist, not even the materialists.
Consider a materialist biochemist who dispassionately and objectively studies human emotions and concludes they are simply the result of biochemical reactions in the brain. As a result, she judges that emotions have no real significance; they’re just chemistry. Except that at the moment she tucks her young daughter into bed at night and feels that incredible sense of love and devotion welling up inside, she absolutely does not believe she is experiencing a mere chemical reaction lacking significance or meaning. Instead, even committed (intellectual) materialists believe that, with respect to their daughters, love is real, transcendent, meaningful . . . true.
Similarly, a materialist necessarily concludes that there is no such thing as right or wrong, moral or immoral. There are only biochemically generated perceptions of rightness or morality. And yet none of us lives that way. We all, materialists and people of faith alike, believe that right and wrong, moral and immoral, are real and that they matter. Otherwise, there is no meaning; there is only nihilism.
Fortunately, (almost) none of us actually lives as a nihilist. Instead, moment by moment, decision by decision, our actions confirm that we live in a world of meaning — where love, joy, hope, friendship, purpose (and tragedy) all exist. We live, and act, knowing they’re all true.
AI may well produce machines that mimic, or surpass, a great many human capabilities. But humans are more than a bundle of proficiencies, or of biochemical reactions. They are much more than programmed hardware and software. Tracy Kidder may have (sort of) imagined that if computers became smart enough, they would develop souls. But as Joyce Kilmer concluded with regard to trees, only God can make a soul. And only humans can reflect that extraordinary truth.
Tim has been a longtime business leader in commercial real estate and, more recently, a speaker and author in numerous venues on the integration of faith and work and on better models for responsible business. Tim is a Red Sox fan from Boston, where he graduated from Harvard.
In lieu of a comments section, we invite you to submit your comments and questions through our contact form.