
Review: 2084: Artificial Intelligence and the Future of Humanity by John C Lennox (2020)

John Lennox is an emeritus professor of mathematics at Oxford University who often ventures into the intersection of faith and technology. In this book Lennox offers an overview of AI in order to establish a framework for discussion, then explores the promises and dangers of the technology.

These topics are hardly unique to this book, and they are not where Lennox offers a fresh perspective. Lennox loves to delve into how Christianity has a unique contribution to make regarding technology and the future of humanity. The best parts of this book aren’t Lennox discussing the technology specifically, but rather engaging with the hopes many pin on AI regarding “transhumanism.” In total, Lennox’s book is a welcome addition, as he brings a credible voice from a distinctly Christian worldview to the conversation.

An Accurate Assessment of the Dangers

Before getting to the more philosophical and religious implications, Lennox does a good job both of setting up the conversation (What is AI? What is the difference between Artificial Intelligence and Artificial General Intelligence?) and of describing the dangers the technology itself poses.

The latter might come as a welcome addition to the conversation for practitioners, many of whom feel the discussion too often veers into fanciful, Skynet-style speculation about AI. Not so with Lennox.

He touches on the main, well-defined ethical questions such as the effect on work (job displacement), privacy (data collection), and weaponry. But even when he ventures into the unknown worries that might come as a result of advanced AGI, he frames them (in my opinion) correctly, in an intelligent and thoughtful way. Lennox agrees with Stephen Hawking that “The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble.” (p.49, quoting Hawking, ‘Brief Answers to the Big Questions’)

For those outside the profession, this is (in my experience) the exact thing that concerns practitioners. As UC Berkeley Professor Stuart Russell said, “The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions.” An AI does not have to be conscious to cause trouble. A nuclear bomb can kill you; it doesn’t need to hate you or have an inner life of its own.

Lennox’s framing helps not only correctly identify the dangers, but also where to look for them. Human hubris and a rush to create first can often supersede legitimate concerns and cause them to be brushed aside. If developing a “safe” AGI comes only a few years after an “unsafe” AGI, that’s still a big problem. While he does also touch on the obvious dangers that arise from advanced AI (e.g. state-level spying on citizenry), the danger of a complex system making decisions that may or may not line up with our own values (the value alignment problem) gets far too little attention in popular books on the subject. I was glad to see Lennox introduce this concept to readers who may not have thought of the topic in this way.

Theological Critiques

Lennox also takes aim at what he sees as the ultimately fruitless, misguided, and dangerous goals of the “transhumanists”. These are folks who see AI as the future of humanity, as a way for humans to overcome our “limitations”, among which are our own mortality and limited understanding. Through AI, might we achieve immortality and superintelligence, if not omniscience? Might we not be as gods? Such goals are explicitly those of many atheists involved in the transhumanist project, a part of the natural outworking of their naturalist project and a rejection of theism.

Lennox points out that for the Christian, these are misguided aims. Our current scientific understanding is built upon a theistic view of the world; thus, rejecting this perspective is sawing off the branch upon which we sit. Seeking to fulfil such aims will likely reduce, not improve, our understanding of ourselves and the world (so much for omniscience). But further, if the Christian is correct, attempts to conquer death are not only misguided but ultimately futile.

Further still, the transhumanist, in Lennox’s view, skips a crucial question that people of faith take as foundational: what does it mean to be human? What are we? What are we here for? For the Christian, what implication does the fact that God became a man, was resurrected in a body as a man, and one day will return as a man have for us as we try to make gods of ourselves? Were we not made complete and unified with God, while also retaining our humanity, through Jesus Christ? Why are we trying to create a homo deus when we already have the God-man? Is this not an inversion of the Christian story, a cruel and ultimately facile parody of it? Often the biggest lies we encounter aren’t total falsehoods, but perversions of a deep truth. Lennox forcefully argues that this inversion is one such lie.

Unanswered Questions

One item I hope Lennox delves into in the future, which wasn’t well addressed in this book, is the positive theological aspects of AI technology. We were made to create, were we not? Death is a wage of sin, is it not? What role does our belief in what life on earth was meant to be play in our own creative endeavors and the work we do? Fighting death, ultimately futile as it may be, is a part of what we should do, no? We fight the thorns and thistles that arose from the fall — does AI not have a role to play? Bringing in the theological concepts around the Cultural Mandate would further flesh out just how much people of faith have to say on the topic of AI.


All in all, Lennox provides a solid foundation for the Christian to engage with the ethical and philosophical issues of AI. While fairly high-level, his descriptions of the scientific and technical issues are a good starting point. His critiques of the secular naturalist approach to the topic highlight just how much people of faith are needed to make sense of the problems and see the ethical issues clearly. His theological prodding illustrates how much Christianity has to contribute to the discussion, especially as it relates to who we are and what we should, and shouldn’t, be looking for AI to do for us. I recommend this book to anyone looking for a solid, easy-to-read, fair-minded representation of how to approach the topic as a Christian.

Tripp Parker, a Founding Expert of AI and Faith, is an applied AI leader currently working on machine learning in the FinTech space for SoFi. Previously he spent over 10 years at Amazon and Microsoft working on AI applications in advertising and healthcare. He holds degrees in Philosophy, Electrical and Computer Engineering, and Computer Science from Duke University.
