In March of 2023, I attended a data science conference. As one might imagine, the dominant topic was not how to strengthen machine learning infrastructure or squeeze more predictive performance out of a forecasting model. At the time, ChatGPT had recently altered the course of not only my field but society at large. Conjecture about the future preoccupied the sessions and the chatter among attendees. I could not help but wonder, “What are we in for… what am I in for?”
On the plane trip back from the conference, my wondering morphed into ruminating. What if we adapt to AI instead of the other way around? What if we make things worse while trying to make them better? What if we do not ask the right questions as we develop increasingly powerful systems?
My pondering turned into action: I decided to write a book. I wanted to scare myself into imagining a future created by a slippery slope no one would want. I envisioned a world where technological progress and humanness fall out of balance. I titled it Again Awake, inspired by the slumbering state technology can lull us into. The storyline follows a young man’s search for truth in a society where he feels restless and alone.
“John has become disenfranchised by the AI-driven world in which he lives, a culture that has stopped measuring time and where geography is irrelevant. One day, he discovers family artifacts that lead him on an unexpected journey. His exploration unveils revelations about the state of society and puts him in a position he never imagined. Amidst this backdrop, he must grapple with the ever-present loss of his father and navigate newfound love.”
My Christian faith inspired many of the ideas I wanted to explore in the novel. As C.S. Lewis once aptly said, “I believe in Christianity as I believe that the sun has risen: not only because I see it, but because by it I see everything else.”
The main theme of the piece centers on the nature of humanity as viewed from scientific and spiritual perspectives. Is man a box we need to make transparent and replicate in silicon? Did God simply create us from an algorithm? In my view, answering “yes” to such questions produces societal consequences. I do not believe we will be able to fully “figure ourselves out”; instead, we will always have to take a leap of faith since we are not God. The following excerpt from Again Awake summarizes my thoughts succinctly.
“The fear-mongers were afraid that AI would turn into super-humans that would destroy us. The real risk was in only being able to partially recreate the human. Intelligence without empathy. Creativity without kindness. Insight without humility. The human traits that are buried deep into our soul and that we can’t explain keep our intelligence and other behaviors in check. Without this set of counterweights, things begin to fall out of balance.”
An extremely blurry line between physical and digital realities is a cornerstone of the Again Awake backdrop. Though the story takes place in a fictional world, I clearly observe in the present day how we serve AI and technology in ways we may not realize. Perhaps the starkest example of our servitude to AI takes place on social media: we have altered our lives to chase more likes and comments. As stated in Again Awake:
“What’s more, society began to cater to technology and AI, not the other way around. Artists began to work in a way that would fit well with how the AI art assistants operated. Writers focused on authoring instructions to AI rather than penning original text. Programmers began to select challenges that only their AI coding assistants could solve. We began serving the technology, not the other way around. On the surface, it appears AI is catering to us – but the opposite is true. We have adapted to it, choosing a society that can easily be served by AI, technology, and automation rather than fighting for more challenging but worthwhile lifestyles.”
Since completing the book in 2023, I have become more convinced that substantial societal risks exist with the current trajectory of AI, regardless of whether we invent superintelligence. The prevailing discussions on AI alignment focus on how to wrangle a superintelligence. Yet alignment is needed for the technology we have now. This is not a call for centralization, but rather an appeal for responsible development at large. To paraphrase an article I penned in December 2024:
“When it comes to superintelligent AI, some contradictions seem clear. Let’s take the paper clip optimizer as an example. Would something (intentional word choice) with actual intelligence act so myopically, thinking that only making paperclips is the goal of existence? A system that is capable, but not intelligent, could err in such a way. However, I would assert that blindly optimizing an objective function is not the path a truly intelligent ‘something’ would take. A capable ‘approximate retrieval machine’ might act in such a manner, though. In this way, capable AI is perhaps more frightening than superintelligent AI.”
In my estimation, we are on the road to super-capable AI, not superintelligent AI. The most compelling evidence might be this thought experiment: suppose one trained a large language model (LLM) only on data from before 1950 and asked what inventions of the past 75 years the model could predict. I do not believe the results would be encouraging, and I suspect strong proponents of current AI would agree. However, that is not to say LLMs (and other GenAI tools) are not useful and powerful. I would classify them as “capable” rather than “intelligent”. This paradigm still carries the potential risks outlined in the story: we adapt our lives and societies to a “half-baked” AI that we think is “fully baked”.
In my expedition to better understand the intersection of AI and faith, I have cultivated a deeper appreciation for God’s handiwork. The questions around AI – intelligence, consciousness, ethics – are ultimately inquiries into who we are and our purpose. I do not believe humans can be described in a formula and replicated in an algorithm. Our existence is computationally irreducible – we get the opportunity to exercise free will in a world where God creates wonder to keep us seeking Him, even if it is not universally recognized in such terms.
In Ecclesiastes 3:11, we read “He has made everything beautiful in its time. He has also set eternity in the human heart; yet no one can fathom what God has done from beginning to end.” (NIV). The eternity set in our heart is why we have developed AI. If we understand from where this longing springs, we will be better positioned to understand the technology for what it is: an earthly tool.
You can read a free version of Again Awake here.
Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.