
Book Review: “A Human Algorithm: How Artificial Intelligence is Redefining Who We Are” by Flynn Coleman

Oppenheimer won seven Oscars at the 96th Academy Awards ceremony. The real Dr. Oppenheimer named the first test of the atomic bomb in 1945 after John Donne’s invocation of the Christian God: “Batter my heart, three-person’d God.” Trinity was its name. Witnessing the Trinity test, he recalled a haunting passage from the Bhagavad Gita, a Hindu scripture: “Now I am become Death, the destroyer of worlds.”

The Trinity Test symbolized the profound ethical dilemmas, including those of science and theology, and the existential responsibility faced by scientists at the dawn of that new technology. The Trinity Test was conducted in a remote desert by a small number of people locked away in isolation. By contrast, we are all contributing to the advancement of artificial intelligence (AI), all over the world. Most people live through the Internet of Things (IoT) with little thought for a moral compass or a conscience. The theologically minded among us pause to reflect. If the blueprint for future generations is to be (re)defined by AI, would we still be human beings? If we build machines in our image, how do we infuse AI with a morality as good as (humanly) possible? If we believe that human traits can be algorithmically changed using artificial general intelligence (AGI), what would this achievement aim to address? The elimination of human suffering? The elimination of physical death? The pursuit of human happiness? In relation to other human beings, would the further progress of AI and AGI help us become more just, inclusive, compassionate, and empathetic persons? And what of our relationship to our Creator? Would AGI developers be elevated to creator status? Are we playing God, as many have attempted over the millennia, in the pursuit of the advancement and enhancement of AI?

I recently revisited the arguments of Flynn Coleman’s 2019 book, A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are. Coleman begins with chapters on the history and science of technology, acknowledging that the age of intelligent machines is part of the continuous revolution of our human history. She states that the challenge is to “preserve, protect and expand our humanity in tandem with scientific achievement” (p. 46), emphasizing the need to instill ethics and morals into AI systems so that human rights, empathy, and equity become fundamental principles of AI development. Coleman makes limited reference to religion and theology, placing relative weight on reason and science; she states that “we all have a profound responsibility: to map a human algorithm, one that encompasses who we are and who we want to become” (p. 235). Religious reflections on humanity, morality, theology, and tradition, as a source of critical knowledge and truths for humans, could have enriched her arguments. A renowned human rights lawyer, Coleman emphasizes the need for robust laws, policies, and oversight mechanisms to ensure AI safety. However, history has proven repeatedly that oversight by human-made laws and regulations alone has serious limitations; countless immoral acts have been conducted legally. Given the influence of theology and of religious and moral traditions on the development of law, economics, sociology, and political science, I would argue that further examination of this heavy reliance on reason and science is now even more important in AI ethics discussions.

The promises and paradoxes of AI technologies and algorithmic optimization present many ethical and moral questions that intersect with theology. Narrow AI, or weak AI, is already ubiquitous in our daily lives, without much theological or religious reflection: digital voice assistants, e-commerce, internet search engines, autonomous vehicles, drones, and facial and image recognition technologies are examples. With the rise of artificial general intelligence (AGI), or strong AI, a hypothetical machine that exhibits human cognitive abilities including reasoning, we grapple with the possibility that some of these narrow AI machines, drones for example, might achieve AGI status soon, becoming a significant existential threat to humans. In the process of building AGI, we will be faced with opportunities to ask ourselves many questions. Do we currently have a sound understanding of the complex interplay between science, religion, and ethics around AI, and of the weight of existential responsibility? Have we discerned where we are in the trajectory of creation versus annihilation? Would AI play a significant role in advancing human morality? Can AGI and artificial superintelligence possess moral agency akin to humans? If super-“smart” AI does not possess adequate moral agency, would the less smart humans be responsible for the machines’ abdication of responsibility? Or would humans be able to restrict or limit the machines’ actions? How do we preserve, protect, and expand human agency in an AI-driven world? Anthropologists have long recognized that the human body is not just a biological entity; it is a vessel that houses our thoughts, emotions, and souls. Through our bodies, human beings interact with themselves, others, and the world around them. How do we define embodiment for AI? Coleman’s book attempts to describe how AI is redefining who we are. Would AI, non-bodied creatures, redefine who we, bodied creatures, are?
Theologians, particularly those who hold Judeo-Christian beliefs, can offer invaluable additional insights. Human beings are created in the image of God (Imago Dei), while AI is created in the image of existing humans, by humans. The fundamental difference lies in an indisputable fact: humans are embodied, ensouled creatures. AI is neither.

As the interdisciplinary community of AI & Faith continues to navigate the fascinating convergence of AI and faith, it is time to revisit the themes Coleman addressed with a renewed sense of purpose and hope for the future of humanity, remembering that we are not a tiny group of scientists locked away in isolation in a remote desert; we are the whole population, in the middle of a new Trinity Test.

Yuriko Ryan is a bioethicist-gerontologist with over 20 years of international experience in healthcare ethics and policy research. Based in Vancouver, Canada, she holds a Doctorate in Bioethics from Loyola University of Chicago and is a certified Healthcare Ethics Consultant (HEC-C). She is a contributing writer and member of the AI and Faith Editorial Board. She writes on AI Ethics, Public Health Ethics, Business Ethics, and Healthcare Ethics.
