AI, Faith, and the Future (forthcoming late May) is a collection of scholarly essays authored by members of a multidisciplinary research group at Seattle Pacific University devoted to ethical and theological reflection on artificial intelligence. The authors' intent is twofold: to orient the reader to historical, technical, philosophical, ethical, and theological perspectives on the nature and use of AI, and then to offer a series of disciplinary and theological explorations of AI's impact. These explorations supply new narratives that can become the focus of beliefs shaping how society views its AI future.
This book, edited by Michael J. Paulus Jr. and Michael D. Langford (Pickwick Publications, 2022), brings a philosophical framework to bear on the pragmatic and ethical issues related to AI, alongside the Christian doctrinal lenses of revelation, creation, salvation, and eschatology. For their part, the authors of the essays are successful in “reimagining what is possible” and “participating constructively in the wise design, development, and use of AI.”
The first part of the book lays the foundations for a discussion of AI. Michael Paulus begins with a primer on the current state of AI and the need for further “foresight analysis” and better narratives of what is desirable in the AI future we want to create. Paulus seems to suggest that while faith traditions have been late in responding to the transformative technologies of AI, they have been at the forefront of other technological changes throughout history.
Carlos R. Arias traces the origins and development of AI across machine learning, natural language processing, expert systems, computer vision, and robotics. He accepts that any future development in AI will reflect the imperfections and brokenness of humanity, and thus that AI will always require human oversight.
In What’s so ‘Artificial’ and ‘Intelligent’ in Artificial Intelligence?, theory-of-mind philosopher Rebekah L. H. Rice offers a philosophical critique of logical positivism, which was prominent during the birth of AI. She discusses how the physicalist worldview, which reduces all mental activity to physical properties, relates to the creation of artificial intelligence. AI can be viewed as an attempt to prove that physicalism is true, and Rice suggests that beings who value thought, and in particular beings who can think, would do well to map out the consequences of making a machine that can think.
In A Theological Framework for Reflection on Artificial Intelligence, the book’s co-editor Michael D. Langford demonstrates how Christians can see AI through the lenses of revelation, creation, salvation, and eschatology within the covenantal life of a diverse community of faith, while still drawing on the natural sciences to gain knowledge and “base its work in love, worship, humility, and obedience.” He posits that while God can use AI within the plan of salvation, the saving itself comes only from God.
The second part of the book sends the reader on explorations across the sea of automated technology in order to bring back narratives from the New World. Michael D. Langford asks whether AI systems can ever be elevated to the status of persons. He uses theological personhood, exemplified in the creation story, incarnated in the person of Christ, and manifested through the Pentecost event, to suggest how AI, a “second order” creation of human origin, can also be valued as “very good” within the human community. Langford offers that the Pentecost event furthers the anthropological status of AI in that the “Holy Spirit works through the unexpected,” “the communal personhood of the church is directed toward all creation,” and “God’s reign manifests through AI.” AI can also be seen, in light of the plan of salvation, as “revelatory in that it helps us to better understand ourselves.”
Turning to science, Philipp M. Baker examines the use of reinforcement in AI to induce the behaviors desired by advertisers and online sellers. Neuroscience, he notes, is reaching into sociology, philosophy, and psychology to better understand the neuronal processes that produce emotional states and behaviors, and how to manipulate them through salience signalers in the brain, like dopamine. Baker warns that “those who control the means of salience control the direction of society,” and advocates for a framework for engaging with this technology that promotes human thriving, equity, and inclusion.
Next, David Wicks and Michael J. Paulus Jr. address the needs of students in the rapidly changing world of automation, specifically through teaching critical thinking, creativity, communication, and collaboration (the 4Cs). The future of work increasingly involves invasive, AI-driven surveillance and profits dependent on algorithms; therefore, students need the 4Cs to adapt. Technology has historically transformed education through teachers “attending to cultural contexts and addressing social inequities.” The individualization of education planned by AI-driven advancements threatens to disadvantage some student populations, dehumanize education, inhibit creativity, and destabilize classes and community cohesion, whereas education that uses AI with a focus on the 4Cs can “highlight valuable skills as well as deeper values” and “help us create and thrive together in this world.”
Michael J. Paulus Jr. asks what the future of work looks like under AI automation, first by delving into the meaning of human work. Given the clear consensus that AI will destroy many jobs, an apocalyptic view of the future of work rejects futurism, which treats the future as merely a product of what has come before, and instead injects a “that which is coming” perspective, one that “opens up a vision of new creation” and “radically transforms the present creation” into a new world of work.
Sin and Grace closes the discussion, with Bruce D. Baker offering valuable insight and perspective on the ways that AI might engender channels for sin that other technologies do not. AI tools and practices are not morally wrong or ethically questionable in and of themselves, but their use on a massive scale to maximize profits and market control has exponentially increased their potential for harm. The doctrine of sin and grace offers a means to evaluate these competing goals in AI development, and hope in the transformative, redemptive power of grace, through which God re-establishes the covenantal relationship.
AI, Faith, and the Future clearly outlines the stakes and the urgency for thoughtful, reasoned, and prayerfully considered action in matters of AI development and its use by individuals, businesses, governments, churches, and educational institutions. The essay authors, all but two of whom are Advisors of AI and Faith, carefully lay the path to understanding the historical, philosophical, and theological contexts for the explorations that come later. The faith perspectives expounded in its pages do not presume a monoculture of Christian homogeneity; rather, the authors draw on early church, Catholic, and several Protestant perspectives, engaging a diverse mix of theologians. The conclusions drawn from these perspectives do in fact provide a new array of narratives from which future explorations can embark, with clarity and reproducible steps, into the New World of AI automation and discovery.
The book ends with a litany of prayerful reflection on the themes presented within. It exhorts the reader to think and feel more deeply about the work of developing AI, and invites us to cultivate a greater awareness of our choices to use AI-driven technologies and of their immediate impacts on the community. With the authors, we can remain open to guidance from a higher power about spiritual impacts, about avoiding distraction, and about the pathways through which that power can work through the AI we build and use. One could argue that had this prayer been prayed over our existing AI-driven applications during their creation, we would not be so negatively affected by them today.