Three Books for Three Questions

I check out quite a few books from public and university libraries.  Often I have to reserve and wait for popular titles or books on popular subjects.  I love it when, without a particular plan, books I have reserved show up at the same time and end up complementing each other in a virtuous learning circle.  This happened to me in November with three titles, each of which addresses an important question AI and Faith is asking.

To grasp what is at stake for society and people of faith in the development of AI, we need to understand:

  • What is AI and why and how is it being developed?
  • How does AI fit into the world of human intelligence and understanding?
  • What is to be gained, what is at risk, and how do people of faith organize to help balance these rewards and risks?

Rebooting AI: Building Artificial Intelligence We Can Trust (Pantheon Sept. 2019) addresses the first question.  Longtime AI researchers Gary Marcus and Ernest Davis take a step back and a hard look at the AI industry's deep dive over the past five years into machine learning and neural networks. They document why these approaches have made substantial progress on certain kinds of problems susceptible to pattern recognition, but argue that the limits of this approach are becoming apparent and that it will never be a path to achieving general intelligence.  Seeking to brute-force insights by applying massive computing power to huge data pools produces AI agents that are closer to idiot savants than to versatile human problem solvers who can apply creativity and judgment.

What’s needed to gain truly trustworthy and useful AI, these authors say, is to combine the virtues of machine learning with a renewed commitment to “classical AI,” in which computer scientists try to make computers emulate human logic and associational thinking.  This will require the help of professionals across many disciplines, and breakthroughs around associative and logical analysis of data that have long proven elusive.

I like the sound of this contention for two reasons.  First, recognition of the extraordinary facility of human intelligence and the limits of AI achievement to date tends to support the faith belief fundamental to the Abrahamic religions that humans are categorically distinct from anything we ourselves could create.  Second, bringing together both science and humanities expertise around the question of how computers can emulate human thinking reflects AI and Faith’s organizational principle that the development of good AI cannot be achieved by computer scientists alone.

The basic question remains, though, whether machine emulation of human thinking can ever be achieved.  I asked our Founding Member Emily Wenger, who is working on her PhD in machine learning at the University of Chicago, for her take on Marcus and Davis’s critique of current approaches to AI.  Emily agrees that many in the AI research community are focused on narrow tasks, such as training models from scratch rather than drawing on preexisting knowledge, but that is because developing solutions for machine-based thinking along causal, logical, and free-floating associational lines has proven really, really difficult.  It’s not that machine learning researchers and advocates are especially myopic, but rather that what they can see before them is actual progress, as opposed to theoretical goals.

Whether or not artificial general intelligence will ever be achieved, there is obvious value in recognizing the limitations of current research and its associated hype, while setting goals for greater achievement.  It is even more beneficial to ask what analytical capabilities should be required before entrusting AI-powered agents (whether driverless cars, delivery drones, or algorithmic decisionmakers) with significant autonomy and deploying them.

2084: AI and the Future of Humanity (Zondervan 2020), by Oxford mathematician and Christian apologist John Lennox, addresses my second and third questions: how should we think about artificially intelligent agents and functions in relation to humans, and how do we balance AI’s risks and rewards in light of the Christian mission of loving God and our neighbor as ourselves?  It’s as though C.S. Lewis wrote a book about AI.  Our Founding Member Tripp Parker will be providing a review of this book in the January AI&F Newsletter.

The key issue that interested me in 2084 is whether it is possible to argue for human distinctiveness and dignity, and all that flows from them in AI ethics, absent a basic belief that humans are in some way divinely created rather than randomly evolved.  To me, Lennox makes a convincing case that it is not.  Lennox has long debated questions of evolution and man’s distinctiveness with some of the best-known so-called New Atheists.  Conversations around evolution versus intelligent design, and other attempts at reconciling faith in a sovereign Deity with the geologic and anthropologic record, produce vigorous reactions on both sides.  For that reason, it would be advantageous to keep the creation/evolution debate on the sidelines of how we think about and manage the development of artificial intelligence.  But having read this book, I personally doubt that that is feasible.

The third book in this intriguing, complementary trio is Jewish public policy researcher Yuval Levin’s A Time to Build: From Family and Community to Congress and the Campus, How Recommitting to Our Institutions Can Revive the American Dream (Basic Books 2020).  Levin is a Fellow at both the Ethics and Public Policy Center in Washington, DC and the Trinity Forum (there along with John Lennox).

Levin contends that much of the discord in our public, economic, and social discourse can be traced to the loss of institutions that shape our character, manage our interactions, and channel our knowledge and energy toward beneficial outcomes.  Instead of working together within institutions, leaders now perform in isolation, often using the institutions they nominally lead as platforms for enhancing their own individual brands and positions.  Levin diagnoses the rise of social media as a key contributor to this wholesale shift to a performance-oriented society.

Among the institutions Levin focuses on are faith congregations and denominations.  His book raises important questions about the extent to which social media and other forms of broadcast and communications technology allow faith leaders to operate outside their institutions and perform for their own career enhancement.  As such, it addresses a key issue for AI&F around the use of, and the setting of boundaries around, AI-powered technologies within faith communities.

A Time to Build also applies directly to the continuing formation of AI and Faith as an institution.  What is it that makes our 65 Founding Members better together than apart?  How do we develop this nonprofit institution to be most effective in speaking into the ethics debate over AI, and most efficient with the very considerable knowledge and expertise of our people of faith working in AI technology and related fields?  Those are questions we are addressing in a new set of Zoom calls among our Founding Members this week and next, as we continue to organize ourselves for good outcomes.

In the meantime, if you ask me, any of these books makes for good reading over the Holidays!