Three mainline traditions met August 12-15 in Seattle to formally begin what is expected to be an ongoing conversation about the role of artificial intelligence in the church and society: the Episcopal Church (TEC), the Presbyterian Church (PCUSA), and the Evangelical Lutheran Church in America (ELCA). Importantly, this was the first time that leaders from each denomination's innovation office, along with selected church leaders, met in person to focus on this far-reaching topic.
Combined with Catholic, Orthodox, United Methodist, and Evangelical speakers and guests, the conference provided a uniquely broad perspective. This breadth included the presence of Father Paolo Benanti, a Franciscan friar and key advisor to the Vatican regarding AI ethics. It also included a morning session held in Microsoft’s offices, which featured a Q&A with Daniel Kluttz, the senior director of sensitive uses and emerging technologies at Microsoft’s Office of Responsible AI. Throughout the event, graciously hosted by Epiphany Parish in Seattle, a shared sense of purpose grew, along with the conviction that addressing artificial intelligence and its ethical, theological, and ecclesiological implications requires expertise that no single church body possesses on its own. Although more questions than answers were raised, there was much shared learning and a concluding sense: we must continue to further this conversation together. Here are some highlights.
Opening keynote: “Augmented Faith Leaders in a World that is Brittle, Anxious, Nonlinear, and Incomprehensible” (Bob Johansen)
Bob Johansen began the four-day conversation with a futurist perspective conceptualizing artificial intelligence as an augmentation for humans. Johansen, a Senior Fellow at the Institute for the Future and a longtime consultant with Microsoft, argues that AI should be used as a partnering technology and even suggests that we should anthropomorphize it to a certain degree. He suggests naming the generative AI chatbot and giving it its own physical space within our workplace, such as a separate monitor. Johansen explains that this enables longer, ongoing prompting sessions with the chatbot and adds that it has become an essential partner in his own writing process.
Johansen suggests that we explore the AI future not by our usual perspective of “Present Forward” (starting where we are and looking ahead), but rather from a “Future Back” perspective (starting in an envisioned future and working back). He calls this using “foresight” and notes that foresight leads to insight that results in action. “We build our future back stories with drivers and signals from today,” he says.
Johansen offers this pragmatic approach to AI as he draws from his futurist work in training leaders such as new three-star generals. He teaches them to engage a world that is not a complex set of variables to be solved but one that is more often unpredictable and incomprehensible, a reality for which the church already has practiced language: mystery. He notes, “GenAI does things that cannot be put into words.” Johansen argues that faith leaders have the advantage of speaking in terms of “clarity,” which he describes as “clear direction” and not just “certainty.”
He describes our era as “BANI” (Brittle, Anxious, Nonlinear, Incomprehensible) and says that the required leadership attributes for this BANI era are, respectively:
- Bendable with clarity and resilience;
- Attentive, with active empathy;
- Neuro-adaptive, with practical improvisation;
- Inclusive, with full-spectrum thinking.
Finally, Johansen asks the gathered leaders: How could AI augment you and your role as a faith leader? He urges us to discover opportunities for augmentation by filling in the statement, “I want help… in order to…”
Plenary: “The Developing Landscape and Trends for Christian Missional and Ministry Applications” (Thomas Osborn and Brian Green)
Following a memorable introduction by AI and Faith’s founder, David Brenner, who compares the growing market for AI applications to a “greased gold pig,” two other AIF members, Thomas Osborn and Brian Green, describe a few of the trends shaping Christian mission and AI.
Entrepreneur and AIF cofounder Osborn addresses the strength of market forces in shaping the development of AI applications. He gives specific examples of current and potential uses for AI in Christian mission. A Catholic ethicist, Green focuses on AI’s ethical and social implications, such as dependency upon AI and the resulting deskilling. He also details the Rome Call for AI Ethics and its six principles: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy.
Panel: “Incarnation in a World of AI” (Father Paolo Benanti, Noreen Herzfeld, and Ben Olson)
This panel touches on a recurring theme within the conference: What does it mean to be human in a world of AI? As a theologian and computer scientist, Noreen Herzfeld focuses on the incarnation and human embodiment, a topic she later elaborates upon in her own session. She points to distinctions between humans and AI, such as the consistency of human embodiment, the ability to be affected by the environment, and, perhaps most importantly, the ability to suffer: “AI cannot suffer.” Herzfeld concludes, “AI cannot be both a tool and a person.”
Father Benanti fears that our development of artificial intelligence is “lowering” our incarnational theology by encouraging us to understand humans as machines, and he warns that AI can become a new form of colonialism. Will the global south be left behind? Like Herzfeld, but with his own sense of humor, he readily distinguishes the technology of AI from organic life: “Turn off the hair dryer…fine. Turn off the duck…dinner!”
Amidst this distinction between AI and humanity, Ben Olson, who leads Windows Responsible AI and Data Compliance at Microsoft, speaks to the bridging of humanity through multicultural, multilingual tools powered by AI. Olson asks the pragmatic question: “What is the duty of the moment?”
Keynote: “Navigating the AI Revolution: A Call for Ethical Stewardship” (Father Paolo Benanti)
Father Benanti’s call for ethical stewardship with respect to artificial intelligence became instantly clear when he shared an image of the low bridges along the parkways of Long Island. Robert Moses, a mid-twentieth-century urban planner, deliberately designed the bridges low enough to keep low-income people who relied on public buses from reaching affluent Long Island beaches. What low bridges are being built today to limit or prohibit access to artificial intelligence?
Presentations at Microsoft
The visit to the Microsoft campus in Redmond began with talks from several Microsoft employees dedicated to enabling the humane use of artificial intelligence. Jon Palmer, a VP and General Counsel at Microsoft, welcomed the faith leaders and encouraged them to “kick the tires.” Mark Ghazai, a Technology for Social Impact manager at Microsoft, discussed use cases for Copilot within churches, calling generative AI a “transcreation” process. Juan Lavista Ferres, Lab Director of Microsoft’s AI for Good Research Lab, shared multiple examples of how AI can help, such as identifying a rare eye disease in an infant or dramatically speeding up disaster assessment. Naria Santa Lucia, General Manager for Digital Inclusion and US Community Engagement, described how her work on “skills for social impact” seeks to build AI fluency and “help people thrive in a digital economy.”
Panel at Microsoft: Bob Johansen, Brian Green, Daniel Kluttz
A panel with Bob Johansen, Brian Green, and Daniel Kluttz explored aspects of AI and took questions from the audience.
Kluttz noted that AI clearly augments our tools and our skills, but it is the faith arena that is grappling with the augmentation of humans. Replying to a question about beta LLMs being released to the public, Kluttz said that Microsoft has a phased-release policy for AI, with easy feedback systems to mitigate challenges. Among other stress tests, Microsoft uses “red teaming,” hiring adversaries to probe new systems so they can be refined.
Johansen said, “If you get your language right about the future, it draws you to it.” Several times he advised us to think in terms of building “bounce ropes” for AI (as in a boxing ring) instead of hard, rigid guardrails.
Green discussed how Christians might best influence tech. He recommended using concepts of common morality, natural law discourse, and UN-style language to connect with a non-faith audience. “Phrase theology in secular terms,” he said, offering “a good question to engage secularists: ‘Is this the kind of world we want to live in?’”
Johansen mentioned that we are hoping to build what he called “superminds”: computers working together with humans.
Presentation by Juan Lavista Ferres at Microsoft
Next to speak to the group at Microsoft was Juan Lavista Ferres, Chief Scientist and Lab Director of the Microsoft AI for Good Research Lab, where he works with a team of data scientists and researchers in AI, machine learning, and statistical modeling across Microsoft’s AI for Good efforts. These efforts include projects in AI for Earth, AI for Humanitarian Action, AI for Accessibility, and AI for Health.
He shared that “for some world problems, relying on AI is the only option we have… AI expertise alone cannot solve these problems; we need to collaborate with subject matter experts.” He noted, “We put a man on the moon (in 1969) before we added wheels to your luggage (in 1972).” He described the need for developing simple solutions: “If you want to impress people and look intelligent, your solutions can be complex. If you want to have an impact in this world, your solutions need to be simple.” He said that technology has the potential to create profound impact, yet it is “essential that we prioritize responsible and safe design to ensure its positive influence on society.”