Introduction
As AI&F welcomes our new partnership with the World Evangelical Alliance (WEA) to promote ethical and responsible uses of AI, it seems appropriate to review the plenary panel, workshops, and discussions led by AI&F members as part of the 2025 General Assembly of the WEA in Seoul, South Korea. The Assembly, held every six years, brought together two thousand representatives and mission partners of WEA from more than 120 of the 164 countries represented in the Alliance.
Following a Plenary Panel led by Brenda Ng in the full conference meeting, many AI&F leaders spoke at the four subsequent workshop sessions, including AI&F founder David Brenner, Brenda Ng, Quintin McGrath, Patricia Shaw, David Hackett, Thomas Osborn, Nick Kim, Manuel David Morales, Levi Checketts, Alan Marty, Sam Kim, DK Jung, Kelvin Chong, and Marcus Schwarting.
The Plenary Session, entitled Algorithmic Justice: Economic Disruption from Silicon Valley to the Rift Valley, introduced to the whole conference the concepts detailed in the later workshops: what is coming with AI for society and Christian mission; a framework for evaluating a Christian response; approaches to discerning that response; and ways to implement it that also reflect the realities of different missiological communities. Our notable panelists, speaking to these questions under Brenda's moderating guidance, were professor and author Christopher Watkin of Monash University in Melbourne, Australia; Nick Kim, missional AI creator and consultant in Seoul; and pastor and author Sam Kim of New York City.
The four workshops that followed were well attended and fostered important conversations on topics at the intersection of AI, Christianity, and evangelization. We review the content and key takeaways from these workshops below. Finally, a special thank you to Donna Wilcox, John and Patty Cosper, and Daniel and Marianne Fong for sponsoring travel for several of our speakers, and to Christopher Lim for making his conferencing software, spf.io, available for AI&F workshops to foster a multilingual interactive experience.
I. The Image Of God Across Digital Divides
In the first workshop we heard from Quintin McGrath, Kelvin Chong, Thomas Osborn, and Chris Watkin on the importance of preserving humanity's unique place in God's creation in an era of increasingly capable AI models. Quintin McGrath also introduced the TRUST framework to help Christian developers be more circumspect in their construction and deployment of AI tooling. Both the TRUST framework and the emphasis on the Imago Dei served as an important foundation for later talks, including a panel discussion with experts Nick Kim, Alan Marty, Kelvin Chong, and Thomas Osborn (facilitated by David Hackett). This discussion brought to light important distinctions in how Christians ought to employ AI models, especially when interacting with vulnerable groups such as children and the elderly.
II. Vocation and Creation Care
In the second workshop we first heard from Marcus Schwarting, Levi Checketts, and Manuel David Morales on the environmental impacts of greater AI adoption. These three presentations considered the effects of e-waste, carbon dioxide emissions, water use, and other negative externalities that especially affect the Global South as demand for compute increases. During the subsequent panel discussion (facilitated by Quintin McGrath), the speakers grappled with how Christians should respond to the AI sector's increasing demands on natural resources.
III. Sacred Accountability
During the third workshop, the conversation pivoted to questions of accountability and transparency in AI systems. First, Marcus Schwarting presented on the multifaceted definitions of AI transparency and considered the kind of transparency Christians should expect both from large technology firms and from developers building tools marketed to Christian users. We then heard from Patricia Shaw on the legal and policy aspects of AI transparency and the challenges of enforcing transparency standards (mainly in the Eurozone, but internationally as well). Following these technical and legal/policy talks, we heard from Chris Watkin about how these issues of accountability and transparency might be interpreted through a Biblical framing. The panel discussion that followed (facilitated by Brenda Ng) explored how Christian organizations could safely utilize AI models without potentially leaking sensitive data.
IV. The TRUST Framework and Future Directions
The final workshop of the conference featured Quintin McGrath, joined by Thomas Osborn, providing an in-depth analysis and demonstration of the TRUST framework for developing AI tools. The acronym "TRUST" stands for Theological alignment, Relational impact, Utility and justice, Stewardship and sustainability, and Transparency and accountability. The framework was demonstrated via TRUST-GPT, a GPT-4o-based model that incorporates the TRUST framework via in-context learning. By stepping through these five principles, a Christian developer can evaluate their AI workflow to help detect possible oversights and ethical pitfalls. The workshop concluded with a full group discussion of how Christians should respond to AI as clergy, laity, users, or developers.
Conclusion
Across the plenary panel and four workshops, AI&F focused on engaging with WEA attendees on how Christians should respond to AI. We heard from a number of Christians who bring their AI expertise to help inform others in their use and development of AI. We thank the WEA and Sarang Church for continuing to support the efforts of AI&F to inform others about the ethical challenges AI poses.