
14 AI&F Experts appear at Virtuous AI Conferences in Berkeley, Seoul and Rome

Can AI be virtuous? Fourteen AI and Faith experts and contributors presented work at the first of three online conferences offered by the Center for Theology and the Natural Sciences (CTNS)’s “Virtuous AI? Cultural Evolution, Artificial Intelligence, and Virtue” project in June and July.

The project, funded in part by the Templeton Foundation, explores a virtue ethics framework as a particular approach to understanding the implications of AI for society and culture, as well as AI’s potential for moral enhancement and the development of virtue.

The CTNS summer conferences take place across three different time zones, representing three different regions of the world. The first conference took place in mid-June, nominally in CTNS’ hometown of Berkeley and scheduled around the Pacific time zone. The second conference, on July 11–14, ran on Seoul time, and the final conference runs on Rome time on July 24–26, 2023. The conferences employ an innovative approach in which all speakers engage across all three days on a single Zoom call, open for viewing by anyone who registers to tune in.

A total of fourteen AI&F experts presented at the conferences, all of which were organized around three core questions:

  1. How and to what extent will AI influence the evolution of human culture and virtue?
  2. Can AI assist humans in the acquisition of virtue?
  3. Is AI itself capable of virtue? If so, are those virtues shared with or distinct from human virtues?

Each of the three days of the conferences focused on one of these questions. Speakers on the first day addressed cultural evolution, the future of humanity, transhumanism, and AI, drawing on papers that examined the positive and negative effects of emerging technologies on society, culture, and human nature. We asked the AI&F Advisors, Research Fellows, and Contributing Fellows speaking at Berkeley to summarize their presentations for us, and heard from the following:

  • AI&F Advisor Elias Kruger, based on a joint paper with Brian Sigmon, explored the ways in which AI applications could empower or oppress climate refugees in different hypothetical scenarios set in the year 2045. Their analysis, which looked specifically at facial recognition, decision engines, and AI-enabled drones, led them to conclude that AI applications have the potential to both empower and oppress climate refugees.
  • AI&F Advisor Michael Paulus discussed the history of the library and used this discussion to explore future possibilities for artificial agency in cultural development. In his words, “My paper, ‘Revisiting the Meaning of the City and the Library,’ explores how forms of artificial agency have been shaping human cultures for millennia. Considering the library as an example of structural agency that humans have managed relatively well over time, I highlight ways libraries develop, along with information technologies, information practices that center human attention, values, hope, and agency. These information practices depend on and can cultivate virtues, and they can inform the development and use of more virtuous systems.”
  • AI&F Advisor Noreen Herzfeld focused more directly on the question of whether or not AI can be aligned with human values. In response to this question, her paper drew upon Hannah Arendt’s examination of virtue in Nazi Germany. Arendt’s analysis illuminates at least three stumbling blocks, Noreen argues. “First, Arendt argues that virtue is not rule-based. Arendt noted that social codes are insufficient in that they can rapidly change, and that particular cases require particular answers no general rules can predict. Rather, virtue relies on inner introspection, a dialogue with oneself that determines when one must say ‘this I cannot do.’ Such introspection requires a level of sentience and theory of mind computers do not yet have. Finally, AI threatens the ultimate value of life itself, through its hidden usage of vast amounts of energy.” In Noreen’s judgment: “As AI moves from a niche application to the general public, it will increasingly contribute to climate instability and change.”

The second day of Berkeley’s conference focused on the influence of AI on spiritual development and the acquisition of virtue:

  • AI&F Research Fellow Cyrus Olsen, along with three of his colleagues, presented their work on AI and ML in the context of medicine. Cy summarized his paper as follows: “AI/ML assists human virtue acquisition within emergency medical practice by augmenting intelligence limitations. Practicing evidence-based medicine requires processing data to decide an optimal course of treatment under conditions of frequently rapid triage in the emergency department (ED). AI/ML can optimize care by serving as an adjunct to human decision-making, facilitating discernment as algorithms help clinicians weigh predictable uncertainties. Explanations will be provided from task-specific cases in radiology and extracorporeal membrane oxygenation (ECMO) where it serves as a clinical decision support (CDS) tool fitting within the category of software as medical devices (SaMDs).”
  • AI&F Contributing Fellow Thomas Arnold’s paper put Stanley Cavell’s work on ordinary language, perfectionism, and responsibility into conversation with human-robot interaction (HRI) and machine ethics. In Thomas’ view, “to speak of ‘virtue’ with interactive systems, not to mention in the systems themselves, calls for more scrutiny of how language and intentionality is being attributed to systems. Responding to the emergence of LLM interfaces like ChatGPT, I reflect on the particular force of Cavell’s work for AI interactions geared toward virtues of care.”

Picking up on similar themes, the final day of the Berkeley conference focused on the question of whether or not AI technology can be virtuous and, if so, what that might entail. The papers delivered provided multiple perspectives on virtue, including:

  • AI&F Research Director Mark Graves’ paper, “Gracing of Sociotechnical Virtues,” examined whether AI can be virtuous by considering virtue in the theological context of grace. Here is Mark’s summary: “I first explain that considering AI and humans in isolation does not address the interleaved development of AI technology and social systems that actually occurs. I then argue that even when considering virtue as something requiring grace to fully develop, one can still consider AI as virtuous in that context. In particular, I describe the gracing of virtue from the Protestant perspective of John Wesley, the Roman Catholic perspective of Karl Rahner, and the Orthodox Christian perspective of John Chrysostom.”
  • AI&F Research Fellow Ali-Reza Bhojani’s paper “offered a preliminary exploration of the potential use of moral language to describe AI considering Islamic meta-ethical thinking. Within voluntarist Islamic meta-ethical frameworks AI can be appraised on teleological or utilitarian grounds, but not in terms of divine reward and punishment. For proponents of stronger moral rationalisms there seems to be further scope for the use of moral language to describe AI applications. If actions soundly ascribed to an AI agent are rationally praiseworthy or blameworthy, they are praiseworthy or blameworthy in the sight of God. I argue this opens up space for reconsidering ideas of trans-human moral agency, something not alien to the Quran’s discussion of non-human species such as jinns, angels and animals. Furthermore, it may also prompt reconsideration of the priority given in the Islamic tradition to reason and rationality in the conception of what it means to be human.”

We asked our experts what themes especially struck them from the interaction at the conferences, which as of this summary were limited to Berkeley and Seoul. Noreen Herzfeld noted what seemed to be “general agreement that alignment was a hard problem and that AI programs as we know them, particularly LLMs, cannot be virtuous.”

Levi Checketts, who presented his work at the Seoul conference, similarly suggested that “most scholars think the question ‘Can AI become virtuous’ is misleading, at least presently. It’s better to ask what the AI is doing to us at present and how that is facilitating our development of vice or virtue.” Levi further argues that “one of AI’s most direct impacts on the development of our moral landscape lies in its ideological power. If we define sophisticated algorithms as ‘intelligent,’ we often assume that what they are doing is good and right, and subsequently we denigrate people whose ways of thinking differ from the machine.”

Those who participated or attended the Berkeley conference commented on the quality of the papers and the rich discussions they inspired. Elias Kruger, for instance, found that the conference “highlighted the breadth of the discussion around the area of AI and virtue. I was impressed by the wide range of topics and disciplines represented.”

With so many experts from AI and Faith participating, the conference provided an opportunity for AI and Faith experts to converse and get to know each other. Thomas Arnold, for instance, saw the conference as an opportunity to meet scholars in a novel configuration. For him, as someone with a background in religious studies, it was a return of sorts: “from theology to social sciences to intellectual history, the workshop catalyzed some vibrant questions that many AI ‘thought-leaders’ fail to recognize or at least acknowledge. These new connections will require patience in cultivation, as the breadth of interest and method means digesting and learning some unexpected material.”

Cyrus Olsen similarly noted that the conference “first and foremost built a community, which is no small feat these days, and over Zoom no less.” He also appreciated the fact that the papers were exchanged in advance, which allowed participants to engage more fully with the ideas and arguments in circulation. In his view, “interdisciplinary community-building provides us with ongoing support for the challenging work ahead. Add Templeton’s emphasis on open-source sharing and publishing of results to ensure science-religion research is maximally available to the public, and we had an uplifting experience.”

Michael Paulus reflected that “our world is being shaped by and for AI, and we are distributing moral responsibilities in new ways within an information environment that includes both human and artificial agents. The papers discussed at the Berkeley conference explored the place and importance of virtues in our emerging multi-agent information environment. In aggregate, the papers and discussion demonstrate the need for virtuous design and use of AI as well as virtuous AI—or at least AI that helps cultivate virtues rather than vices.”

For Ali Reza Bhojani, “the Berkeley conference brought together an impressive array of diverse ideas, both theoretical and applied. It afforded critical, comparative, and inter-disciplinary engagement both within and across different traditions of thought. Most importantly, this was conducted with an admirable spirit of collegiality throughout. The breadth and depth of discussion emphasized the imperative of engaging the many philosophical, theological, and ethical issues arising around questions of ‘virtuous AI’ from plural perspectives.”

Other AI&F Experts presenting at Berkeley were Advisors Brian Green, Shannon French, and Ted Peters, and Research Fellow Gretchen Huizinga. AI&F experts presenting at the Rome conference, occurring on July 24–26, 2023, are Research Fellows Robert Geraci and Nicoleta Acatrinei and Advisor Derek Schuurman. More information about the conference can be found on the Project’s webpage, here.

David Brenner, Board Chair of AI and Faith, who was an observer at the Berkeley conference, saw it “in many ways as aligning with AI and Faith’s own mission: cross-faith, cross-cultural, respectful discussion, a blend of intellectual ideas and models with specific applications.”

Kudos to Braden Molhoek at CTNS for organizing this innovative set of regional conferences, and to all our experts as well as the other excellent presenters, for this important consideration of the role virtue ethics can play in the broader AI ethics conversation.

AI&F Contributing Fellow Sara-Jo Swiatek is a member of our Editorial Team.


Sara-Jo Swiatek

Sara-Jo Swiatek is an ethicist who specializes in Christian ethics, Kantian moral philosophy, and technology ethics. She holds a master’s degree in religious studies and a PhD in religious ethics from the University of Chicago Divinity School. Her research centers on the question of moral agency and what it means to have moral faith in a time of rapid technological change and innovation. She has taught courses in computer ethics, theological ethics, and religious studies, and will be teaching theology courses at Seattle University during the 2023–2024 academic year.
