
COVID-19, AI Ethics, and Vocation

At the beginning of April, the Stanford Institute for Human-Centered Artificial Intelligence held a virtual conference on COVID-19 and AI. The conference focused on how artificial intelligence is being used to research, manage, and react to this pandemic. It also revealed that COVID-19 is the most important test of the present possibilities and limits of AI.

The possibilities of AI may previously have outrun our imagination, but right now AI technologies—making use of big public and corporate datasets, as well as smart devices such as phones and sensors—are being applied to very real and practical problems. These include: content authentication and moderation; processing scientific literature for insights and trends; identifying people at risk; mapping and forecasting the spread of the disease; managing and diagnosing patients; analyzing viruses and drugs; monitoring social behaviors and policy responses; and manipulating behaviors such as social or physical distancing. These activities and others will likely lead to more open data, automated decision-making, complex forecasting models, healthcare robots, environmental surveillance systems, and social behavioral changes.

This largely public and global use of AI reveals how these technologies increasingly intersect with and touch nearly every dimension of our lives. Although the present limits of AI are being exposed more clearly (e.g., in content authentication), near-future developments will create AI systems that are better at reading our words and images and tracing us through time and space. Uses of AI to respond to COVID-19, consequently, become an important case study not just of AI but of AI ethics.

How do we avoid spreading misinformation or disinformation in the midst of an infodemic? How do we balance personal privacy and public health surveillance? How do we preserve individual autonomy and agency in automated decision-making systems? How do we ensure that AI systems are fair and equitable?

Several existing ethical approaches, complementary and overlapping, are currently being applied to AI development. These include:

  • data and information ethics, focusing on data and information authenticity, access, property, privacy, security, curation, and use;
  • bioethics, focusing on autonomy, nonmaleficence, beneficence, and justice; and
  • social justice, focusing on fairness, accountability, and transparency.

Luciano Floridi et al. bring many of these ethical concerns together in “How to Design AI for Social Good: Seven Essential Factors,” which identifies seven key factors and best practices for designing and using AI for social good (AI4SG):

  • Falsifiability and incremental deployment: “Identify falsifiable requirements and test them in incremental steps from the lab to the ‘outside world.’”
  • Safeguards against the manipulation of predictors: “Adopt safeguards which (i) ensure that non-causal indicators do not inappropriately skew interventions, and (ii) limit, when appropriate, knowledge of how inputs affect outputs from AI4SG systems, to prevent manipulation.”
  • Receiver-contextualised intervention: “Build decision-making systems in consultation with users interacting with and impacted by these systems; with understanding of users’ characteristics, the methods of coordination, the purposes and effects of an intervention; and with respect for users’ right to ignore or modify interventions.”
  • Receiver-contextualized explanation and transparent purposes: “Choose a Level of Abstraction for AI explanation that fulfils the desired explanatory purpose and is appropriate to the system and the receivers; then deploy arguments that are rationally and suitably persuasive for the receivers to deliver the explanation; and ensure that the goal (the system’s purpose) for which an AI4SG system is developed and deployed is knowable to receivers of its outputs by default.”
  • Privacy protection and data subject consent: “Respect the threshold of consent established for the processing of datasets of personal data.”
  • Situational fairness: “Remove from relevant datasets variables and proxies that are irrelevant to an outcome, except when their inclusion supports inclusivity, safety, or other ethical imperatives.”
  • Human-friendly semanticization: “Do not hinder the ability for people to semanticise (that is, to give meaning to, and make sense of) something.”

(See the table in Floridi et al.’s article for the factors, best practices, and corresponding ethical principles.)

As thorough as this approach is, there are larger questions left unanswered. Some of these are raised in “Ethics, Computing, and AI,” a set of commentaries written for MIT’s new Schwarzman College of Computing: “What kind of world do we want to make? Will the future be humane and livable? Will it be fair and just? What knowledge and values can sustain and guide us?”

With these questions, we are deep into the domains of philosophy and theology, and find ourselves needing to move from a list of ethical principles to a principled way of living. One concept that can be helpful for thinking about this shift is the idea of vocation.

Vocation is typically used theologically—a calling often presupposes a divine caller, and in the past sacred occupations were of particular interest—but motivations for living one’s life in a particular way may have other sources. Today, vocation can be used to describe a sacred or secular call that can be specific (e.g., to serve in an ordained position) or general (e.g., to care for the earth).

In “The Scholar’s Vocation,” Chad Wellmon describes Max Weber’s reflections on vocation in a rationalistic and mechanistic culture. In the early twentieth-century world of bureaucratic constraints, Weber saw vocation not only as an economic necessity and social responsibility but also as a solution to the problem of meaning and “the need to conceive of one’s life as a coherent whole.” In a vocation, one pursued “a new form of intellectual and spiritual work” that involved acting on the belief “that one had, in fact, been called to do and serve something in particular.” To fulfill one’s calling required a commitment to truth, paying attention to the world, taking responsibility for the future, and exercising one’s agency.

With such an understanding of vocation, ethical questions—about what we should do—are preceded by at least two larger questions: What can we know, and for what can we hope?

The first question is about epistemology, how we know what we know. When I taught classes on history and archives, I used a sources continuum to talk about archival epistemology and the functional primacy of sources. Now, when I teach about theology and technology, I refer to the apocalyptic imagination to speak about what N. T. Wright calls an epistemology of love. In both contexts, the challenge is to discern sources of knowledge and how to attend to them.

The second question is about eschatology, or our ultimate hopes. In the Christian church, this is the season of Lent. One of the lectionary texts for this time is John 11, about the death and resurrection of Lazarus. This text is both hard and hopeful. When Jesus hears his friend Lazarus is ill, he doesn’t rush to save him, and Lazarus dies. When it’s clear that Lazarus is dead, Jesus arrives, is deeply troubled and weeps, and then raises Lazarus from the dead. Though everyone in this story will die (Lazarus a second time), Jesus has manifested a glimpse of resurrection hope. The tension this text calls us to live into was echoed recently by N. T. Wright in a question and answer session about “Christian Leadership in the Midst of a Pandemic.” Wright said we must, as always, embrace both suffering and hope in the midst of our current crisis. In time, as we have in the past, we will learn and gain a wiser and more integrated vision of the world.

Our sources and vision for the future of our lives, work, society, and world shape our sense of vocation and understanding of what we should do—with AI and everything else. Our vocational understanding can ground our moral imagination as we engage with ethical principles.

Adapted with permission from Michael’s blog, Digital Wisdom, which appears on Patheos Evangelical. 


Michael Paulus is University Librarian, Assistant Provost for Educational Technology, and Director and Associate Professor of Information Studies at Seattle Pacific University. His administrative, teaching, and scholarly interests focus on the history and future of information and communication technologies.
