I had perfect vision until I turned 50, when my eyesight began to blur. Since getting my first pair of glasses a dozen years ago, I’ve found I actually enjoy peering through the optical machine while the assistant clicks through various lenses to update my prescription. I like watching the images fade in and out of focus until I land on the optimal one, even though it’s often hard to decide precisely which lens is best. This is, I think, a good metaphor for our work in AI and Faith.
We are making the case that ethics derived from faith beliefs and values belong in the national and international discussion of “good AI”. Along the way, we are learning how many ways of seeing exist within our own community, including perspectives shaped by profession and training, faith tradition, and priorities for different areas of concern and benefit. We think this diversity is part of our strength, since it matches the diversity of perspectives in the broader secular conversation over AI and ethics. This diversity of perspectives and experience is reflected in the three other feature essays in this Newsletter.
The Faith Lens Adds Value
As the power of artificial intelligence to reshape society has come under scrutiny, many different lenses are available to assess and perceive the benefits and risks of AI-powered applications. By no means do we claim that values derived from the world’s prevalent faiths provide the only lens for this perceptual task, or even the best lens. But we do state as our most basic claim that such a lens should at least be included in the line-up of lenses in the ethics-testing mechanisms of our broader society.
In the 28 months since we launched AI and Faith, we have asserted and tested that claim through a variety of means.
On our website we have asserted that the fact that many sophisticated AI technologists are religious itself justifies consideration of how their beliefs can shape their work in technology.
Over the past two years we have supported this assertion by gathering a large, increasingly international community of such experts, now numbering over 60, whom we call Founding Members. Along the way, we have tested the ability of these experts to translate their faith perspectives into relevant commentary on ethical choices related to artificial intelligence through dozens of essays on our website and in our monthly Newsletter, by reporting on their contributions in conferences and publications around the nation and world, and by helping to place them in programs of our own and others’ creation.
Though we do not yet have empirical data to test the effectiveness of these programs and writing, the body of work speaks for itself. In each instance it is safe to say that our authors and speakers had much to contribute to the conversations and subjects they addressed, that audiences paid attention, and that we are beginning to raise the profile of faith-derived ethics and win it a place at the discussion table. At some point, we need to evaluate the quality of our contributions. That is part of the reason our Board decided in January to convert our organizational approach from a free-flowing channel to a “think tank” model, both to facilitate such evaluation and to create new ethical resources.
Currently, we are testing through two new projects how faith beliefs can translate into ethical positions on AI-powered technologies. Both projects grow out of our Board’s decision in January to concentrate our mission on providing “faith-informed ethics resources to people of faith seeking to make ethical decisions while working or preparing to work in the AI arena.”
The first project is to hold a series of “Company Focused Dialogues” with workers in the AI arena through interaction with faith “employee resource groups” in those major technology companies with AI research and development programs. Such ERGs provide a ready means to reach AI workers with faith beliefs at large companies. Our hypothesis is that some members of such ERGs want assistance in incorporating ethical decision-making into their daily work and that we can help address this. But we need to learn whether that is true and how best to be of assistance.
Our work in learning more about the rapidly expanding world of company-recognized faith ERGs led to the article in this month’s newsletter by our Founding Member Admiral (ret) Margaret Kibben on the potential for such groups to provide a basis for moral coding. Margaret sees that world through the lens of her thirty-year career as a military chaplain, where she translated faith values within a secular organization while moving up the ranks to be the first Chief Chaplain for the Marine Corps and the Navy. Intrinsic to her work was the ability to minister across faiths, which is also a key part of the success of faith ERGs in the technology world, and equally a distinctive of our organization. Margaret also applies a strong theology lens as an ordained Presbyterian minister and holder of a Doctorate in Ministry.
Our second current project seeks to reach out to technology workers and students through the faith congregations they attend, by developing a curriculum around AI and faith values on a digital platform. From this work, we seek to learn how to translate the complexities of AI technology for a lay audience while simultaneously introducing AI workers in those congregations to the process of deriving ethical positions from their faith beliefs.
It was in the course of outlining this curriculum that our Founding Members Michael Paulus and Nathan Colaner generated the other two feature essays in this month’s newsletter. Each essay illustrates a different lens for viewing the relevance of faith values for the AI ethics discussion. Michael, the Dean of the Library at Seattle Pacific and a professor of information science, engages the ethical dimension of AI in his article through a vocational lens. Nathan, who teaches business ethics at Seattle University’s Albers School of Business, brings the lens of a PhD in Philosophy to his understanding of ethics, confining the role of faith to a traditionally narrower band of risks.
My role in AI and Faith is mainly one of networker and translator. As a lay Christian and attorney with a long career in insurance and risk management, and unhampered (?!) by any formal courses in ethics or theology, my own lens on the role of faith in ethics is wide and simple: faith informs all parts of life; faith lived out translates into values; and those values are expressed in particular situations to promote good and not harm through ethical decision-making. This may not stand up to an intensive academic analysis by philosophers and professional ethicists, but it works well for me in the task of translating our work for technology workers, students, and lay people in all walks of life. And I’m very grateful to have the more intensive acuity of academically trained Founding Members like Michael, Nathan and Margaret when the particular context calls for it.
Other Lenses for Future Consideration
Margaret, Michael, Nathan and I all happen to come from the Christian faith tradition. We are privileged to have numerous other traditions represented among our Founding Members, and indeed in this Newsletter we announce two new Members who bring an Islamic lens to their ethical work. In a future issue, we’ll explore how that faith diversity contributes to both the substantive richness of our Founding Members’ perspectives and the willingness of others to hear them.
Another area for future consideration is our prioritization of different values as they relate to different AI-powered technologies. Our current summary of Human Values and AI Technologies is listed on the Focus Areas page of our website. Just as the Board narrowed our primary target audience, in the months ahead, as we develop our “think tank” model, it will be important to revisit this formulation and prioritize areas that are distinctly important for faith-derived perspectives. For that, we will continue to draw on the rich experience and diverse perspectives of our Founding Members and our Partner institutions, as well as what we learn from our market research currently underway in the form of the Company Focused Dialogues and the Digital Curriculum.