Interview

Meet Founding Expert Jane Compson, Teaching Buddhist Thought and Applied Ethics at UW Tacoma

Hi Jane. Please tell us about your academic background and interests.

Hello. My academic background is in religious studies and philosophy, with some applied ethics mixed in. I have a Master’s degree in Religion in the Contemporary World. My PhD was a comparative study of how certain Buddhist and Christian teachings each deal with other religious groups and teachings. After my PhD, I became interested in applied ethics and came to the US from the UK to get an MA in Philosophy (Bioethics). My interests have continued to be interdisciplinary. I’m now an Associate Professor at the University of Washington, Tacoma, where the courses I teach include Philosophy, Religion and the Environment; Buddhist Thought; Comparative Religion; and Case Studies in Medical Ethics.

Broadly, so far my research agenda looks at the application of Buddhist contemplative techniques in contemporary secular contexts. I’ve written about mindfulness, and some of the questions and debates that come up when Buddhist teachings are decontextualized. I’m also interested in how insights from contemporary psychological sciences – such as awareness of the prevalence of childhood trauma – can be applied to meditation practices, in both ‘religious’ and secular contexts. My interest in AI is part of this pattern of interests in how Buddhist teachings can and do engage with contemporary ethical questions.


What motivated your interest in the AI ethics arena?

To be honest, I’m very new to these conversations, and am grateful to have been invited to join them! I have only just dipped my toes in the water, and definitely feel like a rookie with lots to learn. That is an exciting place to be, though, and the more I explore, the more interested I am in this topic. One of the things that appeals to me is that the scope for exploration and discussion about ethics in this arena is so vast. So much of the territory is new, and calls for creative and adaptive thinking. I’m also intrigued by how many of the issues resemble ethical questions in other applied ethics fields, like environmental, animal, or medical ethics. So much about AI ethics is speculative, and we really don’t have any precedents to work with, yet there are some closely analogous discussions in other ethics fields. For example, one of my heroes in environmental ethics is Aldo Leopold. When I listen to AI ethics debates, I am sometimes reminded of Leopold’s famous Land Ethic essay, where he called for an expansion of ethics from being exclusively human-focused to including “land and the animals and plants which grow upon it.” He described this as the third step in an ethical sequence, with step one being the relationship between individuals, and step two being the relationship between individuals and human society. I can imagine AI ethics becoming part of a fourth step in that ethical sequence.


I know that, among other subjects, you teach classes on both environmental ethics and biomedical ethics. Are there ways in which those more-established ethics arenas prove helpful in thinking about AI?

Yes, I think so. In both areas, there are key questions that can also be applied to the AI arena. For example, what qualities does a being or entity need to possess to be morally considerable – i.e. to ‘count’ in moral equations? Sentience? Intelligence? Self-awareness? In environmental ethics, for instance, we might ask whether trees have moral standing, and if so, on what grounds. Do we have direct or indirect duties to trees? In other words, do we have moral obligations towards them in themselves because they have intrinsic value, or only insofar as they are of instrumental value to humans? I think these kinds of questions have clear relevance to AI. Will there be a point at which we have moral obligations to AI, as well as concerns about what AI can do?

In biomedical ethics today, many of the moral questions that arise relate to the use of technology and the extent to which it is appropriately applied. This could be in the arenas of life-prolonging technologies used at the end of life, genetic modification and testing, xenotransplantation, reproductive technologies, and so on. Just because, through advancing technology, we can do something, does it mean that we should? This question is no doubt familiar to anyone working in AI ethics, so the overlaps here are obvious. There are existing ethical debates and theories in other disciplines of applied ethics, then, that can usefully be applied to AI discussions – it is not a question of completely starting from scratch.


What do you think people of faith, and/or faith communities, can bring to AI ethics?

I suspect that what it means to be a person of faith or in a faith community will mean different things to different people. I’m taking the term to mean prioritizing certain values and commitments about what it means to be a human (or, indeed, a non-human!), how we should treat one another, what a flourishing community looks like, how we should encounter suffering, and so on. The theologian Karen Armstrong makes a distinction between two ways of encountering truth found in most pre-modern cultures – logos and mythos. Logos, or reason/science, describes the practical ways of controlling our environment; technology is a contemporary example of this. Mythos uses narratives, myths and rituals to help us navigate our psyche and the deep existential questions of meaning that logos cannot help us with. Logos might be able to tell us how a loved one died, for example, but it is most likely to the realm of mythos that we will turn to make sense of and cope with our bereavement; a faith tradition is an example of this kind of meaning-making. It is not that one way of knowing is better than the other; they each serve different, useful purposes. Armstrong argues that in the modern world, mythos has been discredited by logos, and that we are impoverished by this imbalance.

I find Armstrong’s analysis very helpful when thinking about faith and AI ethics – faith communities help us to keep mythos in mind and heart.  They may prompt those working in AI to consider questions like ‘why are we doing this?’, ‘who does it serve?’, ‘who should it serve?’, ‘how does this innovation sit with our professed ethical and social values?’.


More specifically, how do you see Buddhist faith traditions contributing to AI ethics?

As with other faith traditions, there are many different expressions of Buddhism, so there are probably as many different ideas about Buddhist traditions’ role in relation to AI as there are Buddhists! I think it’s fair to say, though, that all of them acknowledge the existence of suffering and advocate wisdom, practice and ethical conduct as the path to the alleviation, and eventual cessation, of suffering, ultimately for all beings. There is a shared prizing (albeit to differing degrees) of the values of wisdom and compassion. What counts as ethical action is determined by the extent to which it is grounded in the pro-social motivations which support the alleviation of suffering.

Let’s look at the example of intention, one of the factors of the eightfold path (the others are view, speech, action, livelihood, effort, mindfulness and concentration). Intention is ‘right’, ‘wholesome’ or ‘skillful’ when it is rooted in motivations of wisdom, compassion and generosity. It is ‘wrong’ when it is rooted in greed, hatred and ignorance; actions inspired by these are likely to perpetuate, rather than interrupt, cycles of suffering.

Approaching AI ethics through this lens of ‘is this consistent with right intention?’ encourages us to look at our motivations for using AI. Artificial intelligence is goal-driven; do these goals support the alleviation of suffering and the expression of wisdom, compassion and generosity? Or are they in the service of satisfying greed, for example? Buddhist visions of the ‘wholesome’ or ‘right’ expression of the eightfold path can thus serve as a set of guiding moral principles, a prism through which to evaluate AI projects.

Another area where Buddhist thought can really add an interesting dimension to AI ethics is in its concern for the alleviation of suffering for all beings who experience it.  It helps us to widen the moral lens when we think about who or what AI impacts, because all suffering beings are worthy of concern – not just humans.  Another interesting implication is that if an AI were to become sentient or self-aware, then Buddhist teachings, the Dharma, would be applicable to it, too.


You’ve mentioned elsewhere that you believe Buddhist ethics are very context dependent. How beneficially, or not, does that translate into the AI ethics arena?

Knowing which actions are skillful or unskillful at any particular time depends on having a good understanding of the particulars of the situation. Rules can be problematic, because there may always be a time when an exception to the rule is the most skillful course of action. Making these decisions requires wise discernment. I find it interesting to think about whether an AGI might be developed that is better than humans at wise discernment.

However, if one were trying to program an AI to have ‘wise discernment’, how could this be done? An enlightened mind is free of the ‘defilements’ of greed, hatred and ignorance. Is it possible to have an AI that is freed from these factors? Can such an AI be designed by humans who are not themselves free of them?


Will we ever see a machine exhibit ‘artificial general intelligence’ (AGI) and/or consciousness? From a Buddhist perspective, should we find such prospects threatening or exhilarating?

I suspect that it is possible, and if that machine had the ability to suffer and to be aware of its own suffering, then it would share the same predicament as all sentient beings. This is where things get really speculative, but whether such a prospect should be threatening or exhilarating might depend on the motivations and goals of that machine. If it is programmed, or has somehow evolved, to be motivated by compassion and loving kindness for all fellow sentient beings, then it could be an incredible gift to have such an intelligence following the eightfold path and teaching others how to do so. Like any tool, AI could be used for benign or malignant purposes, so an AI that is motivated by malice would be a threatening prospect.

My sense is that there is no in-principle objection to developing a ‘Buddhist AI’. We may already be on the path to it: in Kyoto, Japan, a Buddhist temple has developed a robot, Mindar, who gives Buddhist sermons. It is not an AI-driven machine yet, but its designers intend to develop that capability, hoping that as it grows in wisdom it will become better and better at helping people cope with their problems.


Do you believe we’ll ever see an ‘enlightened’ AI? What would that mean? And would it force a new understanding of humans and our place in the cosmos?

This is a really difficult question! I suppose that it’s possible there could be an enlightened AI. The historical Buddha was a human being, but in Buddhist cosmology, it is not a prerequisite that you be a human being in order to become enlightened. Enlightened beings can manifest in different forms and in non-physical realms, so there are already ways in which Buddhist cosmology challenges our scientific or materialist understanding of humans and our place in the cosmos.


More narrowly, what prospective AI developments do you find most attractive, and which seem most dangerous?

On the positive side, AI could lead to a super-intelligence which could help us cope with perennial problems of disease, poverty, social injustice, and so on.  Who wouldn’t find the idea of the eradication of poverty, for example, to be attractive?

This scenario could take a darker turn very easily, though, if the goals of AI are not wisely chosen, or are poorly executed. Then the power of this super-intelligence could be harnessed in devastating ways that actually increase social inequality, poverty and war, and even lead to the subjugation or elimination of humanity and ecosystems.

In terms of a specific example, the development of autonomous AI weapons seems terrifyingly dangerous to me, especially if you imagine an arms race, and the development of a black market for such weapons.

It’s dangerous not to ground the development of AI in transparent and open consideration of ethics and values. In his advocacy for a Land Ethic, Leopold insisted that we stop thinking about land use as a merely economic problem: “Examine each question in terms of what is ethically and esthetically right, as well as what is economically expedient. A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise.” A Buddhist perspective might argue that a thing is right when it leads to the alleviation of the suffering of all sentient beings.

What will be our criteria for when a ‘thing is right’ in AI ethics? Who will be involved in discussions about this, seeing as all humans, non-humans and ecosystems are stakeholders in a future that could be radically transformed by AI? How do we give a voice to as many of these stakeholders as possible? This is one of the reasons I think that it is important to support forums like this one, where ethics and values are brought to the forefront.
