Interview: Thomas Arnold / Theology, AI, and Human-Robot Interaction

Thomas, for someone doing research at Tufts' Human-Robot Interaction Lab, you have a rather interesting background. Tell us about that.

My academic training is in philosophy, classics, and religious studies, so my research interests have traveled down a bit of a winding road. From philosophy of religion, pragmatism, feminist views on mysticism, and the thought of Ludwig Wittgenstein, I have moved into applied ethics, and from there into human-robot interaction and machine ethics.

What prompted your interest in robots and AI?

It has unfolded not so much as an interest as a comeuppance, what Aeschylus and Sophocles would have regarded as punishment for hubris. When I was studying philosophy as an undergraduate, I was often put off by the AI fans (aka “symbolic systems” majors) who would steer issues of meaning, mind, and language into questions of what a neural network, for instance, could do. I started taking more courses in classics because that field’s approach seemed committed to learning about living, breathing human beings in the thick of history. It dealt with flesh and blood, the person who conceives of an unmoved mover, not just the concept of an unmoved mover alone. I pursued master’s and doctoral work in religious studies and theology along the same trajectory, out of a sense that we live as vulnerable, communal, questing creatures, not as a set of arguments. Ethical theory and trolley problems did not seem to capture the people and communities struggling to live an examined life through multiple layers of symbol, narrative, and ritual.

But when I talked with Tufts HRILab director Matthias Scheutz about the issues they were grappling with in terms of training and designing social robots, I realized that spurning formalism and artificial intelligence was something of a cop-out. If I really believed in the complexities and depths I studied, then they should be able to inform, challenge, and enhance what computer scientists are doing. It is not so much a love for AI and robotics as such that called me to this work, but the very interesting and collaborative ethical work they open up.

Do you have a sense that more and more theologians are drawn towards AI-related issues, or are you still something of an aberration?

I definitely think theologians and scholars of religion are starting to walk around the agora of technology and ethics, in part because their training enables them to recognize appeals to transcendence when they see them. To put it in terms of Jean-Luc Marion, that means discerning both idolatrous and iconic aspects of AI: where, on the one hand, AI hype markets shallow, misleading images of intelligence and improvement, and, on the other hand, where AI methods offer genuine opportunities for socially responsible assistance, as well as spurs to reconsider what life is really about. The “AI and Religion” panels at this year’s AAR meeting were a good demonstration of that, where a range of methods, historical perspectives, and use cases revealed the multiple relationships that AI and the study of religion embody through and with one another. In many ways the interdisciplinary conversations are just getting started.

In your view, why is it important for people of faith generally, and theologians more particularly, to be deeply involved in the ethical AI arena?

Let’s say you arrived at this piece from Twitter — you’re already involved in an algorithmic process by virtue of your eyes alighting on this sentence. If your theology and faith have nothing that bears on how such a system operates, in any fashion, then why not? What do your theology and faith represent if they do not reach down to the level of everyday work and life with other people? There may be no clear answer to that, or there may just be overly vague slogans thrown toward it. But to me AI and robotics are testing grounds for examining where one’s commitments make a difference. So, for instance, I believe there can be a difference between pastoral care and clinical care in the face of birth, death, and suffering. Instead of the usual stereotype of religion helping people escape, a chaplain might help a person discover more about the realities of life than someone seeking to medicate or treat a condition. But instead of just asserting that, I need to work with others to explore the care context — from nurses to administrators to roboticists to social workers — and see what responds to true needs. Being able to facilitate collaborative, informative discussions will be key here.

Theologians and people of faith are not alone in being tested along unsettling lines about why people build machines, what they are for, and what it means if they perform some tasks better than people do. Facing an AI system at the Go board is a different thing from watching a robot at the bedside of your loved one in hospice care. Broad-brush sermonizing about AI as a whole (which currently happens more through tech journalism than in places of worship) is less valuable than mapping the uneven, messy, interactive landscapes that AI is already shaping.

Tell us about your work at the Human-Robot Interaction Lab — particularly the research topics that most interest you.

My work at HRILab has been about helping to do some of the mapping I just mentioned, not just in terms of the social settings for robots but how ethical challenges can form there. This means wading past the sci-fi scenarios of robot takeover and talking about more ordinary dynamics. How should designs of soft robotics take into consideration people’s dispositions toward soft objects — do they invite aggression, offer a deceptive sense of cuteness, creep people out if similar to human skin? What if the robot is “touching” another person — what do we attribute to that robot in terms of social awareness, even unconsciously?

I’m also interested in how people, organizations, and communities can hold AI systems to account. There is a lot of work in “explainable AI” right now that fudges and hand-waves about what a good explanation can be, in part out of a recognition that “black-box” systems (e.g. those of deep learning) do not offer transparent access to why they reach their outputs. These two aspects of ordinary embodiment and social accountability have led me to start researching and working on topics of care work and ethics. What does it mean for a patient with dementia to be in interaction with an automated assistant? How can care providers know enough about the system to correct it, recognize when it is broken, or push back on higher-ups implementing it given its current capabilities? These are a few of the clustered questions on which I’d like to work collaboratively.

How do you think about human-robot interactions? Should we think of such interactions as simply human-machine, or akin to human-animal, or as something quite different altogether?

The importance of human-robot interaction as a field of study is to explore that question with some humility, precision, and savvy. What HRI work already shows is that human beings interact with robots differently than they imagine, or even say, they would. After an episode of Westworld or Black Mirror you might have some ideas of what you would do or feel interacting with a robot, but real interactions prove to be another story. You can swear up and down that a robot is a tool, or a mere device — I can even tell you the thing does not understand anything beyond the commands “stop,” “go,” “turn left,” and “turn right” — but when it simulates crying you might still instinctively offer it comforting words. And it is not just human-human instincts that are in play. The Paro companion robot simulates a seal, an animal rather than a humanoid template, one that delivers warmth and tactile feedback.

But ultimately I’d say human-robot interaction will force us to ask what or who is really “present” at all in a given interaction. When I shout “REPRESENTATIVE!” at an automated voice service, I’m not talking to an animal or a human being. I’m talking to no one, and I may not even be aware of who I am or want to be. It’s important that people ask in what way they are embodying the “human” end of the bargain as they use and relate to technology. That may be a harder thing to face than how to categorize a machine.

I know one of your areas of research has been sex robots. Tell us about that, including what prompted your interest in that arena.

You can’t seriously talk about “robot ethics” without at least broaching what robots represent for and about human sexuality, but ideally you can do it without sensationalism. Our two articles on sex robots were sparked by a rash of opinion pieces about what sex robots were and were not. We wanted to offer a broader and more rigorous survey of people’s moral intuitions and opinions around sex robots, to get a better sense of whether the opinion pieces were gauging wider opinion helpfully. We found telling gender differences around what appropriate uses for sex robots would be and whether interaction with a robot counts as “sex” (or, in the case of a married person, “cheating”); among other upshots, the results showed that the social setting and role for a robot (e.g. a jointly used device for a couple) affects how appropriate it is deemed to be. There was also a broad consensus that a robot should not be made in the form of a child. These empirical results were meant to complement, and perhaps give reason to reconsider, the bases for some of the armchair arguments being thrown around a couple of years ago, for example that child sex robots could keep pedophiles from harming real children. They reinforced the fact that we need to think more about human-human relationships as the backdrop for human-robot interaction, not confine ethical judgment to the human-robot dyad alone. The studies also suggested to us that what is most critical for HRI to study is not sex per se but intimacy: all the ways that robots and people might end up sharing vulnerable physical space.

In the movie Her, the Joaquin Phoenix character falls in love with an AI-based computer operating system. Is that a risk with AI-enabled sex robots? And how close, or not, do you believe we will come to having robots with human-like capabilities and interactivity?

An AI system with the subtlety of voice, sense of humor, psychological acuity, and Krishna-with-Gopis multiplicity of Scarlett Johansson’s character in Her may arrive no sooner than another Spike Jonze film conceit: being able to inhabit John Malkovich’s body. There is so much tacit knowledge involved in relating to one another that fantasies like Her smooth over as if it were all technically achievable. You may think standing next to other people, or knowing when to let someone finish speaking, is utterly trivial as a piece of expertise, but robot design shows otherwise. Falling in love is an extreme. One might start by asking how even basic bonding with robots could risk falling for deception and manipulation.

One thing I would point out is that interactivity may not directly vary with capability. That is, it may be that robotic incapacities allow for heightened interactivity, both positively and negatively. The WIRED story by Emily Dreyfuss, “The Terrible Joy of Yelling at Alexa,” beautifully chronicles how Alexa’s unwavering persona gave an adult couple the impetus to cuss her out after a long day at work (one problem being when their two-year-old picked up the habit). HRILab, working with occupational therapists and Parkinson’s patients, has explored on what terms it might actually relieve a patient of pressure if a robot did not perceive emotional cues from their face (a challenge for Parkinson’s patients experiencing facial masking). What robots won’t be able to do, in other words, will be as important to design with care as what they can do — and for that very reason such incapacity might yield more responsible, effective forms of assistance.
