Overview
For today’s #WhoWeAreWednesday we feature an interview with Dr. Matthew Dickerson. Dickerson is a professor of computer science at Middlebury College in Vermont and an advisor for AI&Faith. He holds a B.A. from Dartmouth College (1985) and a PhD from Cornell University (1989). In addition to his book The Mind and the Machine: What It Means to Be Human and Why It Matters [1], he has co-authored several books on the writings of J.R.R. Tolkien and C.S. Lewis that explore moral, theological, and environmental themes in their work. He has also published several books of non-fiction nature and environmental writing.
How would you describe your experience with AI?
Although I have a PhD in computer science and have been teaching college-level computer science for more than 30 years, AI is not my area of expertise. Within computer science my primary fields of interest are algorithms and data structures (especially for geo-spatial computing), agent-based modeling (with applications to ecology), and computer science education. I have never taught an AI class or done research in AI. When it comes to the technical side of how to design or evaluate AI strategies or results, I am perhaps just slightly more knowledgeable than an average bystander.
I have done a fair bit of research, writing, and teaching on the philosophical side, especially in the area of philosophy of mind. Although my PhD is in computer science, I also did graduate work in medieval literature. I have written several books about literature, and I bring that background in conjunction with my computer science background to my work in philosophy of mind. I am interested in questions such as: “Can computers reason?” and “Are computers creative?” I believe these are fundamentally philosophical and not technological questions. They are questions that cannot be answered by pointing to specific computing systems (such as ChatGPT), nor can they be answered via neuroscience. Much of my work resulted in my book The Mind and the Machine [1]. This book offers a response to the physicalist assumptions of several contemporary thinkers including engineer Raymond Kurzweil, philosopher Daniel Dennett, and biologist Richard Dawkins*.
* The physicalist philosophy holds that human beings are essentially complex computing machines. Kurzweil [2], Dennett [3], and Dawkins [4] base this assumption primarily on observed physiological, biochemical, and evolutionary phenomena.
How would you describe your faith background?
I am a Christian. That is, I seek to be a follower of Christ. I believe that the universe is not a result of blind, purposeless chance, but rather the result of a loving, creative act of a divine being (God) who is self-existent, loving, omnipresent, omniscient, and omnipotent. I believe that the God who formed the cosmos and who is present throughout it has also entered it in incarnate form in the person of Jesus. Of course, those are just statements of belief. My understanding of Christianity is as a faith to be lived in love, humility, gentleness, kindness, and shalom with and toward fellow humans and the created order, and not just reduced to a set of axioms.
My spiritual growth—that is, my desire to follow Christ in word, deed, and thought—is a central part of my life. My active reading list nearly always includes something intended to help me grow in Christian faith, both understood and lived. As I write this, for example, I have been reading a book by Steve Garber about vocation titled The Seamless Life: A Tapestry of Love and Learning, Worship and Work [5]. This book challenges me on how to approach my own vocational work as a computer scientist and a writer.
What led to your interest in the intersection of AI and faith?
My initial interest came out of philosophical questions about whether physicalism, materialism, and naturalism are correct views of the world, or whether there is a reality behind the material universe. Is there a spiritual reality as well as a physical one? Are humans simply complex biochemical robots, beings philosophically reducible to machines? Can humans reason with any normative value? Are humans truly creative? These are central questions in my book The Mind and the Machine [1].
I think there are also very important ethical questions regarding the uses of computers, AI, and technologies like machine learning. How we answer those ethical questions depends greatly on what might be called theological, philosophical, religious, or faith-based assumptions. We cannot answer ethical questions in a vacuum without some a priori set of values, whether those values are cultural or religious or have some other source.
Why are you involved with AI&F?
Mostly I have been an observer who has participated in meetings. As noted above, my computer science background is not in AI. While over the past several years I have increasingly incorporated some sections on ethics and responsible computing in several of my undergraduate computer science classes, I lack the expertise in AI or the formal training in ethics to contribute a lot to discussions in those areas (though I have done considerable writing, reading, and teaching in environmental ethics).
How does AI&F affect your work outside the organization?
Part of my increasing effort to incorporate questions of ethics and responsible computing into my computer science classes has come from my participation in the AI&F community. These questions become more and more pressing as our technologies continue to rapidly advance. It has been very helpful to interact with folks addressing these questions from faith traditions or faith assumptions beyond just the materialist faith, which seems to dominate many segments of academia.
What open problems in AI are you most interested in?
I am very interested in hearing other voices addressing various ethical questions in AI. For example, I have been interested in the environmental implications of computing. This includes the upstream demand for resources, the downstream disposal of electronics, and the current energy use of computing technologies. The field of big data continues to grow rapidly in an era of climate change and of environmental devastation from resource extraction, which is often driven by computing technologies and their need for precious metals. I think these are issues that need to be addressed and discussed, but they often get very little attention from within the field of computer science. Even raising these issues makes many computer scientists very uncomfortable with our potential culpability. The reliance of AI on big data exacerbates all these issues [6].
The arrival of ChatGPT is also likely to come with many ramifications. I think our culture, and my institution, will face many ethical questions about how ChatGPT and other similar technologies are incorporated into the classroom. As an instructor of computer science, I want to encourage my students to think carefully about what technologies they help develop. This goes beyond just letting job markets and pay scales determine where they will work and how they will invest their time and skills.
References
[1] Dickerson, Matthew. “The Mind and the Machine: What It Means to Be Human and Why It Matters.” Cascade, 2017.
[2] Kurzweil, Ray. “How to Create a Mind: The Secret of Human Thought Revealed.” Penguin, 2013.
[3] Dennett, Daniel C. “Consciousness Explained.” Penguin UK, 1993.
[4] Dawkins, Richard. “The Selfish Gene.” Oxford University Press, 1976.
[5] Garber, Steven. “The Seamless Life: A Tapestry of Love and Learning, Worship and Work.” InterVarsity Press, 2020.
[6] García-Martín, Eva, Crefeda Faviola Rodrigues, Graham Riley, and Håkan Grahn. “Estimation of Energy Consumption in Machine Learning.” Journal of Parallel and Distributed Computing 134 (2019): 75-88.
Acknowledgments
A big thanks to Dr. Matthew Dickerson for taking the time to carry out this interview. Thanks to Emily Wenger and Marcus Schwarting for proofreading, editing, and publishing this work.