Today we feature an interview with Haley Griese. Haley is a student at Harvard Divinity School as well as a contributing member of the AI&Faith editorial team. Haley graduated cum laude from Carleton College in 2018 with a Bachelor of Arts in religion. Her academic interests include pragmatism, decolonial theory, moral philosophy, existentialism, and technology ethics. After completing her Master’s degree, Haley hopes to pursue doctoral work in a related field. Haley is also an avid photographer.
How would you describe your experience with AI?
I approach AI from a humanities perspective. My academic background is in religious studies and I am currently pursuing a Master’s at Harvard Divinity School focusing on faith in science, technology, and society. While I do not have experience in the design and production end of AI, I (like most people in the digital age) am nonetheless impacted by its effects. In short, I would describe my attitude toward AI as wary yet hopeful. Along with many other questions, I wonder when AI should replace versus enhance human discernment. How might we consciously employ AI to complement human creativity and perception? At what point does the status of AI as a tool become blurry, and at what cost?
How would you describe your faith background?
As someone who studies faith intellectually, and wrestles with that word’s meaning daily, this is a difficult question to answer! Suffice to say, were I not compelled by the divine, I doubt I could have sustained interest in this field of inquiry for as long as I have.
What led to your interest in the intersection of AI and faith?
I am enthralled by the ways in which AI spins very real but unseen webs in our world today. In The Varieties of Religious Experience, William James characterizes belief in and adjustment to “the reality of the unseen” as a vital component of “the religious attitude of the soul.” James is considered a father of modern psychology, and his work on differentiating sense-data processing from perception in the human mind is just one example that bears strongly on contemporary concerns regarding AI. Figures like James offer lush inroads into the most pressing moral questions of our era, which collide with technological developments such as AI.
I am interested in instances of people having faith in AI, as well as the wisdom and insights that faith traditions, and the thinkers who engage them, might offer regarding contemporary AI applications and use cases. I deliberately put the wisdom of various traditions in conversation with contemporary case studies because I believe that through concerted action and critical intelligence, we can apply AI to further, rather than impede, human flourishing. In my view, attempts to redact faith from these vital conversations often wind up empty-handed.
Why are you involved with AI&F?
AI&F is a community of people with distinct areas of expertise who are interested in the same issues that I find to be some of the most pressing for our present and future. I am honored to be a part of this network, and this kind of collective movement gives me hope for bright future possibilities. AI&F occupies the intersection that animates most of my work, so I am delighted to be involved.
How does AI&F affect your work outside the organization?
My work at AI&F bridges my academic work and the “real” world. Not only does it connect me with individuals who approach the issues that interest me from a variety of perspectives and a diverse wealth of expertise, but it also helps me think through how to communicate knowledge from within academia to the public and vice versa. Academia can seem siloed from the rest of the world, and AI&F helps me dissolve those barriers. I feel ethically responsible to communities beyond the academy, so AI&F helps me learn how to translate between these spaces.
What open problems in AI are you most interested in?
My primary concern is the role of AI (and tech more broadly) in world-building. As I see it, technology is inextricably linked to human futures, so the primary undertaking becomes one of reducing technology’s potential for harm and maximizing its potential to affirm life and further human flourishing. Biased AI, or AI that mechanizes and furthers the oppression of groups of people based on various identities, cannot support such a mission. It is incumbent upon the work of many human hands to make this vision of an equitable digital future a reality.
Recently I wrote an article on neuroprediction, which was published by AI&Faith, so that is an area of concern for me. I am currently working on a paper that thinks through how we distinguish between the fictionality and facticity of online content, thinking particularly of live-streaming (especially of mass shootings), and what role, if any, AI plays in flagging content or making decisions about whether content is fact or fiction. More broadly, I am interested in developing moral philosophy and pragmatic digital ethics that address current technoscientific innovations, such as new AI, and the risks and opportunities such innovations may hold.
Acknowledgements
A big thanks to Haley Griese for taking the time to carry out this interview. Thanks to Emily Wenger for proofreading, editing, and publishing this work.
References
James, William. The Varieties of Religious Experience: A Study in Human Nature. Routledge, 2003.