
Interview: The Story of Levi Checketts

Poverty and technology are deeply personal topics for Levi Checketts. Having grown up poor, he began through his education to ask: “why do many poor in the US support things that seem to be against their own interests?” (Question 2). He wanted to research this question but did not know how to address it, so in graduate school he studied theological technology ethics. Later, while teaching business ethics at two schools serving opposite income brackets (one lower- and one higher-income) and seeing how technology is tied into certain business models, he saw how the two topics come together. His new book, Poor Technology: Artificial Intelligence and the Experience of Poverty, is the result, and it serves as a warning about the dangers of Artificial Intelligence (AI) for poor people. Just as “many societies have recognized the importance of women’s rights, ethnic minorities’ rights, religious freedom, environmental regulation, and even increasingly animal rights” (Question 8), Checketts argues for recognizing, listening to, valuing, and including the poor. He proclaims: “Let us not sacrifice the interests of the poor for the shiny promises of AI!” (Question 8). Please enjoy the interview below, in which Checketts dives into all of these topics, and check out his new book.

Interview:

1. Please provide a summary of your book, including how it addresses poverty and Artificial Intelligence (AI)/Artificial General Intelligence (AGI).

I always struggle with how to summarize my ideas, and I often think my reviewers are better at saying this than I am.

Most of all, the book is about the danger that the way we talk about AI generally, and AGI specifically, poses to people who do not fit AGI’s model of the human being. When we say AI is meant to achieve “human level” intelligence or, in the case of AGI, something like consciousness or understanding, we tend to then make assumptions about what “human level” means. The problem is that intelligence, consciousness, and understanding are really hard to nail down. What we normally end up saying, then, is that intelligence is a certain mathematical standard based on the ideas of the programmers. The problem with this is that not everyone thinks like AI programmers. Poor people, who are the subject of my discussion, tend to think in ways very different from computer software engineers. Instead of long-term payoffs, for example, they tend to be more concerned with immediate survival. If that’s your focus, you will come to very different conclusions about what things are “good” and what constitutes an “intelligent” decision.

This division already exists in our society, and a lot of the conversation among people with money or power tends to disparage the poor as “bad with money,” “ignorant,” “unrefined,” or some other negative label. But there have always been people who challenge these conceptions. The problem we face nowadays is that many people are discussing whether AI should be given equal treatment with human beings. The pertinent question here is, “based on what?” and the answer tends to be “based on intelligence.” So this is where the problem really becomes clear: if AI is considered morally important because of its “intelligence,” and that intelligence is modeled after a certain monied view of intelligence, then those who don’t measure up to this model of intelligence will be considered not morally important. This may sound far-fetched, but there are lots of examples from just the past century of people with divergent models of thinking being persecuted, arrested, sterilized, mocked, and so on. (Read the book to learn more.)

2. How do you connect to this book personally through your own Christianity, poverty, and academic career, not necessarily in that order? Tell us your story.

This book is very much autobiographical. I grew up in a poor family. While we were never starving, we never had money for doctors’ visits, for new clothes, for fresh foods, or anything like that. When I went to college—on full financial aid—I was utterly shocked to see how my classmates took spending money for granted. This led me to see a problem inherent in American society: even though we talk about equality of opportunity, it’s nearly impossible for people on the lower economic rungs of society to get ahead, no matter how hard they work.

I’m actually a convert to Catholicism. I was raised in an LDS household, and while I was in college had a conversion experience. Part of what attracted me to Catholicism was the wideness of the church, the diversity of saints and theological positions, and the focus on the person as they are in society. One aspect of Catholicism that became hard for me to square with my home life, however, was the church’s focus on the plight of the poor. The poor have always been an important part of Catholic morality, and specific developments like Catholic Social Teaching and Liberation Theology have placed even more emphasis on the needs and perspectives of the poor.

A crucial moment, however, came in my senior year of college, when I took a class on the Catholic Worker movement. For people like Dorothy Day, Peter Maurin, the Berrigan brothers, and so on, the working poor very clearly benefited from unions, welfare programs, and regulations and protections. But my family tended to be against these things. So this posed a question which nagged me for many years—why do many poor in the US support things that seem to be against their own interests?

Fast-forwarding a bit, when I went on to graduate studies, I had to bracket the issue of poverty, as I couldn’t quite figure out how to address it. Catholic theology tends to talk about the poor, not with the poor. So I focused instead on a topic that was, at the time, quite undeveloped, i.e., theological technology ethics. At the same time, as I was finishing up my PhD work, I ended up teaching business ethics at a school where many of the students were lower-income, and at another school where the students were higher-income. The contrast between these populations, and the way that technologies are tied into certain business models, helped me see where the question of the poor’s interests becomes relevant for AI.

A big push for this also came from my friend Sophia Park, who was helping me apply for teaching positions and pointed out that I needed to make my own voice clearer. I had gotten used to writing the way academics tend to write, a result of years of study. But this isn’t how I related to my students, and it doesn’t reflect my own background. As I began to reflect on this, I realized that education itself is designed to separate someone from the experience of thinking the way the poor often do, which is why even liberation theology is about the poor, not from their perspective.

3. What is so powerful about stories?

Stories are really how we make sense of the world. We tell stories to relate to others, to pass on wisdom, to teach morality, to articulate a worldview, and so on. As children, we consume stories—from picture books to tv shows to children’s sermons and lessons from our parents. As adults, we tell each other stories in social settings like bars, in motivational speeches, in job interviews, to share our feelings—generally to connect. Basically, story is the whole way we relate to the world. You can even see this in the discussions of AI—its potential is articulated through stories about what we want and its danger through stories about what we don’t want. Every time we say “here’s what AI will do…” we tell a story.

4. How can and has AI “hurt” the poor?

AI has already hurt the poor in specific, concrete ways. A number of scholars, such as Virginia Eubanks, Cathy O’Neil, Ruha Benjamin, and Daniel Greene, have pointed out how unwarranted trust in algorithmic decision-making tends to ignore the human element involved in even the most bureaucratic decision-making. Criminal justice is an area that a lot of scholars have addressed already: when you let algorithms predict where crimes are likely to occur or who is likely to be a re-offender, you tend to reinscribe biases from the past. And then, to trust the algorithmic process is to pretend there isn’t bias built in.

Why does this kind of thing occur? A key part of it is that poor people generally (and poor people of color especially) are disvalued in a society that ties economic success to moral goodness. So we have always built in ways to keep the poor at arm’s length, but human beings are capable of compassion, and in these systems that compassion is considered a liability.

An easy case to see is how much more heavily “welfare fraud” is prosecuted than wage theft. American society tends to treat the former as far worse than the latter. So to avoid welfare fraud, we’re fine with automating welfare policies, even if that means harming the poor. Automated systems still have flaws, which they cannot themselves recognize, and they cannot be appealed to the way that a social worker or office bureaucrat can be.

Even though I do talk a lot about how the language surrounding AI as “intelligent” is dangerous to the poor, the worry is not so much that people will start treating the poor as objects the way we treat old computers. Rather, the worry is that by framing what is morally valuable as what AI does, it’s easier and easier to justify policies and programs that disadvantage and isolate the poor as disvalued.

5. How can and has AI “helped” the poor?

It’s actually harder to articulate ways AI has helped the poor. One reason is that AI development is conducted within for-profit companies that want a high ROI.

The EU Horizons Programme, however, has funded several projects nominally designed to help the poor, aimed at things like improving factory safety, crop yields, and aid systems for the elderly. If you consider AI to be just a tool, not a “person,” and you focus on specific outcomes that can be used to empower the poor, it can be truly beneficial. The problem is that this will likely not be tremendously profitable, as it shifts resources from the hands of the rich to the hands of the poor.

In the book, I articulate a hypothetical scenario of creating a genuinely “poor” AI, where the poor would be involved in every step of the process from funding to research to training and implementation. My only reservation in this is that it still suggests the answer to the problem of poverty is mathematical rather than motivational—we have the resources we need to address poverty, just not the real interest in doing so.

6. How can we live a full life with AI? You mention religion, poverty, and other things in your book. Please elaborate. Do you think AGI will ever happen? Why or why not?

I’m going to combine these two questions because my answers to them are related. I don’t think we’ll really achieve AGI, for a couple of reasons. First, I think human intelligence isn’t just mathematical. Aesthetics, morality, care, wonder, and wisdom are not quantifiable. What’s more, differences in language, culture, and historical context will yield very different understandings of the world. Even automated translation programs make this clear—sometimes the sense of a word or phrase differs between languages in ways that cannot be rendered by a computer.

So at best, I think we’ll end up with something that people may treat like an intelligent human but that will not hold up as one under any deeper reflection. We already have this with LLMs and chatbots.

Assuming we can get used to this—and some AI researchers who deny the possibility or goal of AGI are already going this way—we can live with AI as a useful tool. I remember when DeepMind announced AlphaFold, I thought it was a deeply important moment for AI, much more so than AlphaGo’s defeat of Sedol Lee. The reason is that what AI can and should be doing for us is those technically difficult tasks which are tedious but necessary. Automating parts of manufacturing, mathematical synthesis, data analysis, etc., are great ways AI is improving our lives.

What a lot of people have already articulated, and I will reiterate, is that AI can be very good for freeing us up to do more human, more important work. The application of AI to the arts is such a bad idea. Likewise relying on it to do the religious work for us, or treating it as a religious object, is dangerous. But it can be useful for helping us appreciate art or deepen our religious experience. The issue for this, as for all cases in which we want AI to genuinely improve humanity, is how to separate its usefulness from the idea that we must only ever be involved in doing “useful” activities.

7. What are the implications of AGI either way, for religion, society, economics, etc.?

David Gunkel has written some interesting thoughts about this. The way we define “person” has different resonance in different spheres of society. We can consider corporations “legal persons,” though almost nobody would consider them moral persons, for example.

I think the discussion of AGI is ultimately about treating AI as persons. From an economic perspective, I’m deeply skeptical about any motivations to do this. Money is an important tool for us organic beings because we exchange it for things we need and for things that bring us pleasure, which is tied into our biology. It’s not clear to me why AI would need a money economy, as its needs are energy and hardware maintenance, which it could potentially provide for on its own. Beyond this, what would be the purpose of acquiring wealth?

Legally this poses a lot of interesting questions, which many people have noted. If an AGI commits a crime, how do we punish it, for example? Human punishments have always been connected to the kind of beings we are—physical torments, executions, or even imprisonment which deprives us of valuable years. How do we deal with questions of ownership of an AGI, or familial relationships? We’ll need entirely new legal frameworks to address the specific rights and responsibilities that befall entirely artificial beings.

Religiously, this question is already getting a lot of attention, as AI and Faith itself indicates. Different religions will, as they already do, have different attitudes, and even single religious traditions will see a lot of diversity. I would expect Christianity will probably never fully treat AGI as a person. Catholicism especially, with its emphasis on the physical experience of our mortality, will probably never fully integrate AGI into its religious system.

Culturally, this will be an interesting question, though. I’m sure there will be many skeptics when AGI is first proclaimed, and if it gains cultural approval, I think there will still be many holdouts. There may arise a sort of racism toward AGI. There will be people who only want to relate to AGI.

Ultimately, my concern isn’t about any specific area here but about the broader meaning of all this. Noreen Herzfeld, for example, has highlighted the dangers of considering an AI to be something we relate to—it gets wrong what it means to relate to another. Likewise, if we ground our arguments for why AI deserves recognition as a person on something like its “intelligence,” we’ll be enshrining a specific notion of intelligence as the condition for being a person, which can lead to others being excluded. But this leads to your last question.

8. What’s a hope you have for the world based on this?

I don’t want to sound like only a techno-pessimist. I think there’s a lot of potential in AI. Rather, my concern lies with the motivations of the people promoting it most strongly. When we speak derisively of evil done in the world as “stupid” or “crazy,” we’re asserting that neurotypicalism and our models of intelligence are “good.” If this is contrasted with all the optimism for AI and its promise of “superintelligence,” a disconcerting picture forms.

Post-human philosophers like Francesca Ferrando have argued that we need to expand our moral circle past the human to include non-human animals, the earth itself, and even AI. I agree with this in part. I think it should be our goal to expand our moral obligation beyond even our species to include things like AI. My worry is that most of us don’t have the energy to have this much concern—we often care about other species instead of the marginalized in our societies, or AI instead of the earth. Pope Francis’s Laudato Si’ is deeply prophetic for just this reason—he notes how environmental care is deeply intertwined with care for the poor! If this can be our moral orientation, I have great hope for humanity. But it will require massive changes to our motivations, philosophies that tie together our fate with that of many invisible Others, concerted efforts to overturn zero-sum economic attitudes, and patient, self-reflective, persistent activism to integrate all of our diverse interests into a single aim. It’s a big challenge, but I think the way many societies have recognized the importance of women’s rights, ethnic minorities’ rights, religious freedom, environmental regulation, and even increasingly animal rights, suggests that we are moving in this direction. In this context, I want to say my aim is to make sure we don’t get too distracted on this path. Let us not sacrifice the interests of the poor for the shiny promises of AI!

