
An Interview with Andrew DeBerry: Teaching AI to Do Good

After connecting through AI & Faith, Andrew DeBerry and I hit it off immediately. We have a lot in common – our respective backgrounds in the defense community, our shared Christian faith, our current work in the social media world, and our interest in mitigating potential harms of AI technology. I wanted to learn more about Andrew’s perspective on all these things, but especially on his work with AI. He graciously fit me into his busy schedule. You can read an edited transcript of our conversation below.

About Andrew DeBerry

Andrew DeBerry has 16 years of experience leading new artificial intelligence initiatives and launching platforms for mixed reality at Microsoft, election security in Silicon Valley, and health innovation at Amazon Web Services. He currently leads a team for responsible AI and privacy at a social media company and serves as a Reservist in U.S. Cyber Command, introducing AI principles for the DoD. Andrew has a degree in Aerospace Engineering from Notre Dame with minors in Public Policy, Arabic, and Catholic Social Teaching. He earned a Master's in National Security Studies while a U.S. Air Force intelligence officer and completed an MBA in Strategic Management and an MA in International Studies/Arabic from the Wharton School's Lauder Institute.

Emily: How did you end up in the AI & fairness space?

Andrew DeBerry: I started teaching myself about AI through Coursera during my work commute and at meet-up workshops, then volunteered for projects involving AI in healthcare and oncology. There I discovered the emerging area of privacy-preserving machine learning, which for me combined an interest in AI, community, and faith. I have a minor in Catholic Social Teaching from my undergrad, so this area, and eventually my current role, was a compelling, unique match. The work has not disappointed. Privacy-preserving machine learning is core to the future of how we connect to each other and unlock social value.

E: What makes the mission of AI & Faith compelling to you?

DeBerry: I think that having our values and our faith guide and inform the future and innovation is critical. So, AI & Faith's mission is both necessary and obvious. The values of human dignity and the importance of faith should continue to have a larger influence in the world.

E: In your day-to-day role, how does your faith perspective influence your decisions and thoughts about your work?

DeBerry: Catholic Social Teaching holds certain tangible, practical principles like the "preferential option for the poor," "solidarity," or the "call to community & participation." While these Catholic-specific phrases may not be said explicitly, the work we do in privacy-preserving machine learning very much aligns with those principles. How do we ensure that the products platforms use to earn revenue are inclusive and fight against bias? How do we ensure equal treatment for vulnerable communities?

It's complicated. Thought leaders at public tech and media companies acknowledge this isn't a conversation for companies to have alone, but it's great to be part of one of the leading organizations across all sectors (public, private, social) trying to build out this space.

E: You mentioned this idea of "equal treatment of subgroups" as a key problem in machine learning fairness. Would you say that is the biggest challenge facing the community with respect to this principle of fairness?

DeBerry: Specifically for fairness, the short answer is yes. A broader second challenge is having algorithms that are inclusive and do not replicate human bias. A third is going outside the US and Europe and including developing nations in the data that are being used to make algorithms that will have global societal implications. And then the last is having AI that adheres to a set of principles. There is a broad move for organizations to have AI principles, from Microsoft to the Vatican to the Australian government. Ensuring that the principles keep pace with what's possible technically is another matter, but it is all-important.

E: You talked about how your faith influences your perspective on AI and specifically fairness. What do you wish other people from your faith background knew or did with respect to this topic?

DeBerry: One practical thing is that I want folks to be able to see the potential AI has to do good. That's why I'm enthusiastic about privacy-preserving machine learning and responsible AI. If we can get privacy-preserving machine learning techniques adopted more broadly, we suddenly have the ability to create significantly larger datasets. This unlocks a very powerful way to build algorithms for tackling things like disease, climate change, or social issues, because we can unlock sensitive data from different silos.
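To make the "unlocking silos" idea concrete: one common building block in privacy-preserving machine learning is differential privacy, where each data holder releases statistics with calibrated noise instead of raw records. The sketch below is purely illustrative and is not a description of any system discussed in the interview; the silo data, predicate, and epsilon values are all assumptions.

```python
import math
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count.

    Adds Laplace noise with scale = sensitivity / epsilon, where the
    sensitivity of a counting query is 1 (adding or removing one record
    changes the count by at most 1).
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Two hypothetical "silos" (e.g., clinics) each release only a noisy
# count; no raw patient record ever leaves its silo, yet the combined
# estimate becomes more useful as more silos contribute.
silo_a = [{"age": a} for a in (34, 61, 45, 70)]
silo_b = [{"age": a} for a in (52, 67, 29)]
over_50 = lambda r: r["age"] > 50
estimate = dp_count(silo_a, over_50) + dp_count(silo_b, over_50)
```

A smaller epsilon means more noise and stronger privacy for individuals; a larger epsilon means a more accurate aggregate. Real deployments combine this with other techniques (federated learning, secure aggregation), but the trade-off shown here is the core idea.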

Second, technology augments human intention, and because of that, I want our faith to be very much in the ring leading these conversations. At least in the Catholic Church, there's this unfortunate culture of being reactive: first critiquing new innovation, then being a laggard adopter of it. That's not the most effective way for any Christian to shape the future with these tools. I want us to enter the ring in a very strong, proactive way, leading with prayer, not standing aside and poking at it from a distance after it's already happened without us. I want us to enter the conversation in a proactive, hands-on, engaged manner and bring a prayer into the algorithm.

E: I like that phrase. Thank you. Do you have any resources, or specifically books you've read recently on this subject, that you'd recommend as helpful to others?

DeBerry: The current body of research on inclusive AI. It's not a book, but I do think it warrants much more attention than it gets. This research calls out AI for not being inclusive of developing countries or the global south. The people who are building tech need to be diverse and inclusive. If we rebuild a world virtually without being inclusive, that's going to be fundamentally off. How we treat each other in our direct human relationships will have a viscerally real impact on how the AI itself is built.

E: Absolutely. Anything else you wanted to say?

DeBerry: I’ll just keep foot-stomping this idea of the preferential option for the poor and being inclusive. More groups should have the ability to influence and shape the future. That may sound like a very paradoxical and ironic way of being cutting edge, but that’s the challenge and that’s why we need prayer. 

E: Thank you so much for your time!

Read more AI and Faith Interviews here.

Emily Wenger

Emily Wenger is pursuing a PhD in computer science at the University of Chicago with a particular emphasis on machine learning and privacy. Her research explores the limitations, vulnerabilities, and privacy implications of neural networks. Emily worked for two years as a mathematician at the US Department of Defense before beginning her PhD studies in 2018. She graduated from Wheaton College in 2016 with a degree in mathematics and physics.
