
Where Faith Meets Code: Brendan Newell on AI, Human Dignity, and the Vatican’s Vision

When Pope Francis first described artificial intelligence (AI) as heralding a “change of age,” he signaled that the Catholic Church must treat this technology not as a passing innovation but as a significant civilizational turning point.[1] His successor, Pope Leo XIV, sharpened that vision by calling for moral “guardrails”—boundaries rooted in divine authority to guide AI toward human dignity and the common good.[2] Together, these metaphors frame the Vatican’s evolving witness: from naming the epochal shift to establishing the ethical boundaries. Against this backdrop, EWTN’s Ave Maria in the Afternoon radio program welcomed Brendan Newell, principal engineer at Microsoft and an Expert at AI and Faith, whose reflections embody the convergence of technical expertise and faith-driven discernment.


A Catholic Witness in Tech

Brendan studied to be a Catholic deacon before entering the technology field. That formation continues to shape his professional life at Microsoft and his engagement in the wider professional community. Within Microsoft, he is active in the Catholic employee network, helping colleagues reflect on how faith informs their professional and ethical responsibilities. For those of us who explore the intersection between AI and Catholicism, his voice carries more than technical expertise: it reflects a life formed by faith and a deep commitment to service. This dual vocation—engineer and disciple of Jesus—was evident throughout the interview. Brendan spoke with precision about AI’s mechanics but always returned to the moral horizon: the human person remains the singular moral agent in this change of age.

The Vatican’s Call for AI Ethics

Brendan was involved in Microsoft’s discussions of AI ethics with the Vatican and with business and government bodies, discussions that culminated in the Rome Call for AI Ethics, issued in February 2020.[3] The original signatories were leaders from the Pontifical Academy for Life, Microsoft, IBM, the United Nations Food and Agriculture Organization, and the Italian Ministry for Technological Innovation. They launched the initiative for responsible AI development with six guiding principles—transparency, inclusion, impartiality, reliability, security, and privacy. These principles translate into concrete practices: transparency means users should know when they’re interacting with AI rather than a human; inclusion ensures AI benefits all people, not just the privileged; impartiality guards against algorithmic bias that might discriminate based on race, gender, or economic status. Since then, the Rome Call has expanded to include governments, universities, and faith traditions worldwide.

For Brendan, these principles are “basic” by design: they set a compass and require governments, corporations, and end users to enact them in practice. His insistence on this point reflects the theological conviction that moral responsibility rests with human agents, not machines. But how do these principles play out when corporate pressures collide with ethical commitments?

Profit or Person? – The Vatican’s Test of Technology

The Vatican’s 2025 doctrinal document Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence sharpened this moral framework by insisting on the centrality of the human person.[4] Profit, Brendan explained, can be a byproduct of innovation but must never be its motive. “Enhancing the human person should always be the driver for technology,” he said.

This conviction echoes his philosophical and moral understanding of the relationship between humanity and technology: technology must serve human flourishing, not diminish it. In his work life, Brendan carries this ethos into conversations about design and deployment, reminding colleagues that technical brilliance without moral grounding risks exploitation.

Deepfakes and the Fragility of Trust

The interview turned to one of the most pressing dangers of AI: deepfake technology. Brendan described scenarios where children are targeted by malicious videos and elderly parents are deceived by fraudulent phone calls mimicking their grandchildren’s voices. His concern was pastoral as much as technical. Speaking as both a father and a Catholic, he recognized that trust—the foundation of human relationships—is fragile and precious. “It’s going to be a hard thing to do to give the person at the other end of the phone or the Zoom call the dignity and respect they’re due as a human,” he cautioned, “when you think you have to ask them to prove that they are human.”

Soul and Simulation: Distinguishing True Intelligence

Brendan distinguished between human intelligence and AI using a Plinko game analogy—a game where a disk drops through a board of pegs and lands in one of several slots at the bottom. AI, he explained, is not truly intelligent but a system of probabilities structured by human design. Just as the Plinko disk doesn’t “choose” where to land but follows patterns determined by peg placement, AI doesn’t reason but produces outputs based on training patterns. “If I trained a Plinko game and moved the pegs around after thousands and thousands of tries,” he explained, “the ping pong ball would always go into a ping pong ball slot. A golf ball would go into a golf ball slot. That is how GPT, a Generative Pretrained Transformer model, works.” The disk bounces, outputs appear, but no moral reasoning occurs.
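Brendan’s analogy can be sketched in a few lines of code. This is a toy illustration, not how GPT is actually implemented: a tiny invented corpus stands in for training data, word-pair counts stand in for the “peg placement,” and the output is simply a weighted random draw. Nothing here understands or reasons; it only follows learned frequencies.

```python
import random
from collections import defaultdict, Counter

# Toy "training data" (an invented example for illustration only).
corpus = "the ball falls into the slot and the ball bounces off the peg".split()

# Count which word follows which -- the analogue of arranging the pegs.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, rng=random):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights, k=1)[0]

# In this corpus, "the" is followed by "ball" twice, "slot" once, "peg" once,
# so the disk most often lands in the "ball" slot -- probability, not choice.
print(next_word("the"))
```

Real language models operate on vastly larger data and richer statistics, but the principle the Plinko image captures is the same: the output is drawn from patterns shaped during training, with no deliberation behind it.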

For people of faith, that distinction matters: true intelligence belongs to the human soul, while AI remains a simulation. Brendan reminded us, “The danger is not machines overtaking humanity, but humans forgetting the difference. In Catholic tradition, human intelligence is a power of the soul. AI can simulate reasoning, but it cannot truly understand.” Brendan’s immersion in the Catholic intellectual tradition sharpens this point: machines cannot reason morally, and the danger lies in humans surrendering moral decision-making to AI tools.

Guardrails at Home: Families and AI

As the conversation closed, Brendan offered practical counsel for parents and grandparents: just as children are gradually introduced to the internet, so too should they be introduced to AI with careful supervision. “Go slowly,” he urged. “Put the proper controls on it. Yes, it will have value, but careful use is necessary.” His voice carries weight given his busy life as a father, husband, and professional.

Discipleship in a Change of Age

For readers of AI and Faith, Brendan’s witness is instructive. He embodies what it means to carry faith formation into the tech world: to see software engineering not as morally neutral but as morally challenging, requiring discernment and guardrails. His public voice on EWTN reminds us that discipleship extends into every sphere of life. Over two decades at Microsoft, he has served as a technical architect in identity and cloud security, strengthening organizations against harm. He has helped pilot security changes for financial institutions in the developing world, preventing attacks that could have devastated small businesses and consumers. He has also partnered with Microsoft Research on projects like leveraging unused TV spectrum to bring broadband access to schools and community centers in underserved regions. These efforts show how Brendan’s faith formation and technical expertise converge: he sees technology as a means of service, a way to protect the vulnerable and expand opportunity.

The Church names the epoch, the Rome Call sets the ethical guardrails, and disciples like Brendan embody them in practice. Together, in this change of age, the measure of innovation must be moral guardrails rooted in divine authority and lived out by disciples who are true champions of the dignity of the human person.


Resources

  1. Pope Francis, Message for the 57th World Day of Peace, “Artificial Intelligence and Peace” (January 1, 2024), https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html 
  2. Pope Leo XIV has consistently called for moral guardrails in AI development. In his message to the Builders AI Forum (November 2025), he urged developers to “cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.” See: Vatican News, “Pope Leo: AI must reflect the design of God the Creator,” November 7, 2025, https://www.vaticannews.va/en/pope/news/2025-11/pope-leo-xiv-message-builders-ai-forum-ethical-technology.html; Catholic News Agency, “Pope Leo XIV calls on Catholics to lead in ethical AI development,” November 7, 2025, https://www.catholicnewsagency.com/news/267675/pope-leo-xiv-calls-on-catholics-to-lead-in-ethical-ai-development
  3. Rome Call for AI Ethics, “Signing of the Rome Call for AI Ethics” (February 28, 2020), https://www.romecall.org/
  4. Pontifical Academy for Life, Antiqua et Nova: Note on the relationship between artificial intelligence and human intelligence (January 28, 2025), https://www.vatican.va/roman_curia/pontifical_academies/acdlife/documents/20250128_antiqua-et-nova_en.html

Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


Dr. Yuriko Ryan

Dr. Yuriko Ryan is a bioethicist-gerontologist with over 20 years of international experience in healthcare ethics and policy research. Based in Vancouver, Canada, she holds a Doctorate in Bioethics from Loyola University Chicago and is a certified Healthcare Ethics Consultant (HEC-C). She is a contributing writer and member of the AI and Faith Editorial Board. She writes on AI ethics, public health ethics, business ethics, and healthcare ethics.
