AI, Formation, and the Illusion of Neutrality

Artificial Intelligence has reignited one of modernity’s oldest hopes: the dream of neutrality. If we could only remove prejudice, emotion, and self-interest from decision-making, perhaps justice would finally flourish. AI appears to offer that possibility. It does not tire. It does not resent. It does not covet. It processes.

And yet, as Large Language Models (LLMs) mature, a sobering realization emerges: AI is not neutral. It reflects the data it is trained on, the objectives it is given, and the values embedded in its design. It is generative—producing outputs shaped by its formation.

That insight should not surprise those who hold a biblical worldview. Scripture has long taught that outputs follow orientation.

Indeed, Luke 6:45—“Out of the abundance of the heart the mouth speaks”—affirms that what is visible reveals what is hidden within. Our actions do not arise spontaneously; they flow from what has already been cultivated in the heart.

AI systems operate in a strikingly similar way.

A large language model receives a prompt and predicts the next most likely token (a word or word fragment), iteratively extending that prediction one token at a time. What appears creative is, in reality, a structured unfolding of patterns already embedded within it. Its responses are not spontaneous acts of imagination but disciplined continuations of its training. The output is faithful to its formation.
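That loop can be sketched in a few lines. The "model" below is only a hand-made table of invented continuation probabilities, not a trained network, but the generation procedure is the same in shape: given the last word, pick the most likely next one, append it, and repeat.

```python
# Toy illustration of iterative next-word prediction.
# The "model" here is a hand-made table of invented probabilities;
# a real LLM learns these patterns from its training data.
model = {
    "the": {"heart": 0.6, "output": 0.4},
    "heart": {"speaks": 0.7, "desires": 0.3},
    "output": {"follows": 1.0},
    "follows": {"formation": 1.0},
}

def generate(prompt_word, max_words=5):
    words = [prompt_word]
    for _ in range(max_words):
        candidates = model.get(words[-1], {})
        if not candidates:
            break
        # Greedy decoding: always take the most probable continuation.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # the heart speaks
```

Nothing in the loop is spontaneous: every word is a disciplined continuation of what the table already contains. Change the table, and the "imagination" changes with it.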

Human beings, too, are generative—but in a morally distinct way. James describes temptation as a process: desire conceives, gives birth to sin, and sin when fully grown produces death (James 1:14–15). The trajectory is incremental, patterned, and predictable. Small inputs shape larger outcomes. Repeated habits form character. Character forms culture.

Unlike technology, however, this generativity is rooted in desire. Human action proceeds from love, longing, and worship; it is morally charged because it emerges from the heart. Technologies generate outputs, but they do not desire. They extend, amplify, and accelerate human intention, yet they possess no inward orientation, no will, no loves. The danger, therefore, is not that machines sin, but that they magnify the moral direction of their makers and users. What begins as inward inclination in the human heart can, through technological mediation, scale into cultural consequence.

This theological anthropology challenges the myth of neutrality in technology. If humans are not neutral creatures—if we are shaped by disordered and ordered loves—then the systems we design will carry those imprints. Technology does not transcend our formation; it extends it.

Technology as a Carrier of Worldview

Every tool formalizes assumptions about what matters.

When we optimize an algorithm for efficiency, we reveal a value judgment. When we prioritize engagement in social media systems, we reveal another. When predictive systems privilege certain outcomes over others, they disclose embedded moral commitments.

Even the definition of “fairness” presupposes a worldview. Does fairness mean equal treatment? Equal outcomes? Maximized utility? Protected vulnerability? These are not engineering problems alone; they are moral questions.

Proverbs teaches, “The way of a fool is right in his own eyes, but a wise man listens to advice” (Prov. 12:15). Wisdom requires recognizing that our perspective is not self-justifying. The same humility must guide AI development. Without it, we risk baptizing our cultural assumptions in the language of objectivity.

The myth of technological neutrality often masks hidden normativity. When we say a system is neutral, we often mean that its values are so deeply embedded that they appear natural. But neutrality is never the absence of values; it is the invisibility of them.

AI makes this unmistakable because it mirrors us back to ourselves. Biased outputs uncover biased datasets. Disproportionate impacts reveal disproportionate histories. The machine does not invent distortion; it amplifies what has already shaped it. In that sense, it becomes a kind of reflection—formed in our image, faithfully reproducing what we have encoded into it.

Formation Before Function

A deeper theological insight sharpens this critique: human beings are not primarily thinking things but loving beings. As James K. A. Smith argues in Desiring the Kingdom, we are shaped less by the ideas we profess and more by the practices that form us—by what we worship, by the habits that train our imagination, and by the cultural liturgies that quietly orient our desires.

Romans 12:2 calls believers not merely to the acquisition of more information, but to transformation through the renewal of the mind. The apostle does not envision cognitive accumulation alone, as though spiritual maturity were the result of increased data. Renewal implies retraining—reshaping the internal architecture from which perception, judgment, and desire flow. It is the reordering of loves, the recalibration of imagination, and the reformation of reason under the lordship of Christ. Christian maturity, therefore, is not informational expansion but dispositional renovation. It is not simply knowing more, but becoming new.

AI development provides a powerful analogy. You cannot correct a misaligned model with a few surface-level prompts. You must retrain it. You must reexamine its objective function. You must audit the dataset. Sometimes the architecture itself must be reconsidered.
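The difference between a surface-level fix and retraining can be made concrete with a deliberately simple sketch (all numbers and names invented). Here the "model" is nothing more than a single learned parameter, the mean of its training data: a prompt-level adjustment shifts one answer, but only retraining on audited data changes the source from which every future answer flows.

```python
# Sketch (invented numbers): the "model" is one learned parameter.
# A prompt-level tweak patches an output; retraining changes the
# parameter that every future output flows from.

def train(dataset):
    """Learn a single parameter: the mean of the training data."""
    return sum(dataset) / len(dataset)

skewed_data = [2, 2, 2, 10]   # one outlier drags the estimate upward
model = train(skewed_data)    # learned parameter: 4.0

def predict(model, adjustment=0.0):
    # A surface-level "prompt" tweak shifts one answer,
    # but the underlying parameter is untouched.
    return model + adjustment

print(predict(model))         # 4.0 — still reflects the skewed data
print(predict(model, -1.5))   # 2.5 — patched once; the model is unchanged

# Retraining on an audited dataset renews the model itself.
audited_data = [2, 2, 2, 2]
model = train(audited_data)
print(predict(model))         # 2.0 — every subsequent output changes
```

The adjustment must be reapplied to every output, forever; the retrained parameter corrects them all at once. That is the analogy: renovation at the source, not cosmetics at the surface.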

Similarly, we cannot achieve justice through procedural tweaks alone if the underlying loves are misdirected. If a culture prizes productivity above dignity, its technologies will reflect that hierarchy. If it exalts autonomy without responsibility, its systems will optimize individual choice even at communal cost.

Jesus’s teaching that “where your treasure is, there your heart will be also” (Matt. 6:21) applies here. What we treasure collectively will shape what we build technologically.

AI reveals that formation precedes function. What we feed a system determines what it generates. And what we repeatedly practice determines what we become.

The Generative Power of Sin—and Grace

The generative nature of AI offers a sobering theological parallel.

Sin rarely appears fully grown. It begins with a desire, a distortion, a small accommodation. Left unchecked, it generates the next most likely step. And then the next. Its logic unfolds incrementally until a predictable outcome emerges.

So too with bias in AI. A skewed dataset generates skewed predictions. Skewed predictions shape real-world decisions. Those decisions reinforce the dataset. The cycle becomes self-reinforcing.
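That cycle can be simulated in a few lines (all quantities invented for illustration). A model approves two groups of applicants out of a fixed budget, in proportion to how often each group appears among past approvals, with a slight optimization pressure toward the majority pattern; each round's approvals then become the next round's training data, and the initial skew compounds.

```python
# Toy simulation of a self-reinforcing bias loop (all numbers invented).
# The "model" approves group A in proportion to A's share of past
# approvals, slightly amplified by optimization pressure (factor 1.1);
# each round's decisions are fed back into the training data.

def approval_rate(training_data):
    """Group A's share of all approved records."""
    return training_data["A"] / (training_data["A"] + training_data["B"])

# Start with a skewed dataset: group A over-represented among approvals.
data = {"A": 70, "B": 30}

for round_num in range(1, 4):
    rate_a = approval_rate(data)
    # 100 approvals per round, split between the groups; the learned
    # rate for A is amplified slightly by the optimization objective.
    approved_a = int(100 * min(1.0, rate_a * 1.1))
    approved_b = 100 - approved_a
    # The round's decisions become tomorrow's training data.
    data = {"A": data["A"] + approved_a, "B": data["B"] + approved_b}
    print(f"round {round_num}: A's share of approvals = {approval_rate(data):.3f}")
```

Starting from a 70/30 split, the share drifts upward round after round. No one decided to deepen the skew; the loop generated it, one most-likely step at a time.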

Both systems—human and artificial—demonstrate the power of compounding formation.

But the gospel also presents a generative counterforce: grace.

Ezekiel records God’s promise, “I will give you a new heart and put a new spirit within you” (Ezek. 36:26). Christian hope is not cosmetic adjustment but foundational renewal. Not merely better prompts, but new orientation. Not incremental optimization alone, but re-creation.

Ethical AI requires more than patchwork corrections. It demands moral imagination grounded in a theological account of the human person as the image-bearer of God (Gen. 1:27). Humanity is therefore called to steward creation under God’s authority. Technology, as cultural labor, belongs within that mandate. Yet dominion is now exercised in a fallen world. Because sin distorts every faculty—reason, desire, and will—our systems inevitably reflect disordered loves. They are not neutral; they extend and scale the moral orientation of their makers.

This raises a harder question: can AI be ethically designed within an economic order that treats information as currency and rewards control, prediction, and behavioral influence? Reformed theology cautions against naïveté. Systems shaped by disordered incentives will tend toward distortion. Common grace may restrain evil and enable relative goods, but it does not eliminate structural pressures toward exploitation.

Justice, therefore, will not arise from computational sophistication alone, nor from market momentum. It must be intentionally pursued, institutionally constrained, and publicly accountable. Ethical AI is possible in a penultimate sense—real yet fragile—only where stewarded power is bounded by moral law and ordered toward human flourishing rather than domination.

Stewardship in a Generative Age

AI does not create the problem of bias; it exposes it.

It forces us to confront the reality that our technologies reflect our loves. They magnify our virtues and our distortions. They scale our commitments and our blind spots.

This places an enormous responsibility on the Church and on Christian thinkers engaged in AI ethics. We cannot retreat from the conversation, nor can we baptize technological optimism uncritically. We are called to steward creation faithfully (Gen. 1:28), which now includes digital systems that shape human lives at scale.

Stewardship requires theological clarity. It requires resisting the illusion that neutrality can be engineered apart from moral formation. It requires communities committed to justice, humility, and love of neighbor (Mic. 6:8) to shape the frameworks guiding AI development.

Ultimately, AI confronts us with a mirror. It shows us that generative systems do not drift toward justice on their own. They move toward whatever they are trained to value.

The pressing question, then, is not whether AI can be neutral.

The deeper question is whether we are being formed by the right loves—and whether those loves are shaping the systems we build.

Because the machine will generate what we give it.

And we will generate what we worship.

In an age of artificial intelligence, Christian faith offers not technological neutrality but moral clarity: transformation begins at the level of the heart, and justice flows from rightly ordered love.

 


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


Dr. Benson Kamary

Ben Kamary, Ph.D., serves as Director of Development Partnerships at 222 Foundation, a nonprofit investing in the next generation of ministry leaders through scholarships, mentoring, and strategic support. He has taught Christian Worldview, Educational Philosophy, and Ethics & Technology at Trinity International University, Kosin University (South Korea), and Evangelia University. He is a Colson Fellow, serves on the Advisory Board for Ethics at the Daystar Center for AI, Modelling and Digital Trust, and is a Board Member at Linked Africa.
