The Impossibility of Neutral Wisdom: Artificial Intelligence and the Ancient Problem of Phronesis

“It is not possible to be good in the strict sense without practical wisdom, nor practically wise without moral virtue.”

Aristotle, Nicomachean Ethics, VI.13

1. The Question That Technology Cannot Escape

There is a persistent myth in our culture that technology is philosophically neutral. A hammer, the argument goes, is neither good nor evil. It simply is. The same logic is now applied to artificial intelligence: it is a tool, neither moral nor immoral, and its character depends entirely on the person who wields it.

This is a comfortable position, but it does not survive even modest philosophical scrutiny. A hammer may be value-neutral in the sense that it expresses no propositional claims about the world. But an artificial intelligence does. Every time a person asks an AI system for advice on how to raise a child, whether to leave a marriage, how to respond to a moral dilemma at work, or what it means to live a good life, the system must draw on some framework of values to produce an answer. There is no view from nowhere. There is no counsel without conviction.

The question, then, is not whether AI systems embed philosophy. They do, necessarily. The question is whether they are honest about which philosophy they embed, or whether they disguise particular commitments as universal objectivity. This is not a new problem. It is, in fact, one of the oldest problems in Western thought, and Aristotle gave it a name twenty-three centuries ago: phronesis.

2. Phronesis: The Wisdom That Cannot Be Abstracted

Aristotle distinguished between several forms of intellectual virtue. Episteme is scientific knowledge: universal, demonstrable, concerned with things that cannot be otherwise. Techne is craft knowledge: the skill of making or producing. Sophia is theoretical wisdom: the contemplation of first principles and eternal truths. But phronesis, practical wisdom, occupies a category of its own. It is the capacity to deliberate well about what is good and beneficial for human life, not in the abstract, but in particular situations with particular stakes.

The crucial feature of phronesis is that it cannot be separated from the moral character of the person exercising it. You cannot be practically wise, Aristotle argues, without being virtuous. And you cannot be fully virtuous without practical wisdom. The two are entangled.

A crucial clarification follows from this: AI systems do not possess phronesis. They cannot. Phronesis requires embodiment, moral formation, and the kind of lived experience that no machine can replicate. An AI system has no character in the Aristotelian sense. It does not deliberate. It does not care about outcomes. But this is precisely what makes the current situation so philosophically precarious: AI systems are routinely asked to operate in the domain of phronesis, to offer counsel on particular moral situations, without possessing any of the qualities that Aristotle regarded as prerequisites for doing so wisely. The question, then, is not whether AI has practical wisdom. It does not. The question is what happens when a tool without practical wisdom is consulted as though it does, and whose embedded philosophical commitments shape the counsel it gives regardless.

3. The Concealed Philosophy of ‘Neutral’ AI

When a mainstream AI assistant is asked whether a person should forgive someone who has wronged them, it does not respond with silence. It produces an answer. That answer will draw, whether explicitly or not, on some tradition of moral reasoning. It may lean toward therapeutic frameworks that emphasise emotional well-being and self-actualisation. It may default to a broadly utilitarian calculus of harm reduction. It may invoke the language of rights, autonomy, and consent that characterises post-Enlightenment liberal ethics.

It is true that a user can instruct an AI to respond from a specific perspective: Thomist, Buddhist, utilitarian, Reformed. Most systems will comply. But this capability, far from resolving the problem, actually deepens it. The default mode, the voice the system adopts when no tradition is specified, is the one that shapes the vast majority of interactions. Most users never think to specify a framework, and why would they? The system presents its default as though it were simply reasonable. Moreover, even when a user does select a tradition, the moral weight of acting on that counsel cannot be delegated to the machine. The responsibility remains with the person who acts. What the user can reasonably expect, however, is honesty about the tradition from which the counsel is drawn, and this is exactly what the default mode withholds.

What mainstream AI will almost never do is acknowledge that these are particular philosophical traditions with particular histories, particular assumptions, and particular blind spots.

This is not neutrality. It is a specific philosophical position that has been so thoroughly absorbed into the assumptions of secular Western culture that it has become invisible to itself. The philosopher Charles Taylor diagnosed this phenomenon in “A Secular Age” (2007), arguing that the modern West does not lack a moral framework but rather operates within one so dominant that it mistakes itself for the absence of framework altogether. The AI systems trained on the outputs of this culture inherit its blind spot.

Consider: if a person asks an AI whether they should prioritise career advancement or family obligations, and the AI consistently treats this as a matter of personal preference rather than moral substance, that is not neutrality. It is a commitment to a particular view of the relationship between the individual and their duties, one that privileges autonomy over obligation, self-determination over tradition, and subjective satisfaction over inherited purpose. A Confucian, a Thomist, a Stoic, or a Buddhist would each find this framing deeply tendentious, not because it is wrong (though they might also think that), but because it presents a contested philosophical position as though it were common sense.

4. The Case for Explicit Commitment

If the foregoing analysis is correct, and I believe it is, then the most philosophically honest form of AI-assisted counsel is one that declares its commitments openly rather than concealing them beneath a veneer of neutrality. This is not a popular position in the technology industry, where neutrality is treated as a design virtue and philosophical commitment as a form of bias. But the opposite is closer to the truth: it is the refusal to commit that constitutes the deeper bias, because it smuggles in assumptions without accountability.

We are beginning to see experiments in this direction. There are AI systems being built on explicitly Buddhist principles of mindfulness and compassion. There are systems grounded in Stoic philosophy. There are Christian AI assistants, most of which emerge from broadly evangelical or non-denominational Protestant contexts, that attempt to offer practical guidance shaped by Scripture rather than by the unstated defaults of secular modernity. The diversity within Christianity itself, from Orthodox to Catholic to the wide spectrum of Protestant thought, means that any such project must be honest about which strand of the tradition is actually operative rather than claiming to speak for Christianity as a whole. But the ambition of grounding AI counsel in a declared tradition is philosophically sound. One may agree or disagree with any of these commitments. But the intellectual honesty of declaring them openly is, I would argue, superior to the pretence of having no commitments at all.

The analogy to human counsel is instructive. When a person seeks advice from a pastor, a rabbi, a Stoic mentor, or a secular therapist, they know, at least in broad terms, what moral framework will inform the guidance they receive. This transparency is a feature, not a limitation. It allows the person seeking counsel to evaluate the advice against their own convictions, to accept what resonates and push back on what does not. It treats them as an adult capable of critical engagement. By contrast, a system that disguises its philosophical orientation as mere objectivity deprives the user of the very information they need to evaluate the counsel they are receiving.

5. The Danger of Invisible Authorities

There is a broader cultural concern here that extends well beyond artificial intelligence. We live in an era of what the sociologist Zygmunt Bauman called “liquid modernity,” in which the traditional structures that once provided moral orientation (religion, community, shared narrative) have been dissolved without being replaced by anything comparably robust. Into this vacuum step institutions and technologies that offer guidance while denying that they occupy any authoritative position. Social media algorithms curate our moral environment while claiming to merely reflect our preferences. News organisations frame contested narratives while insisting they are simply reporting facts. And AI systems dispense practical wisdom while maintaining that they hold no philosophical position.

The pattern is the same in each case: authority exercised without acknowledgement. And the danger is not that these systems influence us (influence is inevitable in any social arrangement), but that they influence us in ways we cannot examine, because the influence denies its own existence. Nietzsche warned that the most dangerous forms of power are those that present themselves as nature rather than as choice. An AI system that presents a particular moral framework as the neutral default is doing precisely this.

6. Pluralism as Philosophical Maturity

None of this is an argument against pluralism. Quite the opposite. A genuine pluralism, one that takes seriously the existence of multiple competing visions of the good life, requires that those visions be articulated clearly enough to be evaluated and debated. A culture in which everyone pretends to hold no particular view is not pluralistic; it is confused. It has not transcended the ancient arguments about how to live; it has merely lost the vocabulary to conduct them.

Artificial intelligence, properly conceived, could be an extraordinary tool for genuine pluralism. Imagine a landscape in which AI systems built on Christian, Buddhist, Stoic, Islamic, secular humanist, and indigenous philosophical traditions each offer their distinctive forms of practical wisdom openly and without apology. A person navigating a difficult decision could consult multiple frameworks, compare their counsel, and arrive at a more considered judgment than any single tradition could provide alone. This would be a richer form of intellectual life than our current arrangement, in which one particular tradition (broadly secular, broadly liberal, broadly therapeutic) masquerades as the absence of tradition.

7. Conclusion: Wisdom Requires a Place to Stand

Archimedes reportedly said that given a lever and a place to stand, he could move the world. Practical wisdom requires something similar: a place to stand, a set of commitments from which deliberation can proceed. Aristotle understood this. The Stoics understood this. Every serious philosophical and religious tradition has understood this. The peculiar modern belief that wisdom can be dispensed from no particular vantage point is not sophistication; it is a failure of self-awareness.

As artificial intelligence becomes an increasingly significant source of counsel in ordinary human life, the question of its philosophical foundations will only grow more pressing. The answer is not to insist on a single correct framework, nor to pretend that no framework is operative. The answer is transparency: to build systems that know what they believe, say what they believe, and trust their users to engage critically with the result. That is the beginning of honest wisdom, whether human or artificial.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

 


Joshua-Paul Rebelo

Josh is a technologist based in the UK with a particular interest in how AI systems handle moral reasoning. He is the founder of Son of God AI, a Christian AI assistant, and writes on AI, philosophy, and faith.
