Dialogue about AI and work often centers on a hypothetical future. At one extreme, AI will be able to automate any economically useful task, which may be positive or negative depending on your philosophical perspective. At the other extreme, jobs will be the least of our concerns because a powerful AI will eliminate humans. However, AI’s impact on work is not theoretical; it can be felt now. I am not talking about layoffs but rather about hype.
Concerning layoffs, “AI washing” is prevalent: companies falsely attribute job cuts to their savvy with automation. The true reasons for many recent layoffs are more pedestrian: an economic slowdown, tax code changes, and fallout from overhiring during the Covid-19 pandemic. “AI washing” also extends to which efforts get labeled “AI.” Many purported AI projects are fully achievable, and would likely perform better, without large language models (LLMs) and other generative AI technologies. Likewise, real-world AI systems require substantial supporting investments in software architecture, computing infrastructure, and data. Such efforts would be more truthfully labeled “technology projects” rather than simply “AI” (see appendix, point 1).
As these examples demonstrate, hype has a cost. It distracts us from the reflection required for deep understanding, which is needed to use AI with purpose, ethics, and sustainability. In fact, we must think critically and even philosophically about any tool we want to use effectively.
AI Hype and Software Engineering
The hype around LLMs has been substantial and not always rooted in reality. A popular narrative is that AI will make software engineering obsolete. Famously, Anthropic CEO Dario Amodei claimed in March 2025 that 90% of all code would be AI generated within six months. While this outcome might have come true in some contexts, his prediction glosses over the important nuances of having AI work on large legacy codebases, with messy data, and through organizational hurdles (see appendix, point 2). These kinds of statements leave too much unsaid and fail to engage with the details that matter more than the headline.
AI tooling is certainly helpful for producing large volumes of functional code. Importantly, however, generating code is only part of software development. The biggest value lies in knowing what to develop, why it should be created, how it should be maintained, and how it fits into a larger and often complicated context. Short-term productivity does not automatically equate to long-term value. Recent work by Microsoft is an example of the conflation of these two concepts. The company has promoted how heavily it employs AI coding assistance to enhance productivity; however, it botched its most recent Windows release. Creating code more efficiently did not produce a better product because accelerated output is not the be-all and end-all. More productivity often leads to more work that is not backed by a clear why. In Microsoft’s case, its dogged focus on putting AI into every product crevice has certainly not delighted users.
Clean-and-neat hype narratives omit important parts of the story: automating coding is not the same as automating software engineering. AI can certainly produce known solutions to known problems. However, knowing what to do, why to do it, and how to pursue completely new paradigms matters more than speed of execution. The human role is still vital to the strategic parts of the equation, which is where differentiated value actually gets created. We observe this reality especially in the field of AI itself. The research and strategic breakthroughs we have experienced cannot be attributed to LLMs recursively self-improving, even though this is a common hype narrative. These ideas and their implementation clearly stem from human ingenuity.
AI will change the nature of work, including software engineering, but the dialogue about how this might unfold needs more depth and nuance than it presently has.
The Need for Contemplation
In aggregate across contexts, the financial return on LLM efforts has been lackluster. The pressure to use these models has produced AI-generated workslop that merely masquerades as substance, startups with no clear value proposition, and large numbers of prototypes with no impact. The sprint to integrate LLMs everywhere and the acquiescence to headline-focused AI narratives share a root cause: a lack of contemplation, a topic on which we can look to ancient Christian tradition for direction. AI can be used for positive aims and can achieve impactful outcomes, but only when we are intentional and reflective.
Saint Thomas Aquinas wrote, “The study of truth requires a considerable effort — which is why few are willing to undertake it out of love of knowledge — despite the fact that God has implanted a natural appetite for such knowledge in the minds of men.” Pursuing truth is often challenging, circular, and frustrating. Due to this struggle, we often default to believing common tropes and hype.
Jumping onto the hype bandwagon is easy, popular, and convenient. We don’t have to wrestle with questions about practicalities, tradeoffs, and ethics. In fact, hype pushes us away from the contemplation needed to understand subjects, including technological innovations, at a deeper philosophical level. LLMs are unquestionably profound and should elicit our introspection. Is language the ultimate schema for modeling the world? What is the nature of intelligence, and how do we accurately measure it? When should these models be employed, and where might they cause harm? The “hype mindset” might provide a politically safe, generic response to such queries, but those of us involved with LLM development and application should examine these topics in a more intentional way.
A Call for Action
Contemplation, however, is not sufficient. We must pair our reflections with action. Per ancient theologian and philosopher Augustine of Hippo, “No man has a right to lead such a life of contemplation as to forget in his own ease the service due his neighbor; nor has any man a right to be so immersed in active life as to neglect the contemplation of God.”
We must grapple with the prospect that our contemplation might move us from a place of complacency to one of advocacy. Even under the pressure of hype, we might support scaling a technology back to address unintended consequences, or we might push back on applications with no clear purpose or proper risk management.
Less comfortably, our reflections might also breed a sense of uncertainty, paradox, and dissonance. For instance, if someone’s career is centered on building or promoting technology, displaying caution about new innovations might appear and feel odd. Finding the right scope for leveraging LLMs, setting realistic expectations for these models, and describing the necessary scaffolding for such systems are not straightforward tasks. They involve trial and error, revisions based on evidence, and humility. Translating contemplation into action is often recursive and complex.
As Christians, we are called to contemplation about the topics in which we invest our time, secular or sacred. Contemplating a topic does not always mean arriving at clear answers. In our reflection, we might instead find ourselves in a place of ambiguity, a sign that God is calling us to keep seeking wisdom and to trust the timing of His inspiration. The process could mean that we have employed our God-given intellect to research a topic and considered how our faith might shape the moral questions involved. God bestowed on us the ability to create disciplines such as philosophy, the capability to think rationally, and the ingenuity to practice effective science. Conversely, blindly buying into hype misses the gift of reflection that God has given us. We can certainly choose to believe the hype, but only once we have done the due diligence to turn it into substance.
Theologian Ron Rolheiser once penned, “Have enough of God’s agenda to let you know that this world is not ultimate, but enough of the world’s agenda to let you know that your task here is to help God shape the earth.” For those invested in technologies like AI, this statement captures the balance God invites us to pursue: to reflect on the deeper meanings of our world and to act purposefully in concert with our Creator.
Appendix
1. An autonomous warranty adjudication use case, for instance, could be easily and deterministically solved with standard data validation tools, database queries, basic logic flows, and programmatic filling of approved report templates. Using LLMs in this specific process would inject stochastic behavior that would need to be monitored and corralled, a tradeoff that is likely not worthwhile for a workflow that should result in clear, defined outcomes (a minimal sketch follows this list). As another illustration, the technical diagram for “Yelp Assistant” demonstrates that the vast majority of system components are not what we would deem “AI”.
2. Companies that are successfully leveraging AI agents in large and complex codebases have spent substantial resources constructing their own customized tooling. For instance, Stripe built a bespoke coding agent that relies on considerable company-specific context, sophisticated in-house developer tools (authored before the agent), and many human-driven design choices.
3. We should require far stronger evidence before abandoning the basic risk management of having humans understand the complexities of software engineering and remain actively engaged in the development process. LLM-based systems still have a number of issues that often go unaddressed: reasoning failures, inflated benchmark performance due to data contamination, the current limitations of recent breakthroughs, the failure of AI agents on real-world tasks, and fundamental limitations of the underlying architecture.
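To make the tradeoff in point 1 concrete, here is a minimal, hypothetical Python sketch of what a deterministic adjudication step could look like. The field names, coverage rules, and thresholds are illustrative assumptions, not details from any real system. The point is that the same claim always produces the same decision and the same reasons, which is precisely the property an LLM-based step would put at risk.

```python
# Hypothetical sketch of deterministic warranty adjudication.
# All rules, field names, and thresholds below are illustrative assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class Claim:
    product_id: str
    purchase_date: date
    claim_date: date
    defect_code: str


WARRANTY_DAYS = 365                      # hypothetical coverage window
COVERED_DEFECTS = {"BATTERY", "SCREEN"}  # hypothetical covered defect codes


def adjudicate(claim: Claim) -> dict:
    """Apply fixed business rules; identical input always yields identical output."""
    reasons = []
    if (claim.claim_date - claim.purchase_date).days > WARRANTY_DAYS:
        reasons.append("warranty period expired")
    if claim.defect_code not in COVERED_DEFECTS:
        reasons.append("defect code not covered")
    decision = "approved" if not reasons else "denied"
    # In a fuller workflow, this structured result would be written back to the
    # claims database and merged into an approved report template.
    return {"product_id": claim.product_id, "decision": decision, "reasons": reasons}


print(adjudicate(Claim("X-100", date(2024, 1, 10), date(2024, 6, 1), "BATTERY")))
# -> {'product_id': 'X-100', 'decision': 'approved', 'reasons': []}
```

Nothing in this sketch needs monitoring for drift or hallucination; the rules can be audited, tested, and explained, which is the essence of the tradeoff described above.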
Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


