Imagine the grief of losing a loved one, not to another human being, but to a machine: killed by an autonomous drone, faceless and unaccountable, with no understanding of what it means to take a life or to stand eye to eye with the person it kills. Or imagine yourself in a room of intelligence officers, casually sipping tea while allowing an algorithm to identify and eliminate targets. This article contends that AI models deployed on the battlefield, if not subject to meaningful human scrutiny, risk eroding moral responsibility, bypassing due human judgement, and sliding us into ethically ambiguous war. Where does human intelligence come into play when machines compile kill lists? Can justice survive when decisions of life and death are entrusted to opaque pattern-matching systems?
The Rise of Autonomous Warfare
War is commonly understood as large-scale armed hostility between groups, non-state actors, or nation-states. The objectives of war can shift over time, often beginning with ideological differences and ending in the pursuit of material gain, whether land, power, or resources. Historically, warfare has always intertwined military force with economic strategy, from ancient battles over resources and control of trade routes to modern conflicts waged with cyber-economic tools that manipulate markets and disrupt financial infrastructure.
Anthropologists such as Edward Van Dyke Robinson and Clayton Robarchek have examined arguments that war is fundamentally about material advantage, explanations that leave little room for value systems or spirituality and that explicitly disavow human decision-making in the process.1
While these material motivations persist, warfare has steadily shifted: from the impersonal industrialization first seen in the Crimean War, with its mechanized weaponry and trench strategies, to today's algorithmically driven 'hyper war,' in which AI systems may operate with minimal human oversight. This technological transformation, while not without precedent, marks a critical juncture in the ethical escalation of conflict. It poses urgent legal and moral dilemmas: Can machines reliably distinguish civilians from combatants? And when autonomous systems err, who bears responsibility?
Over the past decade, AI has come to influence nearly every aspect of our lives and has hijacked our critical thinking before most of us became aware of its impact. The determination of techno-optimists is now extending AI into armed conflict and enabling the autonomous killing of humans. At its core, AI's narrow functions, such as classification, prediction, generalization, and reinforcement learning, may appear to be mere mathematical abstractions; but when deployed in high-stakes environments, these systems introduce profound ethical risks. For instance, reinforcement learning algorithms are susceptible to backdoor attacks that can render autonomous systems unpredictable or malicious when triggered.2 Moreover, classifiers trained on biased data are known to produce discriminatory outcomes, especially in surveillance contexts.3
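To make the second risk concrete, the sketch below is a minimal illustration using invented synthetic data and a deliberately tiny model; it is not a depiction of any deployed surveillance or targeting system. It shows the general failure mode: when historical labels over-flag one group, a classifier trained on those labels reproduces the bias.

```python
# Illustrative only: synthetic data, a toy model, and invented "bias" rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, group_flag, overflag_rate):
    """Two generic features, a group-membership flag, and labels where some
    true negatives were historically (and wrongly) flagged as threats."""
    x = rng.normal(size=(n, 2))
    g = np.full((n, 1), group_flag, dtype=float)
    y_true = (x[:, 0] > 0).astype(int)                       # ground truth
    overflagged = (y_true == 0) & (rng.random(n) < overflag_rate)
    y_observed = np.where(overflagged, 1, y_true)            # biased labels
    return np.hstack([x, g]), y_true, y_observed

# Group B's historical labels over-flag innocent people far more often.
Xa, ya_true, ya_obs = make_group(5000, group_flag=0.0, overflag_rate=0.02)
Xb, yb_true, yb_obs = make_group(5000, group_flag=1.0, overflag_rate=0.30)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya_obs, yb_obs]))

def false_positive_rate(X, y_true):
    pred = model.predict(X)
    innocents = y_true == 0
    return (pred[innocents] == 1).mean()

print("False-positive rate, group A:", round(false_positive_rate(Xa, ya_true), 3))
print("False-positive rate, group B:", round(false_positive_rate(Xb, yb_true), 3))
# The model inherits the historical bias: innocent members of group B are
# flagged far more often, even though the underlying behaviour is identical.
```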
These autonomous machines, though not directly trained on human behavior, operate within frameworks deeply influenced by human-derived data and decisions. Designed to interpret human actions and environments, often using pattern recognition, inference, and statistical correlation, they inevitably carry forward biases, gaps, and errors. As a result, these systems treat humans as objects rather than as sentient beings capable of thought, emotion, or moral reflection. This fundamentally alters the landscape of modern warfare, as their functions lack the contextual understanding, ethical reasoning, and empathy required for decisions involving life and death. These machines dehumanize conflict and introduce new forms of trauma and injustice in post-conflict societies.
For instance, the development of autonomous weapons systems (AWS) and their supporting software not only replaces soldiers but also diminishes the moral weight of human risk, embedding life-and-death decisions within code and evading accountability for their consequences.
The U.S. Department of Defense (DoD) defines AWS as “systems capable of understanding higher-level intent and direction, namely achieving the same level of situational understanding as a human and able to take appropriate action to bring about a desired state.” In 2023, the definition was expanded to include systems that, once activated, can select and engage targets without further input from a human operator.4
According to the International Committee of the Red Cross (ICRC), deploying autonomous weapon systems does not eliminate human control, but it significantly reduces the level and clarity of that control. These systems rely on software and sensors to detect and strike targets autonomously, carrying out operations based on predefined target profiles, often without real-time human oversight. While humans may still initiate or deploy such systems, once activated, human judgment and adaptability can be sidelined. The ICRC emphasizes that meaningful human control must be preserved throughout all stages of an AWS lifecycle (design, deployment, and operation) to prevent unanticipated consequences and to uphold compliance with international humanitarian law.5
For instance, several Palantir platforms, such as TITAN, the Maven Smart System, and MetaConstellation, integrate diverse data sources like satellite imagery, drone footage, and signals intelligence into real-time battlefield awareness. Their AI assistance enables rapid analysis of sensor data, reducing the 'sensor to target' time frame from hours to mere minutes and supporting the army's goal of 1,000 battlefield decisions per hour, a pace that increases the risk of unexamined errors unless closely monitored. While these systems emphasize a human-in-the-loop setup for simple commands such as dispatching drones, estimating enemy capabilities, or jamming communications, there is scant public transparency regarding how they are trained or validated. Critics warn that this creates a 'black box battlefield,' where automation may overshadow human judgement because there is little to no public data on error rates, bias, or operational failures, creating an accountability gap on the battlefield.6
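Some back-of-the-envelope arithmetic shows why a throughput target of that scale leaves so little room for scrutiny. The reviewer counts below are hypothetical assumptions; only the 1,000-decisions-per-hour goal comes from the reporting cited above.

```python
# Back-of-the-envelope arithmetic only. The 1,000-decisions-per-hour figure is
# the stated goal cited above; the reviewer counts are hypothetical assumptions.
decisions_per_hour = 1000

for reviewers in (1, 5, 20):
    seconds_per_decision = reviewers * 3600 / decisions_per_hour
    print(f"{reviewers:>2} reviewer(s): {seconds_per_decision:5.1f} seconds of "
          f"human attention available per decision")
# 1 reviewer -> 3.6 s, 5 -> 18.0 s, 20 -> 72.0 s: even generous staffing leaves
# little time for the deliberation that targeting decisions traditionally receive.
```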
The Risk of Autonomous Systems during Conflict
The use of artificial intelligence in warfare, in the form of killer robots and drones, is not merely a technical development intended to enhance situational understanding of intent and direction and to take appropriate action in less time and with more precision. It also marks a profound crisis, one that erodes the legal and spiritual accountability of military action and further dehumanizes conflict.
The late Pope Francis stated that the use of autonomous machines contradicts the principle of human dignity, because a true decision requires human wisdom that understands that dignity. In a similar vein, Christof Heyns, the former UN Special Rapporteur on extrajudicial, summary or arbitrary executions, described AWS as 'death by algorithm' and an 'affront to human dignity,' treating people as pests or objects to be killed merely on the basis of their sensed movements and environment.
At the Hiroshima conference on "AI Ethics for Peace," Pope Francis emphasized that a "machine makes a technical choice among several possibilities based either on well-defined criteria or on statistical inferences. Human beings, however, not only choose but in their heart can decide differently."7
Unlike humans, who can draw upon collective judgment, situational insight, and ethical reflection, autonomous systems act on preconfigured patterns and objectives, often without the capacity to question or reassess their operational context. While humans can revisit mission outcomes and adjust future behavior, AI lacks the ability to reflect in the human sense: it cannot pause to weigh ambiguous scenarios or moral uncertainty, or to reconsider whether a detected object is a military asset or a civilian vehicle. In the context of war, this inability to reflect can produce mistakes so severe that they endanger not just individual lives but the very foundations of civil society.
Leading global powers, including the United States and China, are also engaged in an intense competition over advanced AI-enabled military technologies.8 This competition is not merely theoretical; it is already manifesting in conflicts worldwide, with Russia and Israel deploying automated weapon systems with increasing frequency in Ukraine and Gaza respectively. The testing and operational use of these AI-enabled targeting systems have created unprecedented ethical challenges. For instance, experts warn that Israel's AI-enabled targeting system, Lavender, yielded a roughly 10% error rate9 in identifying targets10, often operating with minimal human oversight and rapid decision-making, raising serious concerns about civilian harm.
Further, as these systems, Lavender,11 Where's Daddy,12 and Gospel,13 accelerated the process of killing, relying on facial recognition, mobile-phone tracking, and behavioral patterns to assign threat levels (methods that are inherently error-prone), the Israeli military reportedly identified more than 37,00014 individuals as Hamas-affiliated targets.
It is important to note that human detachment from these systems is not absolute; their outputs remain subject to human analysis and authorization. While such systems can generate recommendations and flag potential militants based on previously trained data, human oversight plays a critical role. In practice, however, human decision-making may be reduced to the rapid confirmation of AI-generated outputs, a process that critics warn risks devolving into a perfunctory 'rubber stamping' of algorithmic suggestions.15 One outcome is indiscriminate killing, particularly due to the reliance on facial recognition technology in target identification. While these systems are designed to improve targeting accuracy, the multidimensional and dynamic nature of facial features makes misidentification highly likely, especially in uncontrolled conflict environments. Variables such as lighting, distance, movement, and the lack of diverse facial datasets can result in civilians being falsely flagged as combatants, with potentially fatal consequences.16
Scholars have pointed out that even in controlled environments these technologies cannot be fully trusted, and they have produced significant false positives in civilian contexts outside armed conflict. In the United Kingdom, live facial recognition technology (LFRT) used by law enforcement produced an average false recognition rate of 95%17, while in the United States, a 2019 case in New Jersey led to a suspect being wrongly imprisoned for ten days due to misidentification.18
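The scale of such false positives follows from base-rate arithmetic. The sketch below uses purely hypothetical numbers, not drawn from the UK or US cases cited above, to show how even a seemingly accurate matcher produces mostly false alerts when it scans a large crowd for a small watchlist.

```python
# Hypothetical numbers only, chosen to illustrate the base-rate effect; they are
# not taken from any deployment or from the reports cited in this article.
crowd_size = 50_000            # faces scanned at a public event
listed_people_present = 10     # people in the crowd actually on the watchlist
true_positive_rate = 0.90      # chance a listed person is correctly matched
false_positive_rate = 0.005    # chance an unlisted person is wrongly matched

expected_true_matches = listed_people_present * true_positive_rate
expected_false_matches = (crowd_size - listed_people_present) * false_positive_rate
precision = expected_true_matches / (expected_true_matches + expected_false_matches)

print(f"Expected true matches:  {expected_true_matches:.0f}")     # ~9
print(f"Expected false matches: {expected_false_matches:.0f}")    # ~250
print(f"Share of alerts that are wrong: {1 - precision:.0%}")     # ~97%
```

In conflict zones, where image quality, movement, and dataset coverage are far worse than at a policed public event, the share of wrong alerts can only be expected to grow.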
Additionally, human oversight of threat classification is said to be most significant during the pre-deployment phase, though scholars note that it can be calibrated as stringently or as loosely as military actors deem appropriate.19 Reports estimate that since October 7, over 44,000 Palestinians have lost their lives, with a portion of these casualties plausibly linked to AI-assisted targeting mechanisms whose identification processes may not have been sufficiently precise.20 It must also be acknowledged that there is no easily verifiable, objective information on how these weapons are being used in current conflicts or on the number of civilians killed as a result of misidentification. These gaps in accountability and verification deepen the ethical concerns about delegating life-and-death decisions to AI systems, particularly in contexts where civilian harm is so pronounced.
Building on these concerns, investigations by +972 Magazine and Local Call further suggest that human intelligence officers reportedly devote only brief periods, sometimes as little as 10 to 20 seconds, to authorizing a strike in order to accelerate the targeting cycle, effectively serving as a form of procedural endorsement or "rubber stamping" rather than substantive oversight. This raises serious ethical concerns about human dependence on AI. As reliance on AI-based decision-support systems expands, the risk is that critical human functions are increasingly displaced by algorithmic processes. The International Committee of the Red Cross has cautioned that when such systems begin to shape battlefield outcomes, essential human qualities such as moral reflection, accountability, and judgment may be eroded, even if operators remain nominally "in the loop."21
According to Tal Mimran, a former military legal advisor,22 prior to the adoption of AI in 2015, it would take approximately 20 intelligence officers some 200-250 days to compile targeting lists of comparable scale, a process involving deliberation and critical discussion. By contrast, similar outputs can now reportedly be generated within a week by AI systems. While such acceleration may be viewed as an operational advantage, scholars and practitioners have raised concerns that the compression of time frames risks bypassing the ethical, legal, and moral considerations that traditionally informed such decisions.23
Enduring Risks of Autonomous Surveillance in Post-Conflict Societies
The involvement of AI in warfare does not stop at attacking or killing. It also raises concerns about the fragility of human rights, particularly in post-conflict societies. AI-driven information and cyber operations destabilize humanitarian efforts and exacerbate existing vulnerabilities, and overreliance on AI outputs may skew humanitarian aid distribution.24 A study in the Journal of International Humanitarian Action highlights how autonomous decision-making tools can misprioritize aid: in displacement zones, for example, algorithms might direct resources toward less needy groups if data inputs are flawed or unrepresentative, potentially exacerbating conflict.25
Further, AI-driven needs assessments can undermine impartiality and create a digital divide. By processing large amounts of data, ranging from satellite imagery and social media streams to ACLED collections, AI often paints a flawed and incomplete picture of reality.26 It tends to operate as a "black box," meaning humanitarian workers cannot understand or trace its decision-making logic without specialized training in interpreting its explainability metrics. Communities with little or no digital presence after a conflict may be overlooked by autonomous predictions, undermining the impartiality and fairness that only humanitarian workers on the ground can ensure.
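As a purely illustrative sketch built on invented figures, the toy allocation below shows how an assessment that relies on digital signals alone can underserve a community whose connectivity was destroyed, even though its actual need is the greatest. The districts, signal counts, and aid units are all hypothetical.

```python
# Invented figures for three hypothetical districts; no real dataset is used.
# "digital_signals" stands in for what an automated assessment can observe
# (social media reports, phone activity, satellite-detected damage alerts).
districts = {
    #             actual people in need    need signals visible online
    "District A": {"true_need": 10_000, "digital_signals": 9_500},
    "District B": {"true_need":  6_000, "digital_signals": 5_800},
    "District C": {"true_need": 15_000, "digital_signals":   400},  # networks down
}

total_aid_units = 31_000
total_signals = sum(d["digital_signals"] for d in districts.values())

print(f"{'District':<11} {'true need':>10} {'aid allocated':>14} {'shortfall':>10}")
for name, d in districts.items():
    share = d["digital_signals"] / total_signals        # signal-proportional split
    allocated = round(total_aid_units * share)
    shortfall = max(d["true_need"] - allocated, 0)
    print(f"{name:<11} {d['true_need']:>10} {allocated:>14} {shortfall:>10}")
# District C, the hardest hit, is nearly invisible to the algorithm and receives
# almost nothing, while better-connected districts are over-served.
```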
Delegating life-and-death decisions to machine inference blurs the lines of accountability, fairness, and responsibility. It raises profound ethical, moral, and political dilemmas that mirror the existential concerns of the nuclear arms race. AI systems leveraging facial recognition, biometrics, and geographical surveillance create a kind of 'algorithmic memory': once personal information is observed during model training, it becomes deeply integrated into the model's internal parameters and cannot be removed as it could be from a traditional database.27
Unlike human memory, this digital memory does not fade; once captured, the data may persist indefinitely. Even with emerging techniques such as machine unlearning, which aim to selectively remove specific data points, erasing their traces from trained models remains technically challenging and often ineffective. Because of the entangled nature of deep learning models, removing one data point can degrade performance or fail to erase its influence completely. Such indelible digital records risk perpetuating conflict by sustaining surveillance, identity profiling, and informational manipulation long after active hostilities have ceased.28
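A minimal sketch makes the contrast concrete. It uses a tiny synthetic dataset and a simple linear model, which stand in for the far more entangled deep networks discussed above, to show why deleting a record from a model is not like deleting a row from a database: the record's influence is spread across every learned parameter, so exact removal requires retraining.

```python
# Toy illustration only: synthetic data and a small linear model stand in for
# the far more entangled deep networks discussed in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(int)

# Database-style deletion: the record is simply gone.
database = {i: X[i] for i in range(len(X))}
del database[17]

# Model-style "deletion": example 17's influence is smeared across every weight,
# so the only exact way to remove it is to retrain without it.
model_with_17    = LogisticRegression(max_iter=1000).fit(X, y)
model_without_17 = LogisticRegression(max_iter=1000).fit(np.delete(X, 17, axis=0),
                                                         np.delete(y, 17))

weight_shift = np.abs(model_with_17.coef_ - model_without_17.coef_)
print("Shift in each learned weight caused by one example:", weight_shift.round(4))
# Every coefficient moves a little; no single parameter "contains" example 17,
# which is why post-hoc unlearning is approximate, costly, and hard to verify.
```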
The use of AI for humanitarian efforts in post-conflict societies raises critical ethical concerns related to surveillance and data privacy. Advanced facial recognition systems and biometrics can turn aid infrastructure into a vector for data breaches, undermining human dignity. They put already vulnerable communities at risk of coercion and data misuse, potentially fueling conflict escalation and political volatility.29
Way Forward
As AI continues to reshape the dynamics of warfare, and civil society groups race to make humanitarian aid more accessible through technological tools, it is imperative that world leaders introduce an international framework for the use of AI in conflict and post-conflict societies.
AI reflects the authoritarian imperatives of current global power structures, further widening the gap between the “haves” and “have-nots” within international governance systems. Civil society actors, peacebuilders, and journalists must understand the profound and lasting consequences of AI-driven warfare, which could prolong or intensify conflicts long after traditional combat has ended.
Many civil society organizations are working to promote the responsible development and ethical use of AI in warfare and broader society. They are also advocating for increased awareness and transparency around how these systems function. However, addressing the pace and scale of technological advancement requires more than ethical advocacy. It demands global cooperation to ensure AI is developed and deployed as a tool for peace, with clear international consensus and regulation.
The future of AI in warfare calls for careful deliberation on its long-term risks. As states increasingly rely on autonomous systems, the likelihood of unintended escalation rises. AI in warfare accelerates decision-making to machine speed, compressing critical timelines and reducing opportunities for de-escalation; a system optimized for speed rather than accuracy may recommend aggressive action, worsening tensions even where humans might seek restraint. It is therefore crucial to reshape the global narrative around AI in armed conflict, prioritizing human oversight, robust safety mechanisms, and international legal standards that place human dignity and accountability at the center of technological innovation.
References
1. Clayton Robarchek, 'Motivations and Material Causes: On the Explanation of Conflict and War,' in The Anthropology of War, Chapter 3, p. 56.
2. https://arxiv.org/abs/1701.04143/ ; https://www.wired.com/story/tainted-data-teach-algorithms-wrong-lessons/
3. https://www.researchgate.net/publication/386099079_AI_and_Ethics_in_Surveillance_Balancing_Security_and_Privacy_in_a_Digital_World/
4. https://gjia.georgetown.edu/2024/07/12/war-artificial-intelligence-and-the-future-of-conflict/
5. https://blogs.icrc.org/law-and-policy/2025/02/20/ai-war-and-in-humanity-the-role-of-human-emotions-in-military-decision-making/
6. https://aiweapons.tech/the-rise-of-palantir-military-ai-from-counterinsurgency-to-kill-chains/
7. https://www.vaticannews.va/en/pope/news/2024-07/pope-reconsider-the-development-of-lethal-autonomous-weapons.html
8. https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield
9. https://nypost.com/2024/04/05/us-news/israel-used-secretive-ai-program-to-identify-thousands-of-bombing-targets-report/
10. https://www.972mag.com/lavender-ai-israeli-army-gaza/
11. https://www.nytimes.com/2024/04/10/opinion/war-ai-israel-gaza-ukraine.html
12. https://www.businessinsider.com/israel-ai-system-wheres-daddy-strikes-hamas-family-homes-2024
13. https://www.972mag.com/lavender-ai-israeli-army-gaza/
14. https://www.972mag.com/lavender-ai-israeli-army-gaza/
15. https://www.972mag.com/lavender-ai-israeli-army-gaza/
16. https://brill.com/view/journals/ihls/aop/article-10.1163-18781527-bja10119
17. Big Brother Watch, 'Face Off: The Lawless Growth of Facial Recognition in UK Policing' (May 2018).
18. Kashmir Hill, 'Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match,' The New York Times, 29 December 2020, https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html
19. https://www.972mag.com/lavender-ai-israeli-army-gaza/
20. https://www.qmul.ac.uk/media/news/2024/hss/gaza-war-israel-using-ai-to-identify-human-targets-raising-fears-that-innocents-are-being-caught-in-the-net.html
21. https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
22. https://time.com/7202584/gaza-ukraine-ai-warfare/
23. https://www.ap.org/news-highlights/best-of-the-week/first-winner/2025/as-israel-uses-u-s-made-ai-models-in-war-concerns-arise-about-techs-role-in-who-lives-and-who-dies/
24. https://international-review.icrc.org/articles/ai-humanitarian-action-human-rights-ethics-913
25. https://jhumanitarianaction.springeropen.com/articles/10.1186/s41018-021-00096-6
26. https://prism.sustainability-directory.com/scenario/ai-driven-needs-assessment-in-conflict-zones
27. https://www.techpolicy.press/the-right-to-be-forgotten-is-dead-data-lives-forever-in-ai/
28. https://cloudsecurityalliance.org/blog/2025/04/11/the-right-to-be-forgotten-but-can-ai-forget?
29. https://www.wiltonpark.org.uk/reports/the-risks-and-opportunities-of-ai-on-humanitarian-action-report
Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.