War and Technology: Should Data Consider Who Lives? Who Dies?

When the ethical implications of the use of artificial intelligence (AI) or machine learning (ML) are discussed in the military context, the conversation more often than not centers on lethal autonomous weapons systems, or LAWS. While this is, of course, an interesting area, LAWS should not be the only focus of ethical analysis when it comes to the deployment of emerging military technology. I will examine the effects of AI/ML on the movement of troops and equipment, including routing and navigation; on military medical triage; and on distinction (e.g., identifying possible threats). Each of these areas presents positive opportunities that technology may provide, along with its own associated ethical risks.

First, I need to make a broad point about artificial intelligence. The term “artificial intelligence” is inherently misleading, as it suggests that what AI systems do is closely akin to the workings of human intelligence. This is not the case. The state of the art is nowhere near producing artificial general intelligence (AGI) that would resemble how humans think. What actually happens in AI systems now is rapid and complex data processing, usually to seek out and identify patterns. These patterns, however, may not be what we expect. Berenice Boutin of the Asser Institute explains this point well with a simple example comparing human and machine recognition of turtles. A young child can be shown cartoon images of turtles in picture books and still go on to recognize a real turtle in a zoo. Current data-driven AI systems cannot make the same leap. It takes very careful coding and training of AI-based classification algorithms to ensure that they are reasonably reliable, and currently the most effective systems are those that analyze fairly static, consistent images, with as few variables as possible to interfere with correct classification.
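To make the turtle point concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration: the data is synthetic, and the simple nearest-centroid classifier stands in for far more complex real systems. It shows how a pattern-matcher that performs almost perfectly on the “style” of data it was trained on can drop to coin-flip accuracy when the same categories appear in a shifted form, which is the gap a child crosses effortlessly.

```python
# A toy illustration of why a pattern-matcher trained on one "style" of
# input (cartoon turtles) can fail on another (real turtles). All data
# here is synthetic and invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift):
    # Two classes ("turtle" = 1, "not turtle" = 0) as 2-D Gaussian clusters
    # of made-up image features; `shift` moves both clusters to mimic a
    # change in visual style (picture-book drawings vs. zoo photographs).
    turtles = rng.normal(loc=[1.0 + shift, 1.0 + shift], scale=0.5, size=(n, 2))
    others = rng.normal(loc=[-1.0 + shift, -1.0 + shift], scale=0.5, size=(n, 2))
    X = np.vstack([turtles, others])
    y = np.array([1] * n + [0] * n)
    return X, y

# "Train" a nearest-centroid classifier on cartoon-style data (shift = 0).
X_train, y_train = make_data(500, shift=0.0)
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    # Assign each point to whichever class centroid it sits closest to.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

for style, shift in [("same style as training", 0.0), ("new style", 2.5)]:
    X_test, y_test = make_data(500, shift=shift)
    accuracy = (predict(X_test) == y_test).mean()
    print(f"{style}: accuracy = {accuracy:.2f}")

# Prints roughly 1.00 for the familiar style and about 0.50 (coin-flip
# level) once the inputs shift, even though the underlying categories
# have not changed at all.
```

The child generalizes across that gap without effort; the pattern-matcher does not, which is why narrow operating conditions and carefully curated training data matter so much for current systems.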

Why does this matter? It matters primarily because people have been exposed to quite a lot of “sales pitches” suggesting that AI can do more than it can (or will be able to do for decades, if ever). It is vital not to fall for hype about the capabilities of AI/ML systems, for a number of reasons. In the context of the military’s use of automated systems, there is a strong danger of automation bias: the tendency to treat the output of an automated or computerized system as more authoritative than one’s own judgment or that of another human. Elke Schwarz warns that “Set against a background where the instrument is characterized as inherently wise, the technology gives an air of dispassionate professionalism and a sense of moral certainty to the messy business of war.” For human-in-the-loop military decision-making, the risk of individuals allowing automation bias to cloud their judgment must be given proper weight. Automated systems are by no means morally neutral, nor are they divorced from the ethical and character flaws of the humans who build them. In Weapons of Math Destruction, Cathy O’Neil points out that “these models are constructed not just from data, but from the choices we make about which data to pay attention to—and what to leave out. Those choices are not just about logistics, profits, and efficiency. They are fundamentally moral.” Even as such systems improve, they need to be seen as tools that process information only in very specific ways, not as superhuman or godlike intelligences that are necessarily more accurate or objective than human agents.

In any discussion about the ethical use of AI/ML systems, we need to understand that these systems do not think like us and that they can only work with the information (data) that we give them. With this perspective in mind, let us review some ethical issues that arise when deploying data-driven systems for military use.

First, consider the case of military medical triage. The idea behind AI-augmented military medical triage would be to have an algorithmic, data-based system help medics on the ground make time-sensitive triage decisions. In theory, adding algorithmic support should take the most gut-wrenching decisions out of the medics’ hands. Would that really happen, though, or would medics still second-guess whether they should have, for example, overridden the system’s suggestions? I, like other military ethicists, am skeptical both that we are anywhere close to having the technological capability to field an AI triage system and that such a system would reduce the responsibilities of medics.

This raises an ethical concern: what does healthy dissent look like when AI systems are given a role in decision-making, either as an advisor or an authority? One way to frame the issue is to find a threshold at which it is better to let the data decide who lives and who dies. Is there a point where we should feel morally comfortable “going with the machines,” because, for instance, the machines make X percent fewer errors than humans in similar circumstances? This way of approaching the problem is tempting, yet it may be fundamentally misguided. It matters ethically what kinds of errors are made, not just how many. For example, an AI triage system that makes fewer overall errors than human medics do but that consistently underestimates the survival potential of a particular gender or race would not be ethical to deploy.
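To see why counting errors is not enough, consider a small, entirely hypothetical calculation, sketched in Python below. The patient counts and error numbers are invented solely to illustrate the arithmetic and do not come from any real triage data: a system can beat human medics on overall error rate while performing far worse for one subgroup of patients, and the aggregate figure conceals exactly the disparity that matters ethically.

```python
# Hypothetical, invented numbers illustrating how an aggregate error rate
# can hide a serious disparity between patient subgroups.

# (patients assessed, triage errors) for two subgroups, A and B.
human_medics = {"group_A": (800, 64), "group_B": (200, 16)}  # 8% errors in both groups
ai_triage = {"group_A": (800, 24), "group_B": (200, 26)}     # 3% vs. 13% errors

def error_rates(results):
    # Overall error rate across all patients, plus the rate for each subgroup.
    total_n = sum(n for n, _ in results.values())
    total_err = sum(e for _, e in results.values())
    by_group = {g: e / n for g, (n, e) in results.items()}
    return total_err / total_n, by_group

for name, results in [("human medics", human_medics), ("AI triage", ai_triage)]:
    overall, by_group = error_rates(results)
    breakdown = ", ".join(f"{g} {rate:.1%}" for g, rate in by_group.items())
    print(f"{name}: overall {overall:.1%} ({breakdown})")

# Prints roughly:
#   human medics: overall 8.0% (group_A 8.0%, group_B 8.0%)
#   AI triage: overall 5.0% (group_A 3.0%, group_B 13.0%)
# The AI system "wins" on the aggregate number while doing far worse
# than the medics for group_B.
```

Any threshold framed purely in terms of overall error rates would miss this kind of failure entirely.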

Let us turn now to another possible military application for automated systems: the movement of troops and equipment. At first glance, logistics tasks like routing and navigation seem less fraught with ethical peril than military medical decision-making. The benefits also seem fairly clear if we consider, for instance, the appeal of just-in-time deliveries of needed parts to units, made possible by an advanced automated system that tracks and learns where and when supplies will run low. However, as any cybersecurity expert will tell you, even the best autonomous system remains susceptible to human error, as well as to attack. A just-in-time supply chain depends on correct, up-to-date information: garbage in, garbage out. Any automated system can be hacked and undermined, or it can simply fall prey to mistakes or to unanticipated chaos introduced by human factors or by nonhuman ones like the environment. This is especially true of systems that depend on complex digital platforms to collect and communicate information. Every new piece of technology introduces new potential points of failure.

This highlights another concern. Great care must be taken before militaries buy into systems that may not merely leave humans out of the loop but may actively lock humans out of the loop. That is unacceptable. There is a reason why the NASA astronauts of the Mercury program strongly objected to completely automated control of windowless capsules, which famous test pilot Chuck Yeager said reduced them to mere “spam in a can.” Progress has been made in this area, and there are hopeful signs, such as military research funding organizations like DARPA increasing their requirements for ongoing ELSI/LME (ethical, legal, and social issues/legal, moral, and ethical) reviews of developing projects. Nevertheless, legitimate concerns persist in light of the US military’s tendency to characterize every potential technological advancement as an urgent upgrade that must be deployed as quickly as possible to gain an advantage. This “arms race” attitude is not only reckless; it also fails to account for possible harms from improperly vetted systems and for the uncomfortable truth of asymmetric conflicts. It is simply not the case that the more technologically advanced side in an armed conflict always (or even usually) prevails, nor is it consistently true that the first side to deploy a particular technological advancement gains the advantage. Sometimes, the second mouse gets the cheese.

The just war tradition (JWT) immediately becomes relevant when we turn to the use of AI/ML systems to attempt to distinguish between combatants and noncombatants. Here it is especially important to remember the point I made at the start of this essay about how these systems “recognize” things and spot patterns. If you want to train a computerized system to determine whether or not a person is a legitimate target in war, what exactly should you tell it to look for? The question is value-laden from the start. Should you try to train it to look for a weapon? What do weapons look like, to a pattern-analyzing machine? What would reliably distinguish a rifle from other objects? What about cruder weapons? Humans find these kinds of identifications challenging, too, especially under extreme stress. As before, we also have to ask what error rates we can accept and whether certain types of errors are more or less ethically tolerable than others. For example, is mistaking a child’s toy for a gun worse than mistaking a carton of cigarettes for an IED?

Suppose we decide that looking for weapons seems too problematic. The alternatives might be even worse, since they would most likely involve trying to determine combatant or noncombatant status from an individual’s hostile intent (or lack thereof). Exactly how would you train a system to pick out hostile intent? Human behavior and responses are notoriously difficult to predict and analyze, and as Ruha Benjamin points out in Race After Technology, these problems become even harder when working cross-culturally, across diverse races, genders, ages, and communities, all in high-stress circumstances that are themselves likely to skew “normal” behavior.

What we would want, from an ethics perspective, is a system that could assist human troops with discrimination while erring on the side of assuming someone is a noncombatant: a system focused on helping to prevent wrongful targeting and deaths. There are many reasons to hope for the development of such a system, not the least of which is concern for the well-being of the troops themselves, who, as I have argued extensively elsewhere, can suffer moral or emotional harm when targeting mistakes are made or collateral damage assessments are incorrect. Cases of troops intentionally committing war crimes against civilians are thankfully rare, but tragic mistakes are more common. Helping troops avoid killing those who do not need to die would be a goal worth achieving.

Warfare always drives innovation, and it is only to be expected that people will look for ways to use technology to try to better survive future conflicts. From an ethical perspective, however, the type of survival that matters is more than physical. However well-intended, the wrong applications of emerging tools would be devastating. The incorporation of new technology into military operations must therefore be handled with great care and deliberation, not in a mad rush to be the first out of the gate. War is not a game of chess or Go, nor is it readily reducible to zeros and ones. As World War II combat veteran J. Glenn Gray poignantly reminds us in The Warriors: Reflections on Men in Battle, “For all its inhumanity, war is a profoundly human institution.” There may be ways to innovate intelligent systems that truly augment troops, but when it comes to deciding who lives and who dies, we have to keep the human in the loop.

Editor’s note: This feature is excerpted from Dr. French’s chapter in a forthcoming collection of essays entitled Ethics in the AI, Technology, and Information Age, edited by Michael Boylan and Wanda Teays.

Shannon E. French and Lisa N. Lindsay, “Artificial Intelligence in Military Decision-Making: Avoiding Ethical and Strategic Perils with an Option-Generator Model,” in Emerging Military Technologies: Ethical and Legal Perspectives, eds. Bernard Koch and Richard Schoonhoven (The Netherlands and Boston: Brill/Martinus Nijhoff Publishers, forthcoming).

Elke Schwarz, “Technology and Moral Vacuums in Just War Theorising,” Journal of International Political Theory (2018): 1.

Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown Publishing Group, 2017), p. 218.

Stephen D. Giebner, “The Transition to the Committee on Tactical Combat Casualty Care,” Wilderness & Environmental Medicine 28, no. 2 (2017).

Jannelle Warren-Findley, “The Collier as Commemoration: The Project Mercury Astronauts and the Collier Trophy,” https://history.nasa.gov/SP-4219/Chapter7.html.

Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge: Polity Press, 2019).

See Shannon E. French, The Code of the Warrior: Exploring Warrior Values, Past and Present, 2nd ed. (Lanham, MD: Rowman & Littlefield Publishers, 2017).

J. Glenn Gray, The Warriors: Reflections on Men in Battle (New York: Harper and Row, 1970), pp. 152–153.

 


Dr. Shannon French

Shannon French is an AI and Faith Advisor, the Inamori Professor in Ethics and Director of the Inamori International Center for Ethics and Excellence at Case Western Reserve University (CWRU), and a continuing advisor to various US defense and intelligence agencies. Her research focuses on conduct-of-war issues, ethical leadership, command climate, warrior transitions, moral injury, and the future of warfare. Previously, Professor French was a tenured faculty member teaching ethics at the US Naval Academy. She holds a Ph.D. in Philosophy from Brown University.
