
Book Review by Prof. Shannon E. French of Four Battlegrounds: Power in the Age of Artificial Intelligence by Paul Scharre

Paul Scharre’s Army of None (Norton, 2019) deservedly had a significant impact on domestic and international conversations about the practical potential of existing and emerging autonomous weapons systems, as well as the strategic and ethical, legal, and societal implications (ELSI) of their military applications. Scharre’s latest book, Four Battlegrounds (Norton, 2023), is an extended argument that artificial intelligence (AI) will play an outsized role in shaping global power dynamics in the not-so-distant future. Scharre compares the ways in which AI is being integrated into military and non-military systems within the major world powers – especially the differences between what is happening in the United States and China – and asserts that key distinctions concerning (1) the methods of collecting, using, and securing data, (2) the development of advanced computational capacity (which relies both on innovation and on securing scarce resources), (3) the appreciation and cultivation of talent, and (4) the organization and cooperation of core institutions will determine whether authoritarian or democratic values end up supreme.

Like Army of None, Four Battlegrounds is well written, accessible, and extensively researched. Scharre leverages his access to military and high-tech circles to good effect, bringing the reader as up to date as possible on the state of the art for AI-related systems, given the startling rate of change at the cutting edge of technology. The book is thought-provoking and engaging, and certainly worth reading, especially for those wanting a good overview of AI’s potential strategic impacts. However, there are some aspects of Scharre’s approach with which I take issue. First and foremost, there are elements of AI sensationalism that feed into “existential threat” narratives. This distracts from the restrained, well-considered, and human-centric deployment of new technologies that Scharre actually means to promote. Generally, he relies far too heavily on scare tactics, which undermines confidence in his insights and the persuasiveness of his central thesis.

Early in Four Battlegrounds, Scharre displays less healthy skepticism than one might expect. He seems quick to accept a large degree of AI hype, without adequately acknowledging (until near the end of the book) the frequent failure of the people promoting these systems to report honestly on their weaknesses and vulnerabilities. For example, Scharre tries to stir his readers’ concerns with the detailed example of an AI piloting system repeatedly beating a human pilot in simulated dogfights. He builds up the drama of the case, describing the performance of the AI system as “superhuman” – a word he uses frequently throughout the text. Yet he initially mentions almost in passing the highly relevant facts that the simulation was extremely limited in scope, that the AI could make moves not available to the human pilot, and that the AI had complete knowledge of the battle domain: “In the AlphaDogfight Trials, the AI agent was given perfect situational awareness of the simulated environment, including the location of the opposing fighter.” These points make the simulation tests almost meaningless, since they cannot be mapped onto reality in any way that could predict an AI pilot system’s potential success in an actual dogfight – let alone against pilots who know they are engaged with an AI system and can take steps to confound it.

Scharre says blithely, “In the real world, an AI system would need to rely on sensors and data processing algorithms to find enemy aircraft, identify them, and correctly distinguish them from friendly or civilian aircraft. But classification algorithms that can identify objects are improving as well.” While classification algorithms are improving, the truth is that designers are also hitting walls with these systems, discovering that no matter how well trained, they remain susceptible to malicious attacks and deceptive techniques that make them mistake one object for another. Scharre admits this in a mid-text photo caption – “AI systems are vulnerable to a range of attacks, from adversarial images to data poisoning” – and later gives clear examples of the extent of such weaknesses and failures (including the now well-known case in which eight Marines all evaded a sophisticated AI detection system using tactics as low-tech as putting cardboard boxes on their heads). I applaud Scharre for exposing cases of AI snake oil, but he persists in implying that it is only a matter of time until these AI challenges are solved. In fact, it is by no means clear that these problems can be overcome, as human ingenuity continues to keep pace with, and defeat, the algorithms. I prefer Scharre’s descriptions of human-machine partnerships, which are less gripping but far more plausible.

Due to the inherent complexities of modern combat and the so-called “fog of war,” it is blind technological optimism, with no basis in current or emerging capabilities, to assert that AI systems will ever be able to reliably discern friend from foe or combatant from civilian. At their core, as Scharre ultimately acknowledges, all AI systems are brittle, in the sense that they tend not to survive contact with real-world conditions. Unfortunately, he chooses to use even this clear-eyed point to spin up more drama: “We are careening toward a world of AI systems that are powerful but insecure, unreliable, and dangerous.”

The core of the book is divided into eight parts, titled Power, Competition, Repression, Truth, Rift, Revolution, Alchemy, and Fire. Parts I and II will be familiar to most readers, as a considerable amount has already been written on the topics he addresses there, such as the opportunities and obstacles for militaries working with civilian developers to improve AI outcomes. The discussion of AI’s seeming supremacy at Go feels a little outdated (which is nearly impossible to avoid, given the publication timeline for books), as human players have recently outwitted champion Go-playing AI systems again. However, the case study of Strategy Robot’s poker-playing AI system is timely, intriguing, and well detailed. Ideally, Scharre would also have spoken with critics of Strategy Robot’s work, rather than giving Tuomas Sandholm free rein to present his firm’s work in the best possible light. Generally speaking, the book is a little light on critical perspectives, including some of the most persuasive voices warning against AI hype, bias, and brittleness. I would have liked to see more engagement with those viewpoints.

Part III, on Repression, does a good job of examining some of the more dystopian threats that AI systems present, especially when employed by authoritarian regimes. Similarly, Part IV contains compelling warnings about how AI can contribute to a toxic mix of disinformation and mistrust that presents a clear and present danger to democracies. In Part V, Scharre really digs into the guts of the current entanglement between US and Chinese AI programs, and reflects on what it may mean for future geopolitics. He is at his strongest when analyzing and campaigning for robust bilateral cooperation between the US and China on developing international norms and rules governing AI, as well as for bringing in a much wider swath of nations with powerful high-tech interests.

Parts VI, VII, and VIII return to some of the territory Scharre covered in Army of None, but with updates, a greater focus on AI, and fresh connections to the theme of competition between authoritarian and democratic states. He opens the part on Revolution with a statement with which I strongly agree: “History shows that what matters most in periods of technological disruption is not getting a new technology first or even having the best technology but finding the best ways of using it.” To put it another way, it is the second mouse who gets the cheese, and studying the failures of others is a wise approach. History has proven repeatedly that the higher-tech side in an asymmetric conflict is by no means guaranteed to prevail. Part VI offers an “inside baseball” examination of the ways government red tape interferes with AI advancement. Scharre seems to blame the many failures of AI he catalogs here and in Part VII on bureaucratic hurdles and other external obstacles, again stopping short of concluding that AI may never fulfill its promise.

In the final part of the book, Scharre speculates further about possible military AI futures, including wars fought by or between AI systems with no political motivations. Such concerns depend too much on AI advancing beyond what most experts expect. I thus find these fears less grounded than the legitimate risks Scharre flags of AI systems being deployed before they are ready and entrusted with enough autonomy to make serious mistakes. There is already ample evidence of this occurring, which is why any degree of AI hype, even to grab readers’ attention, strikes me as counterproductive. In the end, Scharre wants what we all should want – the safe, sane, and restrained use of AI as a tool, not as a replacement for human decision making or accountability.

Prof. Shannon E. French

Shannon French is the Inamori Professor in Ethics and the Director of the Inamori International Center for Ethics and Excellence at Case Western Reserve University and an advisor to various US defense and intelligence agencies. Her primary research focuses on conduct of war issues, ethical leadership, command climate, warrior transitions, moral injury, and the future of warfare. Previously, Shannon was a tenured faculty member teaching ethics at the US Naval Academy. She holds a PhD in Philosophy from Brown University.
