Book Review

Army of None: Autonomous Weapons and the Future of War

Tom Clancy, James Patterson, or Dan Brown probably wish they’d written the opening sentence of Paul Scharre’s book, Army of None: “On the night of September 26, 1983, the world almost ended.” From there, Scharre tells a truly terrifying tale.

It was the height of the Cold War. Actually, not that cold. Just a few weeks earlier, on September 1, the Soviet Union had shot down a commercial airliner flying from Alaska to Seoul that had strayed into Soviet airspace. All 269 people aboard, including an American congressman, were killed.

Fearing retaliation, the Soviet Union was on high alert — including its Oko satellite early warning system designed to detect U.S. missile launches. Just after midnight on September 26, Oko issued a grave report: the United States had launched a nuclear missile at the Soviet Union. Lieutenant Colonel Stanislav Petrov was on duty that night in a bunker outside Moscow. It was his responsibility to report the missile launch up the chain of command to his superiors.

“In the bunker, sirens blared and a giant red backlit screen flashed ‘launch,’ warning him of the detected missile, but still Petrov was uncertain. Oko was new, and he worried that the launch might be an error, a bug in the system. He waited. Another launch. Two missiles were inbound. Then another. And another. And another—five altogether. The screen flashing ‘launch’ switched to ‘missile strike.’ The system reported the highest confidence level. There was no ambiguity: a nuclear strike was on its way.”

Soviet military command would have only minutes to decide what to do before the missiles would explode over Moscow. Still, Petrov had a funny feeling. Why only five missiles? That didn’t make sense. A real surprise attack would be massive, an overwhelming strike to wipe out Soviet missiles on the ground. Petrov estimated the odds of the strike being real at 50/50, no easier to predict than a coin flip.

“If he told Soviet command to fire nuclear missiles, millions would die. It could be the start of World War III. Petrov went with his gut and called his superiors to inform them the system was malfunctioning. He was right: there was no attack. Sunlight reflecting off cloud tops had triggered a false alarm in Soviet satellites. The system was wrong. Humanity was saved from potential Armageddon by a human ‘in the loop.’”

Then Scharre asks and answers the question that haunts the remainder of the book: What would a machine have done in Petrov’s place? “The answer is clear,” Scharre says, “the machine would have done whatever it was programmed to do, without ever understanding the consequences of its actions.” And so we come face to face with the potential dangers of autonomous weapons.

Surprisingly, though, the book gets a lot less interesting from there on out — partly because, despite the gripping opening, Scharre mostly writes like what he is: a policy analyst. Which means, for example, that he gives us a great deal of detail on things like the degree of autonomous capability embodied in various weapons systems (going back even as far as the Gatling gun, a Civil War-era precursor to the machine gun).

In fact, classifying autonomy turns out to be surprisingly complex, because there are three distinct dimensions of autonomy for weapons: the type of task the machine is performing; the relationship of the human to the machine when performing that task; and the sophistication of the machine’s decision-making when performing the task. In practice, though, the meaningful distinctions (illustrated in the sketch after this list) are between:

  • Semiautonomous systems, where the machine performs a task and then waits for a human user to take an action before continuing. A human is “in the loop.”
  • Supervised autonomous systems, where the human sits “on” the loop. Once put into operation, the machine can sense, decide, and act autonomously, but a human user can observe the machine’s behavior and intervene to stop it, if desired.
  • Fully autonomous systems, where the machine can sense, decide, and act entirely without human intervention. Once activated, the machine conducts the task without communication back to the human user. The human is “out of the loop.”
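
To make those distinctions concrete, here is a minimal sketch, in Python, of how the three modes differ within a single sense-decide-act cycle. The mode names and function arguments are illustrative — mine, not Scharre’s:

```python
from enum import Enum, auto

class ControlMode(Enum):
    IN_THE_LOOP = auto()   # semiautonomous: human must approve each action
    ON_THE_LOOP = auto()   # supervised: human may veto while the loop runs
    OUT_OF_LOOP = auto()   # fully autonomous: no human input once activated

def engagement_cycle(mode, sense, decide, act, human_approves, human_vetoes):
    """One pass of a sense-decide-act cycle under each supervision mode.

    `sense`, `decide`, and `act` are callables standing in for the
    machine's own functions; `human_approves` and `human_vetoes` stand
    in for the operator. All names here are illustrative.
    """
    observation = sense()
    action = decide(observation)

    if mode is ControlMode.IN_THE_LOOP:
        # The machine pauses and waits for a positive human decision.
        if human_approves(action):
            act(action)
    elif mode is ControlMode.ON_THE_LOOP:
        # The machine proceeds on its own, but a human can intervene.
        if not human_vetoes(action):
            act(action)
    else:
        # OUT_OF_LOOP: once activated, the machine acts with no
        # further communication back to the human user.
        act(action)
```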

Scharre makes clear that most weapon systems in use today are semiautonomous, but “a few cross the line to autonomous weapons.”

Interestingly, Scharre reveals that the U.S. military seems to be of two minds regarding autonomous weapons. On the one hand, the military sees their potential benefits, including lower operating costs, the safety of human operators who can be absent from the scene of conflict, and faster-than-human decision-making. On the other hand, it also seems to sense that the decision to kill a human being, even in war, should be made by a human, not a machine, and that the downside consequences of a “glitch” in an autonomous weapons system might be catastrophic.

In that regard, one of Scharre’s more interesting chapters looks at the arena of high-frequency trading in the financial markets as a potential analog for autonomous weapons — since both involve high-speed adversarial interactions in complex, uncontrolled environments. Given that, his retelling of the Knight Capital Group story is especially harrowing.

In 2012, Knight Capital Group was a high-frequency-trading titan, trading more than 3 billion shares a day. Knight typically bought and sold stocks the same day, sometimes within fractions of a second. That made Knight a key player in the U.S. stock market, executing 17 percent of all trades on the New York Stock Exchange and NASDAQ.

At the opening bell on August 1, 2012, Knight deployed a new automated trading system. One of its functions was to break up large orders into smaller ones, which could be executed with less risk of market turmoil. However, Knight’s trading system wasn’t registering that these smaller trades had actually been completed, so it kept reissuing them, creating an endless loop of trades.

“Knight’s trading system began flooding the market with orders, executing over a thousand trades a second. Even worse, Knight’s algorithm was buying high and selling low, losing money on every trade. There was no way to stop it. The developers had neglected to install a ‘kill switch’ to turn their algorithm off.”
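
The failure mode is easy to reproduce in miniature. Below is a hypothetical Python sketch of the bug as Scharre describes it — the names are mine, not Knight’s code: an order-slicing loop that never registers fills, so it resubmits child orders indefinitely, with a demo-only iteration cap standing in for the kill switch the real system lacked.

```python
def run_parent_order(total_shares, child_size, send_child_order, max_ticks=10):
    """Slice a large parent order into child orders until it is filled.

    A hypothetical reconstruction of the failure mode, not Knight's
    actual code. `send_child_order` submits one slice to the market and
    returns the number of shares filled; `max_ticks` exists only so this
    demo terminates -- the real system had no such kill switch.
    """
    remaining = total_shares
    ticks = 0
    while remaining > 0 and ticks < max_ticks:
        filled = send_child_order(child_size)  # slice executes in the market
        # BUG: the fill is deliberately ignored here, so `remaining`
        # never shrinks and the same slice is resubmitted forever.
        # The one-line fix would be:  remaining -= filled
        ticks += 1
    return ticks

# Demo: every slice fills completely, yet the loop never converges.
print(run_parent_order(
    total_shares=1_000,
    child_size=100,
    send_child_order=lambda qty: qty,  # stand-in market: always fills in full
))  # prints 10 -- the demo cap; without it, this would spin forever
```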

While Knight’s computer engineers worked desperately to diagnose the problem, the software was actively trading in the market, moving $2.6 million a second. By the time they finally halted the system 45 minutes later, the runaway algorithm had executed 4 million trades, moving $7 billion. Some of those trades made money, but Knight lost a net $460 million.
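
Those figures hang together, as a quick back-of-the-envelope check confirms:

```python
# Sanity check on the Knight incident figures as Scharre reports them.
dollars_per_second = 2.6e6           # $2.6 million moved per second
seconds = 45 * 60                    # the 45-minute runaway
print(dollars_per_second * seconds)  # 7.02e9 -- roughly the $7 billion moved
print(4_000_000 / seconds)           # ~1,481 -- "over a thousand trades a second"
```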

At the start of the day, Knight had $365 million in assets. Forty-five minutes later, the company was effectively bankrupt. Scharre’s point? Knight’s runaway algorithm vividly demonstrates “the risk of using an autonomous system in a high-stakes application, especially with no ability for humans to intervene.”

Scharre leaves it to his readers to complete the obvious thought: stock trading is one thing, but war is truly the ultimate high-stakes application.
