Jacob Ward, an American journalist and technology correspondent for NBC News, reports on technology’s social implications. Unlike the familiar “doom and gloom” visions of our technological future, which focus on Artificial General Intelligence and its eventual takeover in a singularity, Ward offers a measured, purposeful exploration of how AI pattern-recognition and decision-guidance systems gradually reshape human behavior and shrink the choices available to us. At stake is nothing less than human freedom itself and the future of free will.
The Loop opens with a reality check on our options for leaving Earth: doing so would require a ship capable of sustaining us for generations and, from us, the long-term sensibilities to recognize and regulate forces that will determine the future of our species. Ward describes The Loop as three concentric feedback loops: human behavior; the ways our society and technology reflect that behavior back to us; and the resulting loss of human agency. It is a generational spiral of shrinking choices that could touch every area of our lives. With this book, he suggests there is still time to step back, recognize the loop’s preconditions, spot its paths through modern life, and break its influence on our future.
The first loop is the human condition, beginning with evolutionary biology and the “mental gap between our perception of what’s going on and what’s actually going on.” Ward’s argument, backed by decades of behavioral studies, is that we are built to follow guidance in our lives while “believing we are in fact making our own decisions,” so we had better get to know the inner workings of our brains, or we’ll be vulnerable to those who would prey on us and keep us blind to the effects. These inner workings include common mistakes in judgment, aversion to uncertainty, desire for assurance, amnesia for lessons of the past, and outsourcing of important decisions to our emotions. Ward contends that blindness to this loop and lack of awareness of human behavior make us susceptible to the second loop.
The second loop is the modern world, which has evolved to gather human behavioral data and, out of greed, exploit our unconscious tendencies, all while we remain unaware of how susceptible we are to being guided and manipulated. The extraordinary opportunities for exploiting our involuntary decision-making systems, and the ways we rationalize our choices, are a powerful, unexamined part of what makes capitalism work.
The research meant to help us understand our own tendencies is instead used by capitalists to exploit them further, and Ward notes that merely knowing about these tendencies does little to prevent or reduce their effects. The science of habit formation suggests the only reliable defense is not to participate at all, “by anticipating and avoiding self-control struggles.” The deep cognitive illusions and psychological vulnerabilities of the human mind, our tribalism, our persuadability, and our deference to patterns make us the perfect data set for AI.
The third and final loop is the power of modern AI pattern-recognition and decision-guidance systems to seduce consumers, companies, and governments with the lure of convenience, cost savings, efficiency, and legal immunity. Machine learning can only make predictions from existing data. Ward describes the phenomenon of “performativity,” which closes the third loop: the system offers new choices based on past decisions, and each new decision is fed back in, reinforcing the predictions and limiting the next generation of choices.
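The feedback dynamic Ward describes can be made concrete with a toy simulation of my own devising (it does not appear in the book): a recommender trained only on past choices keeps re-offering the most popular items, the user can only pick from that menu, and each pick feeds back into the data, so choices concentrate on a shrinking set.

```python
import random
from collections import Counter

def simulate_loop(n_items=20, k=5, rounds=200, seed=0):
    """Toy model of the third loop: a recommender re-offers the k items
    chosen most often so far; the user picks only from that menu; the
    pick feeds back into the training data."""
    rng = random.Random(seed)
    history = Counter({i: 1 for i in range(n_items)})  # uniform prior: every item chosen once
    for _ in range(rounds):
        # "Machine learning can only make predictions from existing data":
        # the menu is just the k most-chosen items to date.
        menu = [item for item, _ in history.most_common(k)]
        choice = rng.choice(menu)   # the user chooses from the menu alone
        history[choice] += 1        # the choice reinforces the prediction
    return history

h = simulate_loop()
top5_share = sum(c for _, c in h.most_common(5)) / sum(h.values())
print(f"share of all choices held by the top 5 of 20 items: {top5_share:.0%}")
```

With these (arbitrary) parameters, the first menu of five items ends up absorbing over 90% of all choices, while the other fifteen items are never offered again, which is the "collapsing spiral of choice" in miniature.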
The Loop is the convergence of pattern-recognition technology and unconscious human behavior, the effects of which are just beginning to show with dire predictions for the future. It goes like this. The applications of AI will be bolted onto every part of our lives. Without legal action and regulation, our choices will narrow, human agency will be limited, and our worst impulses will further dominate society. Caught in a cycle of sampled behavioral data and recommendations, we will experience a collapsing spiral of choice. By handing over more and more of our difficult choices to decision-guidance systems, we will no longer know what we like, how to make choices, or how to speak to one another.
Central to the goal of every AI system is its objective function, the stated purpose it optimizes, which some experts contend is a relative judgment rather than a once-and-for-all requirement. For simple problems, an objective function is easy to define; but where human behavior is concerned, and ethical behavior is sought, no such objective function exists.
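The contrast can be sketched in a few lines; this is my own illustration, not Ward's, and the function names and weights are invented for the example.

```python
def engagement_objective(session_minutes, ads_clicked):
    """A narrow, well-defined objective: a recommender can maximize
    this scalar without ambiguity."""
    return session_minutes + 5 * ads_clicked

def wellbeing_objective(session_minutes, sleep_hours, face_to_face_hours):
    """A would-be 'ethical' objective. The weights below are arbitrary;
    choosing them IS the contested ethical judgment, which is why no
    once-and-for-all objective function exists for human wellbeing."""
    w1, w2, w3 = -0.1, 1.0, 2.0   # assumed weights, not ground truth
    return w1 * session_minutes + w2 * sleep_hours + w3 * face_to_face_hours

print(engagement_objective(30, 2))        # unambiguous: 40
print(wellbeing_objective(60, 8, 2))      # "correct" only under the assumed weights
```

The point of the sketch is that the first function is complete as written, while the second smuggles an ethical stance into its coefficients: any system optimizing it is enforcing someone's values.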
Ward warns that the biggest obstacle to fixing our dependence on decision-guidance technology is a set of assumptions about human behavior, AI systems, and capitalism. The basic assumption is that tech makes our lives better and that capitalism creates solutions that will lift up everyone. The positivity bias that “the arc of the moral universe is long, but it bends toward justice” is a statement of belief, not a provable theory. We assume that AI can improve any process. Machine learning predictions camouflage our unconscious choices as rational, reasonable, rewarding courses of action. Ward says, “Capitalism blinds us to the inequities baked into The Loop.” Hard work, chance, novelty, spontaneity, and the possibility of a lucky break feed the myth of equality when, in study after study, it is centuries of systemic bias in race, gender, and even last name that more accurately determine success.
Ward is not all doom and gloom. In his final chapters, he gives examples of AI researchers and legal battles that light a path that could prevent The Loop from closing past a point of no return: apps that teach parents how to communicate in front of their kids, advise homeowners in fire zones on how to reduce their insurance premiums, and inform governments of the true sentiments of their constituents. These tools have the potential to anticipate future changes in human behavior instead of recycling decisions of the past.
Impressive legal wins against online casino game-makers have shown significant progress against the predatory use of AI decision-guidance systems, which deliberately foster addictive behavior among the high-spending players the industry calls “whales.” Ward points to the National Vaccine Injury Compensation Program (NVICP), an exception to the rules and assumptions of capitalism created to keep the societal good of vaccines financially and legally viable, as a model for addressing the harm done by socially necessary technologies. The same could be done for AI-based systems: self-driving cars that protect against lousy driving, or backup cameras that save children’s lives, though not, he notes, in the case of firearms. Our emotional attachment to our values swings both ways, and what an AI uses to decide what’s important doesn’t always reflect who we are or want to be.
Ward succeeds in his goal because he offers a way to devise new solutions by altering human behavior in the direction of increased awareness, knowledge, freedom, and choice. We are built to avoid thinking for ourselves, setting The Loop in motion, and we are ready to turn over our most vitally important decisions to automated systems. Ward encourages us to learn from this moment and pull back from the seductive influences of convenience and profit to protect “the best aspects of who we are.”