Book Review

Review of Kate Crawford’s Atlas of AI

Among both those who are giddiest and those who are most terrified about technological change, there’s this phrase that gets thrown around a lot: technological singularity. The singularity, they say, is a point in the future—perhaps the near future?—when global technological progress will turn a corner and we will find it has transformed into something new, something greater and more powerful than the sum of its parts that—crucially—can no longer be stopped, and which will profoundly alter the course of human civilization. This is the sort of thing that other cultures have called “apocalypse,” and as with other apocalypses, whether you think this is excellent or terrible depends on what you think is about to happen and whether you think it will benefit you.

People who believe in apocalypses are used to disappointment; projections of the singularity’s arrival date are often revised. What believers cannot accommodate, however, is a rejection of the apocalypse itself, and a rejection is effectively what Kate Crawford offers in her new book, Atlas of AI, a book-length teardown of an emergent technological orthodoxy.

For Crawford, the singularity has already happened, and not because some bits of software have made a special kind of leap, but because we have preemptively taken a disparate group of rather problematic technologies, bundled them together, and, asserting that they are more than the sum of their parts, slapped on the label “Artificial Intelligence.” Crawford does not think we should have done this; for her, in fact, “AI is neither artificial nor intelligent,” and is ultimately too vacuous a concept to be tackled as such. Rather than being radically new, the technologies that comprise AI do what other technologies have long done: they extract labor, they depersonalize, they reinforce ideologies, and they project power. Yes, AI does these things differently, but not so differently that it gets a pass on morality. Instead of bowing to the temptation of the AI concept and wondering what will happen next, Crawford invites us to look behind the term and behold all that it obscures.

What it obscures, it turns out, is a lot of exploited human beings. Crawford begins her book with two chapters on how AI hides its massive human and environmental costs with “a well-practiced form of bad faith.” The environmental costs incurred to support ever more sophisticated learning models and the slave labor likely hidden in the mining of lithium and other metals are not fundamentally different from the invisible costs of previous technologies, and the hounding of employees with constant surveillance and zero-margin efficiency standards is just a souped-up version of what industrial employers were demanding a century ago. Though it looks effortless, machine learning is incredibly energy-intensive, and ambitions to improve it will only make it more so. Apple and Intel cannot guarantee the cleanliness of their supply chains and likely never will. If AI is unique, it is in its shortcomings, which have created space for Amazon’s Mechanical Turk and other stop-gap measures (“fauxtomation”) in which human beings work for fractions of the minimum wage to do work that AI cannot actually do but might someday, if cheap human labor helps it along.

The hypocrisy in all of this is that this people-hiding technology has been gobbling up unimaginable quantities of information about people and using it to evaluate them and the world. Crawford devotes her next three chapters to this. Images are ripped from the internet without context or consent, turning the internet into a kind of “natural resource.” It’s really just another form of extraction, masked by its silence and neutered by its scale. “Ultimately, ‘data’ has become a bloodless word,” Crawford says. But this bloodless mass of data still requires structure, and very often that structure re-inserts assumptions about gender, race, and class, functionally bequeathing to AI the same worn biases that our society has been trying to shake for decades or centuries. Systems like these are designed to classify human beings in as many ways as possible, in the process recreating notions of “purity” and “deviance” across a million different classifiers. This bias towards knowledge-through-classification can even take AI down the road of pseudoscience, as with the longstanding interest in divining human emotional states from facial expressions. Treating these systems as omniscient can result in discrimination and dehumanization on an unimaginable scale.

Much of Crawford’s book is about the potential of AI systems to be misused, but the last chapter homes in on what can actually go wrong. The use of Palantir’s algorithms by law enforcement, for example, ensures that the people chosen for surveillance end up in more law enforcement databases, which in turn subjects them to even more surveillance. China’s social credit system and its pervasive monitoring of minority groups show how a state can do all of this by itself, but Crawford is especially focused on the danger of collaborations between state actors and private corporations, an arrangement in which states are handed powerful tools that they may not know how to wield, and technology firms become de facto state actors without any clear liability.

Throughout the book Crawford mixes ethical deliberation with reporting, illustrating each problem with a story of real people from somewhere in America. Though this technique occasionally feels stilted, Crawford’s writing style is consistently clear and accessible. While the book’s critiques are not radically different from the hand-wringing about AI that regularly appears in popular magazines, Crawford’s framing and her severe intolerance for AI hagiography give the reader the feeling that here is someone who has captured the whole story. Ironically, dissolving the myths of AI is a lot easier when the critiques have been fashioned into a single narrative.

Understood as a critique, Atlas of AI is a strong new offering. But deconstructing modern technological narratives is only the first step towards an AI that does not exploit people or their data, does not treat machine learning’s effect on the environment as negligible, and does not rely on unethical labor. If they are to take hold in the public imagination and not languish in policy documents, these next steps must coalesce in the public consciousness much the same way that AI already has.


David Zvi Kalman, Ph.D.

is a Founding Expert of AI and Faith, and a Fellow in Residence at the Shalom Hartman Institute of North America, where he writes and teaches about religion and technology. He received a Ph.D. from the University of Pennsylvania with a dissertation on the relationship between Jewish history and the history of technology, and an MA from the University of Pennsylvania in medieval Islamic law. He writes in both academic and popular forums and has founded two media companies. Information about his work can be found at www.davidzvi.com.
