First there was SUNY political scientist Virginia Eubanks’ 2018 ground-breaking book, Automating Inequality, which spotlighted how even well-intentioned technology solutions to complex social needs often only spawn new problems. Now Princeton sociologist Ruha Benjamin’s book Race After Technology: Abolitionist Tools for the New Jim Code provides a classification system for such ill effects, and seeks to demonstrate how they naturally arise when technology is simply overlaid on inherently unfair social systems.
Ruha Benjamin is an Associate Professor of African American Studies at Princeton University. Per her website, she “studies the social dimensions of science, technology, and medicine” and writes, teaches and speaks “about the relationship between knowledge and power, race and citizenship, health and justice.” Race After Technology frequently cites knowledge gained from her teaching of a popular undergraduate course on race and racism, for which she received Princeton’s Distinguished Teaching award in 2017. Her website quotes a personal motto: “Remember to imagine and craft the worlds you cannot live without, just as you dismantle the ones you cannot live within.”
The subject of her book – ill effects from superficially considered technology solutions to important social problems – features in major annual conferences for technology insiders like the Fairness, Accountability, and Transparency (FAccT) Conference and We Robot, conferences our AI&F Founding Experts attend. Professor Benjamin’s book opens these issues to outsiders and relates them in an accessible way to critical race theory as well as the popular discussion of structural racism that is roiling our nation.
Indeed, a major change in the two years between publication of Professor Eubanks’ book and Professor Benjamin’s is our reinvigorated national debate over continuing, pervasive structural racism, as well as a newly critical look at the power and responsibility of Big Tech in both conveying and influencing such broad social discussions. We live in a moment when it seems half of our country is seeking to persuade the other half about the ways in which racist structures and privileges result in a different, less free lived experience. This plays out against the rapidly approaching demographic shift from a white majority to a white minority. Finally we seem to be making real progress – more so than at any time since the Civil Rights Movement of the 1960s. But much of the media through which the discussion is playing out – granular and glandular social media posting and professional journalism that has widened and hardened the divide between two Americas – is different in kind, not just degree, from the ’60s, and threatens the fruitfulness of our current discussion.
Her book consists of an introductory essay of nearly 50 pages, four chapters which classify the ways that uncritical technology solutions for social problems go awry, a fifth chapter on possible solutions, and a useful appendix. To my mind, the opening essay is the strongest part of the book; the four classifications form an untested taxonomy, though populated with useful illustrations; and the solutions chapter is the least developed, though the appendix provides some beneficial references.
Benjamin’s organizing idea, as delineated in the Introduction, is The New Jim Code. She defines this as “the employment of new technologies that reflect and reproduce existing inequities but that are promoted or perceived as more objective or progressive than the discriminatory systems of a previous era.” “Code” is both a reference to racial codes that society uses as sources and means of power and social control, and computer coding that creates the algorithms which often perpetuate those social controls, even as they disguise them.
The New Jim Code is a nod to Michelle Alexander’s deeply influential book The New Jim Crow, about how changes in law and criminal punishment in the 1980s around drugs and urban decline created a new post-Civil Rights era pervasive system of racial discrimination that mimics the post-Reconstruction Jim Crow era of legally blessed “separate but equal” discrimination. Under the New Jim Code regime, for instance, Benjamin cites the research of computer scientist Latanya Sweeney showing that search engine algorithms ethnically classify people’s names, then learn from users’ searches to associate “Black names” with arrest records at a much higher rate than “White names”. Thus, the search engines learn bias. And when the biased datasets built off such searches are employed to “teach” other automated judgment algorithms that act as important and broadly deployed gateways to jobs, education, healthcare and housing, the New Jim Code blossoms into a full-blown system of pervasive racist societal sorting.
Benjamin sets out to provide a “field guide into the world of biased bots, altruistic algorithms, and their many coded cousins”, technical fixes for social justice problems often born not out of malice but “the desire for objectivity, efficiency, profitability, and progress.” Her field guide classifies such misconceived fixes as:
1) “engineered inequity” in which “innovative techniques give rise to newfangled forms of racial discrimination” – for instance, racist robots which are thought to be superior to humans in terms of efficiency and regulation of bias but in fact absorb human bias through their programming.
2) “default discrimination” in which “particular perspectives and forms of social organization” creep into technical systems that simply “mirror . . . the wider terrain of struggle over the forces that govern our lives” – for instance, predictive policing algorithms that effectively redline neighborhoods as nests of criminality and transform their residents into presumptive criminals.
3) “coded exposure” in which technology “both reveals and hides forms of social and political vulnerability and risk” – for instance the crosshairs created for black people by predictive analytics software which supposedly reveals troubling patterns of behavior in ethnic communities, and facial surveillance software that is supposed to regulate such behavior but does a lousy job of identifying particular black faces, with the result that an ethnic group is targeted and individual members of it misidentified.
4) “technological benevolence” in which technology is deployed to solve a problem without looking deeper into the systemic structure of the problem – for instance, the use of digital tracking devices as an alternative to the bail system, which simply drives the “prison industrial complex” deeper into the lives of black people through new technology.
The test of any such taxonomy must be its utility for clarifying facts in the world and facilitating knowledge and analysis. The jury remains out on the utility of Professor Benjamin’s particular scheme, but it certainly serves as a useful iterative step.
Whether or not these classifications work crisply, all of them illustrate the tradeoffs involved between short-term, often beneficial fixes, and longer-term, perhaps more beneficial but more difficult change. For example, in the category of “technological benevolence”, Professor Benjamin contrasts the work of one nonprofit which crowd-sources bail money to enable black people to opt out of the current system, with a digital monitoring startup popularized by Jay Z and other celebrities that some critics liken to a digitized update of old school shackles. The crowd-sourced bail nonprofit has helped 65 people. The digital monitoring startup allows scale for a vast number of people, many of them ethnic minorities, who would otherwise be unable to bail out due to lack of funds. Both approaches perpetuate the current system and do not directly address the underlying bias that causes a disproportionate number of arrests and incarcerations in the first place. But alongside its valuable scale (less actual incarceration for many people while awaiting trial), the tech solution bears an additional stigma precisely because it purports to be a benevolent and progressive response, when in fact it can be said to simply enable an unjust structural incarceration system. Such are the tradeoffs we must continue to debate.
The phrase “abolitionist tools” in the subtitle to this book is an energizing term, but it is best seen as a call for much more work rather than a provision of actual solutions. The fifth chapter of the book, which is supposed to address such tools, mainly continues the critique of ways of thinking, dismissing “design thinking” and “empathy” in particular as overblown solutions.
The fifth chapter and appendix are useful, however, in spotlighting organizations with different approaches to identifying and redirecting the impact of data as a structural element of our current society. Data and Society, the New York City anthropology-based think tank, is one such organization. Two others that are more grassroots-oriented are Data for Black Lives and The Algorithmic Justice League. These organizations make an interesting contrast.
The Algorithmic Justice League combines scientific research with art and storytelling to distribute its policies and recommendations. Data for Black Lives is a multidisciplinary movement which overtly seeks to “abolish big data” as “part of a long and pervasive historical legacy of scientific oppression, aggressive public policy, and the most influential political and economic institution that has and continues to shape this country’s economy: chattel slavery.”
Data and Society probes the impact of data on society through a network of fellows and programs that are academically based but realistically applied. Benjamin highlights the work of Data and Society and the related study group Auditing Algorithms in calling for equity audits as one solution toward “algorithmic accountability”. Data and Society poses three questions as a starting point for an equity audit:
- What are the unintended consequences of designing systems at scale on the basis of existing patterns in society?
- When and how should AI systems prioritize individuals over society, and vice versa?
- When is introducing an AI system the right answer – and when is it not?
This audit process is similar to the accountability process Microsoft President Brad Smith describes in Tools and Weapons: The Promise and the Peril of the Digital Age.
In closing, I came away from this book appreciating Professor Benjamin’s approach of interrogating through the framework of racial experience how technology is superimposed on our societal structure. The same should be done through the framework of faith beliefs translated into applied ethics – the mission of AI and Faith. We are presently seeing a corporate-blessed proliferation of interfaith Employee Resource Groups (ERGs) at many of the major tech companies, coming into existence alongside longer-established ethnic and gender-based ERGs. It would be exciting to see how these groups could work together to improve their corporations’ products and outcomes for society by the application of mutually accepted and overlapping values of fairness, justice, and human flourishing.