
Book Reviews: The Ethical Algorithm and A Human Algorithm

THE ETHICAL ALGORITHM: The Science of Socially Aware Algorithm Design. By Michael Kearns and Aaron Roth

and

A HUMAN ALGORITHM: How Artificial Intelligence Is Redefining Who We Are. By Flynn Coleman

Two buttoned-down AI scientists and a ‘can’t we all just get along?’ hippie walk into a bar . . .

OK, please forgive the click-bait opening sentence, but after reading these two very different books about the very same subject, I couldn’t help myself. Both books purport to address a compelling question: Given that algorithms increasingly control our lives, can they be designed to incorporate humane values up front — as opposed to simply seeking, after the fact, to regulate their dangerous outcomes? Yet despite sharing that question, the books couldn’t be more different.

Kearns and Roth are the two AI scientists referenced in my opening line. Their book sets out to explain “the emerging science of ethical algorithm design” — emphasis on “science.”

Kearns and Roth describe conventional AI algorithmic models, à la Nick Bostrom, as mindless ‘paperclip maximizers,’ always seeking the most efficient or productive solution to the posed problem. Of course, given that machine learning systems derive algorithmic models on their own, these data-derived solutions often ignore ethical or moral constraints that humans consider important or imperative.

Not surprisingly, therefore, many AI systems exhibit demonstrable bias. Some AI models, for example, maximize profit by denying conventional loans to people of color, or by offering them only subprime ones. Other models, used for hiring, college admissions, and sentencing decisions, exhibit similar bias. So far, the response has largely been to (try to) regulate against such discriminatory outcomes.

Ethical Algorithms

What Kearns and Roth want, instead, is algorithms designed to make values like fairness and privacy directly part of the algorithmic outcome. “Instead of people regulating and monitoring algorithms from the outside, the idea is to fix them from the inside.”

That sounds good, of course, but the devil is in the details. The authors acknowledge as much. “Of course, one of the greatest challenges here is in the development of quantitative definitions of social values that many of us can agree on.”

Notice the word “quantitative.” That’s crucial: otherwise, a social value can’t be made part of an algorithmic (i.e., math-based) solution. Much of the book, therefore, is about trying to determine, quantitatively, what we actually mean when we talk about values like fairness, privacy, and transparency. For the most part, I came away unconvinced — unconvinced that humans can reach consensus on moral definitions at all, much less on quantitative ones.
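To make “quantitative” concrete, here is one formalization from the fairness literature, demographic parity, which asks that a model approve members of different groups at roughly the same rate. (This minimal Python sketch is my own illustration, not an example from the book.)

    def demographic_parity_gap(predictions, groups):
        # Largest difference in positive-prediction ("approval") rates
        # between any two groups; 0.0 is perfectly "fair" by this metric.
        counts = {}
        for pred, group in zip(predictions, groups):
            hits, total = counts.get(group, (0, 0))
            counts[group] = (hits + (1 if pred == 1 else 0), total + 1)
        rates = [hits / total for hits, total in counts.values()]
        return max(rates) - min(rates)

    # A hypothetical loan model that approves 75% of group A but 25% of group B:
    preds = [1, 1, 1, 0, 1, 0, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_gap(preds, groups))  # prints 0.5, a large gap

The trouble, as Kearns and Roth themselves discuss, is that this is only one of several plausible definitions; others, such as equalizing error rates across groups, are just as intuitive, and in general no single model can satisfy them all at once.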

The authors, despite their enthusiasm for (the prospect of) ethical algorithms, acknowledge that many others are unconvinced as well. “The critics of the algorithmic approach may often be right. There are many consequential domains where algorithmic tools are still too naive and primitive to be fully trusted with decision-making.” And coming up with agreed-upon, quantitatively precise definitions of human values is not the only difficulty.

Algorithmic Pitfalls

Another is that the very process of data-driven algorithmic optimization frequently leads to unexpected and undesirable side effects. The book recounts several examples, including this obvious-only-in-hindsight classic: during the late 2017 Southern California wildfires, Waze and Google Maps guided fleeing people directly into danger — because those were the routes with almost zero vehicular traffic. Kearns and Roth draw the obvious conclusion: “algorithms generally, and especially machine learning algorithms, are good at optimizing what you ask them to optimize, but they cannot be counted on to do things you’d like them to do but didn’t ask for, nor to avoid doing things you don’t want but didn’t tell them not to do.”

To their credit, the authors conclude by asking this very big question: Are there decisions that algorithms simply should not (be allowed to) make? They even pose a potentially compelling instance — killing a human being in (automated) warfare.

“The argument is that the final decision to kill a human being should only be made by another human being because of the moral agency and responsibility involved; the weight of such a decision should lie only with an entity that can truly understand, in a human way, the consequences at hand.”

Still, they then note that “of course, if the algorithm really is more accurate, sticking to this moral principle will result in the deaths of more innocent people.” Maybe. But despite their techno-optimism, I remain unconvinced.

A Human Algorithm

Unfortunately, I was even less convinced by Flynn Coleman’s A Human Algorithm — the ‘hippie’ of my opening sentence. Actually, ‘hippie’ isn’t fair; Coleman is an international human rights attorney, and I mean no disrespect. Still, there is much in her book that invites caricature.

She hints at what is to come on the very first page of her introduction. “This is a book about how our relationships with the technologies we create will help us reimagine what it means to be human and who gets to be included in the definition of humanity.” Coleman, it turns out, takes a (very) big-tent approach to defining humanity.

She starts, though, with a laudable appreciation of human diversity:

The responsibility for ensuring that future intelligent machines are fair, ethical, and coded with a conscience that respects values equitably lies with the architects of the future—all of us. To do so effectively, we need a diversity of voices in the room, across spectra of gender, sexuality, race, and experiences and across socioeconomic, religious, and cultural lines: not only significant numbers of women and people of color participating, but also people of different ages, abilities, and viewpoints.

But Coleman keeps right on going, arguing that we “need to revise our value systems to house a more inclusive definition of human rights—one that includes nonhuman, even artificial, beings.” She then adds that it’s time to let go “of the fallacy that life revolves around humans, as we race toward a future coexisting with new, intelligent beings.”

Coleman never adequately addresses why AI computers should be thought of as “beings,” rather than mere machines. She acknowledges that many are deeply skeptical that computers and robots could ever be conscious. Despite that, she blithely assumes eventual machine consciousness (though probably not until computing goes quantum).

Coleman continues along this trajectory toward its logical destination:

Just like the animals who do so much to keep the ecosystems of our world afloat, our intelligent machines, if imbued with ethics, morals, and values, should be granted rights . . . along with animals and forests, oceans and trees, should be included in the crucible of humanity (emphasis added).

And here’s the real payoff:

Ideally, our benevolent and brilliant machines could help us eradicate poverty and unemployment and diminish disease, violence, and the deeply rooted injustices in our human-made and very flawed systems that already affect billions each day . . . In partnership with extraordinarily smart AI, we can choose to improve not just our own lives but the lives of many. We can choose utopia (emphasis added).

There you have it. If humans can quit thinking of themselves as exceptional, as superior to other living things, much less machines, solutions to all our problems lie just around the corner. Our machines can save us, if only we’ll let them.

In the process, we can fulfill some cosmic plan. “Together, we can reimagine our place in the universe, our connection to all things, remembering that we are all — human and creature, mountain and river, sun and moon — made up of stardust (emphasis added).” Joni Mitchell* had us pegged all along.

If two AI scientists and a hippie walk into a bar, it turns out they really won’t have much to talk about. The AI scientists want to discuss ethics as a math problem, and the hippie wants to cede the future to machines. They may raise interesting questions, but their answers sound convincing only if you’ve already downed a few drinks.

* Woodstock, by Joni Mitchell (partial lyrics)

We are stardust
We are golden
And we’ve got to get ourselves
Back to the garden
