Book Review

Review of “What to Expect When You’re Expecting Robots”

When my oldest daughter was born prematurely, her life depended for a mercifully short but crucial time on a ventilator. Her lungs needed the help of automation if they were ever to have a chance to breathe on their own. It is a memory that has stayed with me throughout my research work in robotics and ethics, and it took on added depth reading Laura Major and Julie Shah’s new book “What to Expect When You’re Expecting Robots” (Basic Books). Certainly the title alone would have been enough to do the trick: whose expectations of robotic technology, and what manner of birth (as the popular “What to Expect When You’re Expecting” book addresses), is being presented here? Or is this title more of a wry poke at overblown, futuristic expectations, a compulsion to send birth announcements while never getting into the messy details of how much can go wrong when the future arrives? However the reader approaches it, the book proves an able, intrepid look into the complications and challenges that human-robot interaction has already begun to pose. For those interested in AI and robotics in light of religious reflection, it also demonstrates how important social norms will be to any robot a responsible individual or society ought to expect.


Major and Shah offer assured guidance through many issues and cases that AI and robotics have raised to public attention over the last decade: the uneasy promise of self-driving cars, spectacular leaps in certain machine-learning applications and their data demands, the spread of robotic systems from factory floor into street and home. Their aim is not to alarm or hype, but to help the reader, as the kids say, “touch grass”: lurid evocations of sci-fi and breathless Silicon Valley pitches are not needed to see real developments in robotic interaction, however slow, on the ground.


While attention-grabbers like the recent social robot Jibo may have flopped, Major and Shah argue compellingly that more careful, inclusive design can hit upon genuine need and responsible implementation of beneficial automated action. They do more than cite “human-centered design”: they flesh out the kinds of design tensions and priorities that will need sorting out before machines can be expected to play a functional role. Drawing upon longer histories of automated technology than the current wave of AI (did you know airplane autopilots date back to 1912?), the authors continually show how important it is to track the societal setting of a robot before focusing on how it is “superhuman”; in fact, they show that such a label does not even make sense without accounting for how human beings are acting and reacting amid such a system’s actions.


To give but one example of this finer-grained discussion of human-machine collaboration, Major and Shah distinguish levels of “societal entropy” into which robots might enter. The higher the entropy, the more unexpected the types of situations an agent might encounter, and the more that agent needs norms. Driving down a clear road may require knowing the speed limit, but dealing with a fallen branch on the road may involve delicate understanding of what it might mean for other drivers, or one’s own car’s axle, or a kid walking out to pick it up. Working robots, Major and Shah tell us, may help us if society helps them. If the environment can be organized with such entropy in mind, it may liberate robots from the commonsense knowledge demands that would otherwise render them useless or even harmful. As one chapter’s title notes, it may “take a village to raise a robot.”


This phrase brings the question of birth back to mind, of course, and how often robots are cast as children. One answer to the birth/child paradigm that the book illustrates, though it never claims to do so directly, is that children are the ones who break norms with some degree of societal tolerance. They do not know any better. And that is the challenge Major and Shah stress: how can robots not just play but also work well with others? This requires much closer attention to norms, something the field of human-robot interaction still struggles to integrate and communicate.


Even Major and Shah at points fall short on this score. They introduce an idea of affordances (what a robot’s design invites in terms of use) without noting how much work has already been undertaken on the challenge of affordances and norms for robots (e.g., Vasanth Sarathy among others). I also would have liked the book to gear its cases more toward issues of disability, especially since disability studies is such a key and fertile area of design research (e.g., the Starship delivery robot in Pittsburgh that blocked a crosswalk and stranded a person in a wheelchair in the road). A related topic the authors do not discuss enough is what norms data collection itself might violate in the effort to train robots through machine-learning approaches. Which villagers will be doing the training, and with what support (Mary Gray and Siddharth Suri’s “Ghost Work” is a poignant reminder of how brutal the life of training algorithms can be)?


Nonetheless, Major and Shah’s recognition of interaction as fundamental to how robots are to be “expected” (including mapping where it will be more useful to impede communication between people and robots) opens up wide avenues for cultural and religious reflection. Which norms are upheld, threatened, and transformed through advances in automation, and how will norms work in light of that? Interaction with robots may well evoke more social, norm-negotiating responses than current technologies commonly do. A child on a ventilator is one thing, but what about a robotic arm feeding a loved one in hospice care? I could not breathe into my daughter’s lungs, but I might want, even need, to feed a dying parent.


The liminal moments of birth, death, and suffering are ones to which religious and spiritual traditions speak amply, if not always clearly or helpfully. Major and Shah speak at several points of “expert analysis” of social contexts for robots. What kind of cultural expertise needs to inform engineers of where norms of different kinds may criss-cross social space like those moving museum-security lasers in heist movies? To use an example from Sarathy, a robot may be able to pick up a vase without dropping it, but the stakes of doing so are worlds apart if a flower, and not a loved one’s ashes, is inside.


In a public discourse still largely guided by “takeover” prospects and the daunting bounds of Boston Dynamics robot dogs, it is a difficult task to stay attentive to the everyday, distributed challenges of robots sharing space assistively. Major and Shah’s book goes a long and laudable way toward that shift. It invites a broader circle of inquirers, designers, and citizens to continue the work of guiding where and how robots could best operate. Most commentators in human-robot interaction will recite that robots are about assistance, not replacement. But the harder question is when one cannot breathe without automation and when it is time to breathe on one’s own. Harder still is the question of what just realm the breath of life might give us the courage to expect.


Thomas Arnold

is a Founding Expert of AI and Faith and a Research Associate at the Human-Robot Interaction Lab at Tufts University. He has served as a lecturer in the Tufts Computer Science Department, teaching a course on AI, ethics, and robotics. His work at Tufts focuses on the ethical evaluation and design of AI systems for interactive contexts, including how moral norms will shape how social robots are judged. Thomas is an author and co-author of numerous peer-reviewed articles on AI ethics and human-robot interaction, as well as a co-author of “Ethics for Psychologists: A Casebook Approach.” He holds a bachelor of arts degree in classics and philosophy from Stanford University and a master of theological studies from Harvard Divinity School. He is nearing completion of his PhD dissertation for Harvard’s Committee on the Study of Religion.
