
Before Writing Ethics for Robots, Let’s Get Humans to Apply Their Own Ethics First

The August 4 issue of New Scientist included an article titled Robot Laws (in the digital version, the title expanded to Robot laws: Why we need a code of conduct for AI – and fast), written by Douglas Heaven.

The article starts with AI’s recent crisis-of-conscience moment in the real world: the death of Elaine Herzberg, struck and killed by an autonomous vehicle. The investigation attributed Herzberg’s death to design flaws: emergency braking disabled in favor of a smooth ride, and no programming that told the car to alert the human operator when it perceived a danger. The US National Transportation Safety Board’s conclusions meant that, given those design parameters, cars running this version of the software would eventually cause a problem. It was just a matter of time.

AI in the wild

This leads Heaven to Asimov’s Laws of Robotics, which at 75 years old stand as the best known of the codified attempts to keep robots from harming humans. Asimov, however, was writing fiction, and much of his work explored the tension between his laws and their application in the situations in which he placed his robots.

As Heaven points out, there are no rules for robots in the contemporary world. He quotes Mady Delvaux, a member of the European Parliament for Luxembourg and author of an EU report on laws for robotics, who compares robots today to the early days of the automobile. “The first drivers had no rules,” she says. “They each did what they thought sensible or prudent. But as technology spreads, society needs rules.”

But Delvaux fails to point out that humans were behind the wheel of those early automobiles. Even without specific rules about automobiles, these large, fast, potentially destructive vehicles fell into a class of things humans already had rules about: things that hurt people. All major religions have rules against hurting others, so while the drivers of early automobiles may well have decided for themselves where they could drive, how fast they could go and where they could park, the inherited values of general moral obligation held. Most people did not deliberately run into crowds, or even one person, or a building. People knew using a car as a weapon was not right, even when the more subtle issues of automobile law remained in question.

That general moral umbrella did not restrain everyone: people used automobiles as getaway cars in bank robberies, endangering passersby whenever the driver decided their own welfare outweighed that of those they were driving past. The same behavior exists today in car chases where perpetrators place people and property in harm’s way. These people know the moral code, and they know the rules about how and where to drive a car, yet they still initiate high-speed chases. And drive-by shootings get their name from people firing guns from moving cars. All illegal, and all still too common.

Toward a robot code of behavior

Heaven asserts that “robotic intervention into human affairs is going to require something far more comprehensive than the highway code.” I disagree. Judaism counts 613 commandments in the Torah. According to the Chofetz Chaim in the Sefer HaMitzvot HaKatzar, of those 613 commandments, 77 positive and 194 negative commandments are currently observable, and of those, 26 apply only in Israel. That leaves 245 commandments relevant to modern Torah-observant Jews, and of those only a few govern how people relate to each other (bein adam le-chavero). Examples of these interpersonal laws include visiting the sick, hospitality to guests, controlling anger and loving your neighbor. Some scholars argue that commandments like the prohibition of murder are self-evident even without the law. Psalm 15 summarizes eleven ethical requirements that provide the underpinnings for the fulfillment of all 613 commandments.

Not all Jewish laws offer the same clarity on first assertion. Many find elaboration in the Midrash and the Talmud. These transcribed discussions and commentaries fill in various ambiguities, such as what “work” means in the commandment not to work on the Sabbath. Loving your neighbor also falls into this category, as it expands into protecting your neighbor’s property, preventing him or her from being harmed, speaking well of your neighbor, acting with respect and not glorifying yourself at your neighbor’s expense. Acting in this way brings peace and is a prerequisite to the Jewish people’s redemption.

Robotics and AI face a choice about interpretation: either wait to deploy technology until it reaches a human level of common-sense understanding, or keep humans fully in charge of any morally uncertain choice a robot or algorithm faces.

Here come the watchers

Readers easily grasp the simple assertions of the Torah and the relatively simple expansions by the rabbis in the Midrash and Talmud. The need for a great body of programming to manage a machine code of ethics comes from machines lacking common sense. Heaven acknowledges this. Common sense in AI, however, continues to prove elusive, which leads to his conclusion, and that of many others, that the best way to prevent harm from robots and AI is to let humans make the choice whenever there is a possibility of harm.

Placing humans at every robotic inflection point is easier said than done. As robots increase in number, the world will require squads of watchers waiting to intervene. Perhaps being a watcher is the job that those who fear the displacement of human labor fail to imagine. While that choice may seem viable, robots will eventually outstrip the human population. There are already almost as many cell phone subscriptions (6.8 billion) as people (7 billion), which puts intelligent assistants in the hands of most adults and many children around the world.

But human watchers present their own dilemma, given the observations above that people regularly violate not only ancient moral codes but also national, state and local laws. Putting watchers in place as a stop-gap against rogue robots or confounded algorithms assumes that people will always make the right choice. That is a flawed assumption.

Regular automobiles fall fully within the control of humans, yet some humans choose to inflict harm with them. Given the number of robots that will be in play in the future, watchers may need to limit their interventions to major decisions and forgo minor ones. A household robot scurries in front of a father coming home from work as it seeks to sweep up a crumb. Do the watchers miss the intervention that would have kept this man from crashing to the floor?

Does a drilling robot that goes off its path because it encounters a rock, then swerves enough to sever a pipe in a residential area, fall within the realm of things that should check with a human before proceeding? Those may seem like minor issues, but the second could lead to a sinkhole and major property damage, and both could result in injured people.

Make the makers responsible

Delvaux places responsibility with manufacturers, which is well and good if the manufacturer builds a robot and offers a guarantee of its behavior. In the wider discussion, however, bad actors who want their robots to create destruction will demand the same guarantee from weapons manufacturers, and thus evil can be encoded as easily as good.

Pushing responsibility to manufacturers, however, is a non-starter, especially in America. People propose the same argument about holding gun makers responsible for death and injury after every mass shooting. Under the Protection of Lawful Commerce in Arms Act (PLCAA), gun manufacturers cannot be held liable in civil court for harm “resulting from the criminal or unlawful misuse” of firearms or ammunition.

The problems of transparency and categorization

Heaven goes on to suggest that robotics and AI developers abandon black boxes that obfuscate their reasoning. Robots and AI should offer transparency to their owners, or to those affected by their analytics. They should be able to explain their behavior, to make their moral intent clear.

Making robots and AI more transparent is a good choice; however, it is not a solution to bad behavior except when debugging such behavior after it occurs. The accident mentioned early in this post was surely unintentional, but it is also clear that the desire for a smoother ride produced the intentional act of disabling the emergency braking. Roboticists must start with intent and design for it. A machine that intends to drive safely must, therefore, include programming that unpacks what it means to drive safely. In this case, designers intentionally disconnected a mechanism integrated into automobiles for safety purposes. The error was not with the robot but with designer intention, with those who prioritized comfort over safety. Who will control the overrides is as big a question as who will own the off switch. Robots may do harm when people tell them to intentionally harm another human, or through the unintended consequences of an intentional programming parameter choice.

Anne-Marie Imafidon from Palo Alto’s Institute for the Future suggests we should make robots that are better than we are, which means making sure they don’t share our obsession with categorizing things. Most AI today focuses on categorization. That is a task machine learning does well. And most AI systems that use rules also categorize data, though they may use rule-based logic rather than, or in combination with, machine learning-driven pattern recognition.
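
To make that distinction concrete, here is a minimal sketch of the two approaches. The object fields, thresholds and labels are invented for illustration only, not drawn from any real driving system:

```python
# Hypothetical sketch: rule-based categorization vs. learned categorization.
# All field names, thresholds and labels are invented for illustration.

def rule_based_category(obj: dict) -> str:
    """Categorize a sensed object with hand-written rules."""
    if obj["closing_speed_mps"] > 2.0 and obj["distance_m"] < 30.0:
        return "threat"          # something it might hit, or that might hit it
    if obj["type"] == "open_lane":
        return "opportunity"     # a better pathway
    return "ignore"

# A machine-learning system arrives at the same kind of label a different way:
# it fits a function to labeled examples instead of encoding the rules by hand.
# (scikit-learn is used here purely as a familiar stand-in.)
from sklearn.tree import DecisionTreeClassifier

X = [[3.5, 12.0], [0.1, 80.0], [2.8, 25.0], [0.0, 60.0]]   # [closing speed, distance]
y = ["threat", "ignore", "threat", "ignore"]               # labels from past examples
model = DecisionTreeClassifier().fit(X, y)

print(rule_based_category({"closing_speed_mps": 3.0, "distance_m": 20.0, "type": "car"}))
print(model.predict([[3.0, 20.0]])[0])  # both paths categorize the same object
```

Either way, the output is a category, which is the point: the approach changes, the obsession with sorting the world into labels does not.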

The examples in Heaven’s article suggest not that machines should stop obsessing over categorization, but that they should do so in a way that is subtler and more individual, an intentional design choice that may raise privacy concerns and still doesn’t solve the problem of robot behavior. Even if a machine used broader or narrower categories when driving, it would still need to classify threats (things it might hit or things that might hit it) and opportunities (better pathways, perhaps even emergent ones). That argument seems a dead end.

Heaven then heads down the uncanny valley of always being able to identify a machine, regardless of how good machines get at tricking us into thinking they are human. This has little to do with safety or ethics. If a machine acts morally, should we care whether it is a machine or not? Perhaps machines fed the corpus of human moral writings could derive their own version of ethics, one they would follow more religiously than any code specifically designed by humans to govern AI and robotic behavior.

Humans should always have access to the off switch

Heaven’s article ends with, “There is at least one technological fix we might all agree on, however. ‘A human should always be able to shut down a machine,’ says Delvaux.” We must also question that assertion. If a robot is protecting people from something, say a mine cave-in, and the human who owns the mine decides that he or she wants to shut off the robot and let the cave collapse, should he or she, as the owner of the property (the mine and the robot) and the employer of the people in the mine, be able to do so? The robot would not be at fault; it would be an instrument of human will.

Take a weapons system example with the following rules of engagement: classify anyone in a given geographical area as an enemy; classify any non-enemies who remain in the area as collateral damage. Commanders send in autonomous robots intending to secure the area. How would human control change the robots’ behavior while they execute on that intent? With moral choices determined and parameters set, the robots won’t go rogue and attack another area (unless they suffer a serious malfunction). The robots may create enormous damage during their mission, but probably more selective damage than the aerial bombardment or street fighting on display in any record of post-war cities from World War II or the recent Middle Eastern conflicts. An autonomous robot will only carry out an autonomous action within the constraints of its programming.

Leave aside for the moment any efforts to give robots or AI creativity: like any tactical weapon, robots will only be deployed by commanders within established rules of engagement. At the strategic level, where weapons like nuclear bombs dominate, robots may also be unleashed under very open terms of engagement. Any use of robots in this way, however, requires humans to make the choices about how to implement the rules of engagement.

Humans may own the off switch, but the mission parameters that matter most also originate with them. Any emergent ethical rules need to focus on the person responsible for telling the robots what to do. Unfortunately, history proves that for all the existing laws and ethical codes of conduct, humans often fail to do the right thing even when they know what the right thing is. Rather than spinning our wheels on ethical commandments for robots, we should first figure out why the ones we seemingly cherish so much fail us so often.
