
Where the Trolley Problem is Useful in Autonomous Vehicle Technology

There is considerable backlash against the idea that the trolley problem is a useful thought experiment for exploring the ethics of autonomous vehicle (AV) technology. While it is true that there are many conceptual pitfalls in its application, the trolley problem is actually useful in one specific case: the question of whether to optimize for passenger safety or pedestrian safety.

What is the trolley problem?

When autonomous vehicle technology became a realistic prospect, there was immediate and considerable enthusiasm around the application of the ‘trolley problem.’ The trolley problem is a classic thought experiment used in ethics to test some basic moral intuitions. It asks: if you witnessed an out-of-control trolley speeding toward five unsuspecting people, but realized you could pull a lever and switch the trolley to a track where it would kill only one person, would you pull the lever?

The initial exuberance around the application of this dilemma is probably rooted in some hope that an abstract philosophical problem actually had some real-world relevance (finally!). But in recent years there has been a backlash against the idea that the trolley problem has any useful applicability to AV technology.

One article puts the argument like this:

“Trolley dilemmas can be useful considerations only if they (a) occur with some reasonable frequency, and can be (b) detected with confidence, and (c) negotiated by an AV system with controlled actions. However, all three of these conditions are unmet, and likely never can be met” (3).

What the authors of this argument are imagining is the kind of scenario introduced by the MIT Moral Machine Experiment, in which AVs, apparently equipped with facial recognition technology, make a ‘decision’ to swerve into one type of person instead of another and then take that action. A version of this experiment with human respondents is still live.

Critique of the Argument

This argument appears to be unfair concerning (a). While it is true that AVs will face extremely few real trolley problems, AV algorithms do not need to be trained on real-life examples. That is precisely what reinforcement learning in simulation is for: rare scenarios can be rehearsed during training at whatever frequency we choose, which is essentially what the Moral Machine Experiment does with human respondents. Or perhaps the authors are expressing the belief that because AV-based trolley problems won’t happen very often, they are therefore not very important. This is also a mistake. Even a single AV accident receives national attention, which is a real issue for trust not just in AVs, but in artificial intelligence in general. Certainly, if an AV drove into a crowd to miss a lamppost, the national consternation would be incredible. Concerning (b), the argument is also unfair, since AV technology could be combined with facial recognition technology to ‘detect with confidence’ who it is swerving into.
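To make the simulation-training point concrete, here is a minimal sketch in Python of how a rare dilemma can be oversampled during training. Everything in it is a hypothetical assumption of mine (the action names, the harm weights, the toy outcome model); it illustrates the idea, and is not a real AV training pipeline.

```python
import random

# Illustrative only: a toy bandit-style learner rehearsing a trolley-like
# dilemma that is oversampled in simulation. All names, weights, and
# outcome probabilities are hypothetical assumptions, not a real AV stack.

ACTIONS = ["brake_in_lane", "swerve_onto_sidewalk"]
PEDESTRIAN_WEIGHT = 10.0  # assumed: pedestrian harm penalized more heavily
PASSENGER_WEIGHT = 1.0    # than passenger harm

def simulate(action, pedestrians_nearby):
    """Caricatured outcome model: hard braking risks the single passenger,
    while swerving risks everyone on the sidewalk."""
    if action == "brake_in_lane":
        return {"pedestrians": 0, "passengers": 1 if random.random() < 0.3 else 0}
    return {"pedestrians": pedestrians_nearby, "passengers": 0}

def reward(outcome):
    """Negative weighted harm: the less harm, the higher the reward."""
    return -(PEDESTRIAN_WEIGHT * outcome["pedestrians"]
             + PASSENGER_WEIGHT * outcome["passengers"])

# On real roads this dilemma is vanishingly rare; in simulation it can be
# 100% of episodes. Epsilon-greedy action selection, running-mean values.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
for _ in range(10_000):
    if random.random() < 0.1:
        action = random.choice(ACTIONS)       # explore
    else:
        action = max(values, key=values.get)  # exploit
    r = reward(simulate(action, pedestrians_nearby=random.randint(1, 5)))
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]

print(values)  # brake_in_lane ends up preferred under these assumed weights
```

The particular algorithm does not matter; the point is only that training frequency is under our control, so condition (a) in the quoted argument does not bind.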

But there is a larger problem here. The original version of the trolley problem tests our intuitions about aggregating happiness, where all we are deciding about is numbers. Since one is less than five, most people agree that killing one is better; on this point there is general agreement. But the trolley problem as imagined by the Moral Machine Experiment and its critics alike is not simply about quantitative judgments, but qualitative ones. It asks us, for example, whether we would be more willing to kill an old person or a young one, a law-abider or a criminal, a rich person or a poor person. But this is not the trolley problem. This is a demand for making qualitative judgments about human life. As such, it is actually closer to a version of a ‘lifeboat ethics’ scenario, where we must decide whom to pull onto our hypothetical lifeboat when there is not enough capacity for all the surrounding swimmers in the ocean.

If we are considering whether to apply lifeboat ethics to AV technology, I share the general skepticism. And I would add that trying to agree on qualitative judgments about human life is inherently demeaning for any and all involved. But what happens if we abandon all the moral baggage added by qualitative lifeboat-style scenarios, and instead focus simply on the original trolley problem?

I think the original trolley problem, on which there is agreement, will lead us to favor the safety of pedestrians over the safety of the passengers, to the extent that such an option can be optimized by engineers.

The Incorrect Assumption

My point is far from obvious. I have actually heard engineers laugh out loud at the idea that an autonomous vehicle might be optimized to harm its passengers rather than the surrounding pedestrians. The assumption is that if AVs are ‘willing’ to harm their own passengers, this would severely disrupt the viability of the AV marketplace. Who would enter such a death trap?

This is a psychological assumption, and I think it is wrong. The best way to explore it is to look for a relevant comparison, which we find in airline travel. There are at least three useful similarities. First, statistics are clear that airline travel, like future AV travel, is orders of magnitude safer than human driving. Second, even though airline travel produces far fewer accidents and fatalities than driving, the ones that do happen make national news and haunt our collective consciousness. This will also inevitably be true of AV fatalities.

But the important similarity is a third one: despite the sensational fatalities, the airline industry is not lacking passengers. There are in fact certain individuals who will never board an airplane out of fear, and the same will no doubt be true of AVs. But that is not a significant number.

One possible reason is that people are able to look past the sensational fatalities and examine the statistical truth about the relative safety of air travel. But it seems far more likely to me that the answer has to do with convenience. That is, even though many people have some trepidation about airline travel, they fly anyway simply because ground or sea travel is often far less convenient. The human species simply loves convenience.

Why Optimize for Pedestrian Safety?

If this is a fair comparison, AVs optimized to favor the safety of pedestrians over their own passengers would likely not disrupt the AV market, provided that crashes really do remain rare. But why should they be so optimized? There is an ethical reason and a legal reason, and the ethical reason is finally where the beleaguered trolley problem finds its relevance. It is simply about the numbers.

The fact is, most current car trips have only one occupant. And when AV rides are common, the average number of riders will decrease sharply, because a significant share of AV trips will be passenger-less shipments and deliveries. So in the future, the vast majority of AV trips will have zero or one rider, whereas the number of nearby pedestrians is indefinite. Because of this, and because we should favor less harm over more, AVs should be optimized to simply shut down when in trouble, even when this could endanger the passengers. It is simply too risky to allow them to veer off the road or into other lanes.
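Since the argument turns on a comparison of expected harm, it can be made explicit in a few lines. The probabilities and counts below are placeholders I have invented for illustration, not real crash statistics; the point is only that when expected occupancy is at most one and the number of pedestrians at risk is unbounded, shutting down in lane harms fewer people in expectation.

```python
# Back-of-envelope sketch of the numbers argument; all figures are
# hypothetical placeholders, not real crash statistics.

def expected_harm(p_injury: float, people_at_risk: float) -> float:
    """Expected number of people injured: probability times exposure."""
    return p_injury * people_at_risk

avg_occupants = 0.6      # assumed: many AV trips are passenger-less deliveries
pedestrians_nearby = 4   # indefinite in general; four here as an illustration

shut_down_in_lane = expected_harm(p_injury=0.5, people_at_risk=avg_occupants)       # 0.3
veer_off_the_road = expected_harm(p_injury=0.5, people_at_risk=pedestrians_nearby)  # 2.0

# With equal injury probabilities, shutting down dominates whenever the
# expected occupancy is lower than the number of pedestrians at risk.
print(shut_down_in_lane < veer_off_the_road)  # True
```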

The second reason is a legal one concerning liability. It is not difficult to ensure that those entering AVs are aware of the risks. And, as I speculated, people will still board AVs despite awareness of the risk, simply because of convenience. This creates a specific, enforceable contract between the AV manufacturer and its passengers. Pedestrians, however, have entered into no such contract, and so legal questions about liability for pedestrian fatalities will be much more complicated.

And so, despite the backlash, I believe the trolley problem does have some use in AV technology. It shows that people in general favor less harm over more harm, which I argue supports optimizing AVs for the safety of pedestrians over the safety of passengers. This is in line with our shared moral intuitions.


Nathan Colaner, Ph.D.

Nathan is a senior instructor in the Departments of Management and Philosophy at Seattle University, focusing on business ethics with particular research emphasis on big data management and business use of artificial intelligence. He has been instrumental in the formation of the University’s new Center for Science and Innovation.
