Interview

On Autonomous Vehicles, AI Bias, and Law: An Interview with Professor Mark Chinen

I recently had the opportunity to interview Professor Mark Chinen of the Seattle University School of Law. He was educated at Pomona College and Yale Divinity School before receiving his law degree from Harvard Law School. Before he began teaching law, Professor Chinen practiced in the areas of international trade, banking, and corporate and securities law in Washington, D.C. with the firm Covington & Burling. He teaches contracts and courses in international law, and writes on various aspects of international law, particularly international governance, theology and international law, and the relationship between domestic and international law. His first book, Law and Autonomous Machines: The Co-evolution of Legal Responsibility and Technology (Edward Elgar Publishing, 2019), is the subject of several of the questions in our interview. His most recent book, The International Governance of Artificial Intelligence, is projected to be published in mid-2023.

Over the course of our hour-long interview, we discussed several interesting and timely issues, including the legislation and division of responsibility surrounding autonomous vehicles (AVs), how the legal system should address biases baked into artificial intelligence (AI) systems, and the continuum of autonomy and its implications for AI legal personhood. We end with a brief preview of the topics covered in Professor Chinen’s newest book.

 

On AV Responsibilities and Legislation

Marcus S.: How do you think developers, manufacturers, and legal counsel ought to weigh their responsibility for passenger safety versus pedestrian safety in the case of an AV? And can those distinctions be codified in a way that is explainable to an AV?

 

Mark C.: I tend to look at this from a legal perspective. You could argue that a rough balance between the interests of the passenger or operator of the vehicle and those of the pedestrian has already been achieved by different sources of the law. For example, there are regulations regarding passenger safety and the safety of vehicles, and the standards those vehicles need to meet before they can operate on the roads. Then there is the whole panoply of traffic safety regulations. Then there are basic rules of the road regarding pedestrians and rights of way. In a sense, you can argue that the law has already achieved the balance you are asking about. So when you talk about the responsibility of developers as they design these devices, I tend to think we ought to keep those laws and regulations in mind. At a minimum, that should preserve the rough balance you are describing. Then we can work out where you start to bump into edges or gray areas where the law may not reach some of the new phenomena being created, because now we are talking about an AV as opposed to a human-driven vehicle. Does that make sense?

 

Marcus S.: Yes, that does make sense. And is the hope that we can really take those laws of the road and start to break them down into the first- and second-order logic that is explainable to an AV?

 

Mark C.: I would put it this way. Right now, every attempt is being made for that to happen. Researchers are breaking down every sort of driving scenario into various parts. In a sense, they already had a head start: when you think about the regulations and safety requirements that already exist, those are already framed in terms of specific operations like rates of speed, rates of changing lanes, and those kinds of things. You can make an argument that a lot of it is already operationalized. It is often not as cut and dried as I am describing, though, because particularly when you are talking about regulations or traffic codes, there is room for interpretation.
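(To make concrete what "operationalized" rules of the road can look like in software, here is a minimal sketch of my own. It is not drawn from any real AV stack or traffic code; the data structure, thresholds, and function names are hypothetical.)

from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_kph: float               # current speed of the vehicle
    speed_limit_kph: float         # posted limit for this road segment
    pedestrian_in_crosswalk: bool  # is a pedestrian present in the crosswalk ahead?
    distance_to_crosswalk_m: float

def within_speed_limit(state: VehicleState) -> bool:
    # Operationalized form of "do not exceed the posted speed limit".
    return state.speed_kph <= state.speed_limit_kph

def must_yield_to_pedestrian(state: VehicleState, stop_zone_m: float = 20.0) -> bool:
    # Operationalized form of "yield to a pedestrian in the crosswalk":
    # true when a pedestrian is present and the vehicle is within an
    # (arbitrary, illustrative) 20-meter stopping zone.
    return state.pedestrian_in_crosswalk and state.distance_to_crosswalk_m <= stop_zone_m

state = VehicleState(speed_kph=42.0, speed_limit_kph=50.0,
                     pedestrian_in_crosswalk=True, distance_to_crosswalk_m=15.0)
print(within_speed_limit(state))        # True
print(must_yield_to_pedestrian(state))  # True -> the planner should plan a stop

Rules like these are easy to state as code; the gray areas Professor Chinen mentions arise where the traffic code itself leaves room for interpretation.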

 

Marcus S.: Should developers really be put in the position to be interpreting the law in that way?

 

Mark C.: Personally, I think no. As an institutional matter, we leave it up to courts and those who are trained in the law to do that. Of course, the reality is we are all interpreters of the law in that we, in our everyday activities, have vaguely in mind a sense of the law even though most of us are not trained lawyers. Even if we do not talk about autonomous systems, if you are an engineer designing a conventional vehicle, in a sense you are also interpreting, but obviously not to the same level as if you are designing something for autonomous operation.

 

Marcus S.: …In the case of an AV getting to a point where a collision is practically inevitable: earlier this year the NHTSA released a report on Tesla AVs noting that, if the vehicle saw it was about to be in an imminent collision, it would effectively abort the autopilot. I think that raises a lot of questions about the responsibility of AV manufacturers to really “see it through” for the whole autopilot process. What role does an AV manufacturer take when a collision is essentially inevitable?

 

Mark C.: I think this is a very complex question, and there are many facets to it. It appears that the designers at Tesla have designed their autonomous system, and rightly so, to essentially cede to human control in those kinds of situations. That reflects the common approach being taken now: in any of these kinds of systems, we are leaving the meaningful decision-making power with humans, even when a vehicle or another device is autonomous. Now, that is not a foolproof response. Perhaps what we really want to do is not allow humans to intervene, because humans are the source of error.

 

How to Legally Address AI Errors and Biases

Marcus S.: In your book Law and Autonomous Machines, you discuss the concept of cultural bricolage, which was a new term for me. My understanding is that it means incorporating known legal concepts when dealing with new scenarios, going back to precedent. One illustration you provide is the idea of treating AI like a misbehaving animal. If an animal owned by an individual commits a tort, the individual is oftentimes liable. If an AI possessed by a corporation commits a tort, is the company then liable? Is this a suitable analogy? What are the merits of using this case and, more generally, of using these past legal precedents to inform our legislation around autonomous systems?

 

Mark C.: I would start by saying that the idea of bricolage comes from Jack Balkin, who is a law professor and who is in turn drawing on the anthropologist Claude Lévi-Strauss. So that idea is not unique to law; rather, all our cultural tools, whether it be the law or any physical tool, are subject to this phenomenon. Bricolage means that, when faced with new phenomena, we simply use the tools at hand to both conceptualize and respond to the new thing.

 

To go back to your question about the analogy to animals: the idea is that if a corporation or any organization were to use an AI system and it caused harm, one of the first things we would ask is about the degree of autonomy and sophistication of the device. As long as there is a valid argument that the device was under the control of that corporation, the lines of responsibility will be fairly easy to draw. Ultimately it will be the corporation’s responsibility.

 

… I think the wild-animal analogy attempts to capture the idea that eventually AI systems will reach levels of sophistication where they might not be within the control of the “owner” or “master”. Then we start to wonder whether we need to talk about strict liability: even if we cannot draw straight lines between the “animal” and the “owner”, we are still going to hold the owner liable because of that relationship of ownership.

 

Marcus S.: …Moving to social media and internet content, AI systems are often constructed from imperfect data. It is hard to get high-quality data. Whether we are talking about criminal justice data or healthcare data, there are racial, ethnic, gender, and other biases baked into those datasets. When you train an AI on that data, it becomes “garbage in, garbage out”, and you have an AI that replicates and distributes those same biases. In a situation like that, especially when a company knows the kind of imperfect data it is feeding into an AI, what is its responsibility for making sure the AI does not replicate those biases?

 

Mark C.: I think that if a company or any user is aware of the limitations of the systems that they are using, particularly when it comes to momentous decisions like law enforcement decisions or the decision to give credit or to employ someone, and if you are aware that the devices you are using are flawed, then I think you do have a responsibility to, at a minimum, make sure there are other ways you could reach these kinds of decisions. You might consider the “recommendation” or prediction made by an AI system, but double-check that, because of the limitations of the data on which that model was trained.

 

The state of the art is still grappling with how we really govern these systems. I think most would agree that you want a system of checks and balances: human beings who are looking at, auditing, and monitoring the “decisions” made based on the AI system, and allowing other folks to assess those results for fairness. If you are using AI applications to, say, sift through resumes, it is incumbent upon the company to check those results and ask whether it is noticing biases in who is being invited for second interviews, and things like that.

 

Marcus S.: …That does make sense. Although in, say, a medical context, oftentimes these AI systems are introduced to distance a human being from the decision-making process and to remove the subjectivity a person might bring to a particular situation. It sounds like your recommendation for avoiding these biases is not to have just one human, but three or four humans investigating the decisions an AI makes. Is it maybe more of an aggregate type of analysis that must be done, versus a case-by-case assessment?

 

Mark C.: Yes, that’s right. I would start with the aggregate, because if you start to see trends that are systematic, that is probably the only way you are going to be able to detect some of these things. If you start to see systematic biases, or systematic decisions which lead to unwanted health outcomes or treatment decisions that turn out to be harmful, then you know something is wrong. I will be honest, though: ideally you would want oversight to be fine-grained enough that you could evaluate things on a decision-by-decision basis. But then, you are quite right, I think you would start to run into pragmatic problems. It might not be workable or possible to do that.
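(As a rough illustration of what such an aggregate check might look like in practice, here is a short sketch of my own. The data, column names, and the 0.8 threshold, a common "four-fifths" heuristic, are hypothetical examples rather than anything Professor Chinen endorsed.)

import pandas as pd

# Hypothetical screening results: one row per applicant, recording the group
# they belong to and whether the AI system advanced them to a second interview.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "advanced": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Aggregate view: selection rate per group.
rates = results.groupby("group")["advanced"].mean()
print(rates)

# Flag groups whose selection rate falls well below the best-treated group's.
ratios = rates / rates.max()
flagged = ratios[ratios < 0.8]
if not flagged.empty:
    print("Possible systematic disparity; review these groups:", list(flagged.index))

Checks like this only surface trends in the aggregate; as the discussion above notes, the harder question is how fine-grained such oversight can practically be.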

 

The Continuum of Granting AI Legal Personhood

Marcus S.: Looking at the kind of autonomous systems that we have discussed so far, one argument you make in your book is that autonomy exists on a continuum. And the degree of legal oversight and legal liability should change depending on where you are on the continuum. You state that a mouse trap is an autonomous system, and you do not really have any legal ramifications around how it behaves. But do you think that there is a point at which AI qualifies even for personhood and therefore incurs its own legal liability? And if so, what criteria would you use to determine if this point has been crossed?

 

Mark C.: The law has granted legal personhood to “things”. Ships, for example, have legal personhood, as do corporations. Those are done for almost purely pragmatic reasons, to enable creditors and those who are harmed to bring actions.

 

Several commentators in the law have proposed that for AI. Because of these problems in liability, as a matter of pragmatics or convenience, why not give legal personhood to some of these entities, just so it is easier for people harmed to make claims against these systems? But as soon as you do that, you almost give the system a kind of quasi-property, right? So not only does an entity get these sorts of responsibilities, but maybe some legal rights as well. Then we are taking baby steps in the direction of personhood.

 

There are lots of debates in the literature about whether we will ever reach the point where AI should be considered its own person, a moral entity worthy of moral concern. That raises philosophical questions like: what makes us worthy of moral patiency? Then we have arguments about whether AI has reached a level comparable to what we possess in terms of our abilities. I was reading some research where people are trying to create rough analogues to emotion, and machine equivalents of pain. As soon as you start going in that direction, you can imagine there are going to be arguments that AI systems are like people, and then we start to want to treat them as people. But that seems very far in the future to me.

On The International Governance of Artificial Intelligence

Marcus S.: I wanted to give you some time to talk a bit about your new book.

Mark C.: Yes, thank you. I will just say a little bit about it. It is called The International Governance of Artificial Intelligence. The study is about the emerging governance of AI, which is a combination of hard and soft norms. It looks at the stakeholders in the development of those norms and at the sources of law: private firms, large technology companies in particular; programmers and academics; nation-states and international organizations; state legislation as a source of international law; and the softer private norms that companies adopt, like codes of ethics. The basic thrust of the book is to show the interactions among those different sources and how they are giving rise to what we might call the international governance of AI. There is no strong overarching set of rules at the international level governing AI, but one is emerging.

 

I conclude by arguing that the overarching set of norms for AI should come from international human rights. I am certainly not alone in making that argument. I address some of the objections that have been raised against using human rights in this way, but still come out in favor of using those norms as, at a minimum, a common language among nations in which we can conceptualize and debate the impacts of AI applications that are beginning to have international effects. We expect to have the book out by late spring or early summer of 2023.

Conclusion

I learned quite a lot from my discussion with Professor Chinen, especially with regard to the legal ramifications of autonomous systems and AI in the world of commerce, finance, healthcare, and beyond. I am looking forward to his new book, and hopefully we can interview him again around the time of its publication. For an unabridged version of our conversation, which goes into more depth on the concepts we discussed, download it here: Mark_Chinen_Interview (DOC).

Acknowledgments

A big thanks to Professor Mark Chinen for taking the time to speak with me. Thanks also to Emily Wenger for checking interview questions and proofreading this document.

References

Boudette, Neal E. “Federal safety agency expands its investigation of Tesla’s Autopilot system.” The New York Times, 2022.

Nyholm, Sven, and Jilles Smids. “The ethics of accident-algorithms for self-driving cars: An applied trolley problem?.” Ethical theory and moral practice 19, no. 5 (2016): 1275-1289.

Balkin, Jack M. Cultural software: A theory of ideology. Yale University Press, 1998.

Lévi-Strauss, Claude. The savage mind. University of Chicago Press, 1966.

Deshpande, Ketki V., Shimei Pan, and James R. Foulds. “Mitigating demographic Bias in AI-based resume filtering.” In Adjunct publication of the 28th ACM conference on user modeling, adaptation and personalization, pp. 268-275. 2020.

Kuehn, Johannes, and Sami Haddadin. “An artificial robot nervous system to teach robots how to feel pain and reflexively react to potentially damaging contacts.” IEEE Robotics and Automation Letters 2, no. 1 (2016): 72-79.

Footnotes

This report was filed in June 2022 and, as of this writing, the investigation is still ongoing.

Balkin describes these concepts in several of his books, most prominently in his 1998 publication Cultural software: A theory of ideology.

Lévi-Strauss introduced the concept of bricolage in his 1966 publication The Savage Mind.

Researchers are already finding ways to identify biases in AI-based recruitment, and finding clever ways to mitigate them.

Moral patiency refers to the concept that we have some moral obligation toward another being. If we owe a moral obligation to a being, then that being is known as a moral patient.

Work on constructing AI to feel human emotions and pain is still very much in its infancy. One recent step has been a robotic arm that is programmed similarly to a human nervous system, such that it can “reflexively react to potentially damaging contacts.”
