Interview

Interviewing Distinguished UW Law Professors Anita Ramasastry and Ryan Calo about Human Rights and Tech Ethics

Anita Ramasastry (AR below) is the Henry M. Jackson Professor of Law at the University of Washington School of Law, and an expert and leading academic in the fields of anti-corruption and business and human rights. At the University of Washington, she directs the Business, Human Rights and Rule of Law Initiative. Professor Ramasastry is a member and past Chair of the United Nations Working Group on Business and Human Rights. She has also served as a member of the World Economic Forum’s Global Future Council on Transparency and Anti-Corruption. From 2017 to 2019, she was the president of the Uniform Law Commission, the 127-year-old organization of lawyer-commissioners from the 50 states that works to harmonize laws where uniformity is desirable.

Ryan Calo (RC below) is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. He is a founding co-director of the interdisciplinary UW Tech Policy Lab and the UW Center for an Informed Public. Professor Calo holds adjunct appointments at the University of Washington Information School and the Paul G. Allen School of Computer Science and Engineering. Professor Calo’s research on law and emerging technology appears in leading law reviews and technical publications, and is frequently referenced by the national media. In addition to sitting on numerous boards and advisory boards for tech policy programs, in 2011 Professor Calo co-founded We Robot, the premier annual North American conference on robotics law and policy.

Thanks to new AI&F Contributing Expert Nassor Saluum for assisting in this interview.

Ed.

 

 

Q: Professors Ramasastry and Calo, each of you has contributed extensively to the national and global discussion around technology, public policy and human rights. Would you please share with us how this became a focus of your work as lawyers and law professors?

 

AR:  While I deal with technology, my broader area is an emerging field called business and human rights. We look at the way in which corporations, as non-state actors, have tremendous impact on the human rights of people globally, in so many different ways, from how surveillance technology is deployed to how it affects a person’s right to privacy.

It’s an emerging field because typically it’s governments that have responsibilities for protecting us from human rights violations and providing us with remedies. In the past 20 years, we’ve moved towards looking at corporations and creating both binding and non-binding rules for how they also have to respect human rights when they are operating anywhere in the world. As you can imagine, the technology sector has come under a lot of scrutiny in this field — whether it be Facebook or Google, just to name two big companies that are often in the news — because of its business models and the impact that those business models have on the rights of people in almost every country in the world.

 

Q: That’s a great summary and clarification, Prof. Ramasastry.  Professor Calo, please tell us a little more about how you ended up moving into this field of policy and technology and creating the Tech Policy Lab at the University of Washington.

RC:  Sure. The way that I come at these questions is to think about the way in which emerging technology changes human affordances and capabilities. Those affordances and capabilities change when a new technology arrives, through a complex and non-deterministic process. This changes the way we interact with one another. It changes what we can do and how we do it. So I am interested in the way in which these disruptions affect law and legal institutions, and what sort of changes happen to international human rights or business law or privacy, you name it, in light of technological change.

The Tech Policy Lab exists to help policymakers broadly understand and make wiser and more inclusive tech policies. That to me feels like an inherently interdisciplinary exercise. We hold hands across multiple departments, and we formally bridge law, computer science and information science. We also have contributors ranging from linguistics, with Emily Bender, to electrical engineering, to urban design and planning. So the Tech Policy Lab came about because a few of us across campus were deeply interested in what the best law and policy infrastructure for emerging technology is. We decided to get together and call ourselves an institute, to paraphrase Paul Simon.

 

Q:  Professor Ramasastry, what are some of the basic safeguards that we should be looking at around human rights and AI systems?

AR:   All AI systems should have to protect human rights or prevent violations of those rights. We talk about basic safeguards by talking about international human rights. As a member of the Working Group on Business and Human Rights, I work with companies, governments and civil society actors to implement the U.N. Guiding Principles on Business and Human Rights. That framework reminds governments first of all that they have a responsibility to oversee corporations within their jurisdictions. So the U.S., the EU, or really any government needs to think about what regulations or incentives are appropriate, and about how they treat companies when they know that companies are going to have a negative impact on the human rights of consumers or citizens or other people.

For example, the U.S. State Department just published guidance on the export of surveillance technology. One answer would be for the government to just let a thousand flowers bloom, but what the State Department has done is come up with guidance to say, not only are we going to have export controls, we’re also going to look at where companies are selling their technology. And we’re also going to ask companies to have processes and frameworks in place when they design and then deploy and sell that technology to third parties, to ensure these companies also safeguard against their technology being linked to human rights abuses. Governments, for example, have used surveillance technology to spy on human rights defenders in many countries.

The second pillar of the Guiding Principles is about corporate respect for human rights. This is important because governments sign treaties, and it’s governments that have commitments to deal with human rights and international human rights standards. Companies don’t sign those treaties; however, companies have agreed to the U.N. Framework. This means that companies are increasingly asked to benchmark their own conduct wherever they are in the world, because the law in the U.S. may be more rights-enhancing than the law in another country that is more repressive. Companies are supposed to benchmark their conduct against international human rights standards. Those standards are clear and quite universal. For example, the right to health, the right to religion, the right to privacy, the right to be free from torture, and the right to be free from arbitrary detention are all clearly articulated and enumerated rights.

Within this framework, companies are meant to do human rights due diligence by engaging in impact assessments of their business. When they’re designing a product or service, they’re meant to look at the impacts of that. For technology, this means thinking about what the product or service might do and who it might impact. These kinds of impact assessments are important at the design stage, at the implementation stage, and at the stage of entering into a business partnership or business relationship. So the idea is that human rights due diligence and the Guiding Principles require companies to identify their negative impacts, mitigate those impacts, and, if they can’t mitigate them, remedy them. That obligation, again, is to benchmark their conduct against international human rights standards, not against local law, which can vary wildly across the world.

 

Q:  Would you please tell us a little more about the landscape of organizations, including faith-based ones, which participate in this effort to ensure that corporations comply with international human rights?

AR:  There are different and overlapping groups of people, right? For example, in the technology space, there is a group called Access Now, which is very active in the tech world, but also very active around concepts of human rights and digital rights. Many groups, including faith-based groups, spend a lot of time advocating before the United Nations Human Rights Council. They petition governments, and they also use what we call treaty mechanisms, or they write to independent experts like my Working Group when they believe a human rights abuse is taking place, whether it’s the government or a company engaging in the abuse. We receive complaints from around the world, and my Working Group actively engages with companies and governments to try to get them to change their practices. Faith-based groups are particularly concerned because they do see that, for example, surveillance technology, or access to customer databases and records, is used to infringe the rights of religious minorities globally.

 

Q:  Professor Calo, several of our AI and Faith Founding Experts have taken the Tech Policy Lab training in commenting on public policy. Can you tell us a little more about the spectrum of groups that you’re including within that review process and how that works?

RC:   The project that you’re referring to is called the Diverse Voices project. It was developed primarily by my colleague Batya Friedman, although the entire Lab participated in the design and implementation. It begins with the observation that tech policy tends to reflect the mainstream because the people who are engaged in tech policy, especially at a national level, have similar backgrounds and demographics.

Diverse Voices is a methodology which we’ve documented in about 50 pages. We’ve also trained others, including members of your organization, on it. It’s a methodology by which to bring in the voices of experiential experts to comment on early-stage tech policy documents. For example, if we’re going to put a white paper out there about a particular topic, we will identify a group of experiential experts in order to find out from them what is broken about the document, frankly.

We are presently working to develop an interfaith panel, but at the moment we have panels around low socio-economic status, the formerly incarcerated, women, people living with disabilities, and we’re convening a panel on indigenous populations. These panels are diverse. We have some people for whom the particular identifying characteristic is lived experience. For example, on the disability panel there are people living with disabilities. In addition to that category, we also have experts who advocate on behalf of those populations or study those populations.

We do that because there are certain things you can only know by virtue of lived experiences, and there are other things that you may be aware of because you’re advocating for a group and you may see patterns that are not visible to the individual person.  We try to include both.  We find that this process inevitably enhances the work product. We recognize that there are limitations. For example, it’s not a true co-design model where stakeholders are present at every stage. We’re bringing them in well into the process.

It can also be somewhat difficult — and, some even say, arbitrary — to determine whose voices are included in any given project. But compared with starting from zero, where almost no effort whatsoever is made on that early-stage policy work product, I feel that this is a really consequential and material improvement.

 

Q:  Let me ask you about the overlap between the human rights world and the world of tech policy, regulations and ethics. Do you see an unnatural siloing between those worlds, or effective overlap?

RC:  Chief Justice Warren, I think it was, said that law floats on a sea of ethics, meaning that ethics is often the normative basis. Ethics and political moral theory are the normative basis for laws, both domestically and internationally. It’s critical for us to have conversations about ethics, and for them to be consistently renewed in light of changes to our affordances from new technology. So I believe all that to be true. My concern is that we cannot stop at ethical principles. The concern I have about that is that oftentimes the statements of principles are not things that anybody seriously disagrees with. It’s just a restatement of stuff everybody agrees with.

The aspect that I like about ethics statements specifically from faith communities is that they remind us that there might be different ways, from a religious vantage, that we might look at these technologies that may not be well known. When I teach students about technology, I will often assign them articles that, for example, are about the way that Amish people engage with technology or the way that Jewish populations, with respect to the sabbath, engage with technology. This shakes the students loose from the idea that somehow technology is inevitable and that the way we interact with it should go precisely the way that television and technology companies dictate. That said, I think we’re well past the point where we ought to be changing laws and legal institutions to respond to these changes.

 

Q:  Professor Ramasastry?

AR:   I agree with Ryan in many respects. Looking at it from the human rights side, here’s the challenge we have with ethics: business schools across the world have classes on what they call business ethics, but business ethics has largely been a voluntary and very amorphous field as to what it means for businesses to behave ethically. It’s often about what the business or corporate entity and its management believe is the right approach. Ethics is a larger and more malleable sea of concepts.

The human rights framework, while it is capable of being interpreted in many different ways, is still a bit more concrete in terms of the specific rights enshrined in the Universal Declaration, around which you now have consensus from states. It’s a fragile consensus that is by no means perfect. However, it is a much more precise way of thinking about how people are impacted by business conduct or by technology than business ethics.

We see that ethicists, and companies in particular, often are quite happy to sort of play in the ethics space. You have ethics officers within companies. The move to human rights has been a new change for companies: to start thinking about this in a much more specific way. This requires thinking about identified rights and focusing much more not on what’s broadly moral or ethical, but specifically on what impact you are having on what we call a rights holder.

One of the things I admire so much about the Tech Policy Lab, which is very consistent with the U.N. Guiding Principles on Business and Human Rights, is this idea of meaningful stakeholder engagement. You start by asking those who will be impacted by a decision what the impacts are going to be on them, and then you make policy and develop approaches that are based on that consultation. This typically has not been the case in the world of business and human rights. Very often, it’s about consultants and companies thinking they know what the harm will be, but what we’re seeing is that in fact you really have to ask before you know.

 

Q:  Who is represented and who is underrepresented in these global conversations around AI and human rights, how do these conversations change in different countries, and how can those underrepresented become more involved in those conversations?

AR:  There are many ways to address that question. The first, as you’re saying, is whose voice and whose vision of human rights? There’s often a critique of international human rights as being a creation of the West and of liberal democracies. I think you find that there are scholars from a variety of different disciplines who have tried to breathe life into international human rights as a concept. The UN Human Rights Council, which is comprised of states from all regions of the world and all kinds of countries, is meant to represent that fragile consensus. So I think that we can say that there is a universal framework, there is a fragile consensus, but we always need to do more to be inclusive of voices from different places.

The largest challenge, of course, is that transnational economic activity has harmed people in the Global South. The business and human rights movement, and corporate accountability more broadly, has been a longstanding attempt, with a great deal of work still to go, to right that balance and to address the impact of colonialism and the impact of neoliberalism on people in the Global South. With investment and trade come responsibility and accountability. So that’s a big challenge in the human rights sphere.

And then the last piece is: are rights really the same? There is an ongoing debate in Asia and other regions about where the emphasis should be. Western European countries and the United States have tended to focus on civil and political rights, whereas if you speak to someone from China, they’ll say that the rights that are most important are economic, social and cultural rights. That is the debate that you see very much alive in the UN today: which rights should we prioritize? I don’t think there’s an answer there as to what a given right — let’s say gender equality, for example — really means. So when I speak about universality, I am very aware that it is a tenuous and contested concept, but it’s one of the places where we do have, until someone pulls it back, an internationally agreed-upon consensus. And that’s important.

 

Q:  Professor Calo?

RC:  I certainly cannot improve upon that answer. I would just direct your attention to a Tech Policy Lab project called Telling Stories. In that project, we brought in policy and technology experts from every continent and did a short story workshop with them over a period of several days, and we also brought in a diverse set of artists to illustrate those short stories. We created a lovely bound volume of stories from all over the world about the cultural impacts of artificial intelligence in a variety of different contexts, such as South America, Sri Lanka, the continent of Africa, Europe, Australia, and China. There are 19 stories in all.

What we really like about the project is that, obviously, not only are human rights conversations happening in some places in some ways, but the technology itself is also developed in certain contexts that carry assumptions about how the world, culture, and society work. One of my favorite examples in that regard is the failure of the one-laptop-per-child initiative, where a bunch of people in Silicon Valley — primarily, actually, some on the East Coast, at MIT and the like — developed this idea that they could just send laptops around the world and that a bunch of people would utilize them to empower themselves. It’s like taking a gifted and talented curriculum from the 1990s and trying to send it to every continent.

And so we’re trying to shake people loose from the idea that just because technology was developed in a particular context, that the cultural assumptions will obtain elsewhere. The power of stories is also great if you’re trying to shake people loose from their assumptions.

 

Q:  Following up on Professor Ramasastry’s point about human rights being a more precise conversation, the White House Office of Science and Technology Policy recently issued a call for comments on a proposed Bill of Rights framework for AI, much of which involves surveillance technology and the right to be free of that kind of observation. Do you think that a shift toward a Bill of Rights approach, moving beyond these rapidly multiplying statements of AI ethics principles, is a step in the right direction?

AR:  Yes and no. There’s a specific tension in the U.S. context, which is that the U.S. sees the world and human rights through the lens of the U.S. Constitution, right? It sort of tries to duck and weave by saying there are these international standards, but everything is basically based on the Constitution. And that’s a challenge, of course, because our Constitution, from the 18th century, is based on civil and political rights. So it doesn’t take into account all of these other kinds of rights, such as economic, social and cultural rights, which may be equally important. So that’s just one piece — to say that the U.S. context is, I think, quite special.

The second piece, though, is that I think a bill of rights can be empowering. When I served in the Obama Administration, I worked on a similar proposal for a privacy bill of rights. A bill of rights is empowering in the sense that it gives people standing to assert some kind of right and remedy. But I’m always fearful that it may also be limiting. We have constitutions and we have human rights already. So figuring out how to make those actionable is, to me, much more important than trying to limit this to specific articulated rights.

 

Q:  Let me turn for our last question to this very interesting crossover between human rights and the rights of entities that may well occupy our future — intelligent robots. Professor Calo, you’ve been particularly engaged with such questions, including in the big annual We Robot conference you co-founded and help lead. How do you think such rights are going to evolve?

RC:   Consistent with my earlier answer about what the Tech Policy Lab is up to, I would continue for the foreseeable future to center technology around the human being. I think of the conversation about rights for robots or artificial intelligence as a distraction, and one that may even be somewhat dangerous. I can understand why you would functionally attribute certain kinds of rights to non-human entities, but I think in doing so, we always have to remember that the normative basis for that conferral of rights, or the practical basis for it, is going to be people. For example, you might want to protect bot speech on the internet or speech by autonomous agents, but you do so on the basis of the right of the listener to receive that material and the right of the creator of the entity. That’s how I think we should continue.

For those people who are really interested in this topic, there’s a great paper in Computers and Society from 2020 by Abeba Birhane and Jelle van Dijk. It basically answers the question of robot rights with “let’s talk about human welfare instead.” I don’t think much is gained by conferring rights on entities that do not have our intellectual or spiritual depth.

There is a kind of commonality between those who worry that robots pose an existential threat to humanity because they’re going to wake up and kill us, and those who believe that there’s no moral or rational basis by which to deny rights to things that present as though they were conscious. Both are assuming an implausible, certainly not near-term, world, and having a debate that is quite divorced from immediate, material conversations about the way in which artificial intelligence violates human rights and contributes to exacerbating asymmetries of power and information. I think those are the conversations we should have, and that they should really foreground people.

 

Q:  That’s very helpful. Professor Ramasastry, another interesting rights intersection is that between human rights and animal rights, alongside the emerging discussion of possible future robot rights.  Can you comment on that?

AR:   Another field that I work in is the area of sustainable development. We often need to see living beings and even the planet as intertwined and interconnected. So it’s really interesting to hear Ryan talk about rights and robots. There are rights of animals, rights of people, and rights of the planet, right? They’re all intertwined in what we call Agenda 2030, which is the Sustainable Development Agenda.

What I would say, coming from the human rights space in which I spend my time thinking about impacts on people, is that we spend a lot of time focusing on the fact that respect for the planet and respect for animals is part of a long-term strategy of respect and dignity for people. So it’s really about how intertwined and connected we all are, which I think the U.N. has recognized through Agenda 2030.

 

Thanks very much, Professors Ramasastry and Calo, for sharing your experience at the leading edge of these approaches to managing technology risk and its impact on people around the world.
