AI and Ethics: Reconsidering the Rome Call for AI Ethics

The Rome Call for AI Ethics, a Vatican-led effort to establish a set of ethical principles for AI, continues to be touted as an example of technology and faith working together to put some collective guard rails around the burgeoning use of AI.

While the Rome Call for AI Ethics establishes a baseline, our analysis shows it is far from exhaustive as a foundation for secular or faith-based implementations of ethical AI. To its credit, the Call recognizes its limitations and its place in the development of AI Ethics when it states, “This Call is a step forward with a view to growing with a common understanding and searching for a language and solutions we can share.”

As part of the launch of AI and Faith’s think tank, for which I am Research Director, I collected and analyzed the majority of publicly known ethical statements from commercial organizations, governments, and non-profits. We built a database of those statements and an initial breakout of their relevant component issues. As a test of that initial breakout, I used these components to analyze the Rome Call. This article offers not only suggestions for improving the Rome Call, but also lessons learned that can be applied to current and future policy statements about ethical AI.

Note: This document relies primarily on textual analysis and comparisons to other ethical AI principle statements. The goal is to make recommendations on how to write better ethical AI statements. The article also employs scriptural and sacred documents for reference, primarily from the Jewish tradition. I readily acknowledge that other faiths, or even other members of the Jewish faith, may interpret the referenced passages differently. The analysis does not intend to offer religious interpretation, but rather uses the scriptural and halakhic passages as examples of the need for precision in definitions. The takeaway for writers of ethical AI frameworks is not that any one religion’s perspective will make for better frameworks, but that good writing, honest references and acknowledgment of sources, and inclusion of faith-based ethical antecedents will make their work more inclusive and complete.

An analysis of the Rome Call principles

The Rome Call includes six principles, stated simply, at the end of a three-and-a-half page introduction.

The Rome Call Principles

  1. Transparency: in principle, AI systems must be explainable;
  2. Inclusion: the needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop;
  3. Responsibility: those who design and deploy the use of AI must proceed with responsibility and transparency;
  4. Impartiality: do not create or act according to bias, thus safeguarding fairness and human dignity;
  5. Reliability: AI systems must be able to work reliably;
  6. Security and privacy: AI systems must work securely and respect the privacy of users.

Completeness: Does the list include all elements required for a comprehensive ethical AI framework?

After analyzing statements from more than 30 other organizations, I found that the Rome Call’s six principles align with many of the statements collected in our AI and Faith database, some written before it and some after. Although our analysis of the collected statements continues to evolve, I have so far identified over 40 categories of principle statements, many of which are not reflected in the Rome Call. While the Rome Call does not conflict with the larger collection of principle statements, it remains one of the briefer and less explanatory statements.

As an example, the Rome Call does not include the concept of “accountability”. It says nothing about the consequences for signatory organizations should they violate any of the principles.

The Ethics Guidelines for Trustworthy AI from the European Commission’s High-Level Expert Group on Artificial intelligence offer detailed paragraphs that cover auditability, minimization and reporting of negative impact, trade-offs, and redress. The redress clause reads: “When unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress. Knowing that redress is possible when things go wrong is key to ensuring trust. Particular attention should be paid to vulnerable persons or groups.”

The Rome Call should be considered incomplete without an accountability clause.

Further, the lack of principles related to economics, education, employment, governance, justice, liability, propaganda, human rights, and others makes the Rome Call appear truncated in comparison to its peers.
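The coverage analysis described in this section can be sketched as a simple set comparison between a framework’s stated principles and a broader taxonomy of principle categories. This is a minimal illustration only; the category names and mappings below are hypothetical and do not reflect AI and Faith’s actual database schema or its full set of 40-plus categories.

```python
# Illustrative taxonomy of principle categories drawn from many frameworks.
# These names are examples chosen for this sketch, not a canonical list.
TAXONOMY = {
    "transparency", "inclusion", "responsibility", "impartiality",
    "reliability", "security", "privacy", "accountability", "redress",
    "human rights", "employment", "education", "governance", "justice",
}

# The six Rome Call principles, mapped (loosely) onto taxonomy categories.
ROME_CALL = {
    "transparency", "inclusion", "responsibility",
    "impartiality", "reliability", "security", "privacy",
}

def coverage_gaps(framework: set[str], taxonomy: set[str]) -> set[str]:
    """Return the taxonomy categories the framework does not address."""
    return taxonomy - framework

gaps = coverage_gaps(ROME_CALL, TAXONOMY)
print(sorted(gaps))
# ['accountability', 'education', 'employment', 'governance',
#  'human rights', 'justice', 'redress']
```

Even this toy comparison surfaces the accountability gap discussed above; a fuller analysis would also score how thoroughly each category is defined, not merely whether it is named.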


Quality: Are the principles clear and concise?

Although the Rome Call’s principles appear short and concise, reading them may produce more confusion than clarity. The first principle, Transparency, for instance, assigns the value of “explainable” as a primary attribute of AI systems. This suggests that AI systems should not be black boxes acting without human oversight. It further implies that systems must be written so that humans can understand the reasoning behind inferences and recommendations. Those implications can be inferred from the principle, but they are not explicit. The term “explainable” itself requires explanation.

The term “transparency” also appears as the last word in the Responsibility principle, which concludes that “AI must proceed with responsibility and transparency.” This introduces a different form of transparency. The Responsibility principle addresses people “who design and deploy” AI, not the AI itself. This suggests that transparency here compels developers to be transparent about their intentions and motivations, timelines, and reliability measures, to name just a few potential items.

Neither use of “transparency” is clear. Just as “explainable” requires further definition, so does what people need to be transparent about when developing or deploying AI.

The Inclusion clause likewise leaves many terms undefined. Terms like “the needs of all human beings,” “benefits,” and “best possible conditions” offer significant room for interpretation, not all of it necessarily beneficent. To which “needs” does the statement refer? To which “benefits”? And what constitutes the “best possible conditions”?

By contrast, the paper A Unified Framework of Five Principles for AI in Society by Luciano Floridi and Josh Cowls offers principles that define Beneficence and Non-Maleficence. The beginning of the Non-Maleficence principle clearly states that:

Though ‘do only good’ (beneficence) and ‘do no harm’ (non-maleficence) may seem logically equivalent, they are not, and represent distinct principles.

The Rome Call’s Impartiality clause offers the best example of conflated ideas contributing to vagueness. Like Beneficence and Non-Maleficence, the ideas of bias, fairness, and human dignity do not equate to impartiality. They are concepts that deserve their own principles. Our analysis found that many other organizations’ statements broke these concepts out into their own categories, though others conflated them as the Rome Call does. For example, the Preparing for the Future of Artificial Intelligence report from the President’s National Science and Technology Council, dated October 2016, conflates the ideas of fairness, justice, safety, accountability, and governance in one section. Its recommendation 16 uses “fairness” without first defining it.

The Rome Call’s Reliability clause is perhaps the least effective in offering insight, as it not only fails to define reliability but restates the term in its own assertion: reliability means AI systems should work reliably. Drafters who assert principles like this need to ensure that their definitions are clear. Does “reliability” here, for instance, mean that AI systems should work well? Not crash? Does it imply consistent recommendations or actions as outcomes? Or that the AI does not cause unintentional harm? It may mean all or none of these, but as written the principle precludes distinguishing among the potential range of meanings.

The final Rome Call clause, Security and Privacy, offers another example of conflating two ideas that deserve their own principles. Security, for instance, may refer to personal security, meaning that AI will do no physical harm, but it may also reflect social constructs, like job security, meaning that an AI will not take the job of a human through automation. Privacy may relate to who has access to the underlying data used to derive the AI’s learning, or it may mean privacy in the interactions between humans and the AI. Again, the openness of the statement does not allow for a determination of intent.

My analysis concluded that the Rome Call for AI Ethics principles do not offer clarity on their topics. They prove concise to a fault. The lack of clarity on key terms like transparency, bias, and reliability leaves too much room for interpretation for meaningful adoption.


Actionable: Do the principles suggest to readers ways in which the principles should apply to their work?

The answer to this question is clearly no. The general analysis of the language used in the Rome Call offers little clarity for the principles themselves, let alone any guidance for application by the reader.

The principles read much like commandments.

And like commandments in the Torah, they leave much room for interpretation. Consider the commandment to “Remember the sabbath day and keep it holy. Six days you shall labor and do all your work, but the seventh day is a sabbath of the Lord your God: you shall not do any work” (Exodus 20:8-10). This commandment does not define work (which in Judaism may be avodah, ordinary labor, or melachah, ritually prohibited work). The Torah is not overly helpful elsewhere in defining work. It offers a few prohibitions on types of work, such as carrying (Jeremiah 17:22), the kindling of fire (Exodus 35:2-3), a variety of farm work (Exodus 34:21, Numbers 15:15-18, Nehemiah 13:15-18), and pursuing “affairs” (business) and striking bargains (Isaiah 58:13). As types of work, however, these terse admonitions leave plenty of room for interpretation.

The Torah defines melachah, the religiously defined work, as those things related to the construction of the Mishkan, or Tabernacle, the traveling sanctuary built in the wilderness.

It was not until the codification of the Mishnah that 39 categories of work related to the building of the Mishkan were elaborated. Oral tradition preceded the writing, but without the oral tradition to accompany the text, those reading the work commandment in isolation would be rather confused about what they were and were not permitted to do on the Sabbath. Responsa continue to elaborate on the meaning of work as new technologies develop.

The Rome Call likewise chooses short assertions of principle with little support for a deeper understanding. Although an oral tradition may exist to explain the reasoning behind the assertions, it has not yet been codified or shared.

The narrative precursor to the Rome Call introduces other questions about the actionability of the principles. Beyond ethics, it discusses education and human rights, but no principles directly address those issues. It could be argued that the Inclusion principle covers human rights, but it does so only in ambiguous terms. The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems, by contrast, offers an extensive narrative on human rights with several clauses, including clause 42, aimed at the private sector:

“Private sector actors have a responsibility to respect human rights; this responsibility exists independently of state obligations. As part of fulfilling this responsibility, private sector actors need to take ongoing proactive and reactive steps to ensure that they do not cause or contribute to human rights abuses – a process called ‘human rights due diligence.’”

The signatories of the Rome Call principles, representing Microsoft, IBM, the UN’s Food and Agriculture Organization, and the Vatican’s Pontifical Academy for Life, do a service by elevating the dialogue, but the brevity and lack of clarity in the execution of the principles leave much work for its readers and potential adherents. As noted above, the document does state that it is a contribution to the dialogue. That said, many of the other documents that espouse principles offer a much more comprehensive list of principle categories, with richer definitions for the words they choose and the contexts in which they appear.

When writing definitions, it is important to address a specific audience. In this case, the community expected to adopt these principles should be offered clear guidance and advice for how to act ethically as they build and deploy AI-based systems. Principles at this level of abstraction run the risk of being ignored by the very people they are meant to influence because their conceptual and aspirational tone fails to translate into practical and actionable application.


Generalization: What is the scope of an ethical AI framework?

The last line of the Rome Call, just before the signatures, reads: “These principles are the fundamental elements of good innovation.” That statement raises at least two questions:

  • How far-ranging should AI Ethics become (should all future innovation assume embedded AI)?
  • Are principles adopted for general ethical technological innovation good enough to cover AI?

Ethical AI frameworks should answer the second question: why does AI require its own framework apart from other technological innovations?

Existing ethics related to weaponry would argue against the need for a new framework, as they already incorporate the idea of an unusual threat of harm. Weapons represent technologies that can do great harm when wielded by those with malicious intent. AI, however, offers the potential to act without the agency of its creator. In other words, a gun requires a person to aim it and pull the trigger; AI may aim and pull its trigger without human supervision. Regardless of the technology, humans should remain responsible and accountable for the decision to kill another human. The augmentation of weapon systems with AI should not, therefore, require an additional framework if those concerning weapons already apply.

The same basic targeting technology deployed in a weapons system could also allow an AI to determine an appropriate consumer for an ad, employing that algorithm hundreds of thousands of times a day with no human supervision. If the principles aim at a holistic framework, then such special cases should also be covered. This suggests a broad framework for ethics focused not just on AI or weapons, but on the relationship between people and technology.

While it may appear that autonomous weapons should require their own ethical standards, a more universal framework, such as those derived from the great religions, which emphasize the ethical treatment of humans by other humans, should by extension cover any current or future technology, holding the human actors accountable and responsible whether their actions are directive or negligent through the abdication of responsibility in design.

Applying my own faith tradition to this aspect of AI, the Jewish story of the Golem raises several issues about what it means to be human. In his paper, Robotics and artificial intelligence: Jewish ethical perspectives, Zvi Harry Rappaport of the Rabin Medical Center offers the following summary of Jewish ethical questions about the Golem drawn from tradition:

  • Can one add a Golem to a religious quorum?
  • Should someone who destroys a Golem be held guilty of murder?
  • Is cruelty to a Golem to be distinguished from cruelty to animals?

The authors of From the Tree of Knowledge and the Golem of Prague to Kosher Autonomous Cars: The Ethics of Artificial Intelligence Through Jewish Eyes argue that “existing legal persons should continue to maintain legal control over artificial agents, while natural persons assume ultimate moral responsibility for choices made by artificial agents they employ in their service.”

This reasoning suggests that innovation should be covered at a level above AI.

That said, AI did not exist at the time when sacred traditions commenced nor when they were codified. All AI ethics must, therefore, draw upon analogous explanations for context. Like the discussion of work, AI will fall into various categories of interpretation depending on how it is applied. Its use in autonomous weapons may not call for additional ethical guidelines as that may be covered elsewhere. AI’s impact on human agency, its impact on social constructs, including interpersonal relationships, work, and commerce, its ability to proliferate propaganda, will draw on a variety of traditional sources—but in many cases, those connections remain to be made. That is the work that framework developers need to undertake.

The Rome Call states that it is about AI Ethics. The overt generalization about innovation reads as overreach in that context. The drafters should choose either to embrace broader ethics concerning human technological innovation or remain focused on those ideas related specifically to artificial intelligence. Ideally, ethical AI framework developers will choose to offer actionable guidance for their intended audience, and avoid introducing ideas that distract, are better addressed elsewhere, or those for which the writers are unable, or unwilling, to offer adequate insight.

Where is faith in the Rome Call?

Christians, Jews, Muslims, and members of other Abrahamic faiths may well ask where G-d, or even particular expressions of faith values, can be found in the Rome Call, given the Vatican’s facilitation and endorsement of the document. The direct answer is that G-d is not referenced in the Rome Call, nor is there a reference to any faith perspective. The references that do exist are not properly annotated, so their sources cannot be easily found.

As an example, the statement that new technology must be researched and produced in accordance with criteria that ensure it truly serves the entire “human family” (Preamble, Univ. Dec. Human Rights) refers to the Universal Declaration of Human Rights, a United Nations document. There is no connection between this statement and any religious context.

The absence of reference to faith doctrines and teachings to support assertions in the Rome Call is especially puzzling because such support is readily found in the Vatican’s primary reference source, the Bible. Given my own faith perspective, I will draw my examples from the Hebrew Bible, the Old Testament. One example among the many I could cite is Deuteronomy 22:8, which states:

“When you build a new house, you shall make a parapet for your roof, so that you do not bring bloodguilt on your house if anyone should fall from it.”

Judaism considers this commandment to implore people to take safety precautions when building a home. The Rabbis extend this passage as an underlying argument for safety in all new development. Human safety must sit at the core of any discussion about technology; insufficient protections lead to harm. The commandment demands that protections be built into the things we make, and that the builder avoid injuring those who work on or use them. This line of reasoning should underlie principles related to building protections within AI ethics frameworks.

To take up that thread of scriptural reference further, the Rome Call does not include a precept to avoid harming humans. This is a fundamental oversight by its drafters. Perhaps they purposefully avoided any hint of the golden rule (“That which is hateful to you, do not do to your fellow. That is the whole Torah; the rest is interpretation,” from Hillel the Elder in the Babylonian Talmud, Shabbat 31a). By avoiding a precept that may have been considered a cliché, they missed an important connection between the emergent ethics of AI and the millennia-old faith traditions from which such principles derive, whether acknowledged or not. Doing no harm, and more fundamentally, treating all human life as sacred, should be easily found in the Rome Call’s principles.

Other faiths also offer traditions that would help documents like the Rome Call provide more context for their readers.


What’s Next?

The Rome Call states one collaborative perspective. Other documents offer deeper explorations of the topics, often with more robust research, some at the expense of becoming increasingly difficult to consume. The market needs a set of principles that balances depth and completeness with ease of consumption. All ethical frameworks should also acknowledge and properly reference the sources of their principles, including arguments derived from traditional religious literature where applicable. I offer the following preliminary recommendations to future drafters of ethical AI frameworks, and to those seeking to review existing ones.


Recommendations for Future Ethical AI Frameworks Derived from an Analysis of the Rome Call

  • Include full references with proper citations and links.
  • Support assertions with faith-based foundational texts when applicable.
  • Clarify the definition and context of words that have multiple meanings within ethical or technological discussions.
  • Avoid self-referential definitions.
  • Remove conflated ideas and replace them with clear, single-topic assertions. Describe clear relationships between ideas where they exist; relationships must go beyond mere typographic adjacency.
  • Research other AI ethics statements and documents to create a more comprehensive and nuanced set of principles.

AI and Faith will continue this research and will publish additional findings and recommendations at https://aiandfaith.org.
