Abstract
As artificial intelligence grows and continues to become a part of human society, governments, organizations, companies, and experts have been publishing lists of ethical principles — “AI Creeds,” if you will — to establish legal and moral guidelines for the development of AI. As of May 2019, for example, 42 countries had released guidelines for AI. It is therefore instructive to compare the writings of various people and organizations, governmental and non-governmental, religious and secular, regarding the creation and implementation of artificial intelligence in all parts of society. The discussion revolves around why these “AI Creeds” are being made and how they differ between religious and secular organizations. The “AI Creeds” included in this paper were chosen based on who wrote them and whether the author is a governmental, non-governmental, or religious organization. There is no restriction based on content, as most “AI Creeds” include general statements on human rights, safety, and transparency. The hope is to build solid ground on which to compare AI principles as they continue to be published, because such principles will help guide the development of artificial intelligence.
Introduction
This paper revolves around a discussion of published articles that each define a set of ethical principles for AI. Papers comparing these documents already exist, such as Luciano Floridi and Josh Cowls’ A Unified Framework of Five Principles for AI in Society, which is analyzed later in this paper. There are also collections of Creeds, such as the Harvard Law School Cyberlaw Clinic’s Principled Artificial Intelligence Project, which breaks creeds down into various categories and visualizes the data. The hope here is to add a religious perspective to the largely secular perspectives of most such documents, and to compare how each document goes about setting ethical guidelines for AI and machine learning processes.
Each of the AI Creeds was chosen based on the influence and expertise of its creators. This is necessary because not every AI Creed creator has technical experience in the specifics of machine learning processes, especially the religious ones. What the religious creeds lack in technical knowledge they make up for in strong ethical foundations. Five AI Creeds are discussed here. Many more have been put out by other organizations and private companies, and international coalitions such as the EU have multiple documents. Only five were chosen to prevent too much overlap between the main points of each document. The important factor across all these Creeds is ease of use and application to a general citizenry, allowing such a general audience to be informed about how AI and machine learning processes have affected, are affecting, and will affect it.
This paper includes religious viewpoints, specifically Christian ones, in part because of the nature of the grant sponsoring this work. Another reason is the dominance of the Christian faith in America and worldwide. According to the Pew Research Center, in 2015, 31.2% of the world’s population was affiliated with Christianity, the most of any world religion; in America, as of 2014, Christians made up 92% of Congress and 71% of the general public. Additionally, the religious perspective comes from a different ethical background than secular sources. This is discussed at greater length in the “Differences” section of the paper.
Amnesty International Toronto Declaration
Amnesty International is a human rights-centered organization that works to end human rights violations through its three-pronged approach of research, advocacy and lobbying, and campaigns and action. It sees a large potential for human rights violations in the use of AI and machine learning processes, and thus wrote an AI Creed as part of its research work.
Amnesty International published this Creed, the Toronto Declaration, on May 16, 2018. The document was not written by professionals in the AI community; instead, it centers on “the right to equality and non-discrimination.” The document as a whole is meant to be a projection of international human rights law onto machine learning, data systems, and other AI processes that are and will be used by states and private companies. It is broken into four major sections:
- Using the framework of international human rights law
- Duties of states: Human rights obligations
- Responsibilities of private sector actors: human rights due diligence
- The right to an effective remedy
The first section places obligations on states and responsibilities on “private sector actors.” It emphasizes that human rights law is based on the rule of law: the idea that those who make the law are themselves subject to it, and that human rights law, which transcends national borders, is “well suited for borderless technologies” like machine learning processes. Ultimately, it lies with governments to prevent discrimination via legislation, and the private sector must follow those laws, including human rights law already in place. Lastly, this section stresses that inclusion is key to protecting the rights of marginalized groups. Article 20 of the document acknowledges that as machine learning processes are developed, the biases of the groups that make them are built into those processes. Article 21, immediately following, defines inclusion and is intended to counteract that built-in bias.
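The built-in bias Articles 20 and 21 describe is, importantly, measurable. The following is a minimal sketch of the kind of audit the Declaration envisions; the decision records, group labels, and the 80% “disparate impact” heuristic are illustrative assumptions, not anything the Declaration itself specifies:

```python
# Minimal sketch: compare a model's positive-outcome rates across groups.
# All data here is hypothetical; a real audit would use actual decision logs.

decisions = [  # (group, model_approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("A"), approval_rate("B")

# The "80% rule" is one common heuristic for flagging disparate impact.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: audit training data and features.")
```

That such disparities can be computed at all is what makes the Declaration’s demands for assessment and oversight actionable rather than merely aspirational.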
The following section outlines the responsibilities of states to protect human rights in the use of machine learning. One of the key points is found in the introduction: “The state obligations outlined in this section also apply to public use of machine learning in partnerships with private sector allies.” The declaration acknowledges the use of machine learning systems in the public sector, most notably in social welfare, healthcare, and criminal justice. Three goals are suggested: identify risks, ensure transparency and accountability, and enforce oversight. “Conducting regular impact assessments… publicly disclose where machine learning systems are used in the public sphere…, proactively adopt diverse hiring practices” are all among the steps for accomplishing those goals. Lastly, there is a requirement that states hold private sector actors accountable under laws regarding discrimination, including new ones going into effect.
The next section of the declaration focuses on the responsibilities of private sector actors in “human rights due diligence.” Again, there are three steps:
- Identify potential discriminatory outcomes
- Take effective action to prevent and mitigate discrimination and track responses
- Be transparent about efforts to identify, prevent and mitigate against discrimination in machine learning systems.
Put together, these steps should help private sector actors eliminate the effects of discrimination and human rights violations in their systems.
The Toronto Declaration concludes with a discussion of “the right to an effective remedy,” i.e., the ability of those who have faced or will face discrimination at the hands of machine learning processes to respond via due process of law. This is akin to the Sixth Amendment of the United States Constitution, which promises the right to a speedy trial.
Institute of Electrical and Electronics Engineers (IEEE): Ethically Aligned Design
The Institute of Electrical and Electronics Engineers’ Global Initiative on Ethics of Autonomous and Intelligent Systems released Ethically Aligned Design in December 2016, then re-released it a year later based on feedback on the first version. Its main strength comes from being constructed by professionals in the field, an “open community of over 2000 global experts.” Eight “imperatives” guide the nearly 300-page document:
- Human Rights–A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
- Well-being–A/IS creators shall adopt increased human well-being as a primary success criterion for development.
- Data Agency–A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
- Effectiveness–A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
- Transparency–The basis of a particular A/IS decision should always be discoverable.
- Accountability–A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
- Awareness of Misuse–A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
- Competence–A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.
Together, these imperatives make Ethically Aligned Design the best secular choice for professionals in AI or machine learning development. The document’s weaknesses are also apparent: it offers little in the way of citizen-level transparency, and, while easily and freely available, it is almost 300 pages long. An average citizen cannot be expected to trudge through it. The creators of AI, or the institutions that use this creed as their framework, will bear the burden of explaining it to a non-technical audience.
A Unified Framework of Five Principles for AI in Society – Luciano Floridi and Josh Cowls
Luciano Floridi and Josh Cowls are both experts on machine learning processes and AI. Floridi directs the Digital Ethics Lab at the Oxford Internet Institute, and Cowls is a member of the Alan Turing Institute and part of its data ethics group.
Floridi and Cowls define five values based on six “AI Initiatives.” These initiatives were chosen for their “high profile”: each is reputable, recent, and relevant to AI. Four of the five values are derived from bioethics, plus one addition that Floridi and Cowls find necessary to bring the other values together. They justify the use of bioethics as the field “that most closely resembles digital ethics in dealing ecologically with new forms of agents, patients, and environments.”
The five values are: beneficence, non-maleficence, autonomy, justice, and explicability. Extra emphasis belongs on the fifth, added value of explicability, which is further separated into two parts: intelligibility and accountability. These are by far the most important parts of the five values, because they show how the other values should be applied.
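What intelligibility asks for can be illustrated with a toy example. For a simple linear scoring model, every decision can be decomposed into per-feature contributions and reported in plain terms; the feature names, weights, and applicant record below are hypothetical, chosen only to make the idea concrete:

```python
# Toy "intelligible" model: a linear score whose decision can be fully
# explained by listing each feature's contribution. All values hypothetical.

weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
applicant = {"income": 5.0, "debt": 3.0, "years_employed": 4.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"Decision score: {score:.2f} ({'approve' if score > 0 else 'deny'})")
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {value:+.2f}")  # per-feature rationale for the decision

# Accountability, the second half of explicability, is then a matter of
# recording who is answerable for the weights and the decisions they produce.
```

Deployed systems are rarely this simple, which is precisely why Floridi and Cowls single out explicability: without it, there is no way to tell whether the other four values are being honored.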
ERLC Southern Baptist Convention – An Evangelical Statement of Principles
The Ethics and Religious Liberty Commission (ERLC) is a part of the Southern Baptist Convention. Its goal is to speak to issues in the public eye and to work with churches in navigating the difficulties those issues pose.
On April 11, 2019, the ERLC published its statement, “Artificial Intelligence: An Evangelical Statement of Principles.” The document aims to bring Christians proactively into the global conversation on AI, and it holds up well when compared to similar faith-oriented documents of ethics. “In light of existential questions posed anew by the emergent technology of artificial intelligence (AI), we affirm that God has given us wisdom to approach these issues in light of Scripture and the gospel message.” Its articles address, in turn: Image of God, AI as Technology, Relationship of AI & Humanity, Medicine, Bias, Sexuality, Work, Data & Privacy, Security, War, Public Policy, and The Future of AI. Here is a summary of the key principles:
- Humanity’s dominion over creation includes technology.
- The use of AI is not “morally neutral.”
- AI should be used “to inform and aid human reasoning and moral decision-making because it is a tool.”
- The use of AI in medical advances is positively affirmed.
- AI has biases built in because it is a human creation, and AI should be used to “eliminate bias inherent in human decision-making.” Also noteworthy is the recognition of “human autonomy under the power of the state.”
- AI should not be developed for the purpose of “sexual pleasure,” and its development should accord with “God’s design for human marriage.”
- AI is to be used for work, but not in a way that diminishes the value of human labor; a human is worth more than their economic contributions. Also, AI should not be used for “lives of pure leisure.”
- Individual rights of privacy and property should be affirmed and misuse of data avoided. The most interesting point made is this: “We further deny that consent, even informed consent…is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data.”
- AI in cybersecurity must protect human rights and justice.
- Moral responsibility for use of AI during war must remain with “human agents.”
- AI must not be used against human rights by governments or private sector actors. AI should also not be given any authority to govern people.
- As to the future of AI: We (as humans) do not know how AI will evolve, but it will not make us any more or less human, and it will not help create a perfect world.
This document has several advantages. First, it uses the language of legal as well as moral principles more than other religious AI creeds do, which gives it added legitimacy. Second, it is the first of its kind: a specifically religious AI Creed issued with real institutional weight.
Steven Croft’s Ten Commandments of AI
Steven Croft is the Bishop of the Anglican Diocese of Oxford and an appointee to the British House of Lords Select Committee on Artificial Intelligence. He offers five key issues on “why all this matters”:
- Personal privacy
- Change in work
- Political influence
- Weaponization of AI
- Superintelligence
These five are Croft’s central tenets for constructing a competent ethical basis for AI. Although they sound similar to other tenets previously discussed, Croft puts his own spin on each, placing his emphasis on the individual: how are individuals affected, how will their lives change, and how will they be protected? Croft writes: “As a Christian, I want to be part of that conversation. As Christians we need think seriously about these questions and engage in the debate.” For each point, Croft promotes the need for a Christian voice to be included. This paper has already discussed why that voice is important; Bishop Croft provides strong confirmation of that idea.
Similarities
Several themes remain constant across the AI Creeds discussed here. Human rights, security, and transparency appear within each, which shows just how important each of these characteristics is to creating and implementing AI and machine learning in the public sphere. This helps mitigate the problem of “principle proliferation” highlighted in Floridi and Cowls’ paper. Human rights are universal across law and ethics, making them an obvious choice for inclusion in an AI Creed, whether religious or secular. Security is just as important as the protection of human rights because it deals with the “how” of protection: legislation, enumerated powers of government, and judicial precedent. Lastly, transparency may be the most important because it bridges the gap in technical knowledge between the private-sector creators of AI and citizens: even the most average or common of citizens should be able to understand the implications of, and reasons for, the implementation of AI or machine learning processes.
Differences
The differences between religious and secular creeds are more telling than the similarities. The ERLC document, for example, diverges furthest from parts of the other creeds. While its article on sexuality is important and relevant, the view of marriage and gender it expresses is at variance with current mainstream secular cultural norms, even as it is consistent with the view of many churches. This demonstrates the distinct ethical backgrounds of religious and secular AI Creeds.
First, the ethical foundations are different. Christian creeds draw on the gospel, the word of God, and the thinking and traditions of Christianity as a whole, which have been in place for two millennia. Secular foundations come from other places, such as international human rights law, as epitomized in the Toronto Declaration. Part of the desire for participation by Christians arises from these differences in ethical foundation: both the ERLC and Steven Croft expressed an explicit need for religious and Christian thinkers to be involved in the conversation on AI ethics.
Another key difference concerns the nature of work and whether it defines a human’s value. From a Christian perspective, a person’s value is not based entirely on the output of their work. Both faith-based and secular creeds agree, however, that AI should enhance human work and human endeavor rather than displace them.
Acknowledgements
This paper is a part of the grant project “Christian Responses to the Ascendancy of Artificial Intelligence,” as part of the “Bridging the Two Cultures of Science and the Humanities II” program run by Scholarship and Christianity in Oxford (SCIO), with funding by Templeton Religion Trust and The Blankemeyer Foundation.
References

“What We Do.” Amnesty International, n.d. https://www.amnesty.org/en/what-we-do/.
Bacciarelli, Anna, et al. “The Toronto Declaration: Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems.” Amnesty International and Access Now, May 16, 2018. https://www.amnesty.org/download/Documents/POL3084472018ENGLISH.PDF.
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. IEEE, 2019. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html.
Floridi, Luciano, and Josh Cowls. “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1, no. 1 (2019). https://doi.org/10.1162/99608f92.8cd550d1.
Ethics and Religious Liberty Commission. “Artificial Intelligence: An Evangelical Statement of Principles.” April 11, 2019. https://erlc.com/resource-library/statements/artificial-intelligence-an-evangelical-statement-of-principles.
Croft, Steven. “Artificial Intelligence: A Guide to the Key Issues.” Bishop Steven’s Blog, August 25, 2017. https://blogs.oxford.anglican.org/artificial-intelligence-guide-key-issues/.