
The Rome Call for AI Ethics: Co-Responsibility and Commitment

At the close of the XXVI General Assembly of the Pontifical Academy for Life in February 2020 – just days before the Italian government would implement nationwide quarantine measures due to the COVID-19 outbreak – academicians, dignitaries, and a host of guests crowded into the Auditorium Conciliazione, just a stone’s throw from Saint Peter’s Basilica. The ambiance was different – quite different – from what we members of the Academy had experienced over the course of the two previous conference days. Having just participated in three intense sessions that were replete with a dozen presentations and a handful of moderated discussions on the relationship between – and the challenges arising at the intersection of – artificial intelligence, ethics, human health, and the law, we took our seats in the auditorium as viewers of what was shaping up to be the pièce de résistance of this year’s Assembly.

One could not miss the giant screen upstage nor the word “renAIssance” repeatedly projected upon it, undoubtedly to underline the parallels between momentous periods in human history – the Renaissance and the age, as it were, of AI – marked by humanism, innovation, and imagination. An impressive line-up of speakers was introduced: Msgr Vincenzo Paglia (president of the Pontifical Academy for Life), Mr Brad Smith (president of Microsoft), Mr John Kelly III (executive vice-president of IBM), Mr David Sassoli (president of the European Parliament), and Mr Qu Dongyu (director general of the Food and Agriculture Organization). This was an unprecedented roster, I thought, considering that the event was being held on the heels of a primarily ecclesial meeting on artificial intelligence.

Msgr Paglia delivered words penned for the event by Pope Francis, who could not attend because of illness. In the address, the pope spoke of “the digital galaxy, and specifically artificial intelligence,” as being “at the very heart of the epochal change we are experiencing,” which is transforming the way we think about space, time, and the human body. He underlined how decisions made in medical, economic, and social contexts increasingly reveal “the point of convergence between an input that is truly human and an automatic calculus.” One cannot ignore how “a simple ideological calculation of functional performance and sustainable costs” could dismiss the biographical dimension of humanhood in favor of a mechanistic view. Further, the pope urged caution regarding how “algorithms now extract data that enable mental and relational habits to be controlled, for commercial or political ends, frequently without our knowledge. This asymmetry, by which a select few know everything about us while we know nothing about them, dulls critical thought and the conscious exercise of freedom.” We might add to these concerns: the datafication of human behaviour, implicit profiling, the possibility of firms enticing poorer countries with financial compensation in return for medical data, and the possibility that algorithms will replicate or amplify prejudices, assumptions, or biases that may have been programmed – consciously or not – into them.

The risk of deepening the divide between the haves and the have-nots by steering knowledge, wealth, and power into the hands of but a few must not go unchallenged. Yet the pope made plain that while new technologies are neither neutral nor value-free, one must also not lose sight of their immense potential. Applauding the gathering of persons from the Church, industry, politics, and science in (what appeared to be) a public commitment to the Common Good, Pope Francis proposed that “the ethical development of algorithms – algor-ethics – can be a bridge enabling those principles to enter concretely into digital technologies through an effective cross-disciplinary dialogue.”

Mr Smith and Mr Kelly III took the stage with much dynamism and spoke of a “new generation of opportunity.” They touted AI as perhaps the most powerful tool in the world, boasting incredible promise as well as a host of new challenges (including the weaponization of certain technologies, the link between AI and cyberattacks, the fuelling of mass surveillance, the automation of jobs, and so on). Mr Smith called the Catholic Church a fundamental voice in the ethics of emerging technologies; he praised the invitation set out by the Rome Call for AI Ethics for encouraging the inclusion of a plurality of voices in discussions on AI, while underscoring the importance of the humanities, liberal arts, and ethics alongside the STEM disciplines for acquiring a more complex and integrative set of skills. “The future of humanity,” Mr Smith concluded, “depends on us making this right.” Mr Kelly echoed these sentiments, reminding us that AI is very much a reflection of us as human beings, certainly to the extent that AI “learns” from the data and processes that we choose to give it. He pressed that our view ought not to be human versus machine, as is popular fodder for the movie industry, but human and machine working together for the democratization of knowledge and in pursuit of the Common Good. At the end of his presentation, Mr Kelly cited a line pronounced by Pope Paul VI in the Angelus of July 20, 1969, a few hours before the first moon landing: “The human heart absolutely must become freer, better and more religious as machines, weapons and the instruments people have at their disposition become more powerful.”

Many of the points raised by the speakers are featured in the Rome Call for AI Ethics, a ground-breaking document that seeks to engage AI “movers and shakers” – in the Church, industry, NGOs, public institutions, and politics – in committing to serious ethical reflection on the development and applications of artificial intelligence. Although spearheaded by Church leaders and later submitted to the Secretary of State of the Holy See for approval, the Rome Call is not an official document of the Pontifical Academy for Life; it is, rather, the fruit of the work of some of the world’s leading experts on AI (including some members of the Academy).

At the end of the event, the sponsors of the Rome Call (Vincenzo Paglia, Brad Smith, John Kelly III, Qu Dongyu, and Paola Pisano) officially became the first signatories, publicly expressing:

 

their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics,’ namely the ethical use of AI as defined by the following principles:

1) Transparency: in principle, AI systems must be explainable;
2) Inclusion: the needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop;
3) Responsibility: those who design and deploy the use of AI must proceed with responsibility and transparency;
4) Impartiality: do not create or act according to bias, thus safeguarding fairness and human dignity;
5) Reliability: AI systems must be able to work reliably;
6) Security and privacy: AI systems must work securely and respect the privacy of users.

These principles are fundamental elements of good innovation.

 

In his encyclical letter, Caritas in Veritate, Pope Emeritus Benedict XVI writes that “technology is never merely technology. It reveals man and his aspirations towards development, it expresses the inner tension that impels him gradually to overcome material limitations. Technology, in this sense, is a response to God’s command to till and to keep the land (cf. Gen 2:15) that he has entrusted to humanity, and it must serve to reinforce the covenant between human beings and the environment, a covenant that should mirror God’s creative love.” For the Church, technology – of whatever form – must have the good of human beings and the whole of the human family at its heart; must be an expression of stewardship and service; must contribute to genuine progress (that is, a progress that will lead human beings “to exercise a wider solidarity”); must respect the inherent dignity of human beings and all natural environments; and must recognize the delicate complexity of ecosystems and the interdependencies spelled out within them. In many ways, the first signatories of the Rome Call for AI Ethics – voices of the Church, industry, and government alike – agree on these points.

The charge at present is to properly elucidate and elaborate on the principles of the Rome Call; to deliberate on how these principles might seriously and constructively influence policymaking and the development of AI at the industry level (especially when upholding them appears to stand in the way of industrial innovation and productivity); and to increase this solidarity as we move ever so quickly into this new generation of opportunity.

Paul VI, Angelus Domini, 20 July 1969.

Benedict XVI, Caritas in Veritate, 29 June 2009, n. 69.


Cory Andrew Labrecque, PhD

is Associate Professor of Theological Ethics and Bioethics and the inaugural Chair of Educational Leadership in the Ethics of Life at the Faculty of Theology and Religious Studies at Université Laval in Quebec City, Canada, where he was also recently named Vice-Dean. Previously, Professor Labrecque taught and conducted research at the Center for Ethics at Emory University, and was Co-Director of Catholic Studies in Emory’s College of Arts and Sciences. He earned his Ph.D. in Religious Ethics at McGill University. Professor Labrecque’s research lies at the intersection of religion, medicine, biotechnology, environment, and ethics; he is interested in the impact of emerging/transformative technologies (especially those related to regenerative and anti-ageing medicine) on philosophical and theological perspectives on human nature and the human-God-nature relationship.
