What can AI Ethics learn from Bioethics and Genetics?

In the coffee rooms of tech research labs, AI engineers talk daily about the ethical challenges that future developments might bring. Might we steal what is valuable from the reflections of our predecessors in bioethics and genethics?

Fields such as bioethics, biomedical ethics, and medical ethics sprouted in the 1970s and 1980s, fertilized by Christian, Jewish, and humanistic ethicists pressing secular institutions to protect the preciousness of human wellbeing. Tom Beauchamp and James Childress planted fertile seeds in four principles: autonomy, justice, nonmaleficence, and beneficence. Geneticists picked the fruits of bioethics during the storm of challenges wrought by the Human Genome Project in the 1990s, followed by the stem cell controversy in the first decade of our new century.

Here’s a quick ‘n’ dirty summary. (1) We should respect the autonomy of the individual person or patient, protecting the individual’s capacity to direct his or her life according to self-guiding goals. (2) Justice refers to our moral obligation to adjudicate fairly between competing claims and to distribute the fruits of laboratory research justly around the world. (3) Nonmaleficence, going back to Asclepius and Hippocrates, means avoiding the causation of harm. (4) Beneficence may involve balancing the benefits of treatment against the risks and costs involved. The principle of beneficence also inspires the larger society to fund biomedical research generously, in hopes of enhancing the health of every individual living on the planet.

I wonder if we could cross-breed bioethics with the pioneering work in AI ethics by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, and DeepMind research scientist Viktoriya Krakovna. Let’s look at some of these principles in their initial formulation.

  • The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
  • Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

Clearly, autonomy is the principle at stake here. On the one hand, individual autonomy is at stake over against the loss of privacy to marketing algorithms. On the other hand, we fear that AI systems may take over, threatening the autonomy of the human race as a whole. Might a well-thought-through analysis of individual and collective autonomy help here in drawing out the implications?

  • The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
  • Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • AI technologies should benefit and empower as many people as possible.

These principles protect Homo sapiens from losing their autonomy to pre-programmed robots and weapons of targeted destruction. By mandating that AI developments become “beneficial,” might we see the principle of beneficence at work here? Might we gain insight here by a study in classical virtues such as love, charity, kindness, caring? Might contribution to the global common good enter our calculus?

  • If an AI system causes harm, it should be possible to ascertain why.

Might broad reflection on nonmaleficence help expand this principle? We want more than merely ascertaining why harm is caused. We want to prevent harm.

  • Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
  • An arms race in lethal autonomous weapons should be avoided.

Avoid causing harm. Definitely. It will take political savvy to prevent an arms race in lethal autonomous weapons. Do AI engineers themselves share in the moral responsibility here? The military-industrial complex could wave big checks in front of AI techies that will be hard to resist. The profiteering industry is almost as autonomous as the weapons it might build.

  • A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

This looks to me like justice at work. Could we amplify this concern with a broad theory of justice that specifies who should benefit from AI research?

Both our classic list of bioethical principles and the more recent AI ethical principles could benefit from placing AI innovation into the basket of the common good. The House of Lords in the UK has made this quite specific. Even though the concept of the common good must be cast in secular and non-sectarian terms, we must thank our Roman Catholic ethicists for thinking it through with heuristic value for everyone living on our planet. Pope Paul VI defined the common good as “the sum of those conditions of social life which allow social groups and their individual members relatively thorough and ready access to their own fulfillment.” Certainly AI at its best could aid in providing both individuals and groups with access to their own fulfillment.

The future that AI research envisions for us is tantalizing, exciting, and promising. No one wants to pull the plug. Yet the dangers are becoming clearer and clearer. What this generation needs is sensitive, insightful, and courageous moral reflection. We need ingenuity and innovation in ethics at the same level as the ingenuity and innovation in AI technology itself.

Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics (Oxford: Oxford University Press, 8th ed., 2019).

See Ted Peters, For the Love of Children: Genetic Technology and the Future of the Family (Louisville, KY: Westminster/John Knox Press, 1996); Ted Peters, ed., Genethics: Issues of Social Justice (Cleveland, OH: Pilgrim, 1998); and Ted Peters, Playing God? Genetic Determinism and Human Freedom (London: Routledge, 2nd ed., 2002).

Ivy Wigmore, “AI Code of Ethics,” WhatIs; https://whatis.techtarget.com/definition/AI-code-of-ethics

House of Lords Select Committee on Artificial Intelligence, Report of Session 2017-2019; https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf

“Pastoral Constitution on the Church in the Modern World: Gaudium Et Spes, promulgated by His Holiness, Pope Paul VI on December 7, 1965,” No. 26, The Holy See, accessed May 7, 2016, http://www.vatican.va/archive/hist_councils/ii_vatican_council/documents/vat-ii_const_19651207_gaudium-et-spes_en.html

Ted Peters, ed., AI and IA: Utopia or Extinction? Volume 5 of Agathon: A Journal of Ethics and Value in the Modern World (Adelaide: Australian Theological Forum, 2019); DOI: 10.2307/j.ctvrnfpwx; http://atfpress.com/?s=Ai+and+IA&post_type=product or https://www.jstor.org/stable/j.ctvrnfpwx


Ted Peters, Ph.D.

is co-editor of the journal Theology and Science, published by the Center for Theology and the Natural Sciences. He teaches theology and ethics at the Graduate Theological Union in Berkeley, California. He is the author of Playing God? Genetic Determinism and Human Freedom (Routledge, 2nd ed., 2002). Ted is co-editor, with Brian Patrick Green and Arvin Gouw, of Religious Transhumanism and Its Critics (forthcoming in 2021 with Lexington). See his website: tedstimelytake.com.
