
Ethics and Tech Conference: Where AI meets humanity in healthcare: What’s next?

Post-Conference Report:

Seattle University, June 27, 2024

The 2024 Ethics and Tech Conference was a resounding success, bringing together thought leaders from across disciplines.

Seattle University is rooted in the Jesuit tradition of tackling challenges through thoughtful reflection and has integrated moral reasoning and ethical analysis into its core curriculum. This summer the university anticipates the return of a prominent visiting scholar, Fr. Paolo Benanti, an Italian priest, bioethicist, theologian, and advisor to Pope Francis on AI ethics.

Dr. Onur Bakiner, the newly appointed head of the Seattle University Technology Ethics Initiative (TEI), opened the conference with an uplifting speech aimed at strengthening interdisciplinary connections and fostering meaningful conversations among diverse stakeholders on the ethical use of AI. The conference featured three principal keynote addresses and a series of short talks and discussions by and with ten leading professionals from medicine, neuroscience, biotechnology, law, and business.

This report summarizes two keynote addresses that invite theological contemplation.

1. An Ethical Framework: AI as a Sociotechnical System in Healthcare

Dr. Alex John London, K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon University, is a member of the WHO Expert Group on Ethics and Governance of AI. Dr. London argued for a conceptual shift in how we view AI tools. He described this shift as seeing “AI as a sociotechnical system,” in which AI tools are not standalone technologies but parts of a larger “intervention ensemble”: the set of knowledge, practices, and procedures necessary to deliver care to patients. Here are some key points:

  • Hype around AI complicates our ability to develop systems that genuinely make health systems safer, more effective, more efficient, and more equitable. Addressing the inherent biases and false beliefs built into existing health systems, research, and clinical practice is a critical step in building such systems.
  • It is crucial to articulate the most pressing healthcare priorities we face and to create data that will enable us to address them effectively. This entails a symbiotic connection between AI systems, health system competencies, and patient needs.
  • To maximize the benefit of AI in real-world healthcare settings, we must change the data we generate, our ability to learn, and the way we deliver healthcare. Only then can we advance equitable innovation and support an ecosystem that is responsive to the needs of a broader population and able to address inequities as they arise.
  • Many healthcare tasks involve intervention rather than prediction, so validating interventional AI systems before deployment is crucial. The downstream risk of adopting unvalidated AI systems is profound: the widespread use of the Epic Sepsis Model despite its poor performance (false positives, false negatives, alert fatigue) raises serious concerns about sepsis management.
  • Trials involving AI urgently need to disclose the level of validation they target. Making this information explicit allows such trials to be compared meaningfully with the stages of clinical trials (e.g., proof-of-concept studies, Phase I-IV trials).

For further reading, his book, “For the Common Good: Philosophical Foundations of Research Ethics,” is freely available. For his advisory work and scholarly contributions, see “Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-modal Models” (WHO, 2024) and “Toward Equitable Innovation in Health and Medicine: A Framework” (U.S. National Academy of Medicine, 2023).

2. The Mind-Body Problem in 2024: Philosophy, Science, and the Nature of Consciousness

Dr. Christof Koch is President, Chief Scientific Officer, and Meritorious Investigator at the Allen Institute for Brain Science in Seattle. In collaboration with Dr. Francis Crick (d. 2004), he articulated an empirical framework for studying consciousness in the brain. Dr. Koch later co-developed the Integrated Information Theory (IIT) of consciousness with the neuroscientist-psychiatrist Dr. Giulio Tononi. This thought-provoking theory stands in contrast to reductionism. In his presentation, he described the potential promise and dangers of AI in relation to consciousness. Here are some key points:

  • IIT integrates theoretical, experimental, and computational neuroscience with philosophy, and it provides a framework for understanding and measuring consciousness in different systems. This necessitates major discussions on the nature of consciousness and AI.
  • The IIT definition of consciousness: a system’s consciousness (its subjective experience) is conjectured to be identical to its intrinsic causal properties.
  • Dr. Koch believes that generative AI will ultimately be able to do everything humans can do, yet never feel any of it: generative AI will never be what humans are, which is conscious.
  • It is essential to distinguish artificial intelligence from artificial consciousness.
  • The potential promise and danger of AI systems stem from their being intelligent or superintelligent.
  • The topic of consciousness carries significant ethical implications in healthcare. For example, many ICU patients are described as being in a behaviorally unresponsive state, yet research shows that approximately 20% of these patients maintain some level of consciousness. Knowledge of consciousness challenges commonly held scientific beliefs and informs ongoing bioethics debates on healthcare decision-making, such as the discontinuation of life support and the determination of fetal self-awareness.

For further reading, his two books, “Then I Am Myself the World: What Consciousness Is and How to Expand It” and “The Feeling of Life Itself,” are recommended.

Reflection:

The work of Drs. London and Koch tackles the emerging intersection of AI technologies and ethics from different disciplines, yet their scholarship addresses a common challenge: AI and its relation to human existence, social trust, and our existential responsibilities toward other humans. Both presentations shed light on the value of human dignity, a core principle of religious traditions including Catholic Social Teaching, which emphasizes a preferential option for the poor and vulnerable. Building ethical AI tools that are effective, efficient, reliable, safe, equitable, and sustainable will remain challenging, and likely impossible, if we keep training AI tools on the biases “baked” into current healthcare systems. Eventually we will all become patients, and any of us could become poor and vulnerable at any time. Responsible use of AI means responsibility for all.

In the context of innovation, where there is constant pressure to move forward and make progress, the common expression is “building the airplane while flying it.” AI is evolving in the very real-world contexts where it is being applied. The inherent contradiction between healthcare’s intolerance for error and the experimental nature of AI’s evolution creates an additional challenge. Dr. London asked us to consider the lifecycle of AI: it starts with defining tasks, gathering pertinent data to build a model capable of performing those tasks, and then rigorously testing and validating to ensure task accuracy. Dr. London said, “Reaching this point equates to just the initial stages, in football terms, the 20-yard line. What matters most is not how many times we reach the early milestone of the 20-yard line, but whether we score.” It is especially challenging to devise tasks that will genuinely improve prioritized health outcomes, given the intricacy of clinical workflows and the limitations of existing datasets and of the questions they can answer.

The 2024 Ethics and Tech Conference left a lasting impact on me as a bioethicist who previously worked in healthcare for more than two decades. What are our tasks ahead? Are we, as humans, up to the challenge? Are we responsible for the dignity, sanctity, and well-being of others and of future generations? If so, each of us must consider how we can contribute to improving AI in healthcare, and must act to carry the ball across the goal line.


Yuriko Ryan is a bioethicist-gerontologist with over 20 years of international experience in healthcare ethics and policy research. Based in Vancouver, Canada, she holds a Doctorate in Bioethics from Loyola University Chicago and is a certified Healthcare Ethics Consultant (HEC-C). She is a contributing writer and a member of the AI and Faith Editorial Board. She writes on AI ethics, public health ethics, business ethics, and healthcare ethics.

1 Comment
  • Barbara Ann Reynolds
    2:42 AM, 1 August 2024

    I am interested in following the discussion on faith and technology. I have just finished a book on the Rise and Fall of the TechnoMessiah and want to find people with whom I can share my views.
