
Mark Graves on Auditing Medical AI

Prelude on Mark Graves

My healthcare projects combine AI and ethics across a range of topics, from technical AI projects to more philosophical and future-oriented explorations. In my technical work, ethical considerations help guide the types of projects I pursue, including my overall focus on healthcare. By contrast, other projects I work on pull ethics into the social processes surrounding AI development. I am especially focused on how ethics can affect the design and auditing of AI systems. As I attempt to implement ethics more directly into AI systems themselves, my work becomes more speculative and philosophical. I still ground these projects in current developments, as I consider how future AI systems could oversee the ethical aspects of task-focused systems like those in use today.

My primary technical focus involves using AI technology to analyze unstructured textual data, like medical records, biomedical research articles, and regulatory documents. As a data scientist, I use natural language processing (NLP) tools on this data to extract meaningful insights that can contribute to patient care and healthcare advances. Ethics plays a critical role in helping me identify a broader range of concerns than I would have otherwise investigated. These include disparities in how different groups benefit from healthcare, the impact of automation on patient autonomy, explaining opaque AI predictions in a clinically relevant way, and aligning technical metrics with disparate clinical outcomes.

One way I address these issues is through projects aimed at extracting social determinants of health (SDoH) from medical notes. These social factors, such as limited access to medical services due to distance, transportation, discrimination, or economic hardship, directly and indirectly affect healthcare outcomes. Identifying these important indicators enables care providers to better examine and address healthcare disparities. Although this work is primarily technical, ethical considerations focus my attention on SDoH and how identifying them could help reduce healthcare disparities.
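
To make that extraction step concrete, the sketch below shows a minimal rule-based pass over a clinical note. The SDoH categories and trigger phrases here are hypothetical stand-ins; a production system would use a much richer lexicon or a trained NLP model.

```python
import re
from collections import defaultdict

# Hypothetical SDoH categories and trigger phrases (illustrative only).
SDOH_PATTERNS = {
    "transportation": [r"\bno (car|transportation)\b"],
    "economic_hardship": [r"\bcannot afford\b", r"\bfinancial (strain|hardship)\b"],
    "access_distance": [r"\bnearest (clinic|pharmacy)\b", r"\b\d+ miles away\b"],
}

def extract_sdoh(note: str) -> dict:
    """Return SDoH categories mentioned in a clinical note, with matched spans."""
    hits = defaultdict(list)
    for category, patterns in SDOH_PATTERNS.items():
        for pattern in patterns:
            for match in re.finditer(pattern, note, flags=re.IGNORECASE):
                hits[category].append(match.group(0))
    return dict(hits)

note = ("Patient reports she cannot afford insulin this month and has no car "
        "to reach the pharmacy; nearest clinic is 40 miles away.")
print(extract_sdoh(note))
# {'transportation': ['no car'], 'economic_hardship': ['cannot afford'],
#  'access_distance': ['nearest clinic', '40 miles away']}
```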

One of my ethically broader projects involves investigating how to incorporate ethical theories directly into data science and machine learning (ML) projects. Making ethical decisions while building healthcare AI is challenging. Bioethicists have identified numerous ethical dilemmas that can arise when a physician cares for a patient. Additional issues arise when data scientists or ML engineers, armed only with broad principles like “do no harm”, make detailed technical decisions in complex systems that could affect the health of thousands or millions of patients.

To help address the gaps between what to do and how to do it, I have collaborated with Emanuele Ratti to develop an ethical framework for attending to the ethical dimension of technical decisions.[1] In this framework, data scientists and engineers shift their perspective from focusing solely on patients’ health to include the “capability” people have to convert their health choices into desired outcomes. Capabilities encompass both internal factors affecting one’s decision making and external factors that impact one’s use of resources. The capability approach is an ethical theory grounded in an individual’s freedom to make choices; because it asks whether and how those choices can actually be exercised, it provides a solid foundation for healthcare ethics.

By attending to capabilities, those building and using AI systems can consider how technical decisions affect people’s choices, their access to resources, and the ways they convert those resources into desired outcomes. For instance, instead of merely analyzing a diabetic patient’s blood sugar levels, one can also model the financial and transportation factors that influence whether a person can afford to purchase insulin, take the time to pick it up, and understand the impact of high blood sugar levels on their body. Learning to attend to ethical implications of technical decisions is a skill that can be developed, similar to technical or clinical skills, and the capability approach serves as a guide that directs attention toward factors that influence patient autonomy and healthcare justice.
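
To illustrate that shift in perspective in code, the sketch below pairs a clinical measurement with hypothetical capability factors and reports which factors block a patient from converting a treatment choice into its outcome. The field names and barrier logic are assumptions made for illustration, not part of the published framework.

```python
from dataclasses import dataclass

@dataclass
class InsulinCapability:
    """Clinical measure plus illustrative capability factors for insulin access."""
    hba1c: float                      # clinical measure (%)
    can_afford_insulin: bool          # external factor: economic resources
    has_transport_to_pharmacy: bool   # external factor: converting resources
    understands_dosing: bool          # internal factor: health literacy

    def conversion_barriers(self) -> list:
        """Factors blocking conversion of a health choice into its outcome."""
        barriers = []
        if not self.can_afford_insulin:
            barriers.append("affordability")
        if not self.has_transport_to_pharmacy:
            barriers.append("transportation")
        if not self.understands_dosing:
            barriers.append("health literacy")
        return barriers

patient = InsulinCapability(hba1c=9.2, can_afford_insulin=False,
                            has_transport_to_pharmacy=True, understands_dosing=True)
print(patient.conversion_barriers())  # ['affordability']
```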

I am also exploring how to apply that ethical framework to auditing medical AI.[2] Currently, an internal or external auditor of AI software would assess whether the software complies with regulations (if they exist) or other stated criteria. However, ethics-based auditing evaluates whether the software aligns with ethical values and norms. Our approach specifically examines how such software affects a person’s capability to achieve their desired choices about their health. As part of this project, I am developing a software tool to search for ethical concepts in documents and highlight them for further analysis.
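
That tool is still in development, but a minimal sketch of the core idea, scanning a document against a lexicon of ethical concepts and marking hits for a human reviewer, might look like the following. The lexicon entries and the [[concept: ...]] marker format are assumptions for illustration.

```python
import re

# Hypothetical lexicon mapping ethical concepts to surface phrases.
ETHICS_LEXICON = {
    "autonomy": ["informed consent", "patient choice", "opt out"],
    "justice": ["equitable access", "disparity", "underserved"],
    "non-maleficence": ["adverse event", "risk of harm"],
}

def highlight_concepts(text: str) -> str:
    """Wrap lexicon phrases in [[concept: ...]] markers for later review."""
    for concept, phrases in ETHICS_LEXICON.items():
        for phrase in phrases:
            text = re.sub(rf"\b{re.escape(phrase)}\b",
                          lambda m, c=concept: f"[[{c}: {m.group(0)}]]",
                          text, flags=re.IGNORECASE)
    return text

doc = "The system records informed consent but shows a disparity in follow-up care."
print(highlight_concepts(doc))
# The system records [[autonomy: informed consent]] but shows a
# [[justice: disparity]] in follow-up care.
```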

A more forward-looking extension of this effort involves using AI to automate aspects of AI auditing and monitoring. When building a system, engineers often use ML to configure and build better systems (an approach known as AutoML). Similarly, we can build AI systems that monitor other AI healthcare systems. As AI healthcare systems become more complex and pervasive and interact with patients in unpredictable ways, it becomes difficult for engineers to navigate this rapidly changing complexity. An ethical oversight system can augment human monitoring. For an ethical oversight system within healthcare to be effective, it needs to be trained on both a basic understanding of patient and population health and ethical considerations. Our work uses both a bottom-up approach to tool building and a top-down philosophical exploration to address this challenge.[3]
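
As one small example of what such automated oversight could watch for, the sketch below computes the gap in a task model’s positive-decision rates across patient groups and raises a flag when the gap exceeds a threshold. The metric, the group labels, and the 0.1 threshold are illustrative assumptions; a real oversight system would monitor many richer clinical and ethical signals.

```python
from collections import defaultdict

def flag_group_disparity(records, threshold=0.1):
    """Flag when positive-decision rates differ across groups by > threshold.

    `records` is an iterable of (group_label, model_decision) pairs, where
    model_decision is 1 for a positive decision (e.g., treatment approved).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 3), "flagged": gap > threshold}

decisions = [("rural", 1), ("rural", 0), ("rural", 0),
             ("urban", 1), ("urban", 1), ("urban", 0)]
print(flag_group_disparity(decisions))
# {'rates': {'rural': 0.333..., 'urban': 0.666...}, 'gap': 0.333, 'flagged': True}
```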

As someone involved with AI research, my aim is to help build AI systems that incorporate compassion. This means going beyond responsible or ethical AI and tapping into the deeper moral values present in many world religions. Any single principle can be dangerously misconstrued, a point Isaac Asimov makes well in his science fiction stories. But if we work together to interpret what it means for AI to act compassionately, we can guide the moral development of AI in healthcare and other areas. In addition to incorporating ethical values into AI design and training, we can build AI systems to identify whether other AI systems are aligned with those commitments. This creates a cycle in which ethically tuned AI can help adjust the performance of task-specific AI in a more ethical direction. For this to work, moral AI will need to interact with other moral AI and with people. Working in such pairings, AI systems can balance ethical tradeoffs and resolve apparent ethical dilemmas, much as people do when facing their own quandaries. The challenge is to find ways to give AI relevant feedback that improves its attention to the ethical dimension of a situation. Our work aims to guide AI toward acting compassionately and supporting human freedom and dignity.

  1. Graves, M., & Ratti, E. (2021). Microethics for Healthcare Data Science: Attention to Capabilities in Sociotechnical Systems. The Future of Science and Ethics, 6, 64–73. Ratti, E., & Graves, M. (2021). Cultivating Moral Attention: A Virtue-Oriented Approach to Responsible Data Science in Healthcare. Philosophy & Technology, 34(4), 1819–1846 (open access).
  2. Funding for the project is provided by the University of Notre Dame-IBM Technology Ethics Lab.
  3. Graves, M. (2022). Apprehending AI moral purpose in practical wisdom. AI & SOCIETY.

Mark Graves

Mark Graves is a Research Fellow and Director at AI & Faith, and a Research Associate Professor of Psychology at Fuller Theological Seminary. He has developed AI and data solutions in the biotech, pharmaceutical, and healthcare industries. Mark’s current research focuses on using text analysis and other natural language processing techniques for understanding and modeling human morality, ethical approaches to data science and machine learning, and philosophical and psychological foundations for constructing moral AI. Mark holds a PhD in computer science from the University of Michigan and a master’s degree in theology from the Jesuit School of Theology and the Graduate Theological Union in Berkeley.
