
Just What The Doctor Ordered?: The Ethics of AI in Healthcare

For those bullish on artificial intelligence (AI), the technology may be a panacea for many problems facing medical professionals. In their eyes, its potential benefits go well beyond eliminating unnecessary paperwork. Improving cancer screening, for instance, is one use case that AI proponents cite when explaining this innovation’s remarkable capabilities.1 Doctors with limited bandwidth may find an invaluable asset in AI-powered tools. These solutions may not only reduce the risk of misdiagnosis; they could also ensure patients receive care appropriate for their specific needs. Physicians would then be able to prioritize tasks best suited for humans, namely building relationships with those seeking their help. For advocates of AI’s adoption in healthcare, this example shows how the technology may redefine what it means to practice medicine.

Despite the attention AI has received from practitioners in the medical space, not everyone has bought into the hype. In fact, many have argued that its widespread deployment in this sector raises ethical concerns. Critics charge that AI models can produce medical misinformation given that many rely on flawed or inaccurate data, a drawback that may complicate their integration into clinical settings. Generated outputs can also contain stereotypes and falsehoods about certain populations, leading skeptics to debate whether these tools can truly help all patients. Concerns like these should give medical professionals pause before they introduce AI solutions into their workplaces. A measured approach to AI adoption, one that balances practical benefits against legitimate risks, is crucial for healthcare systems grappling with this powerful technology.

Disseminating health misinformation

Issues with AI in healthcare run deep, starting with the information fed into these systems. Many large language models (LLMs) ingest massive quantities of data scraped from online sources. Ars Technica points out that this practice can be problematic, citing an experiment where an AI solution began producing medical falsehoods after a modicum of misinformation was incorporated into its data sample.2 Researchers at New York University found that a dataset containing just 0.01 percent erroneous or misleading information caused a significant spike in the quantity of misinformation produced by an LLM.3 This study’s implications for clinical scenarios are alarming. If even a minor modification to the data sample used by an AI-powered system produces troubling results, medical professionals may think twice about adopting these mistake-prone tools. Without assessing the quality of data provided to LLMs, expecting these solutions to reliably deliver accurate outputs may be misguided at best, and dangerous at worst.

“Hallucinations,” or instances where AI models present misleading information as factually correct, are more than a bug to be fixed by developers. If left unaddressed, they can have serious consequences when healthcare decisions are involved. The Verge has described how this phenomenon should temper expectations about how much AI can assist doctors and their patients.4 The fact that these tools can produce baffling errors, such as inventing a body part when diagnosing an illness, is unsettling enough. Worse still, outputs may not be routinely checked by those with the requisite expertise. In the absence of mechanisms to verify the veracity of model outputs, professionals who adopt these solutions may be inclined to accept their insights at face value. Trusting AI in these scenarios could ultimately jeopardize the wellbeing of those seeking care, all while tanking the credibility of the physicians who sought to leverage these tools.

Devaluing patient experiences

AI models deployed in clinical contexts can amplify other untruths that are equally harmful. For instance, data incorporated by these solutions may be rife with stereotypes about communities that have been historically marginalized, leading to outputs that only reinforce pre-existing biases and prejudices. As TechCrunch has reported, AI-powered chatbots have generated offensive responses about non-white communities when asked health-related questions, such as concluding that there are biological differences between racial groups.5 While many developers have stated their intentions to address these issues, doctors interested in offering culturally sensitive care may shy away from adopting these products for now. Patients from these communities are also directly affected by this problem. Individuals who have not been well served by the traditional healthcare system may see AI tools as a resource, since they can receive answers to their medical questions without a doctor’s approval. Yet these tools’ propensity to provide answers that are problematic, in more ways than one, may leave users without the support they deserve.

Professionals in the healthcare space who depend on AI tools, without acknowledging that their applicability may be limited, risk further alienating those seeking their help. In some cases, this inclination is understandable, given how aggressively developers sell the utility and value of AI. MIT Technology Review has noted that this phenomenon could give rise to a new form of “medical paternalism,” where a technology is seen as best positioned to make decisions regarding a person’s health.6 Considering how these solutions can perpetuate biases, ceding control to them could spell disaster. This shift would hamper the ability of trained professionals to debunk biased misinformation circulated by these models. It would also strip away more autonomy from patients looking to offer input on which treatments may be best for them. Deference to these technologies, in other words, may only revive a cycle of marginalization that people from disadvantaged communities have long tried to disrupt.

Lowering the temperature

Doubts about the effectiveness and suitability of AI tools in healthcare may be deflating to advocates of the technology. Even so, these anxieties are not without merit. It is true that AI-enhanced products are susceptible to spreading falsehoods about certain conditions and, in some cases, inventing facts out of whole cloth. Without measures to identify how these tools reached their conclusions, doctors may find them to be liabilities rather than assets. Furthermore, these models’ tendency to spread misinformation about vulnerable communities is a glaring concern. Integrating such tools into the workplace, knowing the defects they possess, would signal to patients from marginalized groups that their needs are inconsequential to those offering care. Although the allure of AI-powered tools can be hard to resist, their deficiencies are also difficult to deny.

The promise presented by AI in healthcare is not totally empty. Early evidence has shown that, with proper guardrails, it can make a positive difference. In Brazil, a tool called NoHarm has helped understaffed clinics manage patient prescriptions, which may improve operational efficiency in the long run.7 The key to success in cases like these, ultimately, is safeguards. Providing ways to check these tools’ outputs, and to debunk any errors they generate, is essential. Moreover, sourcing data that genuinely reflects the experiences of overlooked and underserved populations increases the likelihood that AI tools will be beneficial. Steps like these may appropriately moderate expectations of what AI can achieve. They may also lead to the creation of tools that help medical professionals better serve those entrusted to their care.


References

  1. Editorial Board. “This year, be thankful for AI in medicine.” The Washington Post, November 27, 2024. https://www.washingtonpost.com/opinions/2024/11/27/ai-medicine-health-care-thanksgiving/.
  2. Timmer, John. “It’s remarkably easy to inject new medical misinformation into LLMs.” Ars Technica, January 8, 2025. https://arstechnica.com/science/2025/01/its-remarkably-easy-to-inject-new-medical-misinformation-into-llms/.
  3. Alber, Daniel Alexander, et al. “Medical large language models are vulnerable to data-poisoning attacks.” Nature Medicine, January 8, 2025. https://www.nature.com/articles/s41591-024-03445-1.
  4. Field, Hayden. “Google’s healthcare AI made up a body part – what happens when doctors don’t notice?” The Verge, August 4, 2025. https://www.theverge.com/health/718049/google-med-gemini-basilar-ganglia-paper-typo-hallucination.
  5. Wiggers, Kyle. “Generative AI is coming for healthcare, and not everyone’s thrilled.” TechCrunch, April 14, 2024. https://techcrunch.com/2024/04/14/generative-ai-is-coming-for-healthcare-and-not-everyones-thrilled/.
  6. Hamzelou, Jessica. “Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions.” MIT Technology Review, April 21, 2023. https://www.technologyreview.com/2023/04/21/1071921/ai-is-infiltrating-health-care-we-shouldnt-let-it-make-decisions/.
  7. Nakamura, Pedro. “AI is making health care safer in the remote Amazon.” Rest of World, June 11, 2025. https://restofworld.org/2025/brazil-amazon-ai-healthcare-prescriptions/.

Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


Aaron Spitler

Aaron Spitler is a researcher whose work lies at the intersection of digital technologies, human rights, and democratic governance. He has worked at various organizations within the technology policy space, including the Internet Society, Harvard University's Berkman Klein Center, and the International Telecommunication Union. He received his master's degree, with a focus on technology regulation, from Tufts University's Fletcher School of Law and Diplomacy.
