Penn Undergraduate Law Journal

The Roundtable


Welcome to the Roundtable, a forum for incisive commentary and analysis
on cases and developments in law and the legal system.


AI in Healthcare: Friend or Foe?

5/12/2024

By Catherine Tang

Catherine Tang is a freshman at the University of Pennsylvania majoring in Health and Societies with a concentration in Health Policy & Law. 

In December 2023, JoAnne Barrows and Susan Hagwood filed a class action lawsuit in Kentucky against Humana. The plaintiffs alleged that the health insurer had used an AI model called nHPredict to wrongfully deny medically necessary Medicare Advantage claims to elderly and disabled patients, despite knowing that approximately 90% of the tool’s coverage denials were faulty [1]. This marks one of the first formal lawsuits over the use of artificial intelligence (AI) in the healthcare industry specifically, yet the groundwork for medical applications of AI was laid as early as the 1950s [2]. Some seven decades later, the United States still lacks a well-defined regulatory structure for AI use in medicine.
According to a 2021 report by the business management consulting group Sage Growth Partners, over 90% of hospitals now have an AI strategy, a dramatic increase from 37% just two years earlier in 2019 [3]. This rapid progression presents an unprecedented opportunity to incorporate AI into clinical practice, potentially revolutionizing healthcare services. In biological research, AI has been used to analyze large datasets and identify subtle patterns with dramatically reduced time, expense, and possibility of human error, leading to major breakthroughs in genomics and drug discovery [4]. Predictive algorithms have likewise transformed preventive medicine [5], particularly for terminal and chronic illnesses, by reducing the workload of physicians and providers: algorithms serve as primary image interpreters for triage and cancer screening, use nudges to suggest precision therapies for complex illnesses and reduce medical errors, and incorporate key socioeconomic determinants of health, such as education, income, and demographic characteristics, into analyses of illness predictors. Finally, in recent years, AI has even entered the operating room to provide direct care to patients [6], often to make precision surgical procedures safer or to aid surgeons’ decision-making in high-pressure situations.

However, despite these many avenues for AI in healthcare, equal attention must be paid to emerging concerns about safety and risk, chief among them informed consent and patient privacy. One hallmark of AI is its ability to aid medical decision-making, yet providers often fail to inform patients that algorithms have been used to shape their treatment plans. This lack of disclosure poses a concrete threat to the long-established bioethical principle of informed consent [7], which calls for direct communication between a patient and their healthcare provider to secure the patient’s explicit authorization for a specific medical procedure. It is presently reasonable for patients to assume that only humans, not AI systems, are involved in their healthcare decisions. Yet according to Dr. Beth Hallowell, Communications Research Director of the American Friends Service Committee, “AI can’t comprehend the complexity of human interaction” and the operations of “AI systems are often not transparent or interpretable.” Consequently, clinicians who do not understand how AI-generated outputs are calculated may be unable to fully explain the risks of an AI-recommended procedure to a patient, undermining informed consent [8].

Additionally, many AI tools, especially in research, are trained on large amounts of patients’ medical data, most of which constitutes protected health information under the 1996 Health Insurance Portability and Accountability Act (HIPAA); these data are then accessed and analyzed at high speeds, with substantial heterogeneity across individuals and data types. Although protecting patient information and privacy is a prerequisite for any ethical research, there is currently no centralized protocol for data encryption or data sharing in AI-based research. This lack of standardization stems from current research norms, under which such protocols are decided on a project-by-project basis with approval from the individual institutional review board (IRB); informed patient consent in a retrospective study can even be waived if deemed appropriate by the ethics committee. And even when patient data is de-identified and anonymized, emerging computational strategies run the risk of eventually tracing the data back to its source [9]. Compounding this issue, patients are not presently given the option to “opt out” of having their de-identified information shared, so their information may be used for research without their knowledge or consent.

Furthermore, concerns have been raised that the data selected to train algorithms can itself be a source of bias. Since most information is collected from electronic health records, AI applications may disproportionately reflect the socioeconomic classes that can afford formal healthcare and health insurance, while marginalized populations without such access may be absent from big health data. Conclusions drawn from AI systems may therefore fail to fully account for socioeconomic differences between groups, skewing the resulting recommendations toward majority populations [10]. Detecting such bias is made harder by the “black box” problem, whereby an algorithm’s methods and “reasoning” can be partially or entirely opaque to human observers.
Another aspect of the debate surrounding AI in healthcare is the pervasive issue of liability in medical malpractice cases where AI misdiagnoses, or fails to diagnose at all. A hallmark of AI tools is their ability to offer nonstandard advice, such as recommending a drug dosage different from the usual based on a patient’s unique indicators. While successes in such instances are widely celebrated, equal attention must be paid to the possibility of failure. Under current tort law’s preference for standard care, providers who accept a personalized AI recommendation to deliver nonstandard care increase their risk of medical malpractice liability if the procedure fails. And because no lawsuit to date has explicitly centered on medical malpractice involving AI tools, there is no legal precedent addressing whether responsibility resides with the provider or with the developer of the algorithm [11]. One solution suggested by legal scholars is to confer personhood on AI devices so that plaintiffs can bring malpractice suits against them directly, with the algorithm required to carry insurance from which claims would be paid. Alternatively, instead of assigning fault to a specific person or entity, a common enterprise liability framework could hold all parties jointly responsible for the design and implementation of the AI system [12].

However, such solutions are reactive rather than preventive, and the limiting factor remains the lack of a uniform federal framework regulating AI use. In 2023, the FDA released the Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices guidance, identifying five overarching guiding principles for safe and ethical AI use, yet these provisions have proven difficult to enforce across all levels of government. Compounding this lack of regulation, there is no well-articulated testing process: AI tools are simply tested by the companies or developers that create them rather than undergoing a rigorous FDA approval process like that for pharmaceutical drugs [13]. In the absence of concrete formal regulation, it largely falls to the courts to define inappropriate use of these new technologies, yet the judges who oversee these cases rarely have a clear understanding of how AI tools work, which may produce shaky legal precedents that do more harm than good.

In conclusion, the possibilities of AI in healthcare appear limitless, with horizons constantly expanding and traditional healthcare practices beginning to be overhauled, yet innovation must be balanced with regulation. The stakes of this conversation extend well beyond the hospital ward or the courtroom to the emerging marketplace for new healthcare technologies, touching the world we all occupy as patients. Per Dr. Hallowell, “there is a human side to medicine” that is not easily replicable by AI, and ultimately, in the face of privacy and liability concerns, it remains unclear whether AI technology can truly abide by medicine’s maxim: to do no harm.

[1] “Barrows et al. v. Humana, Inc.” Justia, December 2023. https://dockets.justia.com/docket/kentucky/kywdce/3:2023cv00654/132899 
[2] Kaul et al. “History of artificial intelligence in medicine.” Gastrointestinal Endoscopy, October 2020. https://www.sciencedirect.com/science/article/pii/S0016510720344667?via%3Dihub 
[3] “New Report Finds 90 Percent of Hospitals Have an AI Strategy; Up 37% from 2019.” PR Newswire, March 2021. https://www.prnewswire.com/news-releases/new-report-finds-90-percent-of-hospitals-have-an-ai-strategy-up-37-percent-from-2019-301242756.html 
[4] Vilhekar and Rawkar. “Artificial Intelligence in Genetics.” Cureus, January 2024. https://www.cureus.com/articles/177320-artificial-intelligence-in-genetics#!/ 
[5] Phillips et al. “Artificial intelligence and predictive algorithms in medicine.” Canadian Family Physician, August 2022. https://www.cfp.ca/content/68/8/570 
[6] “The AI Revolution: Surgeons Share Insights on Integrating AI into Surgical Care.” American College of Surgeons, October 2023. https://www.facs.org/for-medical-professionals/news-publications/news-and-articles/press-releases/2023/ai-revolution-surgeons-share-insights-on-integrating-ai-into-surgical-care/ 
[7] Perni et al. “Patients should be informed when AI systems are used in clinical trials.” Nature Medicine, May 2023. https://www.nature.com/articles/s41591-023-02367-8 
[8] Yadav et al. “Data Privacy in Healthcare: In the Era of Artificial Intelligence.” Indian Dermatology Online Journal, November 2023. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10718098/ 
[9] Murdoch. “Privacy and artificial intelligence: challenges for protecting health information in a new era.” BMC Medical Ethics, September 2021. https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00687-3#citeas 
[10] Obermeyer et al. “Dissecting racial bias in an algorithm used to manage the health of populations.” Science, October 2019. https://www.science.org/doi/10.1126/science.aax2342 
[11] Tobia et al. “When Does Physician Use of AI Increase Liability?” Journal of Nuclear Medicine, January 2021. https://jnm.snmjournals.org/content/62/1/17 
[12] Sullivan and Schweikart. “Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?” American Medical Association Journal of Ethics, February 2019. https://journalofethics.ama-assn.org/article/are-current-tort-liability-doctrines-adequate-addressing-injury-caused-ai/2019-02 
[13] “Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles.” US Food and Drug Administration, October 2023. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles 