The Roundtable
Welcome to the Roundtable, a forum for incisive commentary and analysis
on cases and developments in law and the legal system.
By Catherine Tang

Catherine Tang is a freshman at the University of Pennsylvania majoring in Health and Societies with a concentration in Health Policy & Law.

In December 2023, JoAnne Barrows and Susan Hagwood filed a class action lawsuit in Kentucky against Humana. The plaintiffs alleged that the health insurer had used an AI model called nHPredict to wrongfully deny medically necessary Medicare Advantage claims to elderly and disabled patients despite knowing that approximately 90% of the tool's coverage denials were faulty [1]. This marks one of the first formal lawsuits brought against the use of artificial intelligence (AI) specifically in the healthcare industry, yet the groundwork for AI medical applications was laid as early as the 1950s [2]. More than seven decades later, however, the United States still lacks a well-defined regulatory structure for AI use in medicine. According to a 2021 report by the business management consulting group Sage Growth Partners, over 90% of hospitals now have an AI strategy, an increase of 37 percent from just two years earlier in 2019 [3]. This rapid progression of AI technology presents an unprecedented opportunity for its incorporation into clinical practice, potentially revolutionizing healthcare services. In biological research, AI has been used to analyze large datasets and identify subtle patterns with dramatically reduced time, expense, and possibility of human error, leading to major breakthroughs in genomics and drug discovery [4]. Predictive algorithms have also transformed preventive medicine [5], particularly for terminal and chronic illnesses, by reducing the workload of physicians and providers: they serve as primary image interpreters for triage and cancer screening, use nudges to suggest precision therapies for complex illnesses and reduce medical errors, and incorporate key socioeconomic determinants of health, such as education, income, and demographic characteristics, into analyses of illness predictors. Finally, in recent years, AI has even been used in operating rooms to provide direct care to patients [6], often to make precision surgical procedures safer or to aid surgeons with decision-making in high-pressure situations.
However, despite the many avenues for including AI in healthcare, equal attention must be paid to emerging concerns about safety and risk, chief among them informed consent and patient privacy. One of the hallmarks of AI is its ability to aid in medical decision-making, yet providers often fail to inform patients that algorithms have been used to shape their treatment plans. This lack of disclosure represents a concrete threat to the long-established bioethical principle of informed consent [7], which calls for direct communication between a patient and their healthcare provider to ensure the patient's explicit authorization to undergo a specific medical procedure. It is presently reasonable for patients to assume that only humans, not AI systems, are involved in making their healthcare decisions. However, according to Dr. Beth Hallowell, the Communications Research Director of the American Friends Service Committee, "AI can't comprehend the complexity of human interaction" and the operations of "AI systems are often not transparent or interpretable." Consequently, clinicians who do not know how AI-generated outputs are calculated may be unable to fully explain the risks of an AI-recommended procedure to a patient, undermining informed consent [8].

Patient privacy raises similar concerns. Many AI tools, especially generative systems used in research, are trained on large amounts of patients' medical data, most of which constitute protected health information under the 1996 Health Insurance Portability and Accountability Act (HIPAA); these data are then accessed and analyzed at high speed, with substantial heterogeneity across individuals and data types. Although protecting patient information and privacy is a prerequisite for any ethical research, there is currently no centralized protocol for data encryption or data sharing in AI-based research. This lack of standardization reflects current research norms, under which such protocols are decided on a project-by-project basis and approved by the individual institutional review board (IRB), so informed patient consent in a retrospective study could theoretically be waived if deemed appropriate by the ethics committee. Even if patient data are de-identified and anonymized, emerging computational strategies run the risk of eventually tracing the data back to their source [9]. Compounding this issue, patients are not presently given the option to "opt out" of having their de-identified information shared, so their information may be used for research without their knowledge or consent.

Furthermore, concerns have been raised that the data selected to train algorithms can themselves be a source of bias. Because most information is collected from electronic health records, AI applications may rely disproportionately on findings from socioeconomic groups that can afford formal healthcare and health insurance, meaning that marginalized populations without such access may be absent from big health data. Conclusions drawn from AI systems may therefore fail to account for socioeconomic differences between groups, and the resulting recommendations are skewed toward majority populations [10]. This problem is compounded by the "black box" nature of many algorithms, whose methods and "reasoning" can be partially or entirely opaque to human observers.
Another aspect of the debate surrounding AI in healthcare is the pervasive issue of liability in medical malpractice cases when AI misdiagnoses or fails to diagnose at all. A hallmark of AI tools is their ability to offer non-standard advice, such as recommending that a provider give a patient a different drug dosage than usual based on unique patient indicators. While successes in such instances are widely celebrated, equal attention must be paid to the possibility of failure. Under current tort law's preference for standard care, providers who accept a personalized AI recommendation to deliver nonstandard care increase their risk of medical malpractice liability if the procedure fails. Furthermore, given that there have been no lawsuits to date explicitly centering on medical malpractice where AI tools were used, there is no legal precedent addressing whether responsibility resides with the provider or with the developer of the algorithm [11]. One potential solution suggested by legal scholars is to confer personhood on AI devices so that plaintiffs can bring lawsuits directly against them in cases of malpractice; the algorithm would be required to carry insurance from which claims could be paid. Similarly, instead of assigning fault to a specific person or entity, another tort framework could be common enterprise liability, in which all parties jointly bear some responsibility for the design and implementation of the AI system [12]. However, such solutions are reactive rather than preventive, and the limiting factor is the lack of uniform federal frameworks regulating AI use. In October 2023, the FDA released the Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices, identifying five overarching guiding principles for safe and ethical AI use, yet these provisions have proven difficult to enforce across all levels of government. Compounding this lack of regulation, there is no well-articulated testing process: AI tools are simply tested by the companies or developers that create them rather than undergoing a rigorous FDA approval process like that required for pharmaceutical drugs [13]. Furthermore, in the absence of concrete formal regulation, it largely falls to the courts to define inappropriate use of these new technologies, yet the judges who oversee these cases rarely have a clear understanding of how AI tools work, which may lead to shaky legal precedents that cause more harm than good.

In conclusion, the possibilities of AI in healthcare appear limitless, with its horizons constantly expanding and beginning to overhaul traditional healthcare practices, yet innovation must be balanced with regulation. The stakes of this conversation extend well beyond the hospital ward or the courtroom to the emerging marketplace for new healthcare technologies, touching the world we all occupy as patients. Per Dr. Hallowell, "there is a human side to medicine" that is not easily replicable by AI, and ultimately, in the face of privacy and liability concerns, it is unclear whether AI technology can truly abide by medicine's maxim: to do no harm.

[1] "Barrows et al. v. Humana, Inc." Justia, December 2023. https://dockets.justia.com/docket/kentucky/kywdce/3:2023cv00654/132899
[2] Kaul et al. "History of artificial intelligence in medicine." Gastrointestinal Endoscopy, October 2020.
https://www.sciencedirect.com/science/article/pii/S0016510720344667?via%3Dihub
[3] "New Report Finds 90 Percent of Hospitals Have an AI Strategy; Up 37% from 2019." PR Newswire, March 2021. https://www.prnewswire.com/news-releases/new-report-finds-90-percent-of-hospitals-have-an-ai-strategy-up-37-percent-from-2019-301242756.html
[4] Vilhekar and Rawkar. "Artificial Intelligence in Genetics." Cureus, January 2024. https://www.cureus.com/articles/177320-artificial-intelligence-in-genetics#!/
[5] Phillips et al. "Artificial intelligence and predictive algorithms in medicine." Canadian Family Physician, August 2022. https://www.cfp.ca/content/68/8/570
[6] "The AI Revolution: Surgeons Share Insights on Integrating AI into Surgical Care." American College of Surgeons, October 2023. https://www.facs.org/for-medical-professionals/news-publications/news-and-articles/press-releases/2023/ai-revolution-surgeons-share-insights-on-integrating-ai-into-surgical-care/
[7] Perni et al. "Patients should be informed when AI systems are used in clinical trials." Nature Medicine, May 2023. https://www.nature.com/articles/s41591-023-02367-8
[8] Yadav et al. "Data Privacy in Healthcare: In the Era of Artificial Intelligence." Indian Dermatology Online Journal, November 2023. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10718098/
[9] Murdoch. "Privacy and artificial intelligence: challenges for protecting health information in a new era." BMC Medical Ethics, September 2021. https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00687-3#citeas
[10] Obermeyer et al. "Dissecting racial bias in an algorithm used to manage the health of populations." Science, October 2019. https://www.science.org/doi/10.1126/science.aax2342
[11] Tobia et al. "When Does Physician Use of AI Increase Liability?" Journal of Nuclear Medicine, January 2021. https://jnm.snmjournals.org/content/62/1/17
[12] Sullivan and Schweikart. "Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?" AMA Journal of Ethics, February 2019. https://journalofethics.ama-assn.org/article/are-current-tort-liability-doctrines-adequate-addressing-injury-caused-ai/2019-02
[13] "Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles." US Food and Drug Administration, October 2023. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles