Penn Undergraduate Law Journal

The Roundtable


Welcome to the Roundtable, a forum for incisive commentary and analysis
on cases and developments in law and the legal system.


Contemporary Approaches to Auditing AI Algorithms in Tort Litigation

12/7/2022

Image credits: https://www.nature.com/collections/ceiajcdbeb

Sam Jung

Sam Jung is a first-year student in the College of Arts and Sciences at the University of Pennsylvania. He plans to major in Computer Science & Political Science.
AI systems are becoming increasingly consequential. Recent lawsuits involving Tesla’s self-driving cars raise two pressing questions: to what extent must there be accountability when AI systems cause harm, and how can one objectively evaluate both the cause and the extent of that harm?

These tort suits are already producing notable effects: AI developers are beginning to take precautions against relevant safety risks [1]. Still, these precautions force a confrontation with the fundamental issue in AI regulation—namely, AI opacity—a term describing the difficulty of determining how and why AI systems make decisions using complex algorithms. Recent legal scholarship addresses various approaches to mitigating AI opacity through a combination of expert witnesses, civil procedure, legal argumentation, and Explainable AI [2].

For modern developers and stakeholders, explainability is both a key objective in the maintenance of AI systems and a prerequisite for AI accountability. Explainable AI, or ‘XAI,’ is the interdisciplinary field that addresses AI opacity from perspectives in computer science, law, and psychology [2].

Currently, there are four primary causes of AI opacity: intentional secrecy, technical illiteracy, inherent inscrutability, and inadequate documentation [3, 4]. Intentional secrecy, generally speaking, is the deliberate obstruction of information about AI systems. Specifically, it encompasses the secrecy surrounding the development process for these algorithms, as well as cases in which developers assert confidentiality against audits to protect ‘trade secrets’ embedded in the algorithm. Technical illiteracy refers to the difficulty plaintiffs, judges, and the general public have in substantively inspecting and analyzing critical issues in AI governance. Inherent inscrutability refers to the size, complexity, and non-linear nature of deep learning algorithms and neural networks, which prevent them from being completely explainable, even to their authors. Finally, the sheer difficulty of implementing certain systems often leads to inadequate documentation of both their design and development, so that even when developers are not intentionally secretive, key information is lost along the way.

In the U.S., most cases pertaining to AI can be reduced to two types: product liability and negligence claims. In product liability cases, some states require plaintiffs to prove they were harmed as a result of a defect in the product. Other states apply a ‘reasonable alternative design’ test, asking whether the defendant could have created a modified system that would have been safer at a reasonable cost relative to the overall reduction in risk [5]. However, product liability claims face a key obstacle in AI litigation: it is often difficult to map an abstract concept like ‘safety’ onto specific parts of a computer algorithm.

Negligence claims, although often more complex, have the potential to be an effective tool for any litigator dealing with XAI. These claims involve four elements: the defendant owed the plaintiff a legal duty, the defendant breached that duty, the plaintiff was injured in some way, and the defendant’s breach caused the injury [6]. In the U.S., most courts apply “but for” and proximate-cause tests to establish the connection between the breach of duty and the resulting injury. XAI is promising here because probabilistic and counterfactual tests for causation are well suited to the statistical nature of XAI methods for algorithmic audits [8]. Specifically, XAI methods allow for the assessment of how AI systems would behave when presented with different data or inputs, providing more comprehensive evidence than courts traditionally have in negligent design cases.
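The counterfactual (“but for”) logic described above can be sketched in a few lines of code. Everything below is hypothetical: the toy `brake_decision` model, its inputs, and its thresholds are invented purely for illustration and are not drawn from any real system or from the sources cited here. The sketch only shows the general shape of the test: hold every input fixed, flip one, and ask whether the decision changes.

```python
def brake_decision(speed_mph, obstacle_detected, emergency_lights):
    """Toy stand-in for an opaque driving model's braking decision.
    Entirely hypothetical logic, for illustration only."""
    score = 0.0
    if obstacle_detected:
        score += 0.6  # detection dominates the toy model's decision
    if emergency_lights:
        score += 0.1  # lights barely influence this toy model
    score += max(0.0, (30 - speed_mph) / 100)  # slight bias toward braking at low speed
    return score >= 0.5  # True = the system brakes


def but_for(model, inputs, feature, counterfactual_value):
    """Counterfactual ('but for') probe: does flipping one input change
    the model's decision, holding everything else fixed?"""
    actual = model(**inputs)
    altered = dict(inputs, **{feature: counterfactual_value})
    return actual != model(**altered)


observed = {"speed_mph": 55, "obstacle_detected": False, "emergency_lights": True}

# Would the car have braked but for the failed obstacle detection?
print(but_for(brake_decision, observed, "obstacle_detected", True))   # True
# Do the emergency lights alone make any difference in this toy model?
print(but_for(brake_decision, observed, "emergency_lights", False))   # False
```

In a real audit the model would be the defendant's actual system queried under expert supervision, but the evidentiary structure is the same: a decision that flips on the counterfactual input supports “but for” causation.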

In the Tesla case, five Texas police officers are suing over injuries from a collision in which the vehicle’s Autopilot allegedly failed to detect two police cars stopped at a traffic stop with their emergency lights activated [7]. They allege design defects, negligence, and a failure to resolve known problems (this accident is one of several under similar conditions). The suit’s strategy is interesting because it 1) simplifies the plaintiffs’ inquiry into three distinct categories and 2) reduces their need to gain access to Tesla’s proprietary algorithms and explain the opaque AI system in question. One of the most straightforward ways to prove breach of duty in an AI negligence claim is to identify “information about the design, development and deployment process, and a description of the functionality of the system” and then point to “some aspect of this description or process that is clearly inconsistent with what a reasonable developer would do” [2]. The National Highway Traffic Safety Administration (NHTSA) has taken this approach in its investigation of Tesla.

One of the biggest issues in XAI cases is technical illiteracy: courts and the public are not experienced in handling such issues and are not well-versed in interpreting complex technical information. In other technical cases (such as medical negligence), courts have long called on expert witnesses [9]. What is new, however, are recent approaches to handling XAI evidence. For instance, in ACCC v. Trivago, a case argued in Australia, the trial judge allowed two panels of experts—one appointed by the plaintiffs, the other by the defendants. To reconcile contrasting parts of the experts’ explanations, the court mandated that they confer and produce a joint report. This approach has two distinct advantages: it simplifies analysis of the issues by explicitly stating common ground while also highlighting points of disagreement. In Trivago, the joint report agreed that the algorithm in question contained ‘weights’ in favor of Trivago’s corporate sponsors. The court inferred that these weights significantly contributed to the ranking’s bias, contradicting Trivago’s claim that the algorithm acted in consumers’ best interests [10, 11].
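The court’s inference about sponsor ‘weights’ can be illustrated with a toy ranking audit. The hotels, weights, and scoring function below are entirely hypothetical and are not Trivago’s actual algorithm; the point is only that zeroing a suspected sponsor weight and re-ranking is a simple counterfactual check an expert panel might run.

```python
from dataclasses import dataclass


@dataclass
class Hotel:
    name: str
    price: float      # nightly rate in dollars; lower is better for consumers
    rating: float     # 0-10 review score
    is_sponsor: bool  # pays the platform for placement

# Hypothetical ranking weights; the sponsor term is what an audit looks for.
RATING_W, PRICE_W, SPONSOR_W = 1.0, -0.01, 2.0


def score(h: Hotel, sponsor_w: float) -> float:
    return RATING_W * h.rating + PRICE_W * h.price + sponsor_w * h.is_sponsor


def rank(hotels, sponsor_w):
    return [h.name for h in sorted(hotels, key=lambda h: score(h, sponsor_w), reverse=True)]


hotels = [
    Hotel("CheapStay", price=80.0, rating=8.5, is_sponsor=False),
    Hotel("SponsorInn", price=120.0, rating=7.5, is_sponsor=True),
]

# With the sponsor weight active, the pricier sponsor outranks the better deal;
# zeroing the weight (the counterfactual audit) reverses the order.
biased = rank(hotels, SPONSOR_W)   # ['SponsorInn', 'CheapStay']
neutral = rank(hotels, 0.0)        # ['CheapStay', 'SponsorInn']
```

A joint expert report would present the same comparison in prose: the rankings agree on everything except the sponsor term, so any divergence between the two orderings is attributable to that weight.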

Overall, AI opacity threatens the capacity for meaningful tort litigation. Intentional secrecy, for instance, precludes meaningful audits and accurate analyses of AI algorithms. Drawing adverse inferences from such practices would allow courts to disincentivize them without the need for additional legislation. Europe has already proposed rules mandating documentation at certain stages of development [12].

Courts must also develop a standard approach to the discovery, disclosure, and documentation of algorithms that ensures sufficient access for both parties, as in Trivago [2]. AI opacity, in relation to accountability and the law, offers substantive opportunities for creative legal thought. However, litigation in this field does not require a fundamental change in the approach to law: legal and technical tools already exist to accomplish these objectives, and regulatory interventions would not necessarily correct current practices within XAI.


The opinions and views expressed in this publication are the opinions of the designated authors and do not reflect the opinions or views of the Penn Undergraduate Law Journal, our staff, or our clients.

[1] https://techpolicy.press/assessing-the-safety-risks-of-software-written-by-artificial-intelligence/
[2] Henry Fraser, Rhyle Simcock, and Aaron J. Snoswell. 2022. AI Opacity and Explainability in Tort Litigation. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 185–196. https://doi.org/10.1145/3531146.3533084

[3] Burrell, Jenna, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society 1

[4] Selbst, Andrew D and Solon Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87(3) Fordham Law Review 1085

[5] Connelly v. Hyundai Motor Co., 351 F.3d 535, 541 (1st Cir. 2003)

[6] https://www.law.cornell.edu/wex/negligence

[7] https://www.businessinsider.com/texas-cops-sue-tesla-car-reportedly-on-autopilot-hit-police-2021-9

[8] Miller, Tim, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’ (2019) 267 Artificial Intelligence 1

[9] Freckelton, Ian and Hugh Selby, Expert Evidence: Law, Practice, Procedure and Advocacy  (Lawbook Co, 5th ed, 2013)

[10] Australian Competition and Consumer Commission (ACCC) v. Trivago N.V. [2020] FCA 16

[11] Trivago NV v. Australian Competition and Consumer Commission (ACCC) [2020] FCAFC 185

[12] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), European Commission, 2021
