Welcome to the Roundtable, a forum for incisive commentary and analysis
on cases and developments in law and the legal system.
image credits: https://www.nature.com/collections/ceiajcdbeb
Sam Jung is a first-year student in the College of Arts and Sciences at the University of Pennsylvania. He plans to major in Computer Science & Political Science.
These tort suits are already producing interesting effects: AI developers are beginning to take precautions against various relevant safety risks. Still, these precautions confront the fundamental issue in AI regulation, namely AI opacity, a term describing the difficulty of determining how and why AI systems make decisions using complex algorithms. Recent legal scholarship addresses various approaches to mitigating AI opacity through a combination of expert witnesses, civil procedure, legal argumentation, and Explainable AI.
For modern developers and stakeholders, explainability is both a key objective in the maintenance of AI systems and a prerequisite for AI accountability. Explainable AI, or ‘XAI,’ is the interdisciplinary field that addresses AI opacity from perspectives in computer science, law, and psychology.
Currently, there are four primary causes of AI opacity. Burrell, an expert on the subject, classifies them as intentional secrecy, technical illiteracy, inherent inscrutability, and inadequate documentation [3, 4]. Intentional secrecy is, broadly speaking, the deliberate obstruction of information about AI systems. Specifically, it encompasses secrecy surrounding the development process for these algorithms, as well as cases in which developers assert confidentiality against audits to protect ‘trade secrets’ contained within the algorithm. Technical illiteracy refers to the difficulty that plaintiffs, judges, and the general public face in substantively inspecting and analyzing critical issues in AI governance. Inherent inscrutability refers to the size, complexity, and non-linear nature of deep learning algorithms and neural networks, which prevent them from being completely explainable, even to their authors. Finally, the sheer difficulty of implementing certain systems often leads to inadequate documentation—in both the design and development of systems—even when developers are not intentionally secretive, so key information is lost during the development process.
In the U.S., most legal cases pertaining to AI can be reduced to two types: product liability and negligence claims. In product liability cases, some states require plaintiffs to prove they were harmed as a result of a defect in the product. Other states apply a ‘reasonable alternative design’ test, asking whether the defendant could have created a modified system that would have been safer at a reasonable cost relative to the overall reduction of risk. However, product liability claims face a key obstacle in the litigation of AI cases: it is often difficult to map an abstract concept like ‘safety’ onto specific parts of a computer algorithm.
Negligence claims, although often more complex, have the potential to be an effective tool for any litigator dealing with XAI. These claims involve four elements: the defendant owed the plaintiff a legal duty, the defendant breached that duty, the plaintiff was injured in some way, and the defendant’s breach caused the injury. In the U.S., most courts apply a “but for” or “proximate cause” test to establish causation between the breach of duty and the injury. XAI is potentially useful here because probabilistic and counterfactual tests for causation are well suited to the statistical nature of XAI methods for algorithmic audits. Specifically, XAI methods allow for the assessment of how AI systems would behave when presented with different data or inputs, providing more comprehensive evidence than courts traditionally have in negligent design cases.
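A counterfactual “but for” probe of this kind can be illustrated in a few lines of code. The sketch below is purely hypothetical: the model, its feature names, and its weights are invented stand-ins for a proprietary system that an auditor could only query as a black box; no real system is represented.

```python
# Hypothetical sketch of a counterfactual "but for" probe of an opaque model.
# The model below is a stand-in for a proprietary system we can only query.

def opaque_model(features):
    """Stand-in black box: returns a decision score from input features.
    (Illustrative weights only; a real system would be far more complex.)"""
    return 0.6 * features["speed"] + 0.4 * features["obstacle_detected"]

def but_for_test(model, inputs, feature, alternative):
    """Would the model's output differ 'but for' this feature's value?"""
    baseline = model(inputs)
    # Build a counterfactual scenario: same inputs, one feature changed.
    counterfactual = dict(inputs, **{feature: alternative})
    return model(counterfactual) != baseline

# Scenario: the system failed to register an obstacle.
scenario = {"speed": 1.0, "obstacle_detected": 0.0}
# Would the decision have changed had the obstacle been detected?
print(but_for_test(opaque_model, scenario, "obstacle_detected", 1.0))  # True
```

This is the sense in which counterfactual XAI methods map onto the legal causation test: the auditor asks whether the system’s behavior would have differed under an altered input, without needing to open the model’s internals.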
In the Tesla case, five Texas police officers are suing for injuries caused by a collision in which the vehicle’s Autopilot allegedly failed to detect two police cars at a traffic stop with emergency vehicle lights on. They are suing for design defects, negligence, and a failure to resolve known problems (this accident is just one of several under similar conditions). The suit’s strategy is interesting because it (1) simplifies the plaintiffs’ inquiry to three distinct categories and (2) reduces the plaintiffs’ need to gain access to Tesla’s proprietary algorithms and explain the opaque AI system in question. One of the most straightforward ways to prove breach of duty in an AI negligence claim involves identifying “information about the design, development and deployment process, and a description of the functionality of the system” to identify “some aspect of this description or process that is clearly inconsistent with what a reasonable developer would do.” The National Highway Traffic Safety Administration (NHTSA) has taken this approach in its investigation of Tesla.
One of the biggest issues in XAI cases is technical illiteracy: courts and the public are not experienced in handling such issues, nor well-versed in interpreting complex technical information. In other technical cases (such as medical negligence), courts have called on ‘expert witnesses.’ This practice is not new. What is new, however, are recent approaches to handling XAI evidence. For instance, in ACCC v. Trivago, a case argued in Australia, the trial judge allowed two panels of experts—one appointed by the plaintiffs, and the other by the defendants. To mitigate bias between the contrasting parts of the experts’ explanations, the court required the experts to confer and produce a joint report. This approach has two distinct advantages: it simplifies analysis of the issues by explicitly stating common ground while also highlighting points of disagreement. In Trivago, the joint report agreed that the algorithm in question contained ‘weights’ in favor of Trivago’s corporate sponsors. The court inferred that these weights significantly contributed to the algorithm’s bias, contradicting Trivago’s claim that it acted in consumers’ best interest [10, 11].
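The kind of weighting the joint report described can be made concrete with a toy ranking function. The sketch below is entirely invented for illustration—the scoring formula, weight value, and offer data are hypothetical and do not reproduce Trivago’s actual algorithm—but it shows how a single sponsorship weight can flip a ranking away from the cheapest offer.

```python
# Hypothetical sketch: a ranking score that mixes price with a sponsorship
# weight, invented to illustrate the kind of bias the joint report described.

def rank_score(offer, sponsor_weight=0.5):
    """Cheaper offers score higher; sponsorship inflates the score."""
    price_score = 1.0 / offer["price"]
    return price_score + sponsor_weight * offer["is_sponsor"]

offers = [
    {"name": "cheap_independent", "price": 100, "is_sponsor": 0},
    {"name": "pricier_sponsor", "price": 125, "is_sponsor": 1},
]

# Rank offers from highest score to lowest.
ranked = sorted(offers, key=rank_score, reverse=True)
# With the sponsorship weight, the pricier sponsored offer ranks first,
# undercutting a claim that the ranking reflects consumers' best interest.
print([o["name"] for o in ranked])  # ['pricier_sponsor', 'cheap_independent']
```

An expert panel with query access to such a system could demonstrate exactly this: hold price constant, toggle sponsorship, and observe the rank change.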
Overall, AI opacity threatens the capacity for meaningful tort litigation. Intentional secrecy, for instance, precludes meaningful audits and accurate analyses of AI algorithms. Drawing adverse inferences from such secrecy would be a way for courts to disincentivize it without the need for additional legislation. Europe already has several laws mandating documentation in certain instances of development.
Courts must also develop a standard approach to the discovery, disclosure, and documentation of algorithms that ensures sufficient access for both parties of litigants, as in Trivago. AI opacity in relation to accountability and law brings substantive opportunities for creative legal thought. However, litigation in this field does not require a fundamental change in the approach to law: legal and technical tools already exist to accomplish these objectives, and regulatory interventions would not necessarily correct current practices within XAI.
The opinions and views expressed in this publication are the opinions of the designated authors and do not reflect the opinions or views of the Penn Undergraduate Law Journal, our staff, or our clients.
Henry Fraser, Rhyle Simcock, and Aaron J. Snoswell. 2022. AI Opacity and Explainability in Tort Litigation. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 185–196. https://doi.org/10.1145/3531146.3533084
Burrell, Jenna, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society 1
Selbst, Andrew D and Solon Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87(3) Fordham Law Review 1085
Connelly v. Hyundai Motor Co, 351 F.3d 535, 541 (1st Cir. 2003)
Miller, Tim, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’ (2019) 267 Artificial Intelligence 1
Freckelton, Ian and Hugh Selby, Expert Evidence: Law, Practice, Procedure and Advocacy (Lawbook Co, 5th ed, 2013)
Australian Competition and Consumer Commission (ACCC) v. Trivago NV, FCA 196
Trivago NV v. Australian Competition and Consumer Commission (ACCC), FCAFC 185
Proposal for a Regulation Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) | Shaping Europe’s Digital Future 2021