Tanner Bowen is a junior at the University of Pennsylvania studying business.
In my last blog post, we began discussing potential legal hurdles endemic to the use of machine learning algorithms by federal and state governments. In particular, we mentioned some issues that courts will grapple with in deciding whether these algorithms violate the nondelegation doctrine rooted in Article I of the United States Constitution. But, as one can imagine, a plethora of legal issues might make it onto the dockets of federal courts as technology proliferates in government rulemaking decisions. In this post, we will examine the question: Do machine learning algorithms used in lieu of traditional rulemaking violate our guarantee of due process as citizens?
Due process considerations are not novel for the federal government. For as long as federal agencies have made and executed rules, they have had to justify actions that could deprive individuals of property or entitlements. Yet, as discussed in my last post, if these algorithms are used in a more supportive or research-oriented capacity, the due process question may never arise. But let us assume that these algorithms will be used for large-scale policy decisions. What does our current legal system hint at concerning these optimization techniques?
In Mathews v. Eldridge, the Supreme Court laid out a three-factor balancing test for procedural due process claims: “First, the private interest that will be affected by the official action; second, the risk of an erroneous deprivation of such interest through the procedures used, and the probable value, if any, of additional or substitute procedural safeguards; and finally, the Government’s interest, including the function involved and the fiscal and administrative burdens that the additional or substitute procedural requirement would entail.” More broadly, “due process is flexible and calls for such procedural protections as the particular situation demands.”
We will discuss each prong of this test in turn, keeping in mind an important point of administrative jurisprudence: the fact pattern matters. Many of these algorithms will be context dependent, so generalizations may be difficult, and in some cases the use of machine learning might well violate due process. Still, based on legal precedent surrounding the Mathews test, we can conjecture about how machine learning algorithms might fare.
The first prong asks whether a private interest is at stake. In practice, the government will have to prove in litigation that its use of a particular algorithm carries low error rates. If the government uses the algorithm for classification purposes, a confusion table for that algorithm will be a key exhibit in judicial review, since it displays the rates of false positives and false negatives. If the government can show that the algorithm produces lower error rates than, say, an in-person hearing, then the algorithm might survive this part of the due process challenge.
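To make the confusion-table idea concrete, here is a minimal sketch of how the false positive and false negative rates mentioned above would be computed for a binary classifier. The labels and predictions are invented for illustration, not drawn from any real agency data.

```python
def confusion_matrix(actual, predicted):
    """Count true/false positives and negatives for binary labels (0/1)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp, tn, fp, fn

# Hypothetical outcomes: 1 = eligible, 0 = ineligible.
# A false negative here would mean a benefit wrongly denied.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

tp, tn, fp, fn = confusion_matrix(actual, predicted)
false_positive_rate = fp / (fp + tn)  # share of true negatives misclassified
false_negative_rate = fn / (fn + tp)  # share of true positives misclassified
print(tp, tn, fp, fn, false_positive_rate, false_negative_rate)
```

A court reviewing such an algorithm would care about exactly these two numbers: the false negative rate captures the risk of erroneous deprivation, while the false positive rate captures improper grants.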
The second prong goes to a recent rift in legal jurisprudence. In Business Roundtable v. SEC, the D.C. Circuit struck down the SEC’s proxy access rule because the agency did not “adequately consider all the costs” of the proposed rule, deeming it “arbitrary and capricious” – the death sentence for federal regulations. Although a separate legal argument can be made about whether courts should be second-guessing agencies’ cost-benefit analyses, the Mathews test shows that the Court envisioned some sort of quantitative cost-benefit analysis to justify any infringement on due process. Here, we may even see courts try to set a level of error that a machine learning algorithm must not cross. But any statistician will find fault with this approach: there is no general agreement on what constitutes a “minimum error rate.” Although asymptotic arguments about acceptable error rates exist, it will fall to the agencies to develop these quantitative metrics for any judicial oversight involving due process.
As for the final factor concerning the government’s interest, the agency will bear the burden of proving that the benefits of using the algorithm outweigh any fiscal or administrative burdens the government would incur. Once again, a cost-benefit analysis will be helpful here, as will noting any fiscal advantages of the algorithm, such as decreased agency costs. Overall, the agency will likely be able to establish a governmental interest in the algorithm so long as it has documented its evaluation of the algorithm through training, validation, and finally test data to obtain (mostly) unbiased results.
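As a rough sketch of the documentation such an evaluation would require, the following illustrates (with placeholder records in place of real case files) a standard three-way partition into training, validation, and test sets, so that the final error estimate comes from data the model never saw during fitting or tuning:

```python
import random

def three_way_split(records, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle and partition records into train / validation / test sets.

    The training set fits the model, the validation set tunes it, and the
    held-out test set yields the (mostly) unbiased final error estimate.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

records = list(range(100))  # stand-ins for agency case records
train, val, test = three_way_split(records)
print(len(train), len(val), len(test))  # 60 20 20
```

Recording the seed and split fractions is precisely the kind of documentation that would let a reviewing court verify that reported error rates were computed honestly.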
Although many of these topics at the intersection of case law and machine learning can get rather technical, this is a potential reality for our courts. Will we have independent court statisticians evaluate the soundness of machine learning algorithms? Will the historic Chevron doctrine still play a role, with courts deferring to agencies’ interpretations of rules carried out through machine learning? Undoubtedly, as technology proliferates, our potentially antiquated courts may be thrust into the 21st century.
1. Mathews v. Eldridge, 424 U.S. 319 (1976).
2. Id. at 335.
3. Morrissey v. Brewer, 408 U.S. 471, 481 (1972).
4. Berk, Richard (2015). Statistical Learning from a Regression Perspective. Springer, pp. 108–110.
5. Business Roundtable v. SEC, 647 F.3d 1144 (D.C. Cir. 2011).
6. Coglianese, Cary and David Lehr (2017). Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, pp. 80–81.
7. Berk, p. 15.
Photo Credit: Flickr user Christiaan Colen
The opinions and views expressed through this publication are the opinions of the designated authors and do not reflect the opinions or views of the Penn Undergraduate Law Journal, our staff, or our clients.