By Tanner Bowen
Tanner Bowen is a junior at the University of Pennsylvania studying business.
It is well known that the private sector has thoroughly incorporated machine learning algorithms into its products in order to gain a competitive edge in the marketplace. This can be seen in everything from Google’s self-driving car project to high-frequency trading in the financial sector. One sector that we might not think about when it comes to machine learning is the government. Although government is often criticized for layers of red tape and inefficiency, expanding machine learning into the public sector might seem like a natural extension of these technological advances. But is this legal?
There seem to be four potential concerns about using machine learning that regulators will have to weigh if they decide to adopt these algorithms into the administrative framework: nondelegation, due process, anti-discrimination, and transparency. In a series of blog posts, we will touch briefly on each of these legal issues and highlight the relevant case law and legal theory where applicable. This blog post will focus on nondelegation.
The first and most important legal concept from an administrative standpoint is that of nondelegation. Specifically, the U.S. Constitution provides that “all legislative powers” of the federal government “shall be vested” in Congress.  However, as we all know, the jurisprudence of the courts has allowed Congress to delegate certain authorities to administrative agencies headed by officers appointed by the President. There are limits to this authority, and historically we have assumed that it would be delegated to a human being. The issue then becomes what happens if machine learning is liberally adopted by governments in the future, and these agencies use computerized algorithms to carry out some of these delegated authorities.
However, some case law suggests that establishing rules via algorithms would not inherently offend the nondelegation principle. In general, courts give broad deference to delegations of authority so long as agencies act within the “public interest.”  More specifically, as long as Congress lays out an “intelligible principle” to constrain agency action, the courts will more than likely uphold agency delegations.  Machine learning, at its core, is a non-parametric method of analyzing data: the data scientist writes an algorithm that dictates how information contained in the predictor variables is used to forecast a response variable.  Given that machine learning algorithms are mathematical in nature and seek to optimize some objective function, it seems plausible that machine learning could, on its surface, satisfy the “intelligible principle” requirement of an agency delegation.
The idea of an algorithm being a “black box” of sorts is what particularly scares some individuals. Compared to parametric forms of estimation such as regression, where one obtains a closed-form equation that can be used for prediction, a machine learning algorithm estimates and highlights relationships within the data without producing such an equation. We often do not know the relationship between each of the variables and how it affects the response; “coefficients” in the machine learning universe essentially do not exist.  Thus, for courts to be comfortable with accepting agency usage of machine learning, algorithms will probably have to be designed so that humans remain “in control” and oversee their implementation.
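To make the contrast concrete, here is a small illustrative sketch (not from the original post; toy data and NumPy only). A least-squares regression yields explicit coefficients that a reviewer could inspect, while a simple non-parametric method such as k-nearest-neighbors predicts by averaging nearby observations and produces no equation at all:

```python
import numpy as np

# Toy data: one predictor, one response with a nonlinear relationship.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)

# Parametric: ordinary least squares yields explicit, inspectable
# coefficients -- a closed-form equation y ~ b0 + b1 * x.
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"regression equation: y = {b0:.2f} + {b1:.2f} * x")

# Non-parametric: k-nearest-neighbors predicts by averaging the
# responses of the k closest training points. It can track the curve
# well, but there is no coefficient or equation to report.
def knn_predict(x_new, k=5):
    nearest = np.argsort(np.abs(x - x_new))[:k]
    return y[nearest].mean()

print("k-NN prediction at x = 2.0:", round(knn_predict(2.0), 2))
```

The point is not the particular methods but the asymmetry of what each leaves behind for human review: the first produces two numbers with a plain interpretation, the second produces only predictions.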
Thus, one could argue that, in this situation, using machine learning algorithms is more like using measurement tools to perform a delegated duty than anything else. We must remember that the goal of machine learning is to estimate relationships within data so that they can inform agency decisions. Problems arise only when agencies rely entirely on algorithms as the basis for their decision-making processes. Causality is typically hard to prove, so recognizing what machine learning is and what it is for will help guide us, and the courts, in determining whether agencies have overstepped the “intelligible principle” constraint on congressional delegation.
There may nevertheless be idiosyncratic cases where governmental agencies abuse delegated authority. Although the courts have given agencies wide deference in interpreting and executing congressional mandates under the Chevron doctrine, we still do not know whether the courts will extend this deference to the implementation of algorithms.  Everything here is in the abstract, but we should look ahead to future court cases in which the legality of machine learning will be tested by judges relying on expert testimony and on a body of law that was developed for human actions.
For more information on machine learning and administrative law, see the University of Pennsylvania’s “Optimizing Government Series” as part of the Penn Program on Regulation.
. United States Constitution, Article 1, § 1.
. National Broadcasting Co., Inc. v. United States, 319 U.S. 190, 225-26 (1943).
. J.W. Hampton, Jr. & Co. v. United States, 276 U.S. 394, 409 (1928).
. Richard A. Berk, Statistical Learning from a Regression Perspective 15 (2016).
. Id. at 26.
. Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984).
Photo Credit: Flickr User UCSB Engineering
The opinions and views expressed through this publication are the opinions of the designated authors and do not reflect the opinions or views of the Penn Undergraduate Law Journal, our staff, or our clients.