The Roundtable
Welcome to the Roundtable, a forum for incisive commentary and analysis
on cases and developments in law and the legal system.
By Luis Bravo

Luis Bravo is a sophomore at the University of Pennsylvania studying Sociology.

Spotify playlists, Google advertisements, and Amazon product search results are all tailored to individuals through the power of algorithms. While algorithms are quickly becoming a consequential component of our everyday lives, we are only beginning to learn about their limitations and potentially detrimental impacts. Incidents like Facebook's trending-topics controversy, which centered on the company's suppression of conservative media, indicate that algorithms may be anything but neutral [1]. While the legal system could be a powerful deterrent against algorithmic discrimination, it has yet to adapt to the digital age.

Often referred to as artificial intelligence, algorithms are mathematical procedures performed by computers that can be used to describe data, predict trends, and prescribe courses of action [2]. An algorithm analyzes input data with mathematical formulas and produces an output, usually in the form of a recommendation. While many presume algorithms cannot be biased, bias can enter at every step of the algorithmic process.

Perhaps the most concerning biases are those that replicate and reinforce societal disadvantages. This is the case with Northpointe's COMPAS system, an application designed to help judges make parole decisions by predicting an individual's likelihood of recidivism [3]. Although the company claims it omits race as a variable in predicting an individual's chance of reoffending, African-Americans are disproportionately categorized as dangerous in comparison to white offenders. This effect is a result of Northpointe's incorporation of historical crime data into its algorithm: America's legacy of discrimination against blacks in the criminal justice system makes African-Americans appear more dangerous than their white counterparts.
Though the algorithm itself does not explicitly discriminate against minorities, the incorporation of biased source data creates a feedback loop that ultimately skews results. As more cities across the country adopt similar programs to govern other aspects of life, such as policing and mortgage lending, algorithms could disadvantage protected classes under the guise of mathematical fairness.
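The feedback loop described above can be illustrated with a minimal, synthetic sketch. All of the numbers, names, and the scoring function below are invented for illustration; this is not Northpointe's actual model. The point is that a score can omit race entirely and still reproduce historical disparities whenever it is trained on data shaped by uneven past enforcement, here proxied by neighborhood:

```python
# Hypothetical sketch of a biased-data feedback loop. The "model" never sees
# race; it only sees a neighborhood, but the neighborhood's historical arrest
# rate already encodes where policing was concentrated in the past.

# Synthetic historical arrest rates: heavier past policing inflates "north".
HISTORICAL_ARREST_RATE = {
    "north": 0.40,  # heavily policed area: more arrests were recorded
    "south": 0.10,  # lightly policed area: fewer arrests were recorded
}

def risk_score(neighborhood: str) -> float:
    """'Predict' recidivism risk purely from historical arrest data."""
    return HISTORICAL_ARREST_RATE[neighborhood]

def risk_label(neighborhood: str, threshold: float = 0.25) -> str:
    """Categorize a defendant as high or low risk from the score alone."""
    return "high risk" if risk_score(neighborhood) >= threshold else "low risk"

# Two defendants with identical conduct receive different labels solely
# because of where past enforcement was concentrated.
print(risk_label("north"))  # high risk
print(risk_label("south"))  # low risk
```

Because high-risk labels can in turn justify more policing or harsher sentences in the same neighborhoods, the next generation of "historical" data becomes even more skewed, which is the feedback loop the article warns about.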
Unfortunately, the legal system has not evolved for the 21st century, rendering it largely unable to combat algorithmic discrimination. Existing discrimination laws clearly outline protected classes and distinguish between different types of discrimination [4]. While Title VII lays out some broad provisions, the United States has mostly adopted a sectoral approach, banning discrimination in specific areas of law such as housing and employment [5]. As a result, algorithmic discrimination has been left largely unchecked. Though private litigation and public pressure could serve as mechanisms to curtail companies' inappropriate use of algorithms, regulation is necessary to ensure all people are treated fairly. The Consumer Financial Protection Bureau, the Federal Trade Commission, and the Federal Communications Commission are all governmental agencies that could aid in combating different types of algorithmic discrimination if given expanded authority. Prior to the passage of any legislation, however, Congress must address an array of questions, such as how discrimination can be measured and what constitutes fairness.

Though algorithms may one day prove to be a positive tool for correcting past injustices, they can also perpetuate institutionalized discrimination. Nevertheless, not all hope is lost; steps can and should be taken to mitigate the detrimental impacts of algorithms. Technology companies can adopt voluntary ethical standards guiding their use of algorithms and rigorously test the implications of their applications prior to release. In turn, the government can take an active approach in investigating the potential ramifications of algorithms and adopt regulations to curtail their detrimental impacts. Above all, society should continue to critically evaluate the implications of new technology and work to ensure a fair digital experience for all.

Works Cited

[1] Thompson, Nicholas, and Fred Vogelstein. "Inside Facebook's Two Years of Hell." WIRED. Accessed March 22, 2018. https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/

[2] The Economist. "What Are Algorithms?" Accessed March 22, 2018. https://www.economist.com/blogs/economist-explains/2017/08/economist-explains-24

[3] Angwin, Julia, Jeff Larson, and Surya Mattu. "Machine Bias." ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[4] The U.S. National Archives and Records Administration. "EEO Terminology." National Archives, August 15, 2016. https://www.archives.gov/eeo/terminology.html

[5] U.S. Equal Employment Opportunity Commission. "Laws Enforced by EEOC." Accessed March 22, 2018. https://www.eeoc.gov/laws/statutes/index.cfm

Photo Credit: Pixabay User geralt

The opinions and views expressed through this publication are the opinions of the designated authors and do not reflect the opinions or views of the Penn Undergraduate Law Journal, our staff, or our clients.