The Roundtable
Welcome to the Roundtable, a forum for incisive commentary and analysis
on cases and developments in law and the legal system.
Written by Aaron Tsui, Edited by Lyan Casamalhuapa

Aaron Tsui is a junior studying computer engineering and robotics in the School of Engineering and Applied Science, interested in technology law and intellectual property.

While it is more than likely that you have heard the term "AI" in the news or in conversation, have you ever asked yourself: "What is AI?" The obvious answer is "artificial intelligence." From there, you can derive a simple definition: AI is computer-programmed intelligence that can perform actions or reasoning that would otherwise require human intelligence. Simple enough, right? Not quite. Developers, researchers, lawmakers, policymakers, and even pioneers of the field itself have historically held varying definitions of AI. For instance, John McCarthy, a notable pioneer of the field, defines AI as the "science and engineering of making intelligent machines, especially intelligent computer programs." More specifically, he defines 'intelligence' as the "computational part of the ability to achieve goals in the world" [1]. However, this raises the question: to what extent does one computer program achieve a goal while another does not? Has AI evolved so significantly that such a definition now holds little to no weight?
The reality is that a single, universally accepted definition of AI is unlikely to be agreed upon within a specific field, let alone across multiple fields or internationally. While this may pose minimal issues on the technological side, aligning technology with law can create a rather problematic cloud of ambiguity. As AI grows in popularity, both in research and in consumer use, there is a simultaneous growth in concerns about its use. Whether it be data privacy, bias, accountability, intellectual property and copyright issues, or even antitrust and anti-competitive practice, there is an undoubtedly daunting set of obstacles when it comes to putting any technology, but especially AI, into practice. Further, while each iteration of a particular AI technology may be increasingly "better" than the previous, arguably no iteration is, or ever will be, perfect. This is precisely where the law and technology can work in tandem to ensure that fair, safe, and adequately regulated technologies are put into practice. However, issues arise when lawmakers and policymakers attempt to enforce oversight and regulation without first adequately defining the relevant technology and its scope. For instance, in a 2020 white paper, the European Commission, the EU's executive arm, defined AI as "a collection of technologies that combine data, algorithms and computing power" [2]. Considering how wide-ranging the field of AI has become, incorporating other fields such as machine learning to develop more advanced methods, this broad definition poses serious implications for regulation. While laws and policies are often imperfect, with some being overinclusive and others underinclusive, a definition so far removed from the technology it is meant to regulate should be a priority for revision.
The United States has notably taken initiatives such as the White House's "Blueprint for an AI Bill of Rights." This blueprint covers a wide range of issues, from safety to algorithmic discrimination protections to data privacy, yet it fails to state a precise definition of AI or the technology at hand [3]. Beyond this blueprint, the U.S. still lacks both concrete AI regulation laws or policies and relevant definitions. For instance, the National Artificial Intelligence Initiative Act of 2020 (Bill H.R. 6216) has remained in the "introduced" stage since March 12, 2020 [4]. More recently, bills such as the 'No Robot Bosses Act' (S. 2419) were introduced on July 20, 2023 [5]. However, as noted by the European Parliament, despite these initiatives, the U.S. has been slow to enact AI legislation. Specifically, of the at least 75 relevant bills introduced in the 117th Congress, 6 were enacted, and of the at least 40 relevant bills introduced in the 118th Congress as of June 2023, none were enacted [6]. On the other hand, the EU recently put into effect Regulation 2024/1689, the EU AI Act, which tackles the aforementioned broadness of previous definitions by both the European Commission and the U.S. In particular, the Act emphasizes that definitions of AI should be crafted such that they can "distinguish it from simpler traditional software systems or programming approaches" and that the key distinguishing characteristics pertain to "learning, reasoning or modeling" capabilities [7]. While the argument over whether the U.S. should follow in the regulatory-focused footsteps of the EU is undoubtedly a major point of contention in and of itself, the steps taken by the EU in its recent AI Act to clarify ambiguity in the applicable technological scope should be modeled in U.S. law and policy going forward, should the U.S. enact such regulations.
The opinions and views expressed in this publication are the opinions of the designated authors and do not reflect the opinions or views of the Penn Undergraduate Law Journal, our staff, or our clients.

Bibliography

[1] http://jmc.stanford.edu/artificial-intelligence/what-is-ai/
[2] https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf
[3] https://www.whitehouse.gov/ostp/ai-bill-of-rights/
[4] https://www.congress.gov/bill/116th-congress/house-bill/6216
[5] https://www.congress.gov/bill/118th-congress/senate-bill/2419
[6] https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/757605/EPRS_ATA(2024)757605_EN.pdf
[7] https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
November 2024