
Dissertation

The Unforeseen Consequences of Artificial Intelligence (AI) on Society
A Systematic Review of Regulatory Gaps Generated by AI in the U.S.

Carlos Ignacio Gutierrez Gaviria

This document was submitted as a dissertation in January 2020 in partial fulfillment of the requirements of the doctoral degree in public policy analysis at the Pardee RAND Graduate School. The faculty committee that supervised and approved the dissertation consisted of Dave Baiocchi (Chair), Nidhi Kalra, John Seely Brown, and William Welser IV. This work was funded by the Government of Mexico, the Horowitz Foundation for Social Policy, and by the Pardee RAND Graduate School through its Redesign Dissertation Award.

PARDEE RAND GRADUATE SCHOOL

For more information on this publication, visit http://www.rand.org/pubs/rgs_dissertations/RGSDA319-1.html

Published 2020 by the RAND Corporation, Santa Monica, Calif.
R® is a registered trademark.

Limited Print and Electronic Distribution Rights
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited. Permission is given to duplicate this document for personal use only, as long as it is unaltered and complete. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial use. For information on reprint and linking permissions, please visit www.rand.org/pubs/permissions.html.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.

Support RAND
Make a tax-deductible charitable contribution at www.rand.org/giving/contribute

www.rand.org

Abstract

As a formal discipline, Artificial Intelligence (AI) is over 60 years old. In this time, breakthroughs in the field have generated technologies that compare to or outperform humans in tasks requiring creativity and complex reasoning. AI's growing catalog of applications and methods has the potential to profoundly affect public policy by generating instances where regulations are not adequate to confront the issues faced by society, also known as regulatory gaps. The objective of this dissertation is to improve our understanding of how AI influences U.S. public policy. It systematically explores, for the first time, the role of AI in the generation of regulatory gaps. Specifically, it addresses two research questions:

1. What U.S. regulatory gaps exist due to AI methods and applications?
2. When looking across all of the gaps identified in the first research question, what trends and insights emerge that can help stakeholders plan for the future?

These questions are answered through a systematic review of four academic databases of literature in the hard and social sciences. Its implementation was guided by a protocol that initially identified 5,240 candidate articles. A screening process reduced this sample to 241 articles (published between 1976 and February of 2018) relevant to answering the research questions. This dissertation contributes to the literature by adapting the work of Bennett-Moses and Calo to effectively characterize regulatory gaps caused by AI in the U.S.
In addition, it finds that most gaps do not require new regulation or the creation of governance frameworks for their resolution, are found at the federal and state levels of government, and are attributed more often to AI applications than to AI methods.

Executive Summary

As a formal discipline, Artificial Intelligence (AI) is over 60 years old. In this time, breakthroughs in the field have generated technologies that compare to or outperform humans in tasks requiring creativity and complex reasoning. All sectors of the economy are increasingly subject to this technology's influence thanks to rapid advances in information processing and consumer demand for competitive offerings. Many of AI's applications or methods have no discernible effect on how existing regulations or policies are interpreted or applied.1 In other words, they are policy agnostic. However, AI has the potential to profoundly impact public policy. The progress towards achieving parity between machine processing and human cognition has generated instances where public policies are not adequate to confront the issues faced by society, also known as regulatory gaps.

The literature on the relationship between policy and AI is generally siloed and, as Calo points out, limited resources have been dedicated to taking a broad look across the corpus of this technology's impact.2,3 The objective of this dissertation is to respond to the call for a thorough and systematic analysis of the literature on the intersection between AI and policy. It contributes to this field's scholarship by systematically identifying, for the first time, the role of AI in the generation of regulatory gaps. Specifically, it addresses two research questions:

1. What U.S. regulatory gaps exist due to AI methods and applications?
2. When looking across all of the gaps identified in the first research question, what trends and insights emerge that can help stakeholders plan for the future?

To answer these questions, I performed a systematic review of the literature. This methodology was selected because it "attempts to collect and analyze all evidence that answers a specific question" through a "broad and thorough search of the literature."4 The implementation of the systematic review was guided by a protocol that initially identified 5,240 candidate articles within four academic literature databases that incorporate different lenses in the hard and social sciences (they include legal and computer science scholarship, among others). A screening process reduced the sample to a final set of 241 articles (published between 1976 and February of 2018) that were directly relevant to answering these research questions.

Two ideas were fundamental in characterizing the regulatory gaps in the systematic review. The first is a framework adapted from Bennett-Moses's work that describes the origin of regulatory gaps.5 The left side of Table 1 identifies four ways in which technology can create a gap. The second idea, on the right side of Table 1, is adapted from Ryan Calo's work on uncovering the social themes where policy interacts with AI.6 These ideas made it possible to carefully review the regulatory gaps within the 241 articles and explore their trends.

Table 1 – Key Ideas Used to Detect and Categorize Regulatory Gaps

Characterization of Regulatory Gaps by Bennett-Moses (2007):
- Novelty: Technology creates behavior that requires bespoke government action.
- Targeting: With respect to a policy goal, technology causes circumstances in which the regulation's application is not directed to the goal but falls within its scope (over-inclusiveness), or circumstances falling outside its scope where its application would further the goal (under-inclusiveness).
- Uncertainty: Conflict arises because there are contradictions, inconsistencies, or doubts about a technology's classification.
- Obsolescence: A technology makes a regulation irrelevant or unenforceable.

Regulatory Gap Themes by Ryan Calo (2017):
- Use of Force: Utilization of autonomous weapon systems.
- Safety and Certification: Role of government in preventing humans from experiencing harms.
- Privacy: Shielding an individual's information from society.
- Personhood: Assigning human rights and responsibilities to non-humans.
- Displacement of Labor: Role of technology in replacing humans in the labor force.
- Justice System: Effects of technology on the operation of courts.
- Accountability: Responsibility for pecuniary and non-pecuniary harms.
- Classification of Individuals: Utilization of labels to discriminate against people.

This systematic review identified 50 regulatory gaps caused by AI methods or applications. These were catalogued based on: the type of gap (Bennett-Moses's framework), the theme they fell under (Ryan Calo's taxonomy), the level of government involved (federal, state, or local), their temporality (whether the gap is experienced today or anticipated in the future), and whether the gap is caused by an application (a technology's purpose) or a method (the process or procedure used to accomplish that purpose) of AI. It is important to note that articles in the systematic review were not screened based on a specific definition of AI. Instead, the review relied on the review process within academic publications to validate the use of the term.

Each characterization of a gap roughly follows the same format. It begins by asserting the type of regulatory gap identified (based on Bennett-Moses's framework), includes background information on the subject, and presents evidence that supports its classification. To analyze

Notes
1. Lyria Bennett-Moses, Recurring Dilemmas: The Law's Race to Keep up with Technological Change, UNSW LAW RESEARCH PAPER (2007).
2. Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, SSRN (2017).
3. Calo notes that "notably missing is any systematic review of the ways AI challenges existing legal doctrines." Id. at.
4. CDC, Systematic Reviews (2019), available at https://www.cdc.gov/library/researchguides/sytemsaticreviews.html.
5. Bennett-Moses, UNSW LAW RESEARCH PAPER (2007).
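As an informal illustration of the cataloguing scheme described in the Executive Summary, the following minimal Python sketch encodes the five coding dimensions, with Bennett-Moses's four gap types and Calo's eight themes as enumerations. This code is not part of the dissertation's methodology or tooling; all class, field, and example names are the editor's own hypothetical choices.

```python
# Illustrative sketch of the coding scheme described above.
# Not the dissertation's actual tooling; all names are hypothetical.
from dataclasses import dataclass
from enum import Enum


class GapType(Enum):
    """Bennett-Moses (2007): how a technology creates a regulatory gap."""
    NOVELTY = "novelty"
    TARGETING = "targeting"          # over- or under-inclusiveness
    UNCERTAINTY = "uncertainty"
    OBSOLESCENCE = "obsolescence"


class CaloTheme(Enum):
    """Calo (2017): social themes where policy interacts with AI."""
    USE_OF_FORCE = "use of force"
    SAFETY_AND_CERTIFICATION = "safety and certification"
    PRIVACY = "privacy"
    PERSONHOOD = "personhood"
    DISPLACEMENT_OF_LABOR = "displacement of labor"
    JUSTICE_SYSTEM = "justice system"
    ACCOUNTABILITY = "accountability"
    CLASSIFICATION_OF_INDIVIDUALS = "classification of individuals"


class GovernmentLevel(Enum):
    FEDERAL = "federal"
    STATE = "state"
    LOCAL = "local"


class Temporality(Enum):
    PRESENT = "present"      # gap experienced today
    FUTURE = "future"        # gap anticipated in the future


class Cause(Enum):
    APPLICATION = "application"  # a technology's purpose
    METHOD = "method"            # process/procedure to accomplish the purpose


@dataclass
class RegulatoryGap:
    """One catalogued gap, coded along the five dimensions."""
    description: str
    gap_type: GapType
    theme: CaloTheme
    level: GovernmentLevel
    temporality: Temporality
    cause: Cause


# Hypothetical example record, for illustration only:
example = RegulatoryGap(
    description="Liability for harms caused by an automated system",
    gap_type=GapType.UNCERTAINTY,
    theme=CaloTheme.ACCOUNTABILITY,
    level=GovernmentLevel.FEDERAL,
    temporality=Temporality.PRESENT,
    cause=Cause.APPLICATION,
)
```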