Washington and Lee Law Review, Volume 78, Issue 2, Article 4 (Spring 2021)

Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments

Sonia M. Gipson Rankin,* University of New Mexico School of Law

Recommended Citation: Sonia M. Gipson Rankin, Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments, 78 Wash. & Lee L. Rev. 647 (2021), https://scholarlycommons.law.wlu.edu/wlulr/vol78/iss2/4

Abstract

Issues of racial inequality and violence are front and center today, as are issues surrounding artificial intelligence ("AI"). This Article, written by a law professor who is also a computer scientist, takes a deep dive into understanding how and why hacked and rogue AI creates unlawful and unfair outcomes, particularly for persons of color. Black Americans are disproportionately featured in the criminal justice system, and their stories are obfuscated. The seemingly endless back-to-back murders of George Floyd, Breonna Taylor, Ahmaud Arbery, and heartbreakingly countless others have finally shaken the United States from its slumbering journey towards intentional criminal justice reform.
Myths about Black crime and criminals are embedded in the data collected by AI and do not tell the truth about race and crime. However, the number of Black people harmed by hacked and rogue AI will dwarf all historical records, and the gravity of harm is incomprehensible. The lack of technical transparency and legal accountability leaves wrongfully convicted defendants without legal remedies if they are unlawfully detained based on a cyberattack, faulty or hacked data, or rogue AI. Scholars and engineers acknowledge that the artificial intelligence giving recommendations to law enforcement, prosecutors, judges, and parole boards lacks the common sense of an eighteen-month-old child. This Article reviews the ways AI is used in the legal system and the courts' response to this use. It outlines the design schemes of proprietary risk assessment instruments used in the criminal justice system, identifies potential legal theories for victims, and provides recommendations for legal and technical remedies for victims of hacked data in criminal justice risk assessment instruments.

* Assistant Professor of Law, University of New Mexico School of Law. J.D. 2002, University of Illinois College of Law; B.S. Computer Science 1998, Morgan State University. I would like to thank my husband, Eric Rankin, for introducing this issue to me. Thank you to my colleagues, Nathalie Martin, Alfred Mathewson, Ernest Longa, Joshua Kastenberg, Marsha K. Hardeman, DeForest Soaries, David Gipson, and Naomi Rankin, who provided comments on this paper. I am grateful to my wonderful research assistants, Lauren Jones, Marcos Santamaria, Marissa Basler, and Deanna Warren, for their sharp insight, and to the Interdisciplinary Working Group for Algorithmic Justice for their support. This paper is in honor of my children, Naomi, Sarai, and Isaac, and is dedicated to the memory of George Stinney Jr. Say Their Names.
It concludes that, with proper oversight, AI can increase fairness in the criminal justice system, but without this oversight, AI-based products will further exacerbate the extinguishment of liberty interests enshrined in the Constitution. According to anti-lynching advocate Ida B. Wells-Barnett, "The way to right wrongs is to turn the light of truth upon them." Thus, transparency is vital to safeguarding equity through AI design and must be the first step. The Article seeks ways to provide that transparency, for the benefit of all America, but particularly persons of color, who are far more likely to be impacted by AI deficiencies. It also suggests legal reforms that will help plaintiffs recover when AI goes rogue.

Table of Contents

INTRODUCTION
I. ARTIFICIAL INTELLIGENCE CAN BE HACKED AND WEAPONIZED
   A. What Is Artificial Intelligence and How Is It Used?
   B. How Data Is Gathered, Analyzed, and Utilized
   C. Public Fears and the Reality of Concerns Related to AI
      1. Cyberattacks
      2. Untrustworthy Data
      3. Disparate Outcomes
   D. Prevention Efforts by Scientists Are Insufficient
II. RISK ASSESSMENT INSTRUMENTS IN CRIMINAL JUSTICE ARE NOT SECURE
III. LEGAL OPTIONS FOR PEOPLE HARMED BY UNTRUSTWORTHY AI IN CRIMINAL JUSTICE
   A. Impact of Data Errors Outside of the Criminal Justice System
   B. Nefarious Actors Provide False Scientific Evidence in Criminal Cases, Infringing on Civil Liberties
   C. Theories of Accountability and Liability from Case Law for Victims of Potentially Untrustworthy AI Used in Criminal Cases
      1. Legal Response to Data Requests in Criminal and Civil Cases
         a. State v. Loomis
         b. Rodgers v. Laura & John Arnold Foundation
      2. Legal Remedies for Wrongfully Convicted Criminal Defendants Impacted by Hacked Data
      3. Legal Remedies for Third Parties Injured Due to Flawed Data or Algorithms
IV. RECOMMENDATIONS FOR ADDRESSING UNTRUSTWORTHY AI
   A. Hacking Will Have a Disproportionate Impact on Black Communities in the Criminal Justice System
   B. There Must Be Stronger Oversight of AI Used in Criminal Justice
   C. Legal Remedies Are Needed for Parties Harmed by Data Hacks in Criminal Justice Risk Assessment Instruments
CONCLUSION
INTRODUCTION

In Jordan Peele's widely praised 2019 film, Us, protagonist Adelaide meets her doppelgänger, who then proceeds to terrorize Adelaide's family.1 This look-alike is described as her "Tether," the product of a failed and disbanded United States government experiment, deemed harmless and left to remain below ground.2 The "Tethered" escape from their confinement intent on replacing the above-ground community.3 This film captures a growing tension between science and society over artificial intelligence ("AI") today: what happens when AI does not operate as intended? Having AI serve as the foundation of risk assessment instruments, particularly in criminal justice, allows instances of benign neglect, nefarious actors, and unintended consequences to change outcomes in ways that harm society.4 In other words, AI is as likely to contribute to racism in the law as it is to be a means of ending it. The courts have yet to address legal issues related to easily hackable AI. Further, the current remedies in place do not offer sufficient recourse for either wrongfully incarcerated defendants or harmed third parties.

The criminal justice flowchart is well established. A defendant is arrested for a crime, granted or denied bail, convicted, and sentenced, sometimes with the possibility of being eligible for parole. AI has been used to augment every stage of the criminal justice system.

1. US (Universal Pictures 2019); see Tasha Robinson, Jordan Peele's Us Turns a Political Statement into Unnerving Horror, VERGE (Mar. 22, 2019, 10:47 AM), https://perma.cc/TK7F-RA6E.

2. See Tasha Robinson, Does the Ending of Jordan Peele's Us Play Fair With the Audience?, VERGE (Mar. 25, 2019, 3:32 PM), https://perma.cc/ALM7-5K8M.

3. See id. This theme is also explored in the highly regarded Netflix series, Stranger Things. The "upside-down" reflects a distorted version of a small community where the corrupted inhabitants of the mirrored society cross blurred boundaries. See Ashley Strickland, The Weird Upside Down Science Behind 'Stranger Things', CNN (July 4, 2019, 9:55 AM), https://perma.cc/68F6-TAXJ.

4. See United States v. Curry, 965 F.3d 313, 344–46 (4th Cir. 2020) (Thacker, J., concurring) (emphasizing the