Playing Prejudice: The Impact of Game-Play on Attributions of Gender and Racial Bias


Playing Prejudice: The Impact of Game-Play on Attributions of Gender and Racial Bias

Jessica Hammer

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy under the Executive Committee of the Graduate School of Arts and Sciences

COLUMBIA UNIVERSITY
2014

© 2014 Jessica Hammer. All rights reserved.

ABSTRACT

This dissertation explores new possibilities for changing Americans' theories about racism and sexism. Popular American rhetorics of discrimination, and learners' naïve models, focus on individual agents' role in creating bias. These theories do not encompass the systemic and structural aspects of discrimination in American society. When learners can think systemically as well as agentically about bias, they become more likely to support systemic as well as individual remedies. However, shifting from an agentic to a systemic model of discrimination is both cognitively and emotionally challenging. To tackle this difficult task, this dissertation brings together the literatures on prejudice reduction and conceptual change to propose using games as an entertainment-based intervention to change players' attribution styles around sexism and racism, as well as their attitudes about the same issues. "Playable model – anomalous data" (PMAD) theory proposes that games can model complex systems of bias while instantiating learning mechanics that help players confront the limits of their existing models.

The web-based game Advance was designed using playable model – anomalous data theory, and was used to investigate three questions. First, can a playable model – anomalous data game change players' likelihood of using systemic explanations for bias, and how does its effectiveness compare to that of a control text? Second, how does the game change players' attitudes as compared to a control text? Finally, are there differences between three versions of the game that offer players different rewards for investigating the bias in the game system?

Advance did not outperform the control text at changing players' likelihood of using systemic attributions for racism and sexism, nor did it outperform the control text in changing players' attitudes. However, significant differences were found between White and non-White player populations in their sensitivity to the different game conditions. White players were unaffected by differences between versions of the game, while non-White players showed differences in play behaviors, in systemic attribution likelihood, and in attitude. Given that White Americans may have more entrenched ideas about discrimination in America, we consider the impacts of the game on non-White player populations as an indicator of what future development of playable model – anomalous data games may be able to achieve.

Table of Contents

List of Tables
List of Figures
Chapter 1: Introduction
Chapter 2: Literature Review
    Models of Discrimination
    Reducing Prejudice
    Achieving Conceptual Change
    Game Design for Conceptual Change
Chapter 3: Design
    Playable Models, Anomalous Data
    PMAD Design Principles
    Game Design Overview
    Sample of Gameplay
    Modeling Race and Gender
    Modeling Bias
    Reward System Design
Chapter 4: Methods
    Research Questions
    Procedures
    Subjects
    Instruments
        Attribution tests
        Attitude analysis
        In-game data collection
        Demographic data
    Data Processing
        Attribution data
        Attitude data
        In-game data
        Demographic data
    Data Analysis
    Conclusion
Chapter 5: Results
    Player Source Analyses
    Analysis of Web-Recruited Players
        Demographics
        Mortality and priming
        Attribution type
        Attitudes
    Analysis of Mechanical Turk Players
        Demographics
        Mortality and priming
        Attribution type
        Attitudes
    Analysis by Player Group
Chapter 6: Summary and Discussion
    Project Summary
    Literature
    Design
Recommended publications
  • ALGORITHMIC BIAS EXPLAINED: How Automated Decision-Making Becomes Automated Discrimination
    ALGORITHMIC BIAS EXPLAINED: How Automated Decision-Making Becomes Automated Discrimination. Table of Contents: Introduction (What Are Algorithms and How Do They Work?; What Is Algorithmic Bias and Why Does It Matter?; Is Algorithmic Bias Illegal?; Where Does Algorithmic Bias Come From?); Algorithmic Bias in Healthcare; Algorithmic Bias in Employment; Algorithmic Bias in Government Programs; Algorithmic Bias in Education; Algorithmic Bias in Credit and Finance; Algorithmic Bias in Housing and Development; Algorithmic Bias in Everything Else: Price Optimization Algorithms; Recommendations for Fixing Algorithmic Bias (Algorithmic Transparency and Accountability; Race-Conscious Algorithms; Algorithmic Greenlining); Conclusion. Introduction: Over the last decade, algorithms have replaced decision-makers at all levels of society. Judges, doctors and hiring managers are shifting their responsibilities onto powerful algorithms that promise more data-driven, efficient, accurate and fairer decision-making. However, poorly designed algorithms threaten to amplify systemic racism by reproducing patterns of discrimination and bias that are found in the data algorithms use to learn and make decisions. "We find it important to state that the benefits of any technology should be felt by all of us. Too often, the challenges presented by new technology spell out yet another tale of racism, sexism, gender inequality, ableism and lack of consent within digital culture." —Mimi Onuoha and Mother Cyborg, authors, "A People's Guide to A.I." The goal of this report is to help advocates and policymakers develop a baseline understanding of algorithmic bias and its impact as it relates to socioeconomic opportunity across multiple sectors.
  • Chevron's Abusive Litigation in Ecuador
    Rainforest Chernobyl Revisited — The Clash of Human Rights and BIT Investor Claims: Chevron's Abusive Litigation in Ecuador's Amazon, by Steven Donziger, Laura Garr & Aaron Marr Page. A Marathon Environmental Litigation: Seventeen Years and Counting. The last time the environmental lawsuit Aguinda v. ChevronTexaco was discussed in these pages, the defendant Chevron Corporation had just won a forum non conveniens dismissal of the case from a U.S. federal court to Ecuador after nine years of litigation. Filed in 1993, the lawsuit alleged that Chevron's predecessor company, Texaco, while it exclusively operated several oil fields in Ecuador's Amazon from 1964 to 1990, deliberately dumped billions of gallons of toxic waste into the rainforest to cut costs and abandoned more than 900 large unlined waste pits that leach toxins into soils and groundwater. The suit contended that the contamination poisoned an area the size of Rhode Island, created a cancer epidemic, and decimated indigenous groups. During the U.S. stage of the litigation, Chevron submitted fourteen sworn affidavits attesting to the fairness and adequacy of Ecuador's courts. The company also drafted a letter that was signed by Ecuador's then ambassador to the United States, a former Chevron lawyer, asking the U.S. court to send the case to Ecuador. [Photo caption: Steven Donziger, attorney for the affected communities, speaks with Huaorani women outside the Superior Court at the start of the Chevron trial on October 21, 2003 in Lago Agrio in the Ecuadoran Amazon. By Lou Dematteis/Redux.] Representative of Chevron's position was the sworn statement from Dr.
  • Accountability As a Debiasing Strategy: Does Race Matter?
    Accountability as a Debiasing Strategy: Does Race Matter? Jamillah Bowman Williams, J.D., Ph.D., Georgetown University Law Center. Paper presented at CULP Colloquium, Duke University, May 19, 2016. Introduction: Congress passed Title VII of the Civil Rights Act of 1964 with the primary goal of integrating the workforce and eliminating arbitrary bias against minorities and other groups who had been historically excluded. Yet substantial research reveals that racial bias persists and continues to limit opportunities and outcomes for racial minorities in the workplace. It has been argued that having a sense of accountability, or "the implicit or explicit expectation that one may be called on to justify one's beliefs, feelings, and actions to others," can decrease the influence of bias. This empirical study seeks to clarify the conditions under which accountability to a committee of peers influences bias and behavior. This project builds on research by Sommers et al. (2006; 2008), which found that whites assigned to racially diverse groups generated a wider range of perspectives, processed facts more thoroughly, and were more likely to discuss polarizing social issues than all-white groups. This project extends this line of inquiry to the employment discrimination context to empirically examine how a committee's racial composition influences the decision-making process. For example, when making a hiring or promotion decision, does accountability to a racially diverse committee lead to more positive outcomes than accountability to a homogeneous committee? More specifically, how does the race of the committee members influence complex thinking, diversity beliefs, acknowledgement of structural discrimination, and inclusive promotion decisions? Implications for antidiscrimination law and EEO policy will be discussed.
  • How to Prevent Discriminatory Outcomes in Machine Learning
    White Paper: How to Prevent Discriminatory Outcomes in Machine Learning. Global Future Council on Human Rights 2016-2018, March 2018. Contents: Foreword; Executive Summary; Introduction; Section 1: The Challenges (Issues Around Data — What data are used to train machine learning applications? What are the sources of risk around training data for machine learning applications? What-if use case: Unequal access to loans for rural farmers in Kenya; What-if use case: Unequal access to education in Indonesia; Concerns Around Algorithm Design — Where is the risk for discrimination in algorithm design and deployment? What-if use case: Exclusionary health insurance systems in Mexico; What-if scenario: China and social credit scores); Section 2: The Responsibilities of Businesses (Principles for Combating Discrimination in Machine Learning; Bringing principles of non-discrimination to life: Human rights due diligence for machine learning; Making human rights due diligence in machine learning effective); Conclusion; Appendix 1: Glossary/Definitions; Appendix 2: The Challenges – What Can Companies Do?; Appendix 3: Principles on the Ethical Design and Use of AI and Autonomous Systems; Appendix 4: Areas of Action Matrix for Human Rights in Machine Learning; Acknowledgements. This paper has been written by the World Economic Forum Global Future Council on Human Rights 2016-18. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum, but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders, nor the individual Global Future Council members listed. © 2018 World Economic Forum – All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.
  • I Appreciate the Spirit of the Rule Change, and I Agree That We All
    From: Javed Abbas To: stevens, cheryl Subject: Comment On Proposed Rule Change - 250.2 Date: Saturday, March 27, 2021 8:42:43 PM Attachments: image002.png I appreciate the spirit of the rule change, and I agree that we all likely benefit by understanding the fundamental ways in which we are all connected, and that our society will either succeed or fail primarily based on our ability to cooperate with each other and treat each other fairly. However, I do not support the rule change because I do not think it will have the effect of making us more inclusive because of the ways humans tend to react when forced to do things. There will inevitably be push back from a group of people who do not support the spirit of the rule, that push back will probably be persuasive to people who don’t have strong feelings about the ideas one way or the other but will feel compelled to oppose the ideas because it will be portrayed as part of a political movement that they are inclined to oppose, and ultimately the Rule that is supposed to help us become more inclusive will actually help drive us further apart. I do think it’s a great idea to offer these CLEs and let them compete on equal footing in the marketplace of ideas. Rule Change: "Rule 250.2. CLE Requirements (1) CLE Credit Requirement. Every registered lawyer and every judge must complete 45 credit hours of continuing legal education during each applicable CLE compliance period as provided in these rules.
  • Bibliography: Bias in Artificial Intelligence
    NIST A.I. Reference Library Bibliography: Bias in Artificial Intelligence Abdollahpouri, H., Mansoury, M., Burke, R., & Mobasher, B. (2019). The unfairness of popularity bias in recommendation. arXiv preprint arXiv:1907.13286. Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., & Robinson, D. G. (2020, January). Roles for computing in social change. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 252-260. Aggarwal, A., Shaikh, S., Hans, S., Haldar, S., Ananthanarayanan, R., & Saha, D. (2021). Testing framework for black-box AI models. arXiv preprint. Retrieved from https://arxiv.org/pdf/2102.06166.pdf Ahmed, N., & Wahed, M. (2020). The De-democratization of AI: Deep learning and the compute divide in artificial intelligence research. arXiv preprint arXiv:2010.15581. Retrieved from https://arxiv.org/ftp/arxiv/papers/2010/2010.15581.pdf. AI Now Institute. Algorithmic Accountability Policy Toolkit. (2018). Retrieved from: https://ainowinstitute.org/aap-toolkit.html. Aitken, M., Toreini, E., Charmichael, P., Coopamootoo, K., Elliott, K., & van Moorsel, A. (2020, January). Establishing a social licence for Financial Technology: Reflections on the role of the private sector in pursuing ethical data practices. Big Data & Society. doi:10.1177/2053951720908892 Ajunwa, I. (2016). Hiring by Algorithm. SSRN Electronic Journal. doi:10.2139/ssrn.2746078 Ajunwa, I. (2020, Forthcoming). The Paradox of Automation as Anti-Bias Intervention, 41 Cardozo, L. Rev. Amini, A., Soleimany, A. P., Schwarting, W., Bhatia, S. N., & Rus, D. (2019, January). Uncovering and mitigating algorithmic bias through learned latent structure. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 289-295. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D.
  • Implicit Bias: How to Recognize It and What Campus Gcs Should Do
    2019 Annual Conference, Hyatt Regency Denver at Colorado Convention Center, Denver, CO, June 23–26, 2019. Implicit Bias (02B): How to Recognize It and What Campus GCs Should Do About It. NACUA members may reproduce and distribute copies of materials to other NACUA members and to persons employed by NACUA member institutions if they provide appropriate attribution, including any credits, acknowledgments, copyright notice, or other such information contained in the materials. All materials available as part of this program express the viewpoints of the authors or presenters and not of NACUA. The content is not approved or endorsed by NACUA. The content should not be considered to be or used as legal advice. Legal questions should be directed to institutional legal counsel. IMPLICIT BIAS: WHAT IS IT AND WHAT CAN GENERAL COUNSELS DO ABOUT IT? June 23–26, 2019. Kathlyn G. Perez, Shareholder, Baker Donelson, New Orleans, Louisiana; and Kathy Carlson, J.D., Executive Director, Department of Internal Investigations, Office of Legal & Regulatory Affairs, University of Texas Medical Branch, Galveston, Texas. I. What is Implicit Bias?; II. Implicit Bias in Corporations and Elementary and Secondary Education; III. Higher Education Institutions as Employers; A. Implicit Bias in Hiring, Grant Making, and Assignments
  • Filtering Practices of Social Media Platforms Hearing
    FILTERING PRACTICES OF SOCIAL MEDIA PLATFORMS. Hearing before the Committee on the Judiciary, House of Representatives, One Hundred Fifteenth Congress, Second Session, April 26, 2018. Serial No. 115–56. Printed for the use of the Committee on the Judiciary. Available via the World Wide Web: http://judiciary.house.gov. U.S. Government Publishing Office, Washington: 2018. COMMITTEE ON THE JUDICIARY — BOB GOODLATTE, Virginia, Chairman. Majority members: F. JAMES SENSENBRENNER, JR., Wisconsin; LAMAR SMITH, Texas; STEVE CHABOT, Ohio; DARRELL E. ISSA, California; STEVE KING, Iowa; LOUIE GOHMERT, Texas; JIM JORDAN, Ohio; TED POE, Texas; TOM MARINO, Pennsylvania; TREY GOWDY, South Carolina; RAÚL LABRADOR, Idaho; BLAKE FARENTHOLD, Texas; DOUG COLLINS, Georgia; KEN BUCK, Colorado; JOHN RATCLIFFE, Texas; MARTHA ROBY, Alabama; MATT GAETZ, Florida; MIKE JOHNSON, Louisiana; ANDY BIGGS, Arizona; JOHN RUTHERFORD, Florida; KAREN HANDEL, Georgia; KEITH ROTHFUS, Pennsylvania. Minority members: JERROLD NADLER, New York; ZOE LOFGREN, California; SHEILA JACKSON LEE, Texas; STEVE COHEN, Tennessee; HENRY C. "HANK" JOHNSON, JR., Georgia; THEODORE E. DEUTCH, Florida; LUIS V. GUTIÉRREZ, Illinois; KAREN BASS, California; CEDRIC L. RICHMOND, Louisiana; HAKEEM S. JEFFRIES, New York; DAVID CICILLINE, Rhode Island; ERIC SWALWELL, California; TED LIEU, California; JAMIE RASKIN, Maryland; PRAMILA JAYAPAL, Washington; BRAD SCHNEIDER, Illinois; VALDEZ VENITA "VAL" DEMINGS, Florida. SHELLEY HUSBAND, Chief of Staff and General Counsel; PERRY APELBAUM, Minority Staff Director and Chief Counsel. Contents, April 26, 2018 — Opening Statements: The Honorable Bob Goodlatte, Virginia, Chairman, Committee on the Judiciary.
  • Debugging Software's Schemas
    Debugging Software's Schemas. Kristen Osenga. ABSTRACT: The analytical framework being used to assess the patent eligibility of software and computer-related inventions is fraught with errors, or bugs, in the system. A bug in a schema, or framework, in computer science may cause the system or software to produce unexpected results or shut down altogether. Similarly, errors in the patent eligibility framework are causing unexpected results, as well as calls to shut down patent eligibility for software and computer-related inventions. There are two general schemas that are shaping current discussions about software and computer-related invention patents—that software patents are generally bad (the bad patent schema) and that software patent holders are problematic (the troll schema). Because these frameworks were created and are maintained through a series of cognitive biases, they suffer from a variety of bugs. A larger flaw in the system, however, is that using these two schemas to frame the issue of patent eligibility for software and computer-related inventions misses the underlying question that is at the heart of the analysis—what is an unpatentable "abstract idea." To improve the present debate about the patent eligibility for these inventions, it is therefore critical that the software patent system be debugged. TABLE OF CONTENTS: Introduction; I. The State of Software Patents (A. What Is Software; B. The Software Patent Mess); II. The Biases in Software's Schemas; A.
  • Censorship, Free Speech & Facebook
    Washington Journal of Law, Technology & Arts, Volume 15, Issue 1, Article 3 (12-13-2019). Censorship, Free Speech & Facebook: Applying the First Amendment to Social Media Platforms via the Public Function Exception. Matthew P. Hooker. Part of the First Amendment Commons, Internet Law Commons, and the Privacy Law Commons. Recommended Citation: Matthew P. Hooker, Censorship, Free Speech & Facebook: Applying the First Amendment to Social Media Platforms via the Public Function Exception, 15 WASH. J. L. TECH. & ARTS 36 (2019). Available at: https://digitalcommons.law.uw.edu/wjlta/vol15/iss1/3. This Article is brought to you for free and open access by the Law Reviews and Journals at UW Law Digital Commons. It has been accepted for inclusion in Washington Journal of Law, Technology & Arts by an authorized editor of UW Law Digital Commons. ABSTRACT: Society has a love-hate relationship with social media. Thanks to social media platforms, the world is more connected than ever before. But with the ever-growing dominance of social media there has come a mass of challenges. What is okay to post? What isn't? And who or what should be regulating those standards? Platforms are now constantly criticized for their content regulation policies, sometimes because they are viewed as too harsh and other times because they are characterized as too lax.
  • Alliance the Color of Justice-2019
    THE COLOR OF JUSTICE — The Landscape of Traumatic Justice: Youth of Color in Conflict with the Law. The Alliance of National Psychological Associations for Racial and Ethnic Equity: The Association of Black Psychologists, The Asian American Psychological Association, The National Latina/o Psychological Association, The American Psychological Association, The Society of Indian Psychologists. 2019. Authors: Roberto Cancio, Ph.D.; Cheryl Grills, Ph.D.; Jennifer García, Ph.D. With contributions from the Youth Justice Coalition. Publication, 2019. Recommended Citation: Cancio, R., Grills, C. T., and García, J. (2019). The Color of Justice: The Landscape of Traumatic Justice: Youth of Color in Conflict with the Law. The Alliance of National Psychological Associations for Racial and Ethnic Equity. Acknowledgments: This document was developed by The Alliance of National Psychological Associations for Racial and Ethnic Equity. We would like to thank the following people for their invaluable contributions and revisions to this document: Claudette Antuña, Leah Rouse Arndt, Eddie Becton, Kim McGill, Gayle Morse, Kevin Nadal, Amorie Robinson, and Sandra Villanueva. We would also like to thank the Youth Justice Coalition youth who generously shared their stories. The Annie E. Casey Foundation provided support for this report. The Casey Foundation is a private philanthropy that creates a brighter future for the nation's children by developing solutions to strengthen families, build paths to economic opportunity and transform struggling communities into safer and healthier places to live, work and grow.
  • The Intersection of Race and Algorithmic Tools in the Criminal Legal System
    Maryland Law Review, Volume 80, Issue 3, Article 1. The Intersection of Race and Algorithmic Tools in the Criminal Legal System. Vincent M. Southerland. Part of the Law and Race Commons. Recommended Citation: Vincent M. Southerland, The Intersection of Race and Algorithmic Tools in the Criminal Legal System, 80 Md. L. Rev. 487 (2021). Available at: https://digitalcommons.law.umaryland.edu/mlr/vol80/iss3/1. This Article is brought to you for free and open access by the Academic Journals at DigitalCommons@UM Carey Law. It has been accepted for inclusion in Maryland Law Review by an authorized editor of DigitalCommons@UM Carey Law. A growing portion of the American public—including policymakers, advocates, and institutional stakeholders—has accepted the fact that racism endemic to the United States infects every stage of the criminal legal system. Acceptance of this fact has resulted in efforts to address and remedy pervasive and readily observable systemic bias. Chief among those efforts is a turn toward technology—specifically algorithmic decision-making and actuarial tools. Many have welcomed the embrace of technology, confident that technological tools can solve a problem—race-based inequity—that has bedeviled humans for generations. This Article engages that embrace by probing the adoption of technological tools at various sites throughout the criminal legal system and exploring their efficacy as a remedy to racial inequality.