Algorithmic Equity: A Framework for Social Applications

Total pages: 16

File type: PDF, size: 1,020 KB

Algorithmic Equity: A Framework for Social Applications

Osonde A. Osoba, Benjamin Boudreaux, Jessica Saunders, J. Luke Irwin, Pam A. Mueller, Samantha Cherney

For more information on this publication, visit www.rand.org/t/RR2708

Library of Congress Cataloging-in-Publication Data is available for this publication. ISBN: 978-1-9774-0313-1. Published by the RAND Corporation, Santa Monica, Calif. © Copyright 2019 RAND Corporation. R® is a registered trademark. Cover: Data image/monsitj/iStock by Getty Images; Scales of Justice/DNY59/Getty Images.

Limited Print and Electronic Distribution Rights: This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited. Permission is given to duplicate this document for personal use only, as long as it is unaltered and complete. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial use. For information on reprint and linking permissions, please visit www.rand.org/pubs/permissions.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.

Support RAND: make a tax-deductible charitable contribution at www.rand.org/giving/contribute | www.rand.org

Preface

This report examines potential decisionmaking pathologies in social institutions that increasingly leverage algorithms. We frame social institutions as collections of decision pipelines in which algorithms increasingly feature either as decision aids or as decisionmakers. We then examine how different perspectives on equity or fairness inform the use of algorithms in the context of three sample institutions: auto insurance, recruitment, and criminal justice. Finally, we highlight challenges and recommendations to improve equity in the use of algorithms for institutional decisionmaking.

This report is funded by RAND internal funds. The goal is to inform decisionmakers and regulators grappling with equity considerations in the deployment of algorithmic decision aids, particularly in public institutions. We hope our report can also serve to inform private-sector decisionmakers and the lay public.

Community Health and Environmental Policy Program

RAND Social and Economic Well-Being is a division of the RAND Corporation that seeks to actively improve the health and social and economic well-being of populations and communities throughout the world. This research was conducted in the Community Health and Environmental Policy Program within RAND Social and Economic Well-Being. The program focuses on such topics as infrastructure, science and technology, community design, community health promotion, migration and population dynamics, transportation, energy, and climate and the environment, as well as other policy concerns that are influenced by the natural and built environment, technology, and community organizations and institutions that affect well-being. For more information, email [email protected].
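To make the Preface's decision-pipeline framing concrete, here is a minimal notional sketch (not from the report; all names and numbers are hypothetical) showing the same algorithmic score acting either as a decision aid, where a human retains the final say, or as the decisionmaker itself:

```python
# Notional sketch of the "decision pipeline" framing (hypothetical names;
# not code from the report). An algorithmic score can enter the pipeline
# as a decision aid (a human decides) or as the decisionmaker (a threshold
# rule on the score decides).

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Applicant:
    features: dict  # e.g., {"years_licensed": 4, "prior_claims": 1}


def risk_score(applicant: Applicant) -> float:
    # Stand-in for a learned model; a toy linear score for illustration.
    prior_claims = applicant.features.get("prior_claims", 0)
    years_licensed = max(applicant.features.get("years_licensed", 1), 1)
    return 0.1 * prior_claims + 0.02 / years_licensed


def decide(applicant: Applicant,
           human_review: Optional[Callable[[Applicant, float], bool]] = None) -> bool:
    # With human_review supplied, the score is a decision aid;
    # without it, the algorithm is the decisionmaker.
    score = risk_score(applicant)
    if human_review is not None:
        return human_review(applicant, score)
    return score < 0.15


a = Applicant(features={"years_licensed": 4, "prior_claims": 1})
print(decide(a))                                      # algorithm as decisionmaker
print(decide(a, human_review=lambda ap, s: s < 0.2))  # algorithm as decision aid
```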
RAND Ventures

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest.

RAND Ventures is a vehicle for investing in policy solutions. Philanthropic contributions support our ability to take the long view, tackle tough and often-controversial topics, and share our findings in innovative and compelling ways. RAND's research findings and recommendations are based on data and evidence, and therefore do not necessarily reflect the policy preferences or interests of its clients, donors, or supporters.

Funding for this venture was provided by gifts from RAND supporters and income from operations.

Contents

Preface
Figures and Tables
Summary
Abbreviations

CHAPTER ONE: Introduction
  The Rise of Algorithmic Decisionmaking
  Identifying and Scoping the Research Questions
  Approach and Methods
  Organization of This Report

CHAPTER TWO: Concepts of Equity
  Perspectives on Equity
  Equity in Practice: Impossibilities and Trade-Offs
  Competing Concepts of Equity: The Case of COMPAS
  Metaethical Aspects of Algorithmic Equity

CHAPTER THREE: Domain Exploration: Algorithms in Auto Insurance
  The Auto Insurance Decision Pipeline
  Relevant Concepts of Equity in Auto Insurance
  Equity Challenges
  Closing Thoughts

CHAPTER FOUR: Domain Exploration: Algorithms in Employee Recruitment
  Relevant Concepts of Equity in Employment
  Benefits of Recruitment Algorithms
  Employment Algorithm Decision Pipeline
  Equity Challenges in Recruitment Algorithms
  Closing Thoughts

CHAPTER FIVE: Domain Exploration: Algorithms in U.S. Criminal Justice Systems
  The Criminal Justice Decision Pipeline
  Relevant Concepts of Equity in Criminal Justice
  Equity Challenges
  Closing Thoughts

CHAPTER SIX: Analytic Case Study: Auditing Algorithm Use in North Carolina's Criminal Justice System
  Background
  Assessment of Algorithmic Bias
  Summary of Audit Analysis

CHAPTER SEVEN: Insights and Recommendations
  Insights from Domain Explorations
  Recommendations and Useful Frameworks
  Closing Remarks

APPENDIX A: Supplemental Tables

References

Figures
  3.1. State Auto Insurance Regulatory Regimes
  3.2. Notional Rendition of the Auto Insurance Decision Pipeline
  4.1. Notional Rendition of Decision Pipeline for Employment and Recruiting
  5.1. Notional Rendition of Decision Pipeline for the U.S. Criminal Justice System

Tables
  2.1. Summary of the Simpler Statistical Concepts of Equity
  6.1. Relationship Between Supervision Levels and Risk/Needs Levels
  6.2. Supervision Level Minimum Contact Standards
  6.3. Distribution of Probation Assessment Results by Race
  6.4. One-Year Violation Rates by Race and by Supervision Level
  6.5. Summary of Multivariate Logistic Regression Results—Odds Ratio
  7.1. Notional Basic
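Chapter Six's audit (Table 6.5) summarizes its findings as odds ratios from a multivariate logistic regression of one-year violation outcomes on race and supervision level. As a notional illustration of that style of analysis, not the report's actual code or data (the column names and synthetic data below are hypothetical), the computation could look like this with statsmodels:

```python
# Notional audit-style analysis echoing Table 6.5 (hypothetical columns and
# synthetic data; not the report's code). Fits a logistic regression of a
# one-year violation indicator on race and supervision level, then reports
# exponentiated coefficients as odds ratios.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "violation": rng.integers(0, 2, n),              # 1 = violated within one year
    "race": rng.choice(["black", "white"], n),
    "supervision": rng.choice(["low", "medium", "high"], n),
})

model = smf.logit("violation ~ C(race) + C(supervision)", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)  # exponentiated coefficients = odds ratios
print(odds_ratios)
```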
Recommended publications
  • Deepfakes and Cheap Fakes
    Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence. Britt Paris and Joan Donovan.

    Contents: Executive Summary; Introduction; Cheap Fakes/Deepfakes: A Spectrum; The Politics of Evidence; Cheap Fakes on Social Media (Photoshopping; Lookalikes; Recontextualizing; Speeding and Slowing); Deepfakes Present and Future (Virtual Performances; Face Swapping; Lip-synching and Voice Synthesis); Conclusion; Acknowledgments.

    Britt Paris is an assistant professor of Library and Information Science, Rutgers University (PhD, 2018, Information Studies, University of California, Los Angeles). Joan Donovan is director of the Technology and Social Change Research Project, Harvard Kennedy School (PhD, 2015, Sociology and Science Studies, University of California San Diego). This report is published under Data & Society's Media Manipulation research initiative; for more information on the initiative, including focus areas, researchers, and funders, please visit https://datasociety.net/research/media-manipulation

    Executive Summary: Do deepfakes signal an information apocalypse? Are they the end of evidence as we know it? The answers to these questions require us to understand what is truly new about contemporary AV manipulation and what is simply an old struggle for power in a new guise. The first widely known examples of amateur, AI-manipulated, face-swap videos appeared in November 2017. Since then, the news media, and therefore the general public, have begun to use the term "deepfakes" to refer to this larger genre of videos—videos that use some form of deep or machine learning to hybridize or generate human bodies and faces.
  • Resolving the Problem of Algorithmic Dissonance: An Unconventional Solution for a Sociotechnical Problem
    Resolving the Problem of Algorithmic Dissonance: An Unconventional Solution for a Sociotechnical Problem. Shiv Issar, Department of Sociology, University of Wisconsin-Milwaukee, [email protected]. June 29, 2021.

    Abstract: In this paper, I propose the concept of "algorithmic dissonance," which characterizes the inconsistencies that emerge through the fissures that lie between algorithmic systems that utilize system identities and sociocultural systems of knowledge that interact with them. A product of human-algorithm interaction, algorithmic dissonance builds upon the concepts of algorithmic discrimination and algorithmic awareness, offering greater clarity toward the comprehension of these sociotechnical entanglements. By employing Du Bois' concept of "double consciousness" and black feminist theory, I argue that all algorithmic dissonance is racialized. Next, I advocate for the use of speculative methodologies and art for the creation of critically informative sociotechnical imaginaries that might serve as a basis for the sociological critique and resolution of algorithmic dissonance. Algorithmic dissonance can be an effective check against structural inequities and is of interest to scholars and practitioners concerned with running "algorithm audits."

    Keywords: algorithmic dissonance, algorithmic discrimination, algorithmic bias, algorithm audits, system identity, speculative methodologies.

    Introduction: This paper builds upon Aneesh's (2015) notion of system identities by utilizing their nature beyond their primary function of serving algorithmic systems and the process of algorithmic decisionmaking. As Aneesh (2015) describes, system identities are data-bound constructions built by algorithmic systems for the (singular) purpose of being used by algorithmic systems. They are meant to simulate a specific dimension (or a set of dimensions) of personhood, just as algorithmic systems and algorithmic decisionmaking are meant to simulate the agents of different social institutions, and the decisions that those agents would make.
  • Digital Platform As a Double-Edged Sword: How to Interpret Cultural Flows in the Platform Era
    Dal Yong Jin, Simon Fraser University, Canada. International Journal of Communication 11 (2017), 3880–3898.

    This article critically examines the main characteristics of cultural flows in the era of digital platforms. By focusing on the increasing role of digital platforms during the Korean Wave (referring to the rapid growth of local popular culture and its global penetration starting in the late 1990s), it first analyzes whether digital platforms as new outlets for popular culture have changed traditional notions of cultural flows—the forms of the export and import of popular culture mainly from Western countries to non-Western countries. Second, it maps out whether platform-driven cultural flows have resolved existing global imbalances in cultural flows. Third, it analyzes whether digital platforms themselves have intensified disparities between Western and non-Western countries. In other words, it interprets whether digital platforms have deepened asymmetrical power relations between a few Western countries (in particular, the United States) and non-Western countries.

    Keywords: digital platforms, cultural flows, globalization, social media, asymmetrical power relations.

    Cultural flows have been some of the most significant issues in globalization and media studies since the early 20th century. From television programs to films, and from popular music to video games, cultural flows as a form of the export and import of cultural materials have been increasing. Global fans of popular culture used to enjoy films, television programs, and music by either purchasing DVDs and CDs or watching them on traditional media, including television and on the big screen.
  • Evaluation of the Shreveport Predictive Policing Experiment
    Evaluation of the Shreveport Predictive Policing Experiment. Priscillia Hunt, Jessica Saunders, John S.

    RAND Safety and Justice Program. The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. This electronic document was made available from www.rand.org as a public service of the RAND Corporation.

    Limited Electronic Distribution Rights: This document and trademark(s) contained herein are protected by law as indicated in a notice appearing later in this work. This electronic representation of RAND intellectual property is provided for non-commercial use only. Unauthorized posting of RAND electronic documents to a non-RAND website is prohibited. RAND electronic documents are protected under copyright law. Permission is required from RAND to reproduce, or reuse in another form, any of our research documents for commercial use. For information on reprint and linking permissions, please see RAND Permissions.

    This report is part of the RAND Corporation research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
  • Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations
    Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Walter L.

    RAND Safety and Justice Program. The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. This electronic document was made available from www.rand.org as a public service of the RAND Corporation.

    Limited Electronic Distribution Rights: This document and trademark(s) contained herein are protected by law as indicated in a notice appearing later in this work. This electronic representation of RAND intellectual property is provided for non-commercial use only. Unauthorized posting of RAND electronic documents to a non-RAND website is prohibited. RAND electronic documents are protected under copyright law. Permission is required from RAND to reproduce, or reuse in another form, any of our research documents for commercial use. For information on reprint and linking permissions, please see RAND Permissions.

    This report is part of the RAND Corporation research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
  • Algorithmic Awareness
    Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy. Kara Brisson-Boivin, PhD, and Samantha McAleese. MediaSmarts, Ottawa, 2021.

    This report can be downloaded from mediasmarts.ca/research-policy. Cite as: Brisson-Boivin, Kara and Samantha McAleese. (2021). "Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy." MediaSmarts. Ottawa.

    MediaSmarts, 205 Catherine Street, Suite 100, Ottawa, ON, Canada K2P 1C3. T: 613-224-7721, F: 613-761-9024, toll-free: 1-800-896-3342. [email protected], mediasmarts.ca, @mediasmarts.

    This research was made possible by financial contributions from the Office of the Privacy Commissioner of Canada's contributions program.

    Contents (excerpt): Key Terms; Introduction; Algorithms and Artificial Intelligence: What We (Don't) Know
  • Estimating Age and Gender in Instagram Using Face Recognition: Advantages, Bias and Issues. Diego Couto de Las Casas
    Estimating Age and Gender in Instagram Using Face Recognition: Advantages, Bias and Issues. Diego Couto de Las Casas.

    Dissertation presented to the Graduate Program in Ciência da Computação of the Universidade Federal de Minas Gerais, Departamento de Ciência da Computação, in partial fulfillment of the requirements for the degree of Master in Ciência da Computação. Advisor: Virgílio Augusto Fernandes de Almeida. Belo Horizonte, February 2016. © 2016, Diego Couto de Las Casas. All rights reserved.

    Catalog subjects: 1. Computer science - Theses. 2. Online social networks. 3. Social computing. 4. Instagram.

    Acknowledgments (translated from Portuguese): I would like to thank everyone who got me this far. To my family, for the advice, the input, and all the support over these years. To my colleagues at CAMPS(-Élysées), for the collaborations, the laughter, and the camaraderie.
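    As a rough sketch of the kind of pipeline the dissertation studies, the snippet below estimates age and gender from a single photo. It assumes the open-source DeepFace library (pip install deepface); the dissertation's own tooling may differ, and "img.jpg" is a placeholder path:

```python
# Minimal sketch of age/gender estimation from a profile photo, assuming the
# open-source DeepFace library; not the dissertation's actual code.

from deepface import DeepFace

# Recent DeepFace versions return one result dict per detected face.
results = DeepFace.analyze(img_path="img.jpg", actions=["age", "gender"])
for face in results:
    print(face["age"], face["dominant_gender"])
```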
  • How Will AI Shape the Future of Law?
    How Will AI Shape the Future of Law? Edited by Riikka Koulu & Laura Kontiainen. University of Helsinki Legal Tech Lab publications, 2019. © Authors and Legal Tech Lab. ISBN 978-951-51-5476-7 (print), ISBN 978-951-51-5477-4 (PDF). Print: Veiters, Helsinki, 2019.

    Acknowledgements: The editors and the University of Helsinki Legal Tech Lab would like to thank the authors and interviewees for the time and effort they put into their papers. We would also like to thank the Faculty of Law at University of Helsinki and particularly the continuous kind support of Professor Kimmo Nuotio, the former dean of the faculty and the Lab's unofficial godfather, for facilitating the Lab's development and for helping us make the second Legal Tech Conference happen. In addition, we would like to express our gratitude to the conference sponsors, Finnish Bar Association, the Association of Finnish Lawyers, Edita Publishing, Attorneys at Law Fondia and Attorneys at Law Roschier, as well as the Legal Design Summit community for their support. It takes a village to raise a conference. Therefore, we would like to thank everyone whose time and commitment has turned the conference and this publication from an idea into reality. Thank you to the attendees, speakers, volunteers and Legal Tech Lab crew members.

    Contents (excerpt): Foreword, Kimmo Nuotio; I Digital Transformation of
  • Establishing Discriminatory Purpose in Algorithmic Risk Assessment
    Notes. Beyond Intent: Establishing Discriminatory Purpose in Algorithmic Risk Assessment.

    Equal protection doctrine is bound up in conceptions of intent. Unintended harms from well-meaning persons are, quite simply, nonactionable.[1] This basic predicate has erected a high bar for plaintiffs advancing equal protection challenges in the criminal justice system. Notably, race-based challenges on equal protection grounds, which subject the state to strict scrutiny review, are nonetheless stymied by the complexities of establishing discriminatory purpose. What's more, the inability of courts and scholars to coalesce on a specific notion of the term has resulted in gravely inconsistent applications of the doctrine.[2] States' increasing use of algorithmic systems raises longstanding concerns regarding prediction in our criminal justice system—how to categorize the dangerousness of an individual, the extent to which past behavior is relevant to future conduct, and the effects of racial disparities.[3] Integration of algorithmic systems has been defined by ambivalence: while they are touted for their removal of human discretion,[4] they also readily promote and amplify inequalities—for example, through their consideration of protected characteristics and their interaction with existing systems tainted by legacies of inequality.[5] Furthermore, algorithms, especially those incorporating artificial intelligence (AI), may operate in ways that are opaque, unpredictable, or not well understood.

    [1] See Washington v. Davis, 426 U.S. 229, 239–40 (1976).
    [2] See Aziz Z. Huq, What Is Discriminatory Intent?, 103 Cornell L. Rev. 1211, 1240–63 (2018) (presenting five definitions from the literature).
    [3] See Sonia K.
  • Artificial Intelligence: Risks to Privacy and Democracy
    Artificial Intelligence: Risks to Privacy and Democracy. Karl Manheim* and Lyric Kaplan**. 21 Yale J.L. & Tech. 106 (2019).

    A "Democracy Index" is published annually by the Economist. For 2017, it reported that half of the world's countries scored lower than the previous year. This included the United States, which was demoted from "full democracy" to "flawed democracy." The principal factor was "erosion of confidence in government and public institutions." Interference by Russia and voter manipulation by Cambridge Analytica in the 2016 presidential election played a large part in that public disaffection.

    Threats of these kinds will continue, fueled by growing deployment of artificial intelligence (AI) tools to manipulate the preconditions and levers of democracy. Equally destructive is AI's threat to decisional and informational privacy. AI is the engine behind Big Data Analytics and the Internet of Things. While conferring some consumer benefit, their principal function at present is to capture personal information, create detailed behavioral profiles and sell us goods and agendas. Privacy, anonymity and autonomy are the main casualties of AI's ability to manipulate choices in economic and political decisions.

    The way forward requires greater attention to these risks at the national level, and attendant regulation. In its absence, technology giants, all of whom are heavily investing in and profiting from AI, will dominate not only the public discourse, but also the future of our core values and democratic institutions.

    * Professor of Law, Loyola Law School, Los Angeles. This article was inspired by a lecture given in April 2018 at Kansai University, Osaka, Japan.
  • Algorithm Bias Playbook Presentation
    Algorithmic Bias Playbook. Ziad Obermeyer, Rebecca Nissan, Michael Stern, Stephanie Eaneff, Emily Joy Bembeneck, Sendhil Mullainathan. June 2021.

    Is your organization using biased algorithms? How would you know? What would you do if so? This playbook describes 4 steps your organization can take to answer these questions. It distills insights from our years of applied work helping others diagnose and mitigate bias in live algorithms.

    Algorithmic bias is everywhere. Our work with dozens of organizations—healthcare providers, insurers, technology companies, and regulators—has taught us that biased algorithms are deployed throughout the healthcare system, influencing clinical care, operational workflows, and policy. This playbook will teach you how to define, measure, and mitigate racial bias in live algorithms. By working through concrete examples—cautionary tales—you'll learn what bias looks like. You'll also see reasons for optimism—success stories—that demonstrate how bias can be mitigated, transforming flawed algorithms into tools that fight injustice.

    Who should read this? We wrote this playbook with three kinds of people in mind.

    • C-suite leaders (CTOs, CMOs, CMIOs, etc.): Algorithms may be operating at scale in your organization—but what are they doing? And who is responsible? This playbook will help you think strategically about how algorithms can go wrong, and what your technical teams can do about it. It also lays out oversight structures you can put in place to prevent bias.

    • Technical teams working in health care: We've found that the difference between biased and unbiased algorithms is often a matter of subtle technical choices. If you build algorithms, this playbook will help you make those choices better.
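    One concrete measurement in the spirit of this playbook is a calibration check: at a given algorithm score, does the measured outcome mean the same thing for every racial group? The sketch below is notional, with hypothetical column names and synthetic data, not the playbook's own code:

```python
# Notional bias check (hypothetical columns; synthetic data): compare mean
# outcomes across racial groups within each score decile. Large within-decile
# gaps suggest the score means different things for different groups.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "score": rng.random(n),                   # the algorithm's risk score
    "race": rng.choice(["black", "white"], n),
    "outcome": rng.integers(0, 2, n),         # measured health need, say
})

df["decile"] = pd.qcut(df["score"], 10, labels=False)
calibration = df.groupby(["decile", "race"])["outcome"].mean().unstack("race")
print(calibration)  # rows: score deciles; columns: mean outcome by group
```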