An Algorithm-In-The-Loop Analysis of Fairness in Risk Assessments

Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments

Ben Green, Harvard University
Yiling Chen, Harvard University

ABSTRACT

Despite vigorous debates about the technical characteristics of risk assessments being deployed in the U.S. criminal justice system, remarkably little research has studied how these tools affect actual decision-making processes. After all, risk assessments do not make definitive decisions—they inform judges, who are the final arbiters. It is therefore essential that considerations of risk assessments be informed by rigorous studies of how judges actually interpret and use them. This paper takes a first step toward such research on human interactions with risk assessments through a controlled experimental study on Amazon Mechanical Turk. We found several behaviors that call into question the supposed efficacy and fairness of risk assessments: our study participants 1) underperformed the risk assessment even when presented with its predictions, 2) could not effectively evaluate the accuracy of their own or the risk assessment's predictions, and 3) exhibited behaviors fraught with "disparate interactions," whereby the use of risk assessments led to higher risk predictions about black defendants and lower risk predictions about white defendants. These results suggest the need for a new "algorithm-in-the-loop" framework that places machine learning decision-making aids into the sociotechnical context of improving human decisions rather than the technical context of generating the best prediction in the abstract. If risk assessments are to be used at all, they must be grounded in rigorous evaluations of their real-world impacts instead of in their theoretical potential.

CCS CONCEPTS

• Human-centered computing → Human computer interaction (HCI); • Applied computing → Law.

KEYWORDS

fairness, risk assessment, behavioral experiment, Mechanical Turk

ACM Reference Format:
Ben Green and Yiling Chen. 2019. Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments. In FAT* '19: Conference on Fairness, Accountability, and Transparency (FAT* '19), January 29–31, 2019, Atlanta, GA, USA. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3287560.3287563

© 2019 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-6125-5/19/01.

1 INTRODUCTION

Across the United States, courts are increasingly using risk assessments to estimate the likelihood that criminal defendants will engage in unlawful behavior in the future.[1] These tools are being deployed during several stages of criminal justice adjudication, including at bail hearings (to predict the risk that the defendant, if released, will be rearrested before trial or not appear for trial) and at sentencing (to predict the risk that the defendant will recidivate). Because risk assessments rely on data and a standardized process, many proponents believe that they can mitigate judicial biases and make "objective" decisions about defendants [9, 12, 34]. Risk assessments have therefore gained widespread support as a tool to reduce incarceration rates and spur criminal justice reform [9, 27, 34].

Yet many are concerned that risk assessments make biased decisions due to the historical discrimination embedded in training data. For example, the widely-used COMPAS risk assessment tool wrongly labels black defendants as future criminals at twice the rate it does for white defendants [3]. Prompted by these concerns, machine learning researchers have developed a rapidly-growing body of technical work focused on topics such as characterizing the incompatibility of different fairness metrics [6, 44] and developing new algorithms to reduce bias [24, 33].

Despite these efforts, current research into fair machine learning fails to capture an essential aspect of how risk assessments impact the criminal justice system: their influence on judges. After all, risk assessments do not make definitive decisions about pretrial release and sentencing—they merely aid judges, who must decide whom to release before trial and how to sentence defendants after trial. In other words, algorithmic outputs act as decision-making aids rather than final arbiters. Thus, whether a risk assessment itself is accurate and fair is of only indirect concern—the primary considerations are how it affects decision-making processes and whether it makes judges more accurate and fair. No matter how well we characterize the technical specifications of risk assessments, we will not fully understand their impacts unless we also study how judges interpret and use them.

This study sheds new light on how risk assessments influence human decisions in the context of criminal justice adjudication. We ran experiments using Amazon Mechanical Turk to study how people make predictions about risk, both with and without the aid of a risk assessment. We focus on pretrial release, which in many respects resembles a typical prediction problem.[2] By studying behavior in this controlled environment, we discerned important patterns in how risk assessments influence human judgments of risk. Although these experiments involved laypeople rather than judges—limiting the extent to which our results can be assumed to directly implicate real-world risk assessments—they highlight several types of interactions that should be studied further before risk assessments can be responsibly deployed in the courtroom.

Our results suggest several ways in which the interactions between people and risk assessments can generate errors and biases in the administration of criminal justice, thus calling into question the supposed efficacy and fairness of risk assessments. First, even when presented with the risk assessment's predictions, participants made decisions that were less accurate than the advice provided. Second, people could not effectively evaluate the accuracy of their own or the risk assessment's predictions: participants' confidence in their performance was negatively associated with their actual performance, and their judgments of the risk assessment's accuracy and fairness had no association with the risk assessment's actual accuracy and fairness. Finally, participant interactions with the risk assessment introduced two new forms of bias (which we collectively term "disparate interactions") into decision-making: when evaluating black defendants, participants were 25.9% more strongly influenced to increase their risk prediction at the suggestion of the risk assessment and were 36.4% more likely to deviate from the risk assessment toward higher levels of risk. Further research is necessary to ascertain whether judges exhibit similar behaviors.

The chain from algorithm to person to decision has become vitally important as algorithms inform increasing numbers of high-stakes decisions.

[1] Although there have been several generations of criminal justice risk assessments over the past century [41], throughout this paper we use risk assessments to refer to machine learning algorithms that provide statistical predictions.
[2] After someone is arrested, courts must decide whether to release that person until their trial. This is typically done by setting an amount of "bail," or money that the defendant must pay as collateral for release. The broad goal of this process is to protect individual liberty while also ensuring that the defendant appears in court for trial.

2.1 People are bad at incorporating quantitative predictions

The phenomenon of "automation bias" suggests that automated tools influence human decisions in significant, and often detrimental, ways. Two types of errors are particularly common: omission errors, in which people do not recognize when automated systems err, and commission errors, in which people follow automated systems without considering contradictory information [51]. Heavy reliance on automated systems can alter people's relationship to a task by creating a "moral buffer" between their decisions and the impacts of those decisions [11]. Thus, although "[a]utomated decision support tools are designed to improve decision effectiveness and reduce human error, [...] they can cause operators to relinquish a sense of responsibility and subsequently accountability because of a perception that the automation is in charge" [11].

Even when algorithms are more accurate, people do not appropriately incorporate algorithmic recommendations to improve their decisions, instead preferring to rely on their own or other people's judgment [47, 66]. One study found that people could not distinguish between reliable and unreliable predictions [30], and another found that people often deviate incorrectly from algorithmic forecasts [18]. Compounding this bias is the phenomenon of "algorithm aversion," through which people are less tolerant of errors made by algorithms than errors made by other people [17].

2.2 Information filters through existing biases

Previous research suggests that information presumed to help peo-
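The COMPAS disparity cited in the introduction (black defendants wrongly labeled as future criminals at twice the rate of white defendants) is a gap in group false positive rates. As an illustration only, a minimal sketch of that computation follows; the function name, toy data, and exact rates below are assumptions for demonstration, not the paper's code or ProPublica's actual analysis:

```python
# Illustrative sketch: per-group false positive rate, i.e. the share of
# people who did NOT reoffend but were nonetheless labeled high risk.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, reoffended) tuples."""
    fp = defaultdict(int)  # labeled high risk among non-reoffenders
    n = defaultdict(int)   # all non-reoffenders
    for group, predicted_high, reoffended in records:
        if not reoffended:
            n[group] += 1
            if predicted_high:
                fp[group] += 1
    return {g: fp[g] / n[g] for g in n}

# Hypothetical data constructed to show a roughly 2:1 disparity.
data = (
    [("black", True, False)] * 44 + [("black", False, False)] * 56 +
    [("white", True, False)] * 22 + [("white", False, False)] * 78
)
rates = false_positive_rates(data)
# rates == {"black": 0.44, "white": 0.22}
```

A disparity like this can coexist with equal calibration across groups, which is one instance of the incompatibility between fairness metrics referenced above.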