Deep Automation Bias: How to Tackle a Wicked Problem of AI?
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Addressing Cognitive Biases in Augmented Business Decision Systems Human Performance Metrics for Generic AI-Assisted Decision Making
Addressing Cognitive Biases in Augmented Business Decision Systems: Human performance metrics for generic AI-assisted decision making.
THOMAS BAUDEL* (IBM France Lab, Orsay, France), MANON VERBOCKHAVEN (IBM France Lab & ENSAE), VICTOIRE COUSERGUE (IBM France Lab, Université Paris-Dauphine & Mines ParisTech), GUILLAUME ROY (IBM France Lab & ENSAI), RIDA LAARACH (IBM France Lab, Telecom ParisTech & HEC)
How do algorithmic decision aids introduced in business decision processes affect task performance? In a first experiment, we study effective collaboration. Faced with a decision, subjects alone have a success rate of 72%; aided by a recommender that has a 75% success rate, their success rate reaches 76%. The human-system collaboration thus had a greater success rate than either taken alone. However, we noted a complacency/authority bias that degraded the quality of decisions by 5% when the recommender was wrong. This suggests that any lingering algorithmic bias may be amplified by decision aids. In a second experiment, we evaluated the effectiveness of 5 presentation variants in reducing complacency bias. We found that optional presentation increases subjects' resistance to wrong recommendations. We conclude by arguing that our metrics, in real usage scenarios where decision aids are embedded as system-wide features in Business Process Management software, can lead to enhanced benefits.
CCS CONCEPTS: • Cross-computing tools and techniques: Empirical studies; Information Systems: Enterprise information systems, Decision support systems, Business Process Management; Human-centered computing: Human computer interaction (HCI), Visualization; Machine Learning; Automation
Additional Keywords and Phrases: Business decision systems, Decision theory, Cognitive biases
* [email protected]
1 INTRODUCTION
For the past 20 years, Business Process Management (BPM) [29] and related technologies such as Business Rules [10, 52] and Robotic Process Automation [36] have streamlined processes and operational decision-making in large enterprises, transforming work organization.
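To make these human-AI performance metrics concrete, here is a minimal sketch (synthetic trials and invented names, not the authors' code) of how aided accuracy and the complacency effect on recommender-wrong trials might be computed:

```python
# Illustrative only: synthetic trial records, not the paper's data or code.
# Each trial records whether the recommender was correct and whether the
# (aided) human decision was correct.
import random
from dataclasses import dataclass

@dataclass
class Trial:
    recommender_correct: bool
    human_correct: bool

def accuracy(trials):
    return sum(t.human_correct for t in trials) / len(trials)

def complacency_effect(trials, baseline_unaided=0.72):
    """Drop in human accuracy on trials where the recommender is wrong,
    relative to an unaided baseline (72% is taken from the abstract)."""
    wrong = [t for t in trials if not t.recommender_correct]
    return baseline_unaided - accuracy(wrong)

# Toy usage: 100 trials with a 75%-accurate recommender.
random.seed(0)
trials = []
for _ in range(100):
    rec_ok = random.random() < 0.75
    # Assume subjects follow the aid most of the time (authority bias).
    human_ok = rec_ok if random.random() < 0.8 else (random.random() < 0.72)
    trials.append(Trial(rec_ok, human_ok))

print(f"aided accuracy: {accuracy(trials):.2f}")
print(f"complacency effect vs. unaided baseline: {complacency_effect(trials):+.2f}")
```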
An Unconventional Solution for a Sociotechnical Problem
RESOLVING THE PROBLEM OF ALGORITHMIC DISSONANCE: AN UNCONVENTIONAL SOLUTION FOR A SOCIOTECHNICAL PROBLEM
Shiv Issar | Department of Sociology, University of Wisconsin-Milwaukee | [email protected]
June 29, 2021
Abstract
In this paper, I propose the concept of "algorithmic dissonance", which characterizes the inconsistencies that emerge through the fissures that lie between algorithmic systems that utilize system identities, and sociocultural systems of knowledge that interact with them. A product of human-algorithm interaction, algorithmic dissonance builds upon the concepts of algorithmic discrimination and algorithmic awareness, offering greater clarity towards the comprehension of these sociotechnical entanglements. By employing Du Bois' concept of "double consciousness" and black feminist theory, I argue that all algorithmic dissonance is racialized. Next, I advocate for the use of speculative methodologies and art for the creation of critically informative sociotechnical imaginaries that might serve as a basis for the sociological critique and resolution of algorithmic dissonance. Algorithmic dissonance can be an effective check against structural inequities, and of interest to scholars and practitioners concerned with running "algorithm audits".
Keywords: Algorithmic Dissonance, Algorithmic Discrimination, Algorithmic Bias, Algorithm Audits, System Identity, Speculative Methodologies
Introduction
This paper builds upon Aneesh's (2015) notion of system identities by utilizing their nature beyond their primary function of serving algorithmic systems and the process of algorithmic decision-making. As Aneesh (2015) describes, system identities are data-bound constructions built by algorithmic systems for the (singular) purpose of being used by algorithmic systems. They are meant to simulate a specific dimension (or a set of dimensions) of personhood, just as algorithmic systems and algorithmic decision-making are meant to simulate the agents of different social institutions, and the decisions that those agents would make.
A Task-Based Taxonomy of Cognitive Biases for Information Visualization
A Task-based Taxonomy of Cognitive Biases for Information Visualization
Evanthia Dimara, Steven Franconeri, Catherine Plaisant, Anastasia Bezerianos, and Pierre Dragicevic
The slides distinguish three kinds of limitations: those of the computer, the display, and the human. Human limitations include perceptual biases (e.g., magnitude estimation, color perception) and cognitive biases, behaviors in which humans consistently behave irrationally. Pohl's criteria, distilled: such behaviors are predictable and consistent, people are unaware they are doing them, and they are not misunderstandings. The deck then enumerates documented biases: Ambiguity effect, Anchoring or focalism, Anthropocentric thinking, Anthropomorphism or personification, Attentional bias, Attribute substitution, Automation bias, Availability heuristic, Availability cascade, Backfire effect, Bandwagon effect, Base rate fallacy or Base rate neglect, Belief bias, Ben Franklin effect, Berkson's paradox, Bias blind spot, Choice-supportive bias, Clustering illusion, Compassion fade, Confirmation bias, Congruence bias, Conjunction fallacy, Conservatism (belief revision), Continued influence effect, Contrast effect, Courtesy bias, Curse of knowledge, Declinism, Decoy effect, Default effect, Denomination effect, Disposition effect, Distinction bias, Dread aversion, Dunning–Kruger effect, Duration neglect, Empathy gap, End-of-history illusion, Endowment effect, Exaggerated expectation, Experimenter's or expectation bias, …
Bias and Fairness in NLP
Bias and Fairness in NLP
Margaret Mitchell (Google Brain), Kai-Wei Chang (UCLA), Vicente Ordóñez Román (University of Virginia), Vinodkumar Prabhakaran (Google Brain)
Tutorial Outline
● Part 1: Cognitive Biases / Data Biases / Bias Laundering
● Part 2: Bias in NLP and Mitigation Approaches
● Part 3: Building Fair and Robust Representations for Vision and Language
● Part 4: Conclusion and Discussion
"Bias Laundering": Cognitive Biases, Data Biases, and ML. Vinodkumar Prabhakaran and Margaret Mitchell (Google Brain), with Andrew Zaldivar, Emily Denton, Simone Wu, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Deb Raji, Timnit Gebru, Adrian Benton, Brian Zhang, Dirk Hovy, Josh Lovejoy, Alex Beutel, Blake Lemoine, Hee Jung Ryu, Hartwig Adam, and Blaise Agüera y Arcas.
What's in this tutorial: ● Motivation for Fairness research in NLP ● How and why NLP models may be unfair ● Various types of NLP fairness issues and mitigation approaches ● What can/should we do?
What's NOT in this tutorial: ● Definitive answers to fairness/ethical questions ● Prescriptive solutions to fix ML/NLP (un)fairness
The opening "What do you see?" exercise shows a photograph and builds up successive answers: bananas; stickers; Dole bananas; bananas at a store; bananas on shelves; bunches of bananas (excerpt truncated).
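As a small, self-contained sketch of the kind of measurement such a tutorial typically covers (toy vectors and word lists invented for illustration, not the tutorial's code), here is an embedding-association probe that checks whether occupation vectors sit closer to "he" than to "she":

```python
# Toy illustration of an embedding-association bias probe (WEAT-style).
# The 3-d vectors below are fabricated for demonstration; a real analysis
# would load pretrained embeddings (word2vec, GloVe, etc.).
import numpy as np

emb = {
    "he":       np.array([ 0.9, 0.1, 0.0]),
    "she":      np.array([-0.9, 0.1, 0.0]),
    "engineer": np.array([ 0.7, 0.3, 0.1]),
    "nurse":    np.array([-0.6, 0.4, 0.2]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    """Positive => closer to 'he'; negative => closer to 'she'."""
    return cos(emb[word], emb["he"]) - cos(emb[word], emb["she"])

for w in ("engineer", "nurse"):
    print(f"{w:>9}: {gender_association(w):+.3f}")
```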
Working Memory, Cognitive Miserliness and Logic As Predictors of Performance on the Cognitive Reflection Test
Working Memory, Cognitive Miserliness and Logic as Predictors of Performance on the Cognitive Reflection Test
Edward J. N. Stupple ([email protected]), Centre for Psychological Research, University of Derby, Kedleston Road, Derby, DE22 1GB
Maggie Gale ([email protected]), Centre for Psychological Research, University of Derby, Kedleston Road, Derby, DE22 1GB
Christopher R. Richmond ([email protected]), Centre for Psychological Research, University of Derby, Kedleston Road, Derby, DE22 1GB
Abstract
The Cognitive Reflection Test (CRT) was devised to measure the inhibition of heuristic responses to favour analytic ones. Toplak, West and Stanovich (2011) demonstrated that the CRT was a powerful predictor of heuristics and biases task performance, proposing it as a metric of the cognitive miserliness central to dual process theories of thinking. This thesis was examined using reasoning response times, normative responses from two reasoning tasks and working memory capacity (WMC) to predict individual differences in performance on the CRT. These data offered limited support for the view of miserliness as the primary factor in the CRT.
Most participants respond that the answer is 10 cents; however, a slower and more analytic approach to the problem reveals the correct answer to be 5 cents. The CRT has been a spectacular success, attracting more than 100 citations in 2012 alone (Scopus). This may be in part due to the ease of administration; with only three items and no requirement for expensive equipment, the practical advantages are considerable. There have, moreover, been numerous correlates of the CRT demonstrated, from a wide range of tasks in the heuristics and biases literature (Toplak et al., 2011) to risk aversion and SAT scores (Frederick,
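The 10-cent/5-cent figures refer to the CRT's widely cited bat-and-ball item; assuming that is the item the omitted problem statement refers to, the analytic solution is a short derivation:

```latex
% Bat-and-ball item (assumed): a bat and a ball cost $1.10 together, and the
% bat costs $1.00 more than the ball. How much does the ball cost (b)?
\begin{align*}
b + (b + 1.00) &= 1.10 \\
2b &= 0.10 \\
b &= 0.05 \quad \text{(5 cents, not the intuitive 10 cents)}
\end{align*}
```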
Algorithmic Awareness
ALGORITHMIC AWARENESS: CONVERSATIONS WITH YOUNG CANADIANS ABOUT ARTIFICIAL INTELLIGENCE AND PRIVACY
This report can be downloaded from: mediasmarts.ca/research-policy
Cite as: Brisson-Boivin, Kara and Samantha McAleese. (2021). "Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy." MediaSmarts. Ottawa.
Written for MediaSmarts by: Kara Brisson-Boivin, PhD, and Samantha McAleese
MediaSmarts, 205 Catherine Street, Suite 100, Ottawa, ON, Canada K2P 1C3. T: 613-224-7721, F: 613-761-9024, toll-free: 1-800-896-3342. [email protected], mediasmarts.ca, @mediasmarts
This research was made possible by financial contributions from the Office of the Privacy Commissioner of Canada's contributions program.
Table of Contents
Key Terms 4
Introduction 6
Algorithms and Artificial Intelligence: What We (Don't) Know 9
Algorithmic Bias Playbook Presentation
Algorithmic Bias Playbook
Ziad Obermeyer, Rebecca Nissan, Michael Stern, Stephanie Eaneff, Emily Joy Bembeneck, Sendhil Mullainathan. June 2021.
Is your organization using biased algorithms? How would you know? What would you do if so? This playbook describes 4 steps your organization can take to answer these questions. It distills insights from our years of applied work helping others diagnose and mitigate bias in live algorithms.
Algorithmic bias is everywhere. Our work with dozens of organizations—healthcare providers, insurers, technology companies, and regulators—has taught us that biased algorithms are deployed throughout the healthcare system, influencing clinical care, operational workflows, and policy. This playbook will teach you how to define, measure, and mitigate racial bias in live algorithms. By working through concrete examples—cautionary tales—you'll learn what bias looks like. You'll also see reasons for optimism—success stories—that demonstrate how bias can be mitigated, transforming flawed algorithms into tools that fight injustice.
Who should read this? We wrote this playbook with three kinds of people in mind.
● C-suite leaders (CTOs, CMOs, CMIOs, etc.): Algorithms may be operating at scale in your organization, but what are they doing? And who is responsible? This playbook will help you think strategically about how algorithms can go wrong, and what your technical teams can do about it. It also lays out oversight structures you can put in place to prevent bias.
● Technical teams working in health care: We've found that the difference between biased and unbiased algorithms is often a matter of subtle technical choices. If you build algorithms, this playbook will help you make those choices better.
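As a flavour of the playbook's "measure bias" step, here is a minimal synthetic sketch (column names, threshold, and data are invented for illustration; this is not the playbook's code) that checks whether, at the same risk-score cutoff, one group has systematically higher unmet need than another:

```python
# Illustrative bias check: at the same risk-score threshold, do two groups
# differ on the outcome the score is supposed to capture?
# Data, column names, and threshold are synthetic/hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "risk_score": rng.uniform(0, 100, size=n),
})
# Synthetic "true need": group B has higher need at the same score,
# mimicking a score trained on a proxy label (e.g., cost instead of health).
df["true_need"] = (
    0.01 * df["risk_score"]
    + np.where(df["group"] == "B", 0.3, 0.0)
    + rng.normal(0, 0.1, size=n)
)

threshold = 55  # hypothetical cutoff for enrollment in a program
flagged = df[df["risk_score"] >= threshold]
print(flagged.groupby("group")["true_need"].mean())
# A large gap in mean true need among the flagged cases means the score
# under-serves one group at the same cutoff, a signal worth investigating.
```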
Algorithmic Bias: On the Implicit Biases of Social Technology
Algorithmic Bias: On the Implicit Biases of Social Technology
Gabbrielle M Johnson, New York University
Abstract
Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias obscures the existence of the bias itself, making it difficult to identify, mitigate, or evaluate using standard resources in epistemology and ethics. I demonstrate these points in the case of mitigation techniques by presenting what I call 'the Proxy Problem'. One reason biases resist revision is that they rely on proxy attributes, seemingly innocuous attributes that correlate with socially-sensitive attributes, serving as proxies for the socially-sensitive attributes themselves. I argue that in both human and algorithmic domains, this problem presents a common dilemma for mitigation: attempts to discourage reliance on proxy attributes risk a tradeoff with judgement accuracy. This problem, I contend, admits of no purely algorithmic solution.
1 Introduction
On March 23rd, 2016, Microsoft Corporation released Tay, an artificial intelligence (AI) Twitter chatbot intended to mimic the language patterns of a 19-year-old American girl. Tay operated by learning from human Twitter users with whom it interacted. Only 16 hours after its launch, Tay was shut down for authoring a number of tweets endorsing Nazi ideology and harassing other Twitter users.
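To make the Proxy Problem concrete, here is a small synthetic sketch (not from the paper; all names and data are illustrative): even with the protected attribute removed from the features, a correlated proxy lets the model reproduce much of the original group disparity.

```python
# Synthetic illustration of the "Proxy Problem": the protected attribute is
# excluded from the features, but a correlated proxy (e.g., a zip-code-like
# variable) carries much of the same information, so group disparity persists.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
protected = rng.integers(0, 2, size=n)            # 0 / 1 group membership
proxy = protected + rng.normal(0, 0.5, size=n)    # correlates with group
skill = rng.normal(0, 1, size=n)                  # legitimate predictor
# Historical labels encode a group disparity unrelated to skill.
y = (skill + 0.8 * protected + rng.normal(0, 1, size=n) > 0.5).astype(int)

X = np.column_stack([skill, proxy])               # protected attribute excluded
clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

for g in (0, 1):
    rate = pred[protected == g].mean()
    print(f"positive-prediction rate for group {g}: {rate:.2f}")
# The gap between the two rates shows the disparity resurfacing via the proxy.
```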
Automation Bias and Errors: Are Crews Better Than Individuals?
THE INTERNATIONAL JOURNAL OF AVIATION PSYCHOLOGY, 10(1), 85–97. Copyright © 2000, Lawrence Erlbaum Associates, Inc.
Automation Bias and Errors: Are Crews Better Than Individuals?
Linda J. Skitka (Department of Psychology, University of Illinois at Chicago), Kathleen L. Mosier (Department of Psychology, San Francisco State University), Mark Burdick (San Jose State University Foundation and NASA Ames Research Center, Moffett Field, CA), Bonnie Rosenblatt (Department of Psychology, University of Illinois at Chicago)
The availability of automated decision aids can sometimes feed into the general human tendency to travel the road of least cognitive effort. Is this tendency toward "automation bias" (the use of automation as a heuristic replacement for vigilant information seeking and processing) ameliorated when more than one decision maker is monitoring system events? This study examined automation bias in two-person crews versus solo performers under varying instruction conditions. Training that focused on automation bias and associated errors successfully reduced commission, but not omission, errors. Teams and solo performers were equally likely to fail to respond to system irregularities or events when automated devices failed to indicate them, and to incorrectly follow automated directives when they contradicted other system information.
Requests for reprints should be sent to Linda J. Skitka, Department of Psychology (M/C 285), University of Illinois at Chicago, 1007 W. Harrison Street, Chicago, IL 60607-7137. E-mail: [email protected]
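For readers new to the omission/commission distinction used in this abstract, here is a brief sketch (a synthetic event log with illustrative field names) of how the two error rates are typically tallied:

```python
# Omission error: the automation fails to flag a real event and the operator
# also misses it. Commission error: the operator follows an automated
# directive that contradicts other available information.
# The event records below are synthetic and purely illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    event_occurred: bool      # did a real system irregularity occur?
    automation_flagged: bool  # did the decision aid indicate it?
    operator_acted: bool      # did the operator respond / follow the aid?

def omission_rate(events):
    """Share of real-but-unflagged events the operator failed to catch."""
    at_risk = [e for e in events if e.event_occurred and not e.automation_flagged]
    return sum(not e.operator_acted for e in at_risk) / len(at_risk)

def commission_rate(events):
    """Share of false automated prompts the operator nonetheless followed."""
    at_risk = [e for e in events if e.automation_flagged and not e.event_occurred]
    return sum(e.operator_acted for e in at_risk) / len(at_risk)

log = [
    Event(True, False, False),   # missed irregularity -> omission error
    Event(True, False, True),    # caught despite no flag
    Event(False, True, True),    # followed a wrong directive -> commission error
    Event(False, True, False),   # correctly ignored a wrong directive
]
print(f"omission rate:   {omission_rate(log):.2f}")
print(f"commission rate: {commission_rate(log):.2f}")
```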
Review Into Bias in Algorithmic Decision-Making
Review into bias in algorithmic decision-making
November 2020
Contents
Preface 3 (by the Board of the Centre for Data Ethics and Innovation)
Executive summary 5
Part I: Introduction 14
1. Background and scope 15
2. The issue 20
Part II: Sector reviews 37
3. Recruitment 39
4. Financial services 48
5. Policing 61
6. Local government 72
Part III: Addressing the challenges 80
7. Enabling fair innovation 83
8. The regulatory environment 108
9. Transparency in the public sector 130
10. Next steps and future challenges 144
11. Acknowledgements 147
Preface
Fairness is a highly prized human value. Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair. What it means to be fair has been much debated throughout history, rarely more so than in recent months. Issues such as the global Black Lives Matter movement, the "levelling up" of regional inequalities within the UK, and the many complex questions of fairness raised by the COVID-19 pandemic have kept fairness and equality at the centre of public debate.
Algorithms, like all technology, should work for people, and not against them. This is true in all sectors, but especially key in the public sector. When the state is making life-affecting decisions about individuals, that individual often can't go elsewhere. Society may reasonably conclude that justice requires decision-making processes to be designed so that human judgment can intervene where needed to achieve fair and reasonable outcomes for each person, informed by individual evidence.
Data gives us a powerful weapon to see where bias is occurring and measure whether our
Review Into Bias in Algorithmic Decision-Making 12Th March 2021
Review into Bias in Algorithmic Decision-Making, 12th March 2021
Lara Macdonald, Senior Policy Advisor, CDEI; Ghazi Ahamat, Senior Policy Advisor, CDEI; Dr Adrian Weller, CDEI Board Member and Programme Director for AI at the Alan Turing Institute. @CDEIUK
Government's role in reducing algorithmic bias
● As a major user of technology, government and the public sector should set standards, provide guidance and highlight good practice.
● As a regulator, government needs to adapt existing regulatory frameworks to incentivise ethical innovation.
Bias, discrimination and fairness
● We interpreted bias in algorithmic decision-making as the use of algorithms which can cause a systematic skew in decision-making that results in unfair outcomes.
● Some forms of bias constitute discrimination under UK equality (anti-discrimination) law, namely when bias leads to unfair treatment based on certain protected characteristics.
● There are also other kinds of algorithmic bias that are non-discriminatory but still lead to unfair outcomes.
● Fairness is about much more than the absence of bias.
● There are multiple (incompatible) concepts of fairness.
How should organisations address algorithmic bias? Guidance to organisation leaders and boards:
● Understand the capabilities and limits of algorithmic tools.
● Consider carefully whether individuals will be fairly treated by the decision-making process that the tool forms part of.
Achieving this in practice will include: building (diverse, multidisciplinary) internal capacity; understanding risks of bias by measurement
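To illustrate the point that there are multiple, mutually incompatible concepts of fairness, here is a small synthetic sketch (not CDEI material) that computes two common group-fairness metrics for the same predictions; when base rates differ across groups, both gaps generally cannot be driven to zero at once.

```python
# Two common group-fairness metrics for the same set of predictions:
#   demographic parity gap  -> difference in positive-prediction rates
#   equal opportunity gap   -> difference in true-positive rates
# Data are synthetic; with different base rates across groups these
# criteria generally cannot be satisfied simultaneously.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)
# Different base rates of the true outcome per group.
y_true = rng.random(n) < np.where(group == 1, 0.6, 0.3)
# A roughly calibrated score and a single global threshold.
score = 0.7 * y_true + 0.3 * rng.random(n)
y_pred = score > 0.5

def rate(mask):
    return y_pred[mask].mean()

dp_gap = abs(rate(group == 0) - rate(group == 1))
eo_gap = abs(rate((group == 0) & y_true) - rate((group == 1) & y_true))
print(f"demographic parity gap: {dp_gap:.3f}")
print(f"equal opportunity gap:  {eo_gap:.3f}")
```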
Complacency and Bias in Human Use of Automation: an Attentional Integration
Complacency and Bias in Human Use of Automation: An Attentional Integration
Raja Parasuraman, George Mason University, Fairfax, Virginia, and Dietrich H. Manzey, Berlin Institute of Technology, Berlin, Germany
Objective: Our aim was to review empirical studies of complacency and bias in human interaction with automated and decision support systems and provide an integrated theoretical model for their explanation.
Background: Automation-related complacency and automation bias have typically been considered separately and independently.
Methods: Studies on complacency and automation bias were analyzed with respect to the cognitive processes involved.
Results: Automation complacency occurs under conditions of multiple-task load, when manual tasks compete with the automated task for the operator's attention. Automation complacency is found in both naive and expert participants and cannot be overcome with simple practice. Automation bias results in making both omission and commission errors when decision aids are imperfect. Automation bias occurs in both naive and expert participants, cannot be prevented by training or instructions, and can affect decision making in individuals as well as in teams.
INTRODUCTION
Human interaction with automated and decision support systems constitutes an important area of inquiry in human factors and ergonomics (Bainbridge, 1983; Lee & Seppelt, 2009; Mosier, 2002; Parasuraman, 2000; R. Parasuraman, Sheridan, & Wickens, 2000; Rasmussen, 1986; Sheridan, 2002; Wiener & Curry, 1980; Woods, 1996). Research has shown that automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers of automation; moreover, instances of misuse and disuse of automation are common (R. Parasuraman & Riley, 1997). Thus, the benefits anticipated by designers and policy makers when implementing automation—increased efficiency, improved safety, enhanced flexibility of operations, lower