Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators
Kate Goddard, Abdul Roudsari, Jeremy C. Wyatt


Author affiliations: Centre for Health Informatics, City University, London, UK; School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada; Institute of Digital Healthcare, University of Warwick, UK.

Correspondence to: Kate Goddard, Centre for Health Informatics, City University, Northampton Square, London EC1V 0HB, UK; [email protected]

Received 17 December 2010; accepted 17 May 2011; published Online First 16 June 2011. J Am Med Inform Assoc 2012;19:121-127. doi:10.1136/amiajnl-2011-000089. Additional materials are published online only, at www.jamia.org/content/19/1.toc.

ABSTRACT
Automation bias (AB), the tendency to over-rely on automation, has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject specific and freetext terms around the themes of automation, human automation interaction, and task performance and error were used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, decision support systems (DSS), and task specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which pressurized cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.

BACKGROUND
Clinical decision support systems (CDSS) have great potential to improve clinical decisions, actions, and patient outcomes1 2 by providing advice and filtered or enhanced information, or by presenting prompts or alerts to the user. However, most studies of CDSS have emphasized the accuracy of the computer system, without placing clinicians in the role of direct users. Although most decision support systems (DSS) are 80-90% accurate, it is known that the occasional incorrect advice they give may tempt users to reverse a correct decision they have already made, and thus introduce errors of over-reliance.3 These errors can be classified as automation bias (AB),4 by which users tend to over-accept computer output 'as a heuristic replacement of vigilant information seeking and processing.'5 AB manifests in errors of commission (following incorrect advice) and omission (failing to act because of not being prompted to do so) when using CDSS. Indeed, the implementation of interventions such as CDSS can actually introduce new errors.6 However, AB has not previously been properly defined or examined; the negative effects need investigation, particularly in the healthcare field, where decision errors can have severe consequences. It is currently unclear how frequently AB occurs and what the risk factors are, and so a systematic review of the current literature was carried out to clarify the nature of AB.

Automation bias and complacency
Previous investigation of AB has primarily focused on the aviation sector. The examination of human factors involved in healthcare systems is more recent, and no systematic reviews of the reliance (over-reliance in particular) phenomenon relating to healthcare have been identified so far. In a study examining the enhancement of clinical decision-making by the use of a computerized diagnostic system, Friedman et al7 noticed that in 6% of cases, clinicians over-rode their own correct decisions in favor of erroneous advice from the DSS. Despite the high risk associated with medication error, there is little direct research into over-reliance on technology in the healthcare and CDSS fields.

Often the literature has looked solely at overall clinical or DSS accuracy and clinical outcomes, without investigating etiology and types of errors. More recent papers have started to examine the human factors relevant to appropriate design and use of automation in general, for example trust8 and other social, cognitive, and motivational factors. As the concept is relatively new and undefined, a number of synonyms have been used in the literature to describe the concept of AB, including automation-induced complacency,9 over-reliance on automation, and confirmation bias.10

In a recent literature review, Parasuraman et al11 discuss AB and automation-induced complacency as overlapping concepts reflecting the same kind of automation misuse associated with misplaced attention: either an attentional bias toward DSS output, or insufficient attention to and monitoring of automation output (particularly with automation deemed reliable). Parasuraman et al note that commission and omission errors result from both AB and complacency (although they mention that commission errors are more strictly the domain of AB). There is a lack of consensus over the definition of complacency. Complacency appears to occur as an attention allocation strategy in multitasking, where manual tasks are attended to over monitoring the veracity of the output of automation. Automation bias can also be found outside of multitask situations, and occurs when there is an active bias toward DSS in decision-making.

Although the focus of this review is on AB, due to the theoretical overlap and vagaries of current definitions, it will also include papers which imply complacency effects, as these also bear on the literature on misuse of, or over-reliance on, automation. Similar outcomes in terms of commission or omission may mean that one effect is conflated with or confused with the other. Studies relating to AB are identified and examined separately from those on automation complacency.

REVIEW AIM AND OBJECTIVES
The overall aim is to systematically review the literature on DSS and AB, particularly in the field of healthcare. The specific review objectives are to answer the following questions:
• What is the rate of AB and what is the size of the problem?
• What are the mediators for AB?
• Is there a way to mitigate AB?
PRISMA methodology12 was used to select articles, and involved identification, screening, appraisal for eligibility, and qualitative and quantitative assessment of the final papers.

REVIEW METHODS
Sources of studies
The main concepts surrounding the subject of AB are the DSS intervention, the DSS-human interaction, and task performance and error generation. These were the key themes used in a systematic search of the literature. As initial searches indicated little healthcare specific evidence, it was decided to include a number of databases and maintain wide parameters for inclusion/exclusion to identify further relevant articles: a broad, multi-disciplinary search in order to identify the widest cross-section of papers. All study settings and all papers, irrespective of the level of the user's expertise, were considered. The search took place between September 2009 and January 2010 and the following databases were searched: MEDLINE/PubMed, CINAHL, PsycInfo, … The first search of databases identified 14 457 research papers (inclusive of duplicates).

Inclusion criteria
Papers were included which investigated:
• human interaction with automated DSS in any field (eg, aviation, transport, and cognitive psychology), but particularly healthcare;
• empirical data on automation use, especially where a subjective user questionnaire or interview was incorporated;
• the appropriateness and accuracy of use of DSS.
Particular attention was given to papers explicitly mentioning AB, automation misuse, or over-reliance on automation, or terms such as confirmation bias, automation dependence, or automation complacency.

Outcome measures
Papers were included which reported assessment of user performance (the degree and/or appropriateness of automation use), including:
• indicators of DSS advice usage: consistency between DSS advice and decisions, user performance, peripheral behaviors (such as advice verification), or response bias indicators;13
• indicators of the influence of automation on decision-making, such as pre- and post-advice decision accuracy (eg, negative consultations, where a correct pre-advice decision is changed to an incorrect post-advice decision), DSS versus non-DSS decision accuracy (higher risk of incorrect decisions when bad advice is given by the DSS vs control group decisions), and the correlation between DSS and user accuracy (the relationship between falling DSS accuracy and falling user decision accuracy).
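To make these outcome measures concrete, here is a minimal sketch (ours, not the review's; the per-trial record format and field names are assumptions) of how the negative-consultation and commission-error indicators could be computed from decision data:

    # Illustrative only: computing AB outcome measures from per-trial records.
    from dataclasses import dataclass

    @dataclass
    class Trial:
        pre_correct: bool     # user's decision before seeing DSS advice
        advice_correct: bool  # whether the DSS advice was correct
        post_correct: bool    # user's final decision after advice

    def negative_consultation_rate(trials):
        """Share of correct pre-advice decisions reversed after advice."""
        eligible = [t for t in trials if t.pre_correct]
        switched = [t for t in eligible if not t.post_correct]
        return len(switched) / len(eligible) if eligible else 0.0

    def commission_error_rate(trials):
        """Share of bad-advice trials ending in an incorrect final decision."""
        bad_advice = [t for t in trials if not t.advice_correct]
        followed = [t for t in bad_advice if not t.post_correct]
        return len(followed) / len(bad_advice) if bad_advice else 0.0

    trials = [Trial(True, False, False), Trial(True, True, True),
              Trial(False, True, True), Trial(True, False, True)]
    print(negative_consultation_rate(trials))  # 1 of 3 correct decisions reversed
    print(commission_error_rate(trials))       # 1 of 2 bad-advice trials followed

Omission errors would additionally require a field marking trials where action was needed but the DSS failed to prompt for it.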
Recommended publications
  • Addressing Cognitive Biases in Augmented Business Decision Systems: Human Performance Metrics for Generic AI-Assisted Decision Making
    Thomas Baudel (IBM France Lab, Orsay, France), Manon Verbockhaven (IBM France Lab & ENSAE), Victoire Cousergue (IBM France Lab, Université Paris-Dauphine & Mines ParisTech), Guillaume Roy (IBM France Lab & ENSAI), and Rida Laarach (IBM France Lab, Telecom ParisTech & HEC). How do algorithmic decision aids introduced in business decision processes affect task performance? In a first experiment, we study effective collaboration. Faced with a decision, subjects alone have a success rate of 72%; aided by a recommender that has a 75% success rate, their success rate reaches 76%. The human-system collaboration thus had a greater success rate than either taken alone. However, we noted a complacency/authority bias that degraded the quality of decisions by 5% when the recommender was wrong. This suggests that any lingering algorithmic bias may be amplified by decision aids. In a second experiment, we evaluated the effectiveness of 5 presentation variants in reducing complacency bias. We found that optional presentation increases subjects' resistance to wrong recommendations. We conclude by arguing that our metrics, in real usage scenarios where decision aids are embedded as system-wide features in Business Process Management software, can lead to enhanced benefits. Keywords: business decision systems, decision theory, cognitive biases. From the introduction: For the past 20 years, Business Process Management (BPM) [29] and related technologies such as Business Rules [10, 52] and Robotic Process Automation [36] have streamlined processes and operational decision-making in large enterprises, transforming work organization.
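    A back-of-the-envelope reading of these figures (our arithmetic, not the paper's analysis) decomposes aided accuracy by whether the recommender is right, via the law of total probability; we assume the reported 5% degradation means accuracy on wrong-advice trials sits five points below the unaided 72% baseline:

        # Assumption-laden sketch: decompose the reported team accuracy.
        p_rec_right = 0.75                      # recommender success rate (reported)
        p_alone = 0.72                          # unaided human success rate (reported)
        p_team = 0.76                           # aided human success rate (reported)
        p_correct_given_wrong = p_alone - 0.05  # assumed reading of the 5% degradation

        # p_team = p(rec right)*p(correct | right) + p(rec wrong)*p(correct | wrong)
        p_correct_given_right = (p_team
                                 - (1 - p_rec_right) * p_correct_given_wrong) / p_rec_right
        print(round(p_correct_given_right, 2))  # 0.79: the net gain comes from
        # right-advice trials; wrong-advice trials carry the complacency cost.

    Under this reading, the aid helps on the 75% of trials where it is right and actively hurts on the rest, which is why the authors warn that lingering algorithmic bias may be amplified.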
  • A Task-Based Taxonomy of Cognitive Biases for Information Visualization
    Evanthia Dimara, Steven Franconeri, Catherine Plaisant, Anastasia Bezerianos, and Pierre Dragicevic. (Slide deck.) The talk distinguishes three kinds of limitations in visualization: the computer, the display, and the human. Human limitations are split into perceptual biases (eg, magnitude estimation, color perception) and cognitive biases: behaviors in which humans consistently act irrationally. Pohl's criteria, distilled: cognitive biases are predictable and consistent, people are unaware they are committing them, and they are not misunderstandings. The slides then catalog dozens of named biases alphabetically, from the ambiguity effect and anchoring through automation bias, the availability heuristic, and confirmation bias to the Dunning-Kruger effect and beyond.
  • Bias and Fairness in NLP
    Margaret Mitchell (Google Brain), Kai-Wei Chang (UCLA), Vicente Ordóñez Román (University of Virginia), and Vinodkumar Prabhakaran (Google Brain). Tutorial outline: Part 1, cognitive biases, data biases, and 'bias laundering'; Part 2, bias in NLP and mitigation approaches; Part 3, building fair and robust representations for vision and language; Part 4, conclusion and discussion. The tutorial covers the motivation for fairness research in NLP, how and why NLP models may be unfair, various types of NLP fairness issues and mitigation approaches, and what can or should be done about them; it does not offer definitive answers to fairness or ethical questions, nor prescriptive solutions to fix ML/NLP (un)fairness. A running 'What do you see?' exercise shows one photograph described successively as bananas, stickers, Dole bananas, bananas at a store, bananas on shelves, and bunches of bananas, motivating how much perspective a single label leaves out.
  • Working Memory, Cognitive Miserliness and Logic As Predictors of Performance on the Cognitive Reflection Test
    Edward J. N. Stupple, Maggie Gale, and Christopher R. Richmond (Centre for Psychological Research, University of Derby, Kedleston Road, Derby, DE22 1GB). Abstract: The Cognitive Reflection Test (CRT) was devised to measure the inhibition of heuristic responses to favour analytic ones. Toplak, West and Stanovich (2011) demonstrated that the CRT was a powerful predictor of heuristics and biases task performance, proposing it as a metric of the cognitive miserliness central to dual process theories of thinking. This thesis was examined using reasoning response times, normative responses from two reasoning tasks, and working memory capacity (WMC) to predict individual differences in performance on the CRT. These data offered limited support for the view of miserliness as the primary factor in the CRT. From the introduction: Most participants respond that the answer is 10 cents; however, a slower and more analytic approach to the problem reveals the correct answer to be 5 cents. The CRT has been a spectacular success, attracting more than 100 citations in 2012 alone (Scopus). This may be in part due to the ease of administration; with only three items and no requirement for expensive equipment, the practical advantages are considerable. There have, moreover, been numerous correlates of the CRT demonstrated, from a wide range of tasks in the heuristics and biases literature (Toplak et al., 2011) to risk aversion and SAT scores (Frederick, …
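    The 10-cent/5-cent figures refer to the classic bat-and-ball CRT item, which the excerpt does not quote: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball; how much does the ball cost? The algebra behind the two answers, for clarity:

        % Let x be the ball's price in dollars; the bat then costs x + 1.00.
        \begin{align*}
          x + (x + 1.00) &= 1.10\\
          2x &= 0.10\\
          x &= 0.05 \qquad \text{(5 cents: the analytic answer)}
        \end{align*}
        % The intuitive answer x = 0.10 violates the difference constraint:
        % the bat would then cost \$1.10, making the total \$1.20, not \$1.10.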
  • Automation Bias and Errors: Are Crews Better Than Individuals?
    The International Journal of Aviation Psychology, 10(1), 85-97. Copyright © 2000, Lawrence Erlbaum Associates, Inc. Linda J. Skitka (Department of Psychology, University of Illinois at Chicago), Kathleen L. Mosier (Department of Psychology, San Francisco State University), Mark Burdick (San Jose State University Foundation and NASA Ames Research Center, Moffett Field, CA), and Bonnie Rosenblatt (Department of Psychology, University of Illinois at Chicago). The availability of automated decision aids can sometimes feed into the general human tendency to travel the road of least cognitive effort. Is this tendency toward 'automation bias' (the use of automation as a heuristic replacement for vigilant information seeking and processing) ameliorated when more than one decision maker is monitoring system events? This study examined automation bias in two-person crews versus solo performers under varying instruction conditions. Training that focused on automation bias and associated errors successfully reduced commission, but not omission, errors. Teams and solo performers were equally likely to fail to respond to system irregularities or events when automated devices failed to indicate them, and to incorrectly follow automated directives when these contradicted other system information.
  • Deep Automation Bias: How to Tackle a Wicked Problem of AI?
    Big Data and Cognitive Computing (article). Stefan Strauß, Institute of Technology Assessment (ITA), Austrian Academy of Sciences, 1030 Vienna, Austria. Abstract: The increasing use of AI in different societal contexts has intensified the debate on risks, ethical problems, and bias. Accordingly, promising research activities focus on debiasing to strengthen fairness, accountability, and transparency in machine learning. There is, though, a tendency to fix societal and ethical issues with technical solutions that may cause additional, wicked problems. Alternative analytical approaches are thus needed to avoid this and to comprehend how societal and ethical issues occur in AI systems. Despite the various forms of bias, ultimately, risks result from eventual rule conflicts between the AI system's behavior (due to feature complexity) and user practices with limited options for scrutiny. Hence, although different forms of bias can occur, automation is their common ground. The paper highlights the role of automation and explains why deep automation bias (DAB) is a metarisk of AI. Based on former work, it elaborates the main influencing factors and develops a heuristic model for assessing DAB-related risks in AI systems. This model aims at raising problem awareness and supporting training on the sociotechnical risks resulting from AI-based automation, and contributes to improving the general explicability of AI systems beyond technical issues. Keywords: artificial intelligence; machine learning; automation bias; fairness; transparency; accountability; explicability; uncertainty; human-in-the-loop; awareness raising.
  • Complacency and Bias in Human Use of Automation: An Attentional Integration
    Raja Parasuraman (George Mason University, Fairfax, Virginia) and Dietrich H. Manzey (Berlin Institute of Technology, Berlin, Germany). Objective: Our aim was to review empirical studies of complacency and bias in human interaction with automated and decision support systems and provide an integrated theoretical model for their explanation. Background: Automation-related complacency and automation bias have typically been considered separately and independently. Methods: Studies on complacency and automation bias were analyzed with respect to the cognitive processes involved. Results: Automation complacency occurs under conditions of multiple-task load, when manual tasks compete with the automated task for the operator's attention. Automation complacency is found in both naive and expert participants and cannot be overcome with simple practice. Automation bias results in making both omission and commission errors when decision aids are imperfect. Automation bias occurs in both naive and expert participants, cannot be prevented by training or instructions, and can affect decision making in individuals as well as in teams. From the introduction: Human interaction with automated and decision support systems constitutes an important area of inquiry in human factors and ergonomics (Bainbridge, 1983; Lee & Seppelt, 2009; Mosier, 2002; Parasuraman, 2000; R. Parasuraman, Sheridan, & Wickens, 2000; Rasmussen, 1986; Sheridan, 2002; Wiener & Curry, 1980; Woods, 1996). Research has shown that automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers of automation; moreover, instances of misuse and disuse of automation are common (R. Parasuraman & Riley, 1997). Thus, the benefits anticipated by designers and policy makers when implementing automation - increased efficiency, improved safety, enhanced flexibility of operations, lower …
  • Why Do Humans Reason? Arguments for an Argumentative Theory
    Hugo Mercier (University of Pennsylvania) and Dan Sperber. Behavioral and Brain Sciences, 34(2), 57-74, April 2011. doi:10.1017/S0140525X10000968. Abstract: Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative: it is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views.
  • Fondamentaux & Domaines
    September 2020. Marie Lechner & Yves Citton, Angles morts du numérique ubiquitaire (Blind Spots of the Ubiquitous Digital), selected readings, volume 2: Fundamentals & Domains. Contents, Fundamentals: Mike Ananny, Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness (Science, Technology, & Human Values, 2015, p. 1-25); Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired, June 23, 2008); Mark Andrejevic, The Droning of Experience (FibreCulture Journal, FCJ-187, no. 25, 2015); Franco 'Bifo' Berardi, Concatenation, Conjunction, and Connection (introduction to AND. A Phenomenology of the End, New York, Semiotexte, 2015); Tega Brain, The Environment Is Not a System (Aprja, 2019, http://www.aprja.net/the-environment-is-not-a-system/); Lisa Gitelman and Virginia Jackson, introduction to Raw Data Is an Oxymoron (MIT Press, 2013); Orit Halpern, Robert Mitchell, and Bernard Dionysius Geoghegan, The Smartness Mandate: Notes toward a Critique (Grey Room, no. 68, 2017, pp. 106-129); Safiya Umoja Noble, The Power of Algorithms, introduction to Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018); Mimi Onuoha, Notes on Algorithmic Violence (February 2018, github.com/MimiOnuoha/On-Algorithmic-Violence); Matteo Pasquinelli, Anomaly Detection: The Mathematization of the Abnormal in the Metadata Society (2015, matteopasquinelli.com/anomaly-detection); Iyad Rahwan et al., Machine Behavior (Nature, no. 568, 25 April 2019, p. 477 sq.). Domains: Ingrid Burrington, The Location of Justice: Systems. Policing Is an Information Business (Urban Omnibus, June 20, 2018); Kate Crawford, Regulate Facial-Recognition Technology (Nature, no. 572, 29 August 2019, p. 565); Sidney Fussell, How an Attempt at Correcting Bias in Tech Goes Wrong (The Atlantic, Oct 9, 2019).
  • Use, Misuse, Disuse, Abuse
    Raja Parasuraman (Catholic University of America, Washington, D.C.) and Victor Riley (Honeywell Technology Center, Minneapolis, Minnesota). Human Factors, 1997, 39(2), 230-253. doi:10.1518/001872097778543886. This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases.
  • An Interdisciplinary Review of the Experimental Evidence on Human-Machine Interaction
    Marina Chugunova and Daniela Sele. Center for Law & Economics Working Paper Series, no. 12/2020, ETH Zurich, August 2020. https://doi.org/10.3929/ethz-b-000442053 (In Copyright - Non-Commercial Use Permitted). Abstract: Today, humans interact with technology frequently and in a variety of settings. Their behavior in these interactions has attracted considerable research interest across several fields, with sometimes little exchange among them and seemingly inconsistent findings. Here, we review over 110 experimental studies on human-machine interaction. We synthesize the evidence from different disciplines, suggest ways to reconcile inconsistencies, and elaborate on political and societal implications. The reviewed studies show that people react to automated agents differently than to humans: they behave more rationally, and are less prone to emotional and social responses. We show that there are several factors which systematically impact the willingness to accept automated decisions: task context, performance expectations, and the distribution of decision authority. That is, humans seem willing to (over-)rely on algorithmic support, yet averse to fully ceding their decision authority.
  • Exogenous Cognition and Cognitive State Theory: The Plexus of Consumer Analytics and Decision-Making
    Andrew Smith (contact author), John Harvey, James Goulding, and Gavin Smith (N/LAB, Nottingham University Business School), and Leigh Sparks (Institute for Retail Studies, University of Stirling). Abstract: We develop the concept of exogenous cognition (ExC) as a specific manifestation of an external cognitive system (ECS). Exogenous cognition describes the technological and algorithmic extension of (and annexation of) cognition in a consumption context. ExC provides a framework to enhance understanding of the impact of pervasive computing and smart technology on consumer decision-making and the behavioural impacts of consumer analytics. To this end, the paper provides commentary and structures to outline the impact of ExC and to elaborate the definition and reach of ExC. The logic of ExC culminates in a theory of cognitive states comprising three potential decision states: endogenous cognition (EnC), symbiotic cognition (SymC), and surrogate cognition (SurC). These states are posited as transient (consumers might move between them during a purchase episode) and determined by individual propensities and situational antecedents. The paper latterly provides various potential empirical avenues and issues for consideration and debate. Key words: Consumer, Decision-making, Exogenous Cognition, Analytics, Marketing, Big Data, Algorithmic. Acknowledgements: This research was supported by the EPSRC projects 'Neo-demographics: Opening Developing World Markets by Using Personal Data and Collaboration' (EP/L021080/1) and 'From Human Data to Personal Experience' (EP/M02315X/1). From the introduction: Consumer decision-making often occurs via digitally mediated structures; these are increasingly data driven and oriented around analytics.