
Review

Automation bias: a systematic review of frequency, effect mediators, and mitigators

Kate Goddard,1 Abdul Roudsari,1,2 Jeremy C Wyatt3

1Centre for Health Informatics, City University, London, UK
2School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada
3Institute of Digital Healthcare, University of Warwick, UK

Correspondence to: Kate Goddard, Centre for Health Informatics, City University, Northampton Square, London EC1V 0HB, UK; [email protected]

Additional materials are published online only. To view these files please visit the journal online (www.jamia.org/content/19/1.toc).

Received 17 December 2010; Accepted 17 May 2011; Published Online First 16 June 2011
J Am Med Inform Assoc 2012;19:121-127. doi:10.1136/amiajnl-2011-000089

ABSTRACT
Automation bias (AB), the tendency to over-rely on automation, has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with their use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject-specific and free-text terms around the themes of automation, human-automation interaction, and task performance and error was used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, decision support system (DSS) experience, and task-specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which pressurized cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.

BACKGROUND
Clinical decision support systems (CDSS) have great potential to improve clinical decisions, actions, and patient outcomes1 2 by providing advice and filtered or enhanced information, or by presenting prompts or alerts to the user. However, most studies of CDSS have emphasized the accuracy of the computer system, without placing clinicians in the role of direct users. Although most decision support systems (DSS) are 80-90% accurate, it is known that the occasional incorrect advice they give may tempt users to reverse a correct decision they have already made, and thus introduce errors of over-reliance.3 These errors can be classified as automation bias (AB),4 by which users tend to over-accept computer output 'as a heuristic replacement of vigilant information seeking and processing.'5 AB manifests in errors of commission (following incorrect advice) and omission (failing to act because of not being prompted to do so) when using CDSS. Indeed, the implementation of interventions such as CDSS can actually introduce new errors.6
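To make the distinction between these two error types concrete, the sketch below classifies a single decision episode. It assumes a hypothetical, simplified record of the clinician's unaided decision, the DSS advice, and the final action, compared against a known correct answer; the field names and logic are illustrative only and are not drawn from any of the reviewed studies.

# Illustrative sketch only: classifying one decision episode into the AB error
# types described above, under a simplified hypothetical data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionEpisode:
    correct_action: str          # ground-truth correct decision for the case
    initial_decision: str        # clinician's decision before seeing the DSS
    dss_advice: Optional[str]    # advice shown by the DSS (None = no prompt given)
    final_decision: str          # decision after any DSS advice

def classify_ab_error(ep: DecisionEpisode) -> str:
    """Label an episode as a commission error, an omission error, or neither."""
    if ep.final_decision == ep.correct_action:
        return "no AB error"
    # Commission: the user followed incorrect DSS advice.
    if ep.dss_advice is not None and ep.final_decision == ep.dss_advice:
        return "commission error (followed incorrect advice)"
    # Omission: the user missed the correct action because the DSS gave no prompt.
    if ep.dss_advice is None:
        return "omission error (not prompted, correct action missed)"
    return "error not attributable to the DSS"

# Example: a correct initial decision reversed to match incorrect DSS advice.
print(classify_ab_error(DecisionEpisode("drug A", "drug A", "drug B", "drug B")))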
However, AB has not previously been properly defined or examined; its negative effects need investigation, particularly in the healthcare field, where decision errors can have severe consequences. It is currently unclear how frequently AB occurs and what the risk factors are, so a systematic review of the current literature was carried out to clarify the nature of AB.

Automation bias and complacency
Previous investigation of AB has primarily focused on the aviation sector. The examination of human factors involved in healthcare systems is more recent, and no systematic reviews of the reliance (over-reliance in particular) phenomenon relating to healthcare have been identified so far. In a study examining the enhancement of clinical decision-making by the use of a computerized diagnostic system, Friedman et al7 noticed that in 6% of cases clinicians over-rode their own correct decisions in favor of erroneous advice from the DSS. Despite the high risk associated with medication error, there is little direct research into over-reliance on technology in the healthcare and CDSS fields.

The literature has often looked solely at overall clinical or DSS accuracy and clinical outcomes, without investigating the etiology and types of errors. More recent papers have started to examine the human factors relevant to appropriate design and use of automation in general, for example trust8 and other social, cognitive, and motivational factors. As the concept is relatively new and undefined, a number of synonyms have been used in the literature to describe AB, including automation-induced complacency,9 over-reliance on automation, and confirmation bias.10

In a recent literature review, Parasuraman et al11 discuss AB and automation-induced complacency as overlapping concepts reflecting the same kind of automation misuse associated with misplaced attention: either an attentional bias toward DSS output, or insufficient attention to and monitoring of automation output (particularly with automation deemed reliable). Parasuraman et al note that commission and omission errors result from both AB and complacency (although they mention that commission errors are more strictly the domain of AB). There is a lack of consensus over the definition of complacency. Complacency appears to occur as an attention allocation strategy in multitasking, where manual tasks are attended to at the expense of monitoring the veracity of automation output. Automation bias can also be found outside of multitask situations, and occurs when there is an active bias toward the DSS in decision-making.

Although the focus of this review is on AB, owing to the theoretical overlap and the vagueness of current definitions it also includes papers which imply complacency effects, as these likewise bear on the literature on misuse of, or over-reliance on, automation. Similar outcomes in terms of commission or omission may mean that one effect is conflated with or mistaken for the other. Studies relating to AB are identified and examined separately from those on automation complacency.

REVIEW AIM AND OBJECTIVES
The overall aim is to systematically review the literature on DSS and AB, particularly in the field of healthcare. The specific review objectives are to answer the following questions:
- What is the rate of AB and what is the size of the problem?
- What are the mediators of AB?
- Is there a way to mitigate AB?

PRISMA methodology12 was used to select articles, and involved identification, screening, appraisal for eligibility, and qualitative and quantitative assessment of the final papers.

REVIEW METHODS
Sources of studies
The main concepts surrounding the subject of AB are the DSS intervention, the DSS-human interaction, and task performance and error generation. These were the key themes used in a systematic search of the literature. As initial searches indicated little healthcare-specific evidence, it was decided to include a number of databases and maintain wide parameters for inclusion/exclusion to identify further relevant articles.

The search took place between September 2009 and January 2010 and the following databases were searched: MEDLINE/PubMed, CINAHL, and PsycInfo, in a broad, multi-disciplinary search intended to identify the widest cross-section of papers. All study settings and all papers, irrespective of the level of the user's expertise, were considered. The first search of the databases identified 14 457 research papers (inclusive of duplicates). A purely hypothetical example of how the search themes might be combined into a query is sketched below.
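Purely as an illustration of how subject-specific and free-text terms around the three themes could be combined, the snippet below builds a hypothetical boolean query; the term lists and the query syntax are assumptions for demonstration and do not reproduce the review's actual search strategy.

# Hypothetical illustration: combining free-text terms for the three review
# themes into a boolean query string. Terms and syntax are assumed, not the
# published search strategy.
themes = {
    "automation": ['"decision support"', 'automation', '"expert system"'],
    "human-automation interaction": ['trust', 'reliance', 'complacency'],
    "task performance and error": ['error', 'omission', 'commission', 'bias'],
}

# OR the terms within each theme, then AND the three themes together.
query = " AND ".join("(" + " OR ".join(terms) + ")" for terms in themes.values())
print(query)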
Inclusion criteria
Papers were included which investigated:
- human interaction with automated DSS in any field (eg, aviation, transport, and cognitive psychology), but particularly healthcare;
- empirical data on automation use, especially studies which incorporated a subjective user questionnaire or interview;
- the appropriateness and accuracy of use of DSS.

Particular attention was given to papers explicitly mentioning AB, automation misuse, or over-reliance on automation, or terms such as confirmation bias, automation dependence, or automation complacency.

Outcome measures
Papers were included which reported:
- assessment of user performance (the degree and/or appropriateness of automation use), which included:
  - indicators of DSS advice usage: consistency between DSS advice and decisions, user performance, peripheral behaviors (such as advice verification), or response bias indicators;13
  - indicators of the influence of automation on decision-making, such as pre- and post-advice decision accuracy (eg, negative consultations, where a correct pre-advice decision is changed to an incorrect post-advice decision), DSS versus non-DSS decision accuracy (a higher risk of incorrect decisions when bad advice is given by the DSS than in control group decisions), and correlation between DSS and user accuracy (the relationship between falling DSS accuracy and falling user decision accuracy). A sketch of how such indicators might be computed from case-level data follows this list.
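To make these decision-accuracy indicators concrete, the sketch below computes a negative consultation rate and pre- versus post-advice accuracy from a simplified, entirely hypothetical table of cases; the record format and example values are assumptions for illustration, not data from the reviewed studies.

# Illustrative sketch: hypothetical pre-/post-advice accuracy indicators.
# The case records and values below are invented for demonstration only.
cases = [
    # (pre-advice decision correct?, post-advice decision correct?)
    (True, True),
    (True, False),   # negative consultation: correct decision reversed after advice
    (False, True),   # incorrect decision corrected after advice
    (False, False),
]

pre_accuracy = sum(pre for pre, _ in cases) / len(cases)
post_accuracy = sum(post for _, post in cases) / len(cases)
negative_consultations = sum(1 for pre, post in cases if pre and not post)

print(f"pre-advice accuracy:        {pre_accuracy:.2f}")
print(f"post-advice accuracy:       {post_accuracy:.2f}")
print(f"negative consultation rate: {negative_consultations / len(cases):.2f}")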