Automation and Accountability in Decision Support System Interface Design


Mary L. Cummings

Abstract

When the human element is introduced into decision support system design, entirely new layers of social and ethical issues emerge but are not always recognized as such. This paper discusses those ethical and social impact issues specific to decision support systems and highlights areas that interface designers should consider during design, with an emphasis on military applications. Because of the inherent complexity of socio-technical systems, decision support systems are particularly vulnerable to certain potential ethical pitfalls that encompass automation and accountability issues. If computer systems diminish a user’s sense of moral agency and responsibility, an erosion of accountability could result. In addition, these problems are exacerbated when an interface is perceived as a legitimate authority. I argue that when developing human-computer interfaces for decision support systems that have the ability to harm people, the possibility exists that a moral buffer, a form of psychological distancing, is created which allows people to ethically distance themselves from their actions.

Introduction

Understanding the impact of ethical and social dimensions in design is a topic that is receiving increasing attention both in academia and in practice. Designers of decision support systems (DSS’s) embedded in computer interfaces have a number of additional ethical responsibilities beyond those of designers who only interact with the mechanical or physical world. When the human element is introduced into decision and control processes, entirely new layers of social and ethical issues (to include moral responsibility) emerge but are not always recognized as such. Ethical and social impact issues can arise during all phases of design, and identifying and addressing these issues as early as possible can help the designer both to analyze the domain more comprehensively and to suggest specific design guidance. This paper discusses those accountability issues specific to DSS’s that result from introducing automation and highlights areas that interface designers should take into consideration.

If a DSS is faulty or fails to take into account a critical social impact factor, the results will not only be expensive in terms of later redesigns and lost productivity, but possibly also the loss of life. Unfortunately, history is replete with examples of how failures to adequately understand decision support problems inherent in complex sociotechnical domains can lead to catastrophe. For example, in 1988 the USS Vincennes, a U.S. Navy warship, accidentally shot down a commercial Iranian passenger airliner due to a poorly designed weapons control computer interface, killing all aboard. The accident investigation revealed that nothing was wrong with the system software or hardware, but that the accident was caused by an inadequate and overly complex display of information to the controllers (van den Hoven, 1994). Specifically, one of the primary factors leading to the decision to shoot down the airliner was the controllers’ perception that the airliner was descending towards the ship, when in fact it was climbing away from the ship. The display tracking the airliner was poorly designed and did not include the rate of target altitude change, which required controllers to “compare data taken at different times and make the calculation in their heads, on scratch pads, or on a calculator – and all this during combat” (Lerner, 1989).
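The quantity the Vincennes display omitted is a simple derived value: the sign and rate of the target’s altitude change between successive reports. A minimal sketch of that calculation, the one controllers were left to do in their heads or on scratch pads, is shown below; the TrackReturn record, its field names, and the 100 ft/min trend threshold are hypothetical choices made only to illustrate how small a computation the display would have needed to perform on the operators’ behalf.

```python
from dataclasses import dataclass

@dataclass
class TrackReturn:
    """One timestamped altitude report for a tracked aircraft (hypothetical format)."""
    time_s: float       # time of the report, in seconds
    altitude_ft: float  # reported altitude, in feet

def altitude_rate_fpm(earlier: TrackReturn, later: TrackReturn) -> float:
    """Rate of altitude change, in feet per minute, between two reports."""
    dt_min = (later.time_s - earlier.time_s) / 60.0
    return (later.altitude_ft - earlier.altitude_ft) / dt_min

def vertical_trend(rate_fpm: float, threshold_fpm: float = 100.0) -> str:
    """Label the trend so the operator never has to do the arithmetic under fire."""
    if rate_fpm > threshold_fpm:
        return "CLIMBING"
    if rate_fpm < -threshold_fpm:
        return "DESCENDING"
    return "LEVEL"

# Two successive reports 30 seconds apart: the aircraft is gaining altitude.
r1 = TrackReturn(time_s=0.0, altitude_ft=7000.0)
r2 = TrackReturn(time_s=30.0, altitude_ft=9000.0)
rate = altitude_rate_fpm(r1, r2)
print(f"{rate:+.0f} ft/min -> {vertical_trend(rate)}")  # +4000 ft/min -> CLIMBING
```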
This lack of understanding of the need for human-centered interface design was repeated by the military in the 2003 war with Iraq, when the U.S. Army’s Patriot missile system engaged in fratricide, shooting down a British Tornado and an American F/A-18 and killing three pilots. The displays were confusing and often incorrect, and the operators, who were given only ten seconds to veto a computer solution, were admittedly lacking training in a highly complex management-by-exception system (32nd Army Air and Missile Defense Command, 2003).
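The management-by-exception role the Patriot operators were given can be made concrete with a short sketch: the automation acts on its own engagement recommendation unless the operator vetoes it within a fixed window. The ManagementByException class, its veto() method, and the ten-second default below are illustrative assumptions, not the Patriot system’s actual control logic.

```python
import threading

class ManagementByException:
    """Automation acts on its own recommendation unless vetoed in time (illustrative only)."""

    def __init__(self, veto_window_s: float = 10.0):
        self.veto_window_s = veto_window_s
        self._vetoed = threading.Event()

    def veto(self) -> None:
        """Called by the operator to reject the pending recommendation."""
        self._vetoed.set()

    def propose(self, recommendation: str) -> str:
        """Announce a recommendation, wait out the veto window, then act by default."""
        print(f"AUTOMATION RECOMMENDS: {recommendation} "
              f"(operator has {self.veto_window_s:.0f} s to veto)")
        vetoed = self._vetoed.wait(timeout=self.veto_window_s)
        self._vetoed.clear()
        return "HELD BY OPERATOR" if vetoed else f"EXECUTED: {recommendation}"

# With no veto arriving, the default outcome is the automation's own solution.
mbe = ManagementByException(veto_window_s=10.0)
print(mbe.propose("ENGAGE TRACK"))
```

In such a loop the automation’s solution is the default outcome; exercising human judgment requires the operator to notice, evaluate, and act within the window, which is precisely what confusing displays and limited training make difficult.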
In both the USS Vincennes and Patriot missile cases, interface designers could say that usability was the core problem, but the problem is much deeper and more complex. While the manifestation of poor design decisions led to severe usability issues in these cases, there are underlying issues concerning responsibility, accountability, and social impact that deserve further analysis.

Beyond simply examining usability issues, there are many facets of decision support system design that have significant social and ethical implications, although often these can be subtle. The interaction between cognitive limitations, system capabilities, and ethical and social impact cannot be easily quantified using formulas and mathematical models. Often what may seem to be a straightforward design decision can carry with it ethical implications that may go unnoticed. One such design consideration is the degree of automation used in a decision support system. While the introduction of automation may seemingly be a technical issue, it is indeed one that has tremendous social and ethical implications that may not be fully understood in the design process. It is critical that interface designers realize the inclusion of degrees of automation is not merely a technical issue, but one that also contains social and ethical implications.

Automation in Decision Support Systems

In general, automation does not replace the need for humans; rather it changes the nature of the work of humans (Parasuraman & Riley, 1997). One of the primary design dilemmas engineers and designers face is determining what level of automation should be introduced into a system that requires human intervention. For rigid tasks that require no flexibility in decision-making and with a low probability of system failure, full automation often provides the best solution (Endsley & Kaber, 1999). However, in systems like those that deal with decision-making in dynamic environments with many external and changing constraints, higher levels of automation are not advisable because of the risks and the inability of an automated decision aid to be perfectly reliable (Sarter & Schroeder, 2001).

Various levels of automation can be introduced in decision support systems, from fully automated, where the operator is completely left out of the decision process, to minimal levels of automation, where the automation only presents the relevant data. The application of automation for decision support systems is effective when decisions can be accurately and quickly reached based on a correct and comprehensive algorithm that considers all known constraints. However, the inability of automation models to account for all potential conditions or relevant factors results in brittle decision algorithms, which possibly make erroneous or misleading suggestions (Guerlain et al., 1996; Smith, McCoy, & Layton, 1997). The unpredictability of future situations and unanticipated responses from both systems and human operators, what Parasuraman et al. (2000) term the “noisiness” of the world, makes it impossible for any automation algorithm to always provide the correct response. In addition, as in the USS Vincennes and Patriot missile examples, automated solutions and recommendations can be confusing or misleading, causing operators to make suboptimal decisions, which in the case of a weapons control interface can be lethal.
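The design space described in the preceding paragraphs can be sketched as a coarse scale of automation levels together with a gating rule for consequential actions. The four-level AutomationLevel enumeration and the requires_human_consent rule below are a simplified illustration loosely inspired by the multi-level taxonomies in the automation literature (e.g., Parasuraman et al., 2000); they are not a published scale.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Coarse, illustrative scale; published taxonomies use finer gradations."""
    PRESENT_DATA_ONLY = 1     # automation filters and presents data, the human decides
    SUGGEST_ALTERNATIVES = 2  # automation recommends, the human selects and executes
    VETO_WINDOW = 3           # automation executes unless the human vetoes in time
    FULLY_AUTONOMOUS = 4      # the human is left out of the decision loop entirely

def requires_human_consent(level: AutomationLevel, reversible: bool) -> bool:
    """Illustrative design rule: irreversible actions (e.g., weapons release) at or
    above the veto-window level should not proceed without explicit human consent."""
    if reversible:
        return False
    return level >= AutomationLevel.VETO_WINDOW

# A weapons-release decision is irreversible, so an action-by-default design
# still warrants positive confirmation from the operator.
print(requires_human_consent(AutomationLevel.VETO_WINDOW, reversible=False))          # True
print(requires_human_consent(AutomationLevel.SUGGEST_ALTERNATIVES, reversible=True))  # False
```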
In addition to problems with automation brittleness, significant research has shown that there are many drawbacks to higher levels of automation that relegate the operator to a primarily monitoring role. Parasuraman (2000) contends that over-automation causes skill degradation, reduced situational awareness, unbalanced workload, and an over-reliance on automation. There have been many incidents in other domains, such as nuclear power plants and medical device applications, where confusing automation representations have led to lethal consequences. For example, in perhaps one of the most well-known engineering accidents in the United States, the 1979 cooling malfunction of one of the Three Mile Island nuclear reactors, problems with information representation in the control room and human cognitive limitations were primary contributors to the accident. Automation of system components and their subsequent representation on the instrument panels were overly complex and overwhelmed the controllers with information that was difficult to synthesize, misleading, and confusing (NRC, 2004).

The medical domain is replete with examples of problematic interfaces and ethical dilemmas. For example, in the Therac-25 cases that occurred between 1985 and 1987, it was discovered too late for several patients that the human-computer interface for the Therac-25, which was designed for cancer radiation therapy, was poorly designed. It was possible for a technician to enter erroneous data, correct it on the display so that the data appeared accurate, and then begin radiation treatments unknowingly with lethal levels of radiation. Other than an ambiguous “Malfunction 54” error code, there was no indication that the machine was delivering fatal doses of radiation (Leveson & Turner, 1995).

Many researchers assert that keeping the operator engaged in decisions supported by automation, otherwise known as the human-centered approach to the application of automation, will help to prevent confusion and erroneous decisions which could cause potentially fatal problems (Billings, 1997; Parasuraman, Masalonis, & Hancock, 2000; Parasuraman & Riley, 1997). One pitfall that even human-centered design does not eliminate is automation bias, the tendency to accept a computer-generated solution as correct without searching for contradictory information (Mosier & Skitka, 1996; Parasuraman & Riley, 1997). Automation bias is particularly problematic when intelligent decision support is needed in large problem spaces with time pressure, as in command and control domains such as emergency path planning and resource allocation (Cummings, 2004). Moreover, automated decision aids designed to reduce …