Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making

Charvi Rastogi (1,2), Yunfeng Zhang (2), Dennis Wei (2), Kush R. Varshney (2), Amit Dhurandhar (2), Richard Tomsett (3)
(1) Carnegie Mellon University, (2) IBM Research – T. J. Watson Research Center, (3) IBM Research – Europe

arXiv:2010.07938v1 [cs.HC] 15 Oct 2020

Abstract

Several strands of research have aimed to bridge the gap between artificial intelligence (AI) and human decision-makers in AI-assisted decision-making, where humans are the consumers of AI model predictions and the ultimate decision-makers in high-stakes applications. However, people's perception and understanding are often distorted by cognitive biases, such as confirmation bias, anchoring bias, and availability bias, to name a few. In this work, we use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making system and mitigate their negative effects. To this end, we mathematically model cognitive biases and provide a general framework through which researchers and practitioners can understand the interplay between cognitive biases and human-AI accuracy. We then focus on anchoring bias, a bias commonly witnessed in human-AI partnerships. We devise a cognitive science-driven, time-based approach to de-anchoring. A user experiment shows the effectiveness of this approach in human-AI collaborative decision-making. Using the results from this first experiment, we design a time allocation strategy for a resource-constrained setting so as to achieve optimal human-AI collaboration under some assumptions. A second user study shows that our time allocation strategy can effectively debias the human when the AI model has low confidence and is incorrect.

1 Introduction

It is a truth universally acknowledged that a human decision-maker in possession of an AI model must be in want of a collaborative partnership. Recently, we have seen a rapid increase in the deployment of machine learning models (AI) in decision-making systems, where the AI models serve as helpers to human experts in many high-stakes settings. Examples of such tasks can be found in healthcare, financial loans, criminal justice, job recruiting, and fraud monitoring. Specifically, judges use risk assessments to determine criminal sentences, banks use models to manage credit risk, and doctors use image-based ML predictions for diagnosis, to list a few.

The emergence of AI-assisted decision-making in society has raised questions about whether and when we can trust the AI model's decisions. This has spurred research in crucial directions on various aspects of the human-AI interaction, such as interpretable, explainable, and trustworthy machine learning. One of the main objectives of this research is to bridge the "communication gap" between humans and AI. Research in interpretability and explainability aims to develop AI systems that can effectively communicate the reasoning and justification for their decisions to humans. Research in trustworthy ML and trust calibration in collaborative decision-making aims to appropriately enhance humans' trust and thereby their receptiveness to communication from AI systems. However, a key component of human-AI communication that is often sidelined is the human decision-makers themselves. Humans' perception of the communication received from AI is at the core of this communication gap. Research in communication exemplifies the need to model receiver characteristics, thus implying the need to understand and model human cognition in collaborative decision-making.

As a step towards studying human cognition in AI-assisted decision-making, this paper addresses the role of cognitive biases in this setting. Cognitive biases, introduced in the seminal work of Kahneman and Tversky (Tversky and Kahneman, 1974), represent a systematic pattern of deviation from rationality in judgment wherein individuals create their own "subjective reality" from their perception of the input. An individual's perception of reality, not the objective input, may dictate their behavior in the world, thus leading to distorted and inaccurate judgment.

Our first contribution is to propose a "biased Bayesian" framework to reason about cognitive biases in AI-assisted human decision-making. Viewing a rational decision-maker as being Bayes optimal, we discuss how different cognitive biases can be seen as modifying different factors in Bayes' theorem. We illustrate this in Figure 1, based on ideas in (Yeom and Tschantz, 2018). In a collaborative decision-making setting, there are two interactions that may lead to cognitive biases in the perceived space, which represents the human decision-maker. The observed space consists of the feature space and all the information the decision-maker has acquired about the task. The prediction space represents the output generated by the AI model, which could consist of the AI decision, explanation, etc. Through Figure 1, we associate cognitive biases that affect the decision-makers' priors and their perception of the available data with their interaction with the observed space. Confirmation bias, availability bias, the representativeness heuristic, and bias due to selective accessibility of the feature space are mapped to the observed space. On the other hand, anchoring bias and the weak evidence effect are mapped to the prediction space. In this paper, we provide a model for some of these biases using our biased Bayesian framework.
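To make this concrete ahead of the formal development, the display below is a minimal sketch, in illustrative notation of ours rather than the paper's, of how biases from the two spaces can be read as distortions of the factors in Bayes' theorem. Here y is a candidate decision, x the observed features, and ŷ the AI prediction (treated, for simplicity, as conditionally independent of x given y); α and β are hypothetical bias parameters.

    % Rational (Bayes-optimal) decision-maker
    \[ P(y \mid x, \hat{y}) \;\propto\; P(y)\, P(x \mid y)\, P(\hat{y} \mid y) \]

    % Biased decision-maker: each factor may be over- or under-weighted
    \[ P_{\mathrm{b}}(y \mid x, \hat{y}) \;\propto\; P(y)\, P(x \mid y)^{\alpha}\, P(\hat{y} \mid y)^{\beta} \]

Setting α = β = 1 recovers the rational decision-maker; α < 1 under-weights the data likelihood (observed-space biases such as selective accessibility), while β > 1 over-weights the AI output (prediction-space biases such as anchoring).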
[Figure 1: Three constituent spaces to capture different interactions in human-AI collaboration. The interactions of the perceived space, representing the human decision-maker, with the observed space and the prediction space may lead to cognitive biases. Diagram labels: Observed Space (data; prior, likelihood), Prediction Space (perception of AI output), Perceived Space, empirical tests.]

To focus our work, in the remainder of the paper we study anchoring bias. Anchoring bias is often present in AI-assisted decision-making, where the decision-maker forms a skewed perception due to an anchor (the AI decision) presented to them, which limits the exploration of alternative hypotheses. The anchoring-and-adjustment heuristic, studied widely, suggests that when presented with an anchor, people adjust away from the anchor insufficiently. Building on the notion of bounded rationality, previous work (Lieder et al., 2018) attributes the adjustment strategy to a resource-rational policy wherein the insufficient adjustment is a rational trade-off between accuracy and time. Inspired by this, we conduct an experiment with human participants to study whether allocating more resources, in this case time, can alleviate anchoring bias. This bias manifests through the rate of agreement with the AI prediction. Thus, by measuring the rate of agreement with the AI prediction on several different carefully designed trials, we are able to show that time is indeed a significant resource that helps the decision-maker sufficiently adjust away from the anchor when needed.

Given the findings of our first experiment, we formulate a resource allocation problem that accounts for anchoring bias to maximize human-AI collaborative accuracy. The time allocation problem that we study is motivated by real-world AI-assisted decision-making settings. Adaptively determining the time allocated to a particular instance for the best possible judgment can be very useful in multiple applications. For example, consider a (procurement) fraud monitoring system deployed in multinational corporations, which analyzes and flags high-risk invoices (Dhurandhar et al., 2015). Given the scale of these systems, which typically analyze tens of thousands of invoices from many different geographies daily, the number of invoices that may be flagged, even if a small fraction, can easily overwhelm the team of experts validating them. In such scenarios, spending a lot of time on each invoice is not admissible. An adaptive scheme that takes into account the biases of the human and the expected accuracy of the AI model would be highly desirable to produce the most objective decisions. This work would also be useful in the other aforementioned domains, such as criminal proceedings, where judges often have to look over many different case documents and make quick decisions.

As a solution to the time allocation problem above, we propose a simple allocation policy based on AI confidence and identify assumptions under which the policy is theoretically optimal. We conduct a second experiment to evaluate this policy in comparison to baseline policies as well as human-only and AI-only performance. Our results show that while the overall performance of all policies considered is roughly the same, our policy helps the participants adjust away from the AI prediction more when the AI is incorrect and has low confidence.
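A rough sketch of such a confidence-based allocation follows; it is ours and purely illustrative (the function name, the 0.7 threshold, and the fixed 3:1 low/high time ratio are hypothetical choices, not the paper's formulation). It encodes the finding above, that time aids de-anchoring, by budgeting more of a fixed time budget to the instances where the AI is least reliable.

    # Illustrative confidence-based time allocation (hypothetical parameters).
    # Assumption: anchoring is most harmful when the AI is wrong, and the AI
    # is wrong more often when its reported confidence is low, so those
    # instances receive a larger share of the fixed time budget.
    def allocate_time(confidences, total_budget, threshold=0.7, ratio=3.0):
        """Split total_budget across instances; each low-confidence instance
        (confidence < threshold) gets ratio times the time of a high-confidence one."""
        n_low = sum(c < threshold for c in confidences)
        n_high = len(confidences) - n_low
        # Solve n_low * (ratio * t) + n_high * t = total_budget for t.
        unit = total_budget / (ratio * n_low + n_high) if confidences else 0.0
        return [ratio * unit if c < threshold else unit for c in confidences]

    # Example: four flagged invoices, 60 seconds of reviewer time in total.
    print(allocate_time([0.95, 0.55, 0.88, 0.62], total_budget=60))
    # -> [7.5, 22.5, 7.5, 22.5]: the low-confidence cases get 3x the time.

A natural point of comparison for such a scheme is uniform allocation, which spends the same time on every instance regardless of the AI's confidence.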
In summary, we make the following contributions:

• We model cognitive biases in AI-assisted decision-making with a biased Bayesian framework and situate well-known cognitive biases within it, based on their sources.

• Focusing on anchoring bias, we show with human participants that allocating more time to a decision reduces anchoring.

• We formulate a time allocation problem to account for the anchoring-and-adjustment heuristic and maximize human-AI accuracy.

• We propose a confidence-based allocation policy and identify conditions under which it achieves optimal team performance.

• We evaluate the real-world effectiveness of the policy

strategies to address the problem of blind agreement