
Retention and Transfer of Cognitive Mitigation Interventions: A Systematic Literature Study

J.E. (Hans) Korteling1, Jasmin Y.J. Gerritsma1, and Alexander Toet*1

1 TNO Human Factors, Soesterberg, Netherlands

* Correspondence: Alexander Toet E: [email protected]

Word count: 6428

Abstract
Cognitive biases can adversely affect human judgement and decision making and should therefore preferably be mitigated, so that we can achieve our goals as effectively as possible. Hence, numerous bias mitigation interventions have been developed and evaluated. However, to be effective in practical situations beyond laboratory conditions, the bias mitigation effects of these interventions should be retained over time and should transfer across contexts. This paper provides an overview of the literature on retention and transfer of bias mitigation interventions. A systematic search yielded 41 studies that were eligible for screening. At the end of the selection process, only seven studies remained that adequately studied retention over a period of at least 14 days (six studies) or transfer to different tasks and contexts (two studies). Retention of bias mitigation was found up to 12 weeks after the interventions. Most of the relevant studies investigated the effects of bias mitigation training with specific serious games. Only two studies (the second one being a replication of the first) found evidence for a transfer effect of bias mitigation training across contexts. Our main conclusion is that there is currently insufficient evidence for the retention and transfer of bias mitigation effects to consider the associated interventions as useful tools that can help people to make better decisions in daily life. This is in line with recent theoretical insights about the hard-wired neural and evolutionary origin of cognitive biases.

Keywords: cognitive biases, bias mitigation, retention, transfer of training, teaching, training interventions, neural networks, brain, evolution, systematic literature study

Introduction

People constantly form judgments and make decisions, both consciously and unconsciously, without certainty about their consequences. The decision to take another job, to invest in a company, or to start a relationship is generally made without knowing beforehand how internal and contextual success factors will develop, or what will happen when these decisions are actually carried out. When making these kinds of practical decisions our thinking is characterized by systematic distortions and aberrations, violating the rules of logic and probability. These violations are manifested in all kinds of cognitive biases [1-4]. Cognitive biases can be generally described as systematic and universally occurring tendencies, inclinations, or dispositions that skew or distort decision-making processes in ways that make their outcomes inaccurate, suboptimal, or simply wrong [5, 6]. Biases occur in virtually the same way in many different decision situations [7-9]. They distort our thinking in very specific ways, are largely implicit and unconscious, and feel quite natural and self-evident [8, 10]. That is why they are often termed “intuitive” [11], “irrational” [9], or a-rational [12, 13]. We tend to notice biased reasoning in others more than in ourselves, and we typically feel quite confident about our decisions and judgments, even when we are aware of our cognitive biases and when there is hardly any evidence to support them [10, 14]. This robust, pervasive, and persistent phenomenon is extensively described and demonstrated in the psychological literature [7-9, 15, 16].
Some well-known biases are: Belief bias (the tendency to base the power or relevance of an idea on the credibility of the conclusion instead of on the argument: [17]), Confirmation bias (the tendency to select, interpret, focus on, and remember information in a way that confirms one’s preconceptions, views, and expectations: [18]), Fundamental attribution error (the tendency to overestimate the influence of personality, while underestimating the importance of situational factors, when explaining the behavior of other people: [19, 20]), Hyperbolic discounting (the tendency to prefer a smaller reward that arrives sooner over a larger reward that arrives later: [21]), Outcome bias (the tendency to evaluate a decision based on its outcome rather than on what factors led to the decision: [22]), Representativeness bias (the tendency to judge the likelihood of an entity by the extent to which it ‘resembles the typical case’ instead of by its simple base rate: [23]), the Sunk-cost fallacy (the tendency to consistently continue a chosen course or investment with negative outcomes rather than alter it: [24]), or Social proof (the tendency to copy the actions and opinions of others: [25]).

Biased thinking can result in outcomes that are quite acceptable in everyday situations, especially when the time and processing cost of reasoning is taken into account [26, 27]. This is for example the case under time pressure, when relevant or available information is too extensive or detailed, or when no optimal solution is evident (e.g., “Bounded rationality”; [27]). In these cases, we use pragmatic decision routines (‘heuristics’), characterized by a high ratio of benefits to cost in terms of the quality of the outcomes relative to invested time, effort, and resources [28]. However, biased or heuristic reasoning often leads to outcomes that deviate from what may be considered optimal, advisable, or utile (in relation to our personal objectives).
These deviations are also not random, but very specific and systematic: in a wide range of different conditions, people show the same, typical tendencies in the way they pick up and process information to judge and decide. This applies to (almost) everybody, at all levels and in all parts of society, not only in our daily life, but also in professional institutions like politics, government, business, and media. This may have substantial practical consequences, for example in the context of corporate governance or policymaking, where decision making is often very complex with far-reaching consequences [29-31]. For example, in policymaking the outcomes of a plan can be ‘framed’ in terms of gains, which leads to a preference for risk avoidance, whereas framing the outcomes in terms of losses can lead to risk seeking [6, 32, 33]. This means that people base their decisions on the way a problem is formulated rather than on its mere content. Besides framing, the number of choice alternatives can influence decision making [29]. Decision makers prefer the status quo when the number of alternatives is high (Status quo bias: the tendency to prefer the current state of affairs; [34]). These are just a few practical examples of the many factors that have been shown to systematically affect the choices people make [7].

Mitigating cognitive biases may lead to better decision making on all levels of society, which could substantially promote long-term human wellbeing. Substantial research has already been conducted as to whether and how cognitive biases can be mitigated. Merely teaching knowledge on the existence and nature of cognitive biases has appeared insufficient to mitigate them [35]. Therefore, more elaborate training methods and tools for debiasing have been developed, where people are intensively educated and trained how to mitigate one or more cognitive biases [36]. One method is to ask people to consider why their initial judgments could be wrong.
This strategy is called “consider the opposite” and has been shown to reduce various biases [37, 38]. Most studies investigate the bias mitigating effect just after finishing the training, while using the same type of tasks that were also used during the training [39-41]. However, to be truly effective in real life, the achieved effects of cognitive training should lead to enduring changes in the decision maker’s behavior and should generalize toward other task domains or training contexts [42]. This means that the retention of training should be high, and the bias mitigation effects should last for a longer time than just a brief period immediately after the training. It cannot simply be assumed that an intervention that is effective right after training will still be effective at a later time [43]. In addition, to have practical value, the bias mitigation effects should transfer to real-life decision situations beyond the specific training environment or context. So, people should be able to apply what they have learnt in a bias mitigation training to more than just one specific type of problem or situation. To date, there is no study that systematically reviews and analyzes the retention and transfer effects of bias mitigation interventions. Therefore, the present systematic literature study provides an overview of the available studies on retention and transfer of bias mitigation interventions.

Methods
Databases and systematic search. Four databases were used: Scopus, Web of Science, PubMed, and PsycInfo. For the search, two elements were of interest: debiasing (title/abstract) and retention or transfer (all fields). These elements were adapted to the respective databases. For example, the search string used in Scopus was: (TITLE-ABS-KEY (“bias mitigation” OR debiasing) AND ALL(retention OR transfer)). The field specifications were made to search for papers that deal with retention and transfer within the more general topic of debiasing.

Inclusion and exclusion criteria. Studies were included if they investigated the effectiveness of cognitive bias mitigation interventions. Cognitive biases had to be investigated explicitly, while studies investigating an improvement of performance in a broader sense were excluded [42, 44]. Secondly, either retention of debiasing or transfer had to be investigated. For retention, there had to be at least one measurement of bias at least two weeks after the intervention. This two-week period may be considered sufficient to demonstrate long-lasting training effects [42]. The transfer investigated had to transcend the specific training context or domain (domain independence). Therefore, studies measuring ‘near transfer’, for instance to another set of similar test items measuring cognitive bias, were excluded. Finally, only primary (quasi-)experimental studies were used.

Data collection and analysis. After the systematic search, duplicates were removed, and titles and abstracts were carefully screened to determine if the article addressed cognitive bias mitigation. For the articles meeting this criterion, full texts were obtained to determine if retention and/or far transfer of the bias mitigation intervention was indeed assessed. Finally, relevant data were extracted.
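As an illustration, the adaptation of the two search elements to the different databases can be sketched as follows. Only the Scopus string is quoted verbatim in the text above; the Web of Science and PubMed field codes shown here are illustrative assumptions, not the authors' actual queries.

```python
# Sketch of how the two search elements might be adapted per database.
# Only the Scopus string is given verbatim in the Methods; the other
# field codes are illustrative assumptions.
topic = '"bias mitigation" OR debiasing'  # element 1: title/abstract
scope = 'retention OR transfer'           # element 2: all fields

queries = {
    # Verbatim from the Methods section (Scopus syntax):
    "Scopus": f"TITLE-ABS-KEY({topic}) AND ALL({scope})",
    # Hypothetical adaptations using each database's own field tags:
    "Web of Science": f"(TI=({topic}) OR AB=({topic})) AND ALL=({scope})",
    "PubMed": f'("bias mitigation"[tiab] OR debiasing[tiab]) AND ({scope})',
}

for database, query in queries.items():
    print(f"{database}: {query}")
```

The common topic and scope elements are kept in one place so that any refinement of the search terms propagates to all database-specific strings.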
Results
From the 75 articles returned by the initial search, only five articles remained after selection. Figure 1 illustrates the selection process. Of the 75 initial articles, 34 were duplicates. Of the remaining 41 articles, 25 did not (properly) investigate cognitive bias mitigation: they rather studied some other form of bias in another field (10 articles), investigated the existence of only a particular cognitive bias (4 articles), or were not experimental studies at all (11 articles). Of the 16 subsequently assessed articles, six assessed near transfer, which was not the interest of the present study, and five did not study the transfer or retention of the effects of interventions. The remaining five articles that were included in this study described seven individual studies in total, all of which investigated retention, while only two very similar (replication) studies focused on real transfer. A summary of all studies can be found in Table 1.

[ Figure 1 about here ]

Table 1 Articles on retention and transfer of bias mitigation interventions (one entry per study).

Clegg et al., 2015 [45]
- Retention/transfer: Retention
- Type of intervention: Video game
- Conditions: Game played once, game played twice, training video^a (3 in total)
- Studied biases: Anchoring bias, projection bias, and representativeness bias
- Sample population: Students, n_total = 191, per condition unknown, presumably around 63
- Training duration (minutes): Played once: 60; played twice: 116; video: 30
- Retention interval: 12 weeks
- Measures: Questionnaire of bias elicitation, separate scores for each bias (high score means little bias)
- Internal consistency: Anchoring bias unacceptable (.37 < α < .44), projection bias questionable/acceptable (.62 < α < .91), representativeness bias acceptable/good (.70 < α < .83)
- Comparison: Increase posttest - pretest
- Short-term effects: Anchoring bias: game played once p < .05, d = 0.53; game played twice p < .05, d = 0.72; video p < .05, d = 0.62. Projection bias: game played once p < .05, d = 1.90; game played twice p < .05, d = 2.09; video p < .05, d = 0.43. Representativeness bias: game played once p < .05, d = 1.82; game played twice p < .05, d = 2.18; video p < .05, d = 1.84
- Long-term effects: Anchoring bias: game played once p < .05, d = 0.33; game played twice p < .05, d = 0.72; video p < .05, d = 0.29. Projection bias: game played once p < .05, d = 0.94; game played twice p < .05, d = 1.67; video p > .05, d = 0.17

Rhodes et al., 2017 [46], Study 1
- Retention/transfer: Retention
- Type of intervention: Video game
- Conditions: Missing, Heuristica, Cycles, Enemy of Reason, Macbeth, related video^a, and unrelated video^b (7 in total)
- Studied biases: Anchoring bias, projection bias, representativeness bias, bias blind spot, and confirmation bias
- Sample population: Students, n_total = 301, per condition 23 < n < 51
- Training duration (minutes): Missing: 90; Heuristica: 90; Cycles: 30; Enemy of Reason: 60; Macbeth: 60; related video: 30; unrelated video: 30
- Retention interval: 8 weeks
- Measures: Questionnaire of bias elicitation (‘Assessment of Biases in Cognition’, developed by [50]); sum score of all biases used for analysis (high score means low bias)
- Internal consistency: Acceptable (.73 < α < .76)
- Comparison: Increase (posttest - pretest) compared to related and unrelated video
- Short-term effects: In all conditions except the unrelated video, scores increased (ps < .01); all games > unrelated video (ps < .05, ds = 0.44-1.18); Enemy of Reason and Missing > related video (ps < .05, ds = 0.37 and 0.54, respectively); Cycles marginally better than related video (p = .05, d = 0.46)
- Long-term effects: Did not decline significantly compared to short term. All games except Macbeth > unrelated video (ps < .05, ds = 0.40-0.57); all except Macbeth > related video (ps < .01, ds = 0.49-0.68)

Rhodes et al., 2017 [46], Study 2
- Retention/transfer: Retention
- Type of intervention: Video game
- Conditions: Missing, Heuristica, Cycles, related video^a, and unrelated video^b (5 in total)
- Studied biases: Anchoring bias, projection bias, representativeness bias, bias blind spot, and confirmation bias
- Sample population: Students, n_total = 325, per condition 35 < n < 64
- Training duration (minutes): Missing: 60; Heuristica: 70; Cycles: 60; related video: 30; unrelated video: 30
- Retention interval: 12 weeks
- Measures: As Study 1
- Internal consistency: Acceptable (.73 < α < .76)
- Comparison: Increase (posttest - pretest) compared to related and unrelated video
- Short-term effects: In all conditions except the unrelated video, scores increased (ps < .01); all games > unrelated video (ps < .001, ds = 0.95-1.56); Cycles and Missing > related video (ps < .01, ds = 0.71 and 0.50, respectively)
- Long-term effects: Slight decline compared to short term, but did not return to pretest level. Only Cycles > unrelated video; no game > related video (ps > .20)

Morewedge et al., 2015 [47], Study 1
- Retention/transfer: Retention
- Type of intervention: Video game
- Conditions: Missing: The Pursuit of Terry Hughes, and related video^a (2 in total)
- Studied biases: Bias blind spot, confirmation bias, and fundamental attribution error (correspondence bias)
- Sample population: Inhabitants of Pittsburgh, most had some college education; n_total = 196, n_video = 66, n_game = 130
- Training duration (minutes): Game: unknown; video: 30
- Retention interval: 8 weeks
- Measures: Questionnaire of bias elicitation, separate scores for each bias (high score means little bias)
- Internal consistency: Bias blind spot acceptable (.76 < α < .77), confirmation bias acceptable (.73 < α < .76), fundamental attribution error questionable to acceptable (.68 < α < .78)
- Comparison: Increase posttest - pretest
- Short-term effects: Both conditions were effective at reducing all three biases (p < .001); the game was more effective than the video for each bias (p < .001). Bias blind spot: d_game = 0.98, d_video = 0.49; fundamental attribution error: d_game = 1.12, d_video = 0.38; confirmation bias: d_game = 1.09, d_video = 0.38
- Long-term effects: Both conditions were effective at reducing all three biases (p < .001); the game was more effective than the video for bias blind spot (t(243) > 2.79, p < .01, d = 0.60) and the other biases (p < .05). Bias blind spot: d_game = 0.89, d_video = 0.49

Morewedge et al., 2015 [47], Study 2
- Retention/transfer: Retention
- Type of intervention: Video game
- Conditions: Missing: The Final Secret, and related video^a (2 in total)
- Studied biases: Anchoring bias, representativeness bias, and social projection
- Sample population: Inhabitants of Pittsburgh, most had some college education; n_total = 192, n_video = 66, n_game = 126
- Training duration (minutes): Game: unknown; video: 30
- Retention interval: 12 weeks
- Measures: Questionnaire of bias elicitation, separate scores for each bias (high score means little bias)
- Internal consistency: Anchoring bias poor to questionable (.52 < α < .60), representativeness bias acceptable to good (.70 < α < .83), social projection questionable to acceptable (.63 < α < .78)
- Comparison: Increase posttest - pretest
- Short-term effects: Both conditions were effective at reducing all three biases (p < .001); the game was more effective in reducing projection and representativeness (p < .001), but not anchoring (ps > .62). Anchoring: d_game = 0.70, d_video = 0.80; projection: d_game = 1.11, d_video = 0.49; representativeness: d_game = 1.51, d_video = 1.81
- Long-term effects: Both conditions were effective at reducing all three biases (p < .001); the game was more effective in reducing projection (p < .001), but not anchoring (ps > .62) or representativeness (p = .37). Anchoring: d_game = 0.63, d_video = 0.66

Sellier et al., 2019 [48]
- Retention/transfer: Retention & transfer
- Type of intervention: Video game
- Conditions: Missing: The Pursuit of Terry Hughes (trained) vs. no training (2 in total)
- Studied biases: Confirmation bias (bias blind spot and correspondence bias as manipulation checks)
- Sample population: Students, n_total = 290, n_game = 182, n_untrained = 108
- Training duration (minutes): Missing: 80-100
- Retention interval: Between 1-7 weeks (average lag: 18 days)
- Measures: Retention: questionnaires of bias blind spot and correspondence bias. Retention & transfer: choice of hypothesis-confirming case solutions
- Internal consistency: Unknown
- Comparison: Retention: decrease of debiasing effect, long-lag vs. short-lag groups. Transfer: difference between untrained and trained (gaming) groups
- Short-term effects: Playing the game effectively reduced all three biases (p < .001). Bias blind spot: d_game = 0.53; correspondence bias: d_game = 0.63; confirmation bias: p < .05 (59% vs. 72%)
- Long-term effects: Reduction of confirmation bias by playing the game once did not differ significantly between 11 days (short lag) and about 6-7 weeks (long lag), p = .42

Aczel et al., 2015 [49]
- Retention/transfer: Retention & transfer
- Type of intervention: Training
- Conditions: Awareness training for bias group 1 and analogical training for bias group 2; awareness training for bias group 2 and analogical training for bias group 1; no training (3 in total)
- Studied biases: Group 1: outcome bias, sunk-cost fallacy, and the ‘statistical biases’ (insensitivity to sample size, base rate neglect, regression to the mean, and covariation detection). Group 2: framing effect, anchoring bias, and overconfidence bias
- Sample population: Students, n_total = 154, n_awareness = 50, n_analogical = 50, n_control = 54
- Training duration (minutes): Awareness + analogical training: 180
- Retention interval: 4 weeks
- Measures: Retention: questionnaire of bias elicitation, separate score for each bias and sum score of ‘statistical biases’ (high score means little bias). Transfer: one question asking if participants remember having decided differently due to the training
- Internal consistency: Unknown
- Comparison: Retention: increase posttest - pretest. Transfer: one measurement, no comparison/statistical testing
- Short-term effects: Not tested
- Long-term effects: Only the ‘statistical biases’ could be reduced by training (p = .02, d_control = -0.20, d_awareness = 0.17, d_analogical = 0.45); awareness did not differ from control (p = .14), analogical > control (p = .007), awareness did not differ from analogical (p = .21); the individual biases did not significantly decrease after training

^a Video teaching the same cognitive biases as the game(s). ^b Video not teaching any biases.
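The short- and long-term effects in Table 1 are reported as Cohen's d. A minimal sketch of how such a pretest-posttest effect size is computed (with made-up scores for illustration, not data from the reviewed studies; the individual studies may have used repeated-measures variants of d):

```python
import statistics

def cohens_d(pre, post):
    """Cohen's d for a pretest-posttest contrast: mean change divided by
    the pooled standard deviation of the two measurements."""
    mean_diff = statistics.mean(post) - statistics.mean(pre)
    pooled_sd = ((statistics.stdev(pre) ** 2
                  + statistics.stdev(post) ** 2) / 2) ** 0.5
    return mean_diff / pooled_sd

# Made-up bias-elicitation scores (higher = less biased, as in Table 1):
pretest = [3.0, 4.0, 2.5, 3.5, 4.5, 3.0]
posttest = [4.0, 4.5, 3.5, 4.5, 5.0, 4.0]
print(round(cohens_d(pretest, posttest), 2))
```

By the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), many of the game effects in Table 1 (e.g., d = 0.53 up to d = 2.18) range from medium to very large.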

Base rate neglect “the tendency of people to neglect statistical base rate information when making decisions” [52, 53]

Bias blind spot “perceiving oneself to be less biased than one’s peers” [14, 54]

Confirmation bias “gathering and interpreting evidence in a manner confirming rather than disconfirming the hypothesis being tested” [18]

Correspondence bias, a.k.a. Fundamental attribution error “tendency to make correspondent inferences” or “neglect of external demands” [55]; “attributing the behavior of a person to dispositional rather than to situational influences” [19, 20]

Covariation detection “how people judge whether a component has an effect, with or without taking into account the other elements of the contingency table” [56]

Framing effect “the tendency of people to decide differently when the same information is worded differently” [6, 32]

Insensitivity to sample size “people’s tendency to disregard the fact that small samples don’t follow the laws of big samples” [1]

Outcome bias “the tendency of people to evaluate quality of decisions based on their outcome” [22]

Overconfidence bias “the tendency of people to perceive their ability as better than it actually is” [49]

Projection bias “assuming others’ feelings, thoughts, and values are similar to one’s own” [57, 58]

Regression to the mean “the tendency of people not to take into account that after an extreme value the next value will more probably be closer to the mean” [59]

Representativeness bias “using the similarity of an outcome to a prototypical outcome to judge its probability” [60]

Sunk cost fallacy “people’s tendency to continue an activity if they have already invested money, time or effort in it” [24, 61]

The second article on retention, by Rhodes et al. [46], discusses two experiments in which the investigated biases (anchoring bias, projection bias, representativeness bias, bias blind spot, and confirmation bias) were analyzed as a sum score instead of individually. Since the biases are so diverse in nature, one could question whether the use of a sum score is valid. Presumably, this was done to increase power, which was probably necessary since seven conditions were compared in Study 1 and five in Study 2. Based on the acceptable internal consistency, the questionnaire does seem reliable. The first experiment evaluates five games, in comparison to a related video that taught the same cognitive biases and an unrelated video that did not teach any biases. All five games and the related video mitigated the biases right after the intervention, while three games outperformed the related video. After eight weeks the overall mitigation effects remained, while now four video games outperformed the related video. The second experiment evaluates the three games that outperformed the video right after the intervention. Two of these outperformed the related video right after the intervention; however, after 12 weeks no game produced better debiasing results than the related video anymore. The related video outperformed the games in teaching the definition of biases, underlining once again that knowledge of biases is not sufficient to achieve substantial bias mitigation in people.
The unrelated video was added to investigate whether there was an effect of testing. This effect was indeed observed: after eight weeks, participants were less biased than right after watching the unrelated video. However, this effect was smaller than the debiasing effect of the games. To achieve bias mitigation, games thus appear more effective than videos.

The third article on retention, by Morewedge et al. [47], also discusses two experiments. The first experiment evaluates the effectiveness of a game compared to that of a related video in mitigating the bias blind spot, confirmation bias, and fundamental attribution error (or correspondence bias). While both the game and the video reduced all biases, the game reduced the biases more, both right after the game and eight weeks later. The second experiment evaluates a different game than Experiment 1 and compares it to a related video. The targeted biases are anchoring, representativeness, and social projection. Like in the study of Clegg et al. [45], the internal consistency of the anchoring bias measure was unacceptable, making the results for this bias meaningless. Both the game and the video reduced biases right after the intervention and after 12 weeks. The game was more effective in reducing representativeness and social projection right after the intervention; 12 weeks later it was only more effective in reducing social projection. These results suggest that serious games can indeed mitigate biases. Notably, there is a discrepancy in the results regarding the representativeness bias: the effect size shortly after the intervention is larger for the video than for the game. Still, the authors claim the game was more effective at that point. It is unclear whether the effect sizes got mixed up or the authors’ claim is wrong.

The fourth article on retention, by Sellier et al.
[48], investigates whether the debiasing effects of the game that was used in the first experiment by Morewedge et al. [47] transfer to field settings. The game incorporated four debiasing strategies: warning about the bias, teaching its directionality, providing feedback, and extensive coaching and training. The targeted bias was the confirmation bias. Measurements of bias blind spot and correspondence bias served as manipulation checks of the efficacy of the training. Replicating previous research [47], trained participants exhibited significantly lower levels of bias on both scale measures than untrained controls. Transfer was measured by having participants solve a surreptitiously presented business case modeled on the decision to launch the Space Shuttle Challenger. The debiasing effects of the game appeared to transfer to this unrelated problem and were retained up to 52 days. Trained participants were 19% less likely than untrained ones to choose an inferior hypothesis-confirming case solution. This reduction in confirmation bias appeared to result from a reduction in confirmatory hypothesis testing. It must be noted that the definition of the broad ‘confirmation bias’ was here restricted to ‘confirmatory hypothesis testing’. The training outcome may therefore amount to applying a specific and limited learned ‘trick’, rule, or standard operation (disconfirming hypothesis testing when confronted with a hypothesis to be tested), rather than mitigation of the more general and pervasive tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs and values (i.e., the confirmation bias, often called ‘the mother of all biases’) [18].
It is therefore not clear whether the effectiveness of the intervention resulted from this narrow operationalization of the confirmation bias or, in contrast, from the engaging nature of the intervention, the intensive practice and feedback, or the breadth of the training, which included practicing the mitigating strategies in different tasks and domains.

The fifth and final article on retention, by Aczel et al. [49], is the only article that both (1) evaluated a training that was not computer-based and (2) investigated retention and transfer. The two evaluated training techniques were bias-awareness training, which taught the definitions of the biases and different mitigation strategies, and analogical-debiasing training, which encouraged participants to recognize analogies between different situations in which biases occur. The study had a within-subjects design (see Table 1 for details). Four weeks after training, the only finding regarding retention was that analogical training improved decision making with regard to the ‘statistical biases’, a sum score of four biases from the first group of biases (insensitivity to sample size, base rate neglect, regression to the mean, and covariation detection). For individual biases, no significant retention effects were found. These results on retention suggest that training can, although only slightly, mitigate some biases. Transfer was measured by asking participants four weeks after the intervention whether they remembered deciding differently because of the training. Of the participants who received analogical training for the statistical biases, 58% reported “yes”; of the participants who received awareness training for these biases, this was only the case for 32%. This implies that the training affected transfer to some extent. That is, the statistical biases benefited the most from the (analogical) debiasing training.
Yet, for the other biases, no improvement after either the awareness training or the analogical training was found.

Discussion
Bias mitigation training is only of practical value when the bias mitigation effects of the intervention are retained over time and transfer across contexts. To explore the evidence for the effectiveness of bias mitigation interventions, we performed a systematic literature review on the retention and transfer of bias mitigation interventions beyond the training context. More specifically, this review investigates whether there is sufficient evidence to conclude that bias mitigating interventions yield stable effects that transfer to situations beyond the training context. Our main conclusion is that the scientific evidence for both retention and transfer of bias mitigating interventions is very scarce. While our literature search on retention or transfer initially yielded 75 articles, only five articles (comprising seven studies) remained after careful selection on relevance and methodological considerations.

Regarding retention, there are still a few positive findings for several biases. Effects were observed up to 12 weeks after the interventions, with relatively large effect sizes [45-47]. The largest effect sizes were found by Clegg et al. [45] for a game that was played twice. This was the only study using repetitive training, which is known to increase retention [62]. Most interventions with successful results involved serious games [45-48]. These studies all targeted a group of six biases that are relevant for military intelligence analysis [47, 63]. To be of value for other application domains, the interventions should also mitigate other biases (e.g., hyperbolic time discounting, sunk-cost fallacy, or outcome bias). The games used in these studies yielded valuable information on retention, but no information on transfer.
Most studies found that games mitigated biases more than videos that educate about biases [45-47]. The authors of these studies often suggest that the interactive nature of the games might be essential. However, this is not supported by Poos et al. [36], who demonstrated in a (short-term, near-transfer) experiment that the transcript of a game was equally effective in mitigating biases as the game itself. In addition, none of these studies measured retention beyond the intervention situation, i.e., for a different task (domain) or task context.

Studies on transfer of training beyond the intervention context were even more scarce. Our search yielded only two studies in which transfer was measured under task conditions beyond the training context [48, 49]. The main result of the study by Aczel et al. [49] was that about half of the participants remembered deciding differently than before the training, within four weeks after the intervention. Since this finding was based on self-report of transfer, instead of on performance in a decision-making task, no firm conclusion about transfer of training can be drawn. Sellier et al. [48] observed a transfer effect to a different problem and context in a field setting: trained participants were 19% less likely than untrained ones to choose an inferior hypothesis-confirming solution. The marginal evidence that is available for transfer beyond the training situation suggests that the debiasing effect of many interventions may be a learned ‘trick’, or a learned rule or manner to answer or behave in a desired way given a specific type of problem, while the fundamental underlying decision-making competence does not change. If the training interventions would indeed affect the actual underlying cause of biased thinking, transfer to other task domains and contexts should be much better demonstrable.

A few more things are worth noting about the studies discussed so far.
First, the test-retest reliability for the anchoring bias was found to be low [50]. This could mean that individuals are more prone to anchoring in one context than in another. However, since the questionnaires that were used also had low internal consistency, it is not clear whether this bias is unstable over time or whether its assessment was flawed. Hence, no conclusions can be drawn about mitigation of the anchoring bias. Second, most studies involved only one training session, while repetition of training has been proven to increase retention. The only study with multiple training sessions found that playing the game twice resulted in more bias reduction than playing the game only once. Finally, there could be an effect of testing. Rhodes et al. [46] investigated this and indeed found that testing alone reduced biases to some extent, but not as much as the interventions.

Given that this (most prominent) subject of cognitive biases has been studied for a long time [1], the very limited number of relevant publications on retention and transfer of cognitive bias mitigation training is quite surprising. Transfer has been studied substantially in specific limited areas of psychology, i.e., behavior modeling training [64] and error management training [65]. However, it is not a priori likely that cognitive bias mitigation transfers to other areas, since it has been observed that transfer of training often fails [66]. Another possibility is that retention and transfer of debiasing interventions have in fact been studied extensively, but without yielding significant results. Such 'negative' results often do not lead to publication, due to a scientific phenomenon called 'publication bias' [67]. Also, debiasing interventions could be studied using different terms than the ones used in the present literature search.
There are studies aiming at an improvement in decision making in a broader sense that do not use the term 'cognitive bias' [42, 44]. However, the use of different terms for transfer and retention seems unlikely, since these terms have been widely used for quite some time [68].

A possible explanation for this minimal proof of retention and transfer of the training effects of bias mitigation interventions may be grounded in recent theoretical insights. Biases (and heuristics) are typically explained with the Dual Processing Model [7, 69-71]. This framework assumes that cognitive information processing can take place in two ways: the first way ('System 1', heuristic) demands little effort and works quickly and automatically. This is the default mode, which feels self-evident and is sensitive to biases. The second way of thinking (deliberate, 'System 2') is slower and demands effort and concentration. The 'System' terminology was originally proposed by Stanovich and West [69] and adopted by many scientists, like Kahneman [7]. It suggests distinct underlying brain systems that mediate emotional versus rational reasoning [72]. This somewhat misleading frame was adopted to make the dual processing model more easily comprehensible [73]. This means that the Two-System model should be considered mainly descriptive, rather than explanatory. It does not provide a real explanation in terms of underlying mechanisms that cause cognitive biases that are robust over time and task conditions.

A framework with more explanatory power is provided by a recent theoretical account of cognitive biases [8, 12]. According to this framework, biases are largely caused by hard-wired neural and evolutionary characteristics and mechanisms of the brain.
Neural biases arise from the inherent characteristics (or principles) of the functioning of the brain as a biological neural network, which was originally developed and optimized for biological and perceptual-motor functions. These biases distort or skew objective information, just like perceptual illusions [74]. Basically, these mechanisms - such as association, facilitation, adaptation, or lateral inhibition - result in a modification of the original or available data and its processing (e.g., by weighting its importance). For example, lateral inhibition [75] is a universal neural process resulting in the magnification of differences in neural activity (contrast enhancement), which is very useful for perceptual-motor functions. However, for higher cognitive functions that require exact calculation, proper weighting of data, and the application of the rules of logic and probability, this transformation of data may work out detrimentally. In these cases, this hard-wired mechanism may, for instance, lead to exaggerated differences when weighing up alternatives, or to selectively overweighting thoughts that occasionally pop up (Availability bias: the tendency to judge the frequency, importance, or likelihood of an event by the ease with which relevant instances come to mind [1, 76]). Therefore, the use of our thin, recently developed layer of deliberate cognitive thought may be compared with eating soup with a fork: although it is possible, it does not work very well. Examples of these neurally inherent biases are, for instance, the Anchoring bias (biasing decisions toward previously acquired information, the 'anchor' [1, 77]), the Availability bias, the Focusing illusion (the tendency to place too much emphasis on one or a limited number of aspects of an event or situation [78]), and the Hindsight bias (the tendency to erroneously perceive events as inevitable or more likely once they have occurred [79, 80]).
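The contrast-enhancing effect of lateral inhibition described above can be illustrated with a minimal numerical sketch. This is an illustrative toy model, not taken from the cited studies; the one-dimensional layout, inhibition strength, and edge padding are arbitrary choices made for the example:

```python
import numpy as np

def lateral_inhibition(activity, inhibition=0.3):
    """Suppress each unit in proportion to its neighbours' activity.

    Toy one-dimensional model: the inhibition strength and the edge
    padding are illustrative assumptions, not empirical values.
    """
    padded = np.pad(activity, 1, mode="edge")
    # Subtract a fraction of the summed left and right neighbour activity.
    return activity - inhibition * (padded[:-2] + padded[2:])

signal = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])  # a simple step in activity
out = lateral_inhibition(signal)
# The unit just before the step is suppressed extra (undershoot) and the
# unit just after it is boosted (overshoot), so the step now stands out
# more sharply against the flattened plateaus (the 'Mach band' pattern).
```

The same difference-magnifying transformation that sharpens perceptual edges is what, on this account, exaggerates differences between alternatives in higher cognition.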
In addition to the inherent limitations of (biological) neural networks, biases may also originate from ingrained evolutionary principles that promoted the survival of our ancestors, who lived as hunter-gatherers for hundreds of thousands of years in close-knit groups [81, 82]. Cognitive biases can be caused by mismatches between evolutionarily rationalized heuristics ('evolutionary rationality' [81]) and the modern environment or context in which we live [82]. From this perspective, the same thinking patterns that optimized the chances of survival of our ancestors in their (natural) environment can lead to maladaptive (biased) behavior when they are used in our current (artificial) settings. Biases that may be considered examples of this kind of mismatch are the Authority bias (the tendency to attribute greater accuracy to the opinion of an authority figure - unrelated to its content - and to be more influenced by that opinion [83, 84]), the Conformity bias (the tendency to adjust one's thinking and behavior to that of a group standard [85]), and the Ingroup bias (the tendency to favor one's own group above that of others [86]). The hypothesis that cognitive biases originate from (neurally) inherent and (evolutionarily) ingrained brain mechanisms [8, 12] may explain why it is so difficult to obtain long-lasting debiasing effects outside the context of the laboratory setting in which these effects were acquired.

Bias mitigation interventions only have practical value when they help people to make better decisions in practical situations in a long-lasting way. Based on the literature, we conclude that there is currently insufficient evidence for transfer and retention of bias mitigation interventions. So far, only a few studies (mainly from one major IARPA research project) have reported positive results.
Regarding transfer of bias mitigation interventions, which is essential in such a diverse field as decision making, barely anything is known to date. As called for by Larrick [87] and confirmed by Ludolph and Schulz [88] more recently, more extensive studies on transfer effects are required to investigate whether these interventions can beneficially aid decision making. In addition, we advocate investigating more rigorously the ingrained psycho-biological origins of biases and (based on the results) how human decision making can be enduringly improved in a broad array of practical contexts.

To begin with, the supposedly inherent, ingrained, and hidden character of biased thinking makes it unlikely that simple and straightforward methods like training sessions or awareness courses will prove very effective in ameliorating most biases. Regarding these kinds of interventions, we advise focusing first on the prevention or mitigation of the biases with an evolutionary origin instead of on the neural biases. The reason for this is that neural biases are inextricably linked to the 'structural' system properties of our brain ('inherent'). Because of this inherent, structural character of neural biases, it is impossible to execute an order like "Do not think of a pink elephant" or to control which information pops up in your mind when deliberating about an issue. In contrast, evolutionary biases are more 'functional' and may be conceived of as a kind of inborn 'brain programs' coding for certain preferences or inclinations that had survival value in primordial times. This means that, in contrast to neural biases, evolutionary biases are less fundamental (or hard-wired) with regard to the way neural networks work.
Hence, evolutionarily inherited survival tendencies, like striving for immediate reward (Hyperbolic discounting), for self-interest at the expense of the community (Tragedy of the commons), or for admiration by peers, or following or copying the opinions or behavior of the majority (Social proof), may be suppressed and overcome by the subject.

However, the notion that many biases arise from more or less hard-wired brain mechanisms means that mitigating them will always be an 'uphill battle' requiring substantial motivation, effort, and perseverance. Therefore, we suppose that the most promising way of dealing with biases may be to improve the environment in which people make decisions, instead of trying to directly augment their thinking capacities. For example, one could stimulate or impose the use of certain very strict working methods or aids with which the ingrained tendency toward biased thinking can be prevented or circumvented. Examples of these are checklists or premortems (e.g., countering bias by imagining what could go wrong in a project). In addition, it is well known that for the execution of specific (narrow) cognitive tasks (logical, analytical, computational), modern digital intelligence may be more effective and efficient than biological intelligence [89]. Therefore, we conjecture that ultimately the development of digital decision support systems (presumably based on artificial intelligence technology) may prove the most effective route to improved human decision making.
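The immediate-reward tendency discussed above (Hyperbolic discounting) can be made concrete with a small numerical sketch. This is an illustrative toy model with arbitrary parameter values, not drawn from the reviewed studies: under hyperbolic discounting, preferences between a smaller-sooner and a larger-later reward can reverse as both are shifted further into the future, something a constant exponential discount rate never produces.

```python
def hyperbolic(value, delay, k=1.0):
    # Hyperbolic discounting: subjective value falls off as 1 / (1 + k * delay).
    # k is an illustrative discount parameter, not an empirical estimate.
    return value / (1.0 + k * delay)

# Choosing between 50 now and 100 after a delay of 2 time units:
# the immediate reward wins (50.0 vs about 33.3).
near_small, near_large = hyperbolic(50, 0), hyperbolic(100, 2)

# The same pair shifted 10 units into the future: the preference reverses
# (about 4.5 vs about 7.7), the classic signature of hyperbolic discounting.
far_small, far_large = hyperbolic(50, 10), hyperbolic(100, 12)
```

This preference reversal illustrates why a tendency that was adaptive for immediate survival needs can produce inconsistent choices in modern settings that reward long-term planning.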

References

1. Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974;185(4157):1124-31. doi: 10.1126/science.185.4157.1124.
2. Haselton MG, Nettle D, Andrews PW. The evolution of cognitive bias. In: Buss DM, editor. The handbook of evolutionary psychology. Hoboken, NJ, USA: John Wiley & Sons Inc.; 2005. p. 724-46.
3. LeBoeuf RA, Shafir EB. Decision making. In: Holyoak KJ, Morrison RG, editors. The Cambridge Handbook of Thinking and Reasoning. Cambridge, UK: Cambridge University Press; 2005. p. 243-67.
4. Toet A, Brouwer AM, van den Bosch K, Korteling JE. Effects of personal characteristics on susceptibility to decision bias: a literature study. International Journal of Humanities and Social Sciences. 2016;8(5):1-17.
5. Lichtenstein S, Slovic P. Reversals of preference between bids and choices in gambling decisions. Journal of Experimental Psychology. 1971;89(1):46-55. doi: 10.1037/h0031207.
6. Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211(4481):453-8. doi: 10.1126/science.7455683.
7. Kahneman D. Thinking, fast and slow. New York, USA: Farrar, Straus and Giroux; 2011.
8. Korteling JE, Brouwer A-M, Toet A. A neural network framework for cognitive bias. Front Psychol. 2018;9:article 1561. doi: 10.3389/fpsyg.2018.01561.
9. Shafir E, LeBoeuf RA. Rationality. Annual Review of Psychology. 2002;53(1):491-517. doi: 10.1146/annurev.psych.53.100901.135213. PubMed PMID: 11752494.
10. Risen JL. Believing what we do not believe: Acquiescence to superstitious beliefs and other powerful intuitions. Psychological Review. 2015;123(2):128-207. doi: 10.1037/rev0000017.
11. Kahneman D, Klein G. Conditions for intuitive expertise: a failure to disagree. American Psychologist. 2009;64(6):515-26. doi: 10.1037/a0016755.
12. Korteling JE, Toet A. Neuro-evolutionary framework for cognitive biases. Soesterberg, The Netherlands: TNO Defence, Safety & Security; 2020. Report No.: TNO 2020 R10611.
13. Korteling JE, Toet A. Cognitive biases. In: Reference Module in Neuroscience and Biobehavioral Psychology. Elsevier; 2020. Available from: http://www.sciencedirect.com/science/article/pii/B9780128093245241059.
14. Pronin E, Lin DY, Ross L. The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin. 2002;28(3):369-81. doi: 10.1177/0146167202286008.
15. Hastie R, Dawes RM. Rational choice in an uncertain world: The psychology of judgment and decision making. Thousand Oaks, CA, USA: Sage; 2001.
16. Kahneman D, Tversky A, editors. Choices, values, and frames. Cambridge, UK: Cambridge University Press; 2000.
17. Evans JS, Barston J, Pollard P. On the conflict between logic and belief in syllogistic reasoning. Memory & Cognition. 1983;11(3):295-306. doi: 10.3758/BF03196976.
18. Nickerson RS. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology. 1998;2(2):175-220. doi: 10.1037/1089-2680.2.2.175.
19. Gilbert DT. Ordinary personology. In: Gilbert DT, Fiske ST, Lindzey G, editors. The handbook of social psychology. Vol 2; 1998. p. 89-150.
20. Jones EE, Harris VA. The attribution of attitudes. Journal of Experimental Social Psychology. 1967;3(1):1-24. doi: 10.1016/0022-1031(67)90034-0.
21. Alexander WH, Brown JW. Hyperbolically discounted temporal difference learning. Neural Computation. 2010;22(6):1511-27. doi: 10.1162/neco.2010.08-09-1080. PubMed PMID: 20100071.
22. Baron J, Hershey JC. Outcome bias in decision evaluation. Journal of Personality and Social Psychology. 1988;54(4):569-79. doi: 10.1037/0022-3514.54.4.569.
23. Tversky A, Kahneman D. Judgments of and by representativeness. In: Kahneman D, Slovic P, Tversky A, editors. Judgment under uncertainty: Heuristics and biases. Cambridge, UK: Cambridge University Press; 1982.
24. Arkes HR, Ayton P. The sunk cost and Concorde effects: Are humans less rational than lower animals? Psychol Bull. 1999;125(5):591. doi: 10.1037/0033-2909.125.5.591.
25. Cialdini RB. Influence: the psychology of persuasion. New York, NY, USA: Harper; 1984.
26. Gigerenzer G, Gaissmaier W. Heuristic decision making. Annual Review of Psychology. 2011;62(1):451-82. doi: 10.1146/annurev-psych-120709-145346. PubMed PMID: 21126183.
27. Simon HA. A behavioral model of rational choice. The Quarterly Journal of Economics. 1955;69(1):99-118. doi: 10.2307/1884852.
28. Gigerenzer G, Todd PM. Simple heuristics that make us smart. New York, NY, USA: Oxford University Press; 1999.
29. Bellé N, Cantarelli P, Belardinelli P. Prospect theory goes public: Experimental evidence on cognitive biases in public policy and management decisions. Public Administration Review. 2018;78(6):828-40. doi: 10.1111/puar.12960.
30. Vis B. Prospect theory and political decision making. Political Studies Review. 2011;9(3):334-43. doi: 10.1111/j.1478-9302.2011.00238.x.
31. Flyvbjerg B. Policy and planning for large-infrastructure projects: problems, causes, cures. Environment and Planning B: Planning and Design. 2007;34(4):578-97. doi: 10.1068/b32111.
32. Plous S. The psychology of judgment and decision making. New York, NY, USA: McGraw-Hill; 1993.
33. Mercer J. Prospect theory and political science. Annual Review of Political Science. 2005;8:1-21. doi: 10.1146/annurev.polisci.8.082103.104911.
34. Samuelson W, Zeckhauser R. Status quo bias in decision making. Journal of Risk and Uncertainty. 1988;1(1):7-59. doi: 10.1007/bf00055564.
35. Beaulac G, Kenyon T. The scope of debiasing in the classroom. Topoi. 2018;37(1):93-102. doi: 10.1007/s11245-016-9398-8.
36. Poos JM, van den Bosch K, Janssen CP. Battling bias: Effects of training and training context. Computers & Education. 2017;111:101-13. doi: 10.1016/j.compedu.2017.04.004.
37. Arkes HR, Faust D, Guilmette TJ, Hart K. Eliminating the hindsight bias. Journal of Applied Psychology. 1988;73(2):305-7.
38. Mussweiler T, Strack F, Pfeiffer T. Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin. 2000;26(9):1142-50. doi: 10.1177/01461672002611010.
39. Larrick RP, Morgan JN, Nisbett RE. Teaching the use of cost-benefit reasoning in everyday life. Psychological Science. 1990;1(6):362-70. doi: 10.1111/j.1467-9280.1990.tb00243.x.
40. Clarkson PM, Emby C, Watt VWS. Debiasing the outcome effect: The role of instructions in an audit litigation setting. Auditing: A Journal of Practice & Theory. 2002;21(2):7-20. doi: 10.2308/aud.2002.21.2.7.
41. Cheng F-F, Wu C-S. Debiasing the framing effect: The effect of warning and involvement. Decision Support Systems. 2010;49(3):328-34. doi: 10.1016/j.dss.2010.04.002.
42. Fong GT, Nisbett RE. Immediate and delayed transfer of training effects in statistical reasoning. Journal of Experimental Psychology: General. 1991;120(1):34-45. doi: 10.1037/0096-3445.120.1.34.
43. Schmidt RA, Bjork RA. New conceptualizations of practice: common principles in three paradigms suggest new concepts for training. Psychological Science. 1992;3(4):207-18. doi: 10.1111/j.1467-9280.1992.tb00029.x.
44. Kostopoulou O, Porat T, Corrigan D, Mahmoud S, Delaney BC. Diagnostic accuracy of GPs when using an early-intervention decision support system: a high-fidelity simulation. British Journal of General Practice. 2017;67(656):e201-e8. doi: 10.3399/bjgp16X688417.
45. Clegg BA, McKernan B, Martey RM, Taylor SM, Stromer-Galley J, Kenski K, et al. Effective mitigation of anchoring bias, projection bias, and representativeness bias from serious game-based training. Procedia Manufacturing. 2015;3:1558-65.
46. Rhodes RE, Kopecky J, Bos N, McKneely J, Gertner A, Zaromb F, et al. Teaching decision making with serious games: an independent evaluation. Games and Culture. 2017;12(3):233-51. doi: 10.1177/1555412016686642.
47. Morewedge CK, Yoon H, Scopelliti I, Symborski CW, Korris JH, Kassam KS. Debiasing decisions: improved decision making with a single training intervention. Policy Insights from the Behavioral and Brain Sciences. 2015;2(1):129-40. doi: 10.1177/2372732215600886.
48. Sellier A-L, Scopelliti I, Morewedge CK. Debiasing training improves decision making in the field. Psychological Science. 2019;30(9):1371-9. doi: 10.1177/0956797619861429. PubMed PMID: 31347444.
49. Aczel B, Bago B, Szollosi A, Foldes A, Lukacs B. Is it time for studying real-life debiasing? Evaluation of the effectiveness of an analogical intervention technique. Front Psychol. 2015;6:article 1120. doi: 10.3389/fpsyg.2015.01120.
50. Gertner A, Zaromb F, Schneider R, Roberts RD, Matthews G. The assessment of biases in cognition: Development and evaluation of an assessment instrument for the measurement of cognitive bias. McLean, VA: MITRE; 2016. Report No.: MTR160163. Available from: https://www.mitre.org/publications/technicalpapers/the-assessment-of-biases-in-cognition.
51. George D, Mallery P. SPSS for Windows step by step: A simple guide and reference, 11.0 update (4th ed.). Boston, USA: Allyn & Bacon; 2003.
52. Fong GT, Krantz DH, Nisbett RE. The effects of statistical training on thinking about everyday problems. Cognitive Psychology. 1986;18(3):253-92. doi: 10.1016/0010-0285(86)90001-0.
53. Tversky A, Kahneman D. Evidential impact of base rates. In: Kahneman D, Slovic P, Tversky A, editors. Judgment under uncertainty: Heuristics and biases. Cambridge, UK: Cambridge University Press; 1982.
54. Scopelliti I, Morewedge CK, McCormick E, Min HL, Lebrecht S, Kassam KS. Bias blind spot: Structure, measurement, and consequences. Management Science. 2015;61(10):2468-86. doi: 10.1287/mnsc.2014.2096.
55. Scopelliti I, Min HL, McCormick E, Kassam KS, Morewedge CK. Individual differences in correspondence bias: measurement, consequences, and correction of biased interpersonal attributions. Management Science. 2018;64(4):1879-910. doi: 10.1287/mnsc.2016.2668.
56. Stanovich KE, West RF. Individual differences in rational thought. Journal of Experimental Psychology: General. 1998;127(2):161-88. doi: 10.1037/0096-3445.127.2.161.
57. Epley N, Morewedge CK, Keysar B. Perspective taking in children and adults: Equivalent egocentrism but differential correction. Journal of Experimental Social Psychology. 2004;40(6):760-8. doi: 10.1016/j.jesp.2004.02.002.
58. Robbins JM, Krueger JI. Social projection to ingroups and outgroups: A review and meta-analysis. Personality and Social Psychology Review. 2005;9(1):32-47. doi: 10.1207/s15327957pspr0901_3.
59. Bazerman M. Judgment in managerial decision making. Hoboken, NJ: John Wiley & Sons; 2005.
60. Kahneman D, Tversky A. Subjective probability: A judgment of representativeness. Cognitive Psychology. 1972;3(3):430-54. doi: 10.1016/0010-0285(72)90016-3.
61. Arkes HR, Blumer C. The psychology of sunk cost. Organizational Behavior and Human Decision Processes. 1985;35(1):124-40. doi: 10.1016/0749-5978(85)90049-4.
62. Cepeda NJ, Vul E, Rohrer D, Wixted JT, Pashler H. Spacing effects in learning: A temporal ridgeline of optimal retention. Psychological Science. 2008;19(11):1095-102. doi: 10.1111/j.1467-9280.2008.02209.x. PubMed PMID: 19076480.
63. Heuer RJ. Psychology of intelligence analysis. Washington, D.C.: Center for the Study of Intelligence, Central Intelligence Agency; 1999.
64. Taylor PJ, Russ-Eft DF, Chan DWL. A meta-analytic review of behavior modeling training. Journal of Applied Psychology. 2005;90(4):692-709. doi: 10.1037/0021-9010.90.4.692.
65. Keith N, Frese M. Effectiveness of error management training: A meta-analysis. Journal of Applied Psychology. 2008;93(1):59-69. doi: 10.1037/0021-9010.93.1.59.
66. Burke LA, Hutchins HM. Training transfer: An integrative literature review. Human Resource Development Review. 2007;6(3):263-96. doi: 10.1177/1534484307303035.
67. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA. 1990;263(10):1385-9. doi: 10.1001/jama.1990.03440100097014.
68. Hesketh B. Dilemmas in training for transfer and retention. Applied Psychology: An International Review. 1997;46(4):317-39. doi: 10.1080/026999497378124.
69. Stanovich KE, West RF. Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences. 2001;23(5):645-65.
70. Evans JS. Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology. 2008;59(1):255-78. doi: 10.1146/annurev.psych.59.103006.093629.
71. Kahneman D. A perspective on judgment and choice: mapping bounded rationality. American Psychologist. 2003;58(9):697-720. doi: 10.1037/0003-066X.58.9.697.
72. Feldman Barrett L. How emotions are made: The secret life of the brain. London, UK: Houghton Mifflin Harcourt; 2017. Available from: https://how-emotions-are-made.com.
73. Kahneman D. Thinking, fast and slow; Talks at Google. 2011. Available from: https://www.youtube.com/watch?v=CjVQJdIrDJ0.
74. Reeves A, Pinna B. Editorial: The future of perceptual illusions: From phenomenology to neuroscience. Frontiers in Human Neuroscience. 2017;11(9). doi: 10.3389/fnhum.2017.00009.
75. Isaacson JS, Scanziani M. How inhibition shapes cortical activity. Neuron. 2011;72(2):231-43. doi: 10.1016/j.neuron.2011.09.027.
76. Tversky A, Kahneman D. Availability: A heuristic for judging frequency and probability. Cognitive Psychology. 1973;5(2):207-32. doi: 10.1016/0010-0285(73)90033-9.
77. Furnham A, Boo HC. A literature review of the anchoring effect. The Journal of Socio-Economics. 2011;40(1):35-42. doi: 10.1016/j.socec.2010.10.008.
78. Kahneman D, Krueger AB, Schkade D, Schwarz N, Stone AA. Would you be happier if you were richer? A focusing illusion. Science. 2006;312(5782):1908-10. doi: 10.1126/science.1129688.
79. Roese NJ, Vohs KD. Hindsight bias. Perspectives on Psychological Science. 2012;7(5):411-26. doi: 10.1177/1745691612454303.
80. Hoffrage U, Hertwig R, Gigerenzer G. Hindsight bias: A by-product of knowledge updating? Journal of Experimental Psychology: Learning, Memory, and Cognition. 2000;26(3):566. doi: 10.1037/0278-7393.26.3.566.
81. Haselton MG, Bryant GA, Wilke A, Frederick DA, Galperin A, Frankenhuis WE, et al. Adaptive rationality: An evolutionary perspective on cognitive bias. Social Cognition. 2009;27(5):733-62. doi: 10.1521/soco.2009.27.5.733.
82. Tooby J, Cosmides L. Conceptual foundations of evolutionary psychology. In: Buss DM, editor. The handbook of evolutionary psychology. Hoboken, NJ, USA: John Wiley & Sons, Inc.; 2005. p. 5-67.
83. Milgram S. Behavioral study of obedience. The Journal of Abnormal and Social Psychology. 1963;67(4):371-8. doi: 10.1037/h0040525.
84. Milgram S. Obedience to authority: An experimental view. New York, NY, USA: Harper and Row; 1974.
85. Cialdini RB, Goldstein NJ. Social influence: Compliance and conformity. Annual Review of Psychology. 2004;55(1):591-621. doi: 10.1146/annurev.psych.55.090902.142015. PubMed PMID: 14744228.
86. Taylor DM, Doria JR. Self-serving and group-serving bias in attribution. The Journal of Social Psychology. 1981;113(2):201-11. doi: 10.1080/00224545.1981.9924371.
87. Larrick RP. Debiasing. In: Koehler D, Harvey N, editors. The Blackwell Handbook of Judgment and Decision Making. Oxford, UK: Blackwell Publishing; 2004. p. 316-37.
88. Ludolph R, Schulz PJ. Debiasing health-related judgments and decision making: a systematic review. Medical Decision Making. 2018;38(1):3-13. doi: 10.1177/0272989x17716672. PubMed PMID: 28649904.
89. Moravec H. Mind children: The future of robot and human intelligence. Cambridge, MA: Harvard University Press; 1988.
90. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLOS Medicine. 2009;6(7):e1000097. doi: 10.1371/journal.pmed.1000097.

Figures

Figure 1 Flow diagram of the study selection process (adapted from [90]).