The Blind Spot:

Investigating and Measuring

Factors in Consequential

Decision-Making

A Thesis

Submitted to the Faculty

of

Drexel University

by

Victoria Pietruszka

In partial fulfillment of the

Requirements for the degree

of

Master of Science in Clinical Psychology

January 2019

© Copyright 2019

Victoria Pietruszka. All Rights Reserved.

Acknowledgments

First, I would like to acknowledge my thesis committee, Kirk Heilbrun,

Ph.D., David DeMatteo, J.D., Ph.D., and Adam Benforado, J.D., for their valuable input, feedback, and encouragement. Without them this project would not have been possible. I would especially like to thank Kirk for his continued support, for providing the opportunity to work with the Drexel Reentry Project that inspired this project, and for encouraging me to pursue this line of research. Additionally, I would like to thank the members of the Heilbrun Research Lab for providing feedback throughout this process, and Rebecca Schiedel, Alice Thornewill,

Madelena Rizzo, Heidi Zapotocky, and Stephen Loggia for piloting the survey and providing comments. I would also like to acknowledge Jerry Briggs, Gracie

Czepiel, and John Coppins, who got to where they were going before I did.

Finally, I would like to thank James Bassett-Cann, Angelina, Justin, and David

Beaudry, and my cats, Klaus and Edgar, for supporting me through the process.


Table of Contents

LIST OF TABLES ...... iv

ABSTRACT ...... v

1. BACKGROUND ...... 1

1.1 Types of Bias ...... 6

1.2 The Bias Blind Spot: Definitions and Elements ...... 14

1.3 The Bias Blind Spot: Efforts to Understand and Mitigate ...... 20

2. THE CURRENT STUDY ...... 25

2.1 Research Questions ...... 25

2.2 Hypotheses ...... 25

2.3 Participants ...... 26

2.4 Procedure ...... 27

2.5 Measures ...... 28

3. RESULTS ...... 31

4. DISCUSSION ...... 34

LIST OF REFERENCES ...... 39

APPENDIX A: The Bias Blind Spot Measure ...... 46

APPENDIX B: Vignettes ...... 48

APPENDIX C: Marlowe-Crowne Social Desirability Scale ...... 49

APPENDIX D: Cognitive Reflection Test-Long ...... 51

APPENDIX E: Need for Cognition Scale ...... 52

APPENDIX F: Demographics Questionnaire ...... 54

List of Tables

1. Participant Characteristics ...... 57

2. Measure Scores and Descriptive Statistics ...... 59

3. Driving and Crime Statistics ...... 60

Abstract

The Bias Blind Spot: Investigating and Measuring Factors in Consequential Decision-Making

Victoria Pietruszka

Kirk Heilbrun, Ph.D.

In describing decision-making in terms of tools, or heuristics, Amos Tversky and

Daniel Kahneman proposed that decision-making under conditions of uncertainty is susceptible to systematic errors. One such error is the conviction that others are more susceptible to these errors than oneself. This has been termed the bias blind spot (BBS). To date, few studies have addressed how the systematic errors resulting from this bias are seen in important decisions, such as those associated with criminal conduct. The current study used a survey to provide empirical information on decision-making associated with minor adverse legal consequences, to assess whether participants have a BBS. The current study also assessed theoretically supported factors that contribute to the BBS and whether they predicted the size of participants’ BBS.

1. Background

The ways in which the human decision-making process is facilitated by “shortcuts,” which provide for quicker and easier decision-making, have recently been examined across a wider range of human behavior. The original theoretical and empirical work, conducted by Amos Tversky and Daniel Kahneman, described the shortcuts, or heuristics, used to simplify complex tasks of prediction under conditions of uncertainty or pressure that often result in systematic error

(Kahneman, 2011; Kahneman & Tversky, 1996; Tversky & Kahneman, 1974,

1992, 2009). While initially accepted widely among economists, these theories quickly made significant contributions to psychology. Since these developments, the field has identified various mechanisms that produce these errors (Bruine de

Bruin, Parker, & Fischhoff, 2007; Ehrlinger, Gilovich, & Ross, 2005; Frantz,

2006; West, Meserve, & Stanovich, 2012). One of the more serious consequences of flawed decision-making mechanisms is the decision to commit a crime or violate a societal rule (e.g., a law or ethical rule). Research on decision-making and these mechanisms may provide valuable insight into how to improve these abilities so that people reach more socially acceptable and less negatively consequential decisions.

Evidence suggests that individuals are susceptible to several known errors in thinking in certain contexts despite the desire to be viewed as objective and right (MacLean & Dror, 2016; Mussweiler, Strack, & Pfeiffer, 2000; Nickerson,

1998; Tversky & Kahneman, 1975). Thus, there is tension between our desire for bottom-up processing—or the idea that individuals can objectively combine information to reach a conclusion—and the reality of top-down processing,

beginning with the big picture, with details accessible only through careful consideration. This top-down processing can elicit a cascade of consequences, particularly among forensic experts who are unaware of minor contextual influences in their work (Dror, 2018). Several research studies have suggested the existence of various errors in thinking to which we are susceptible, including the confirmation bias (the tendency to ignore disconfirming evidence to support preexisting beliefs), the availability heuristic (information that is more readily available or emotionally salient is retrieved more quickly), and the fundamental attribution error (the tendency to attribute an individual’s actions to their personality traits rather than external circumstances) (Nickerson, 1998; Tversky & Kahneman,

1974; Ross, 1977). The focus of the current study is the bias blind spot (BBS)

(Pronin et al., 2002). Emily Pronin and colleagues identified the BBS as individuals’ belief that they are less susceptible to thinking errors or biases than others (Pronin et al., 2002).

Amos Tversky and Daniel Kahneman have described the decision-making process as being driven by two different kinds of thinking (Kahneman,

2013). These are helpful for understanding the importance of studying processes that operate outside of immediate awareness. The first decision-maker, System 1, relies on heuristics and biases to make quick, efficient decisions. The second decision-maker, System 2, is triggered when a higher cognitive load is needed to make the decision, such as solving a complex math problem without a calculator. There has been ample evidence to suggest that flaws in decision-making, such as over-reliance on shortcuts, contribute to the systematic breakdown of the decision-making and judgment processes (Arkes,

2013; Toplak, West, & Stanovich, 2008; West & Stanovich, 2017). Both systems, however, describe a split decision-making process: one system that prefers accessible, less cognitively cumbersome decisions, and one that is triggered to keep the former in check. While the second system is cognitively accessible, the first operates largely below conscious awareness and, if not kept in check, may lead to systematic errors.

There can be various consequences of relying on a thinking system rife with errors. On the one hand, minor use of stereotyping or representations may lead an individual to behave one way around a certain group of people. On the more extreme end of the spectrum, what is deemed “criminal thinking” may lead someone to face major consequences, such as becoming involved in the criminal justice system. By identifying points of intervention and factors that can decrease overreliance on flawed thinking heuristics, individuals can avoid critical consequences, including those caused by the BBS that Emily Pronin and colleagues have identified in empirical studies.

Some professions (e.g., medicine) have recognized the importance of mitigating the BBS by implementing techniques in their training. These principles have yet to be applied to a community sample, however, or to a justice-involved population. Before such an application is explored, it is important to better understand factors which contribute to the BBS that might be addressed through intervention. The current study seeks knowledge of the BBS from a community sample, with the goal of subsequently investigating justice-involved populations.

Evidence suggests that while criminal conduct is not ubiquitous, many are prone to violating the law, at least technically, when considering traffic violations.

For example, despite the presence of signage on roads, an average of 112,000 individuals receive traffic violations each day (“Driving Citation Statistics”). This results in approximately 41,000,000 speeding tickets per year (“Driving Citation

Statistics”). The prevalence of such tickets indicates that many individuals may be prone to readily retrieving processes that promote decisions likely to result in adverse consequences. The consequences of relying on heuristics or other tendencies that bias this decision may range from a driving citation to more severe criminal activity if the decision-maker considers factors such as the risk of getting caught or the potential reward. Because driving is a widespread activity and the prevalence of traffic violations is high, this may be a good first step toward exploring the factors associated with individuals’ willingness to admit susceptibility to thinking errors that lead to negative consequences. The common experience of driving lends itself, arguably, to the common experience of legal consequences such as receiving a traffic ticket. Rather than asking participants the likelihood of committing something more controversial, this is a first step toward examining the self-other asymmetry in judgments about committing a risky act with potential legal consequences.

By examining the cognitive mechanisms associated with common decisions underlying legal violations in a community sample, we can become better aware of the potential impact of the BBS and how it affects conflict, stereotyping, relationships, and ethics (Pronin, 2009). By using a morally ambiguous scenario (speeding while driving may be “against the law” but is still

common practice), researchers can better understand this process in a community sample—which may be relevant to individuals who are justice-involved. Before proceeding, it is important to acknowledge that there are a variety of reasons why people commit crimes or violate societal rules. For example, under rational choice theory, individuals may choose to commit a crime after careful consideration of its associated risks and rewards (Ward, Stafford, & Gray, 2006). Conceptualizing the existence of bias within these decisions does not imply that criminal behavior necessarily implicates biases in thinking. It does, however, acknowledge that individuals might falter in their processing. This allows us to consider why individuals rely on the tendency—here labelled as bias—to retrieve processes supportive of such consequential decisions.

The following sections review the various types of biases or heuristics involved in creating systematic errors in decision-making, what is currently known about the BBS and the factors with which it is associated, and what interventions have been attempted to change decision-making. Considerable research has focused on specific mechanisms that result in flawed decision-making. While the BBS has gained increasing attention, little has been done to understand its impact on a community sample in morally ambiguous scenarios.

This study aims to understand how examining the BBS can lead to a better understanding of its components. By identifying factors that contribute to the

BBS, future research can assess the impact of interventions that might reduce the effects of thinking errors.

1.1. Types of Bias

Exploring the BBS requires first considering other major biases that affect thinking processes. Heuristics are rules developed to aid judgments in the face of limited data (Tversky & Kahneman, 1974). Which heuristics are used depends upon their cognitive availability, which in turn affects whether they are used when making estimations or predictions (Tversky & Kahneman, 1983). The tendency to retrieve one instance more readily than another constitutes a bias. When working in tandem, heuristics and biases can lead to systematic errors in judgment and decision-making (1974). This section reviews some of these heuristics and biases, their components, and their implications for decision-making.

Availability Heuristic

“The more sharply the object is seen, the closer it appears to be” (Tversky

& Kahneman, 1974, p. 1). This sharpness illusion applies to a myriad of situations, from climbing a hill to assessing probabilities (1974). Factors that increase availability include salience and frequency (Neal & Grisso, 2014), meaning that a construct is more readily available if it was just seen, if it has a strong emotional impact (e.g., an airplane crash or a terrorist attack), or if it involves frequent exposure. When information is readily available through either salience or frequency, we tend to think it is more likely to occur than empirical probability would indicate.

Confirmation Bias

When beliefs are preexisting, people tend to seek or interpret evidence in ways that confirm those beliefs. This has been termed the confirmation or selective accessibility bias, and it has been studied extensively (Kukucka & Kassin, 2014; MacLean & Dror, 2016; Nickerson, 1998; O’Brien, 2009; Scopelliti et al., 2015). In the legal system, for example, confirmation bias can be associated with arguments made by a defense attorney or a prosecutor. Confirmation bias may also drive an individual’s analysis when they have a personal stake; for example, a researcher validating a measure they developed (Nickerson, 1998). The positive test strategy, a component of the confirmation bias, is the “deliberate search for confirming evidence” (Kahneman, 2011, p. 81; Neal & Grisso, 2014;

Kassin, Dror, & Kukucka, 2013).

The Endowment Effect

The endowment effect describes the phenomenon whereby an individual places a higher value on what they currently possess than the market would support. Examples include a wine collector who paid $200 for wine years ago that today is worth only $20 (Kahneman, Knetsch, & Thaler, 1991), or a homeowner who purchased a house for $100,000 before the housing market crash and hesitates to sell at a loss despite its market-supported value of only

$50,000. Another example might involve a basketball team placing higher value on their objectively worst player than on a much better player from an opposing team in a proposed trade (Lewis, 2016, pp. 44–45). In economic terms, this manifests as a low willingness to pay for a good one does not own combined with a high asking price for the same good once it is owned (Kahneman et al., 1991).

Fundamental Attribution Error

The fundamental attribution error (FAE) occurs when someone attributes the behaviors of another to their character while minimizing the impact of environmental influences (Ross, 1977). In a series of three studies, researchers

examined individuals’ internal attributions and found not only that participants exhibited the FAE when evaluating others’ attributes, but also that doing so may have been seen as socially desirable (Jellison & Green, 1981). While the FAE has garnered criticism, mostly around the prospect that it oversimplifies individuals’ thought processes (Wagerman, 2015), this criticism does not consider the role played by the BBS or by System 1 thinking, both of which make it difficult to be aware of the

FAE. The FAE has been tested and replicated in several studies and appears in various contexts (Pronin et al., 2002; Ross, Green, & House, 1977; Shaw et al.,

2016).

The Halo Effect

The halo effect refers to the tendency to assume certain attributes are present when other, unrelated attributes are present (Nisbett & Wilson, 1977). For example, it may explain why we rate a person as more intelligent if that person is attractive. In early discussions, some postulated that the halo effect merely shaped global perceptions of people and colored perceptions of someone’s unambiguous traits, but Nisbett and Wilson’s findings suggest that the halo effect operates even when individuals have information sufficient to form an opinion of their own (1977). It has been used to explain and demonstrate how some organizations endure crises with little damage to their reputation because of the halo effect from their pre-crisis reputation (Coombs & Holladay, 2006). This is an example of a bias that operates outside of conscious awareness: those who exhibit the halo effect are often unaware that they are doing so. The halo effect may be particularly concerning in forensic psychology for those completing forensic mental health evaluations, who must remain objective.

Hindsight Bias

Also known as the “knew-it-all-along” effect, hindsight bias refers to an individual’s tendency to report that they knew an event was inevitable or very likely once the outcome is known (Christensen-Szalanski & Willham, 1991). It is seen most clearly in the higher likelihood ratings given to events once their outcomes are known (1991). A meta-analysis of 122 studies revealed that, despite hindsight bias, individuals would likely have acted the same way, that the bias’s effects can be moderated by familiarity with the task, and that the bias is likely due to cognitions rather than a motivation to appear to have “known it all” (1991; Lewis,

2016, p. 45).

Present Bias

Present bias is rooted in behavioral economics and accounts for the tendency to discount future payoffs when comparing them to present values

(Frederick, Loewenstein, & O’Donoghue, 2002). Concepts in criminal law may benefit from an understanding of the present bias: it relates to willpower, impulsivity, procrastination, and other temporal considerations. For example, the theory of deterrence requires individuals to evaluate future outcomes

(e.g., punishment) as strong enough to deter them from the current benefits of committing a crime, despite individuals’ tendency to discount those future outcomes (McAdams, 2011). This criminal law application could manifest in several ways, from criminal thinking and impulsivity to the temporal effects of considering a plea bargain offer.
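The discounting tendency described here is commonly formalized in behavioral economics as quasi-hyperbolic (beta-delta) discounting, one of the models reviewed by Frederick, Loewenstein, and O’Donoghue (2002). The following sketch is purely illustrative; the parameter values and the gain/punishment payoffs are invented for demonstration and are not drawn from this study.

```python
# Quasi-hyperbolic (beta-delta) discounting: a present-biased decision-maker
# values a payoff of x received t periods from now as beta * delta**t * x,
# while valuing immediate payoffs (t = 0) at face value. All numbers here
# are illustrative assumptions, not estimates from this study.

def present_biased_value(x, t, beta=0.7, delta=0.95):
    """Subjective present value of payoff x received t periods from now."""
    return x if t == 0 else beta * (delta ** t) * x

# An immediate gain from an offense versus a larger, delayed punishment:
gain_now = present_biased_value(100, t=0)    # 100.0
deterrent = present_biased_value(300, t=10)  # ~125.7: punishment still outweighs the gain

# With stronger present bias (smaller beta), the same punishment loses its force:
weak_deterrent = present_biased_value(300, t=10, beta=0.4)  # ~71.8: no longer deters
```

Under these invented numbers, deterrence holds only while the discounted punishment still exceeds the immediate gain, which is the tension the deterrence-theory passage describes.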

Ambiguity Aversion

Ambiguity aversion describes a tendency to avoid taking risks or gambling on unknown circumstances. In other words, people prefer to place their bets on what is known rather than on less certain probabilities (Fox & Weber, 2002). This was initially demonstrated by what is now known as the Ellsberg paradox

(Ellsberg, 1961) using two urns:

each containing 100 balls. The first urn contains 50 red balls and 50 black balls, whereas the second contains red and black balls in an unknown proportion. When asked to bet on a blind draw from an urn, most people express no particular color preference, but they would rather bet on the clear (50-50) urn than on the vague (unknown probability) urn (Fox & Weber, 2002, p. 477).

In studies investigating ambiguity aversion, its effects were lessened by using a noncomparative design. In other words, when evaluating the likelihood of an event in isolation, ambiguity aversion is not as prevalent as when presented alongside the likelihood of other events (Fox & Weber, 2002).
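The inconsistency in the quoted two-urn choice can be made explicit: if a bettor assigns some subjective probability p to drawing red from the vague urn, no single p can rationalize strictly preferring the clear 50-50 urn for a bet on red and for a bet on black. The grid search below is merely an illustration of that argument, not a procedure from this study.

```python
# Ellsberg two-urn paradox: let p be the bettor's subjective probability of
# drawing red from the vague urn (so black has probability 1 - p). A bet on
# the clear urn wins with probability 0.5 regardless of color. Strictly
# preferring the clear urn for a red bet requires 0.5 > p; preferring it for
# a black bet requires 0.5 > 1 - p. No p in [0, 1] satisfies both.

def rationalizes_both_preferences(p):
    """True if a single subjective probability p explains both choices."""
    prefers_clear_on_red = 0.5 > p
    prefers_clear_on_black = 0.5 > (1 - p)
    return prefers_clear_on_red and prefers_clear_on_black

# Scan a fine grid of candidate probabilities; none works.
candidates = [i / 1000 for i in range(1001)]
assert not any(rationalizes_both_preferences(p) for p in candidates)
```

This is why the preference pattern is a paradox for expected-utility reasoning: the choices cannot be explained by any coherent probability assignment, only by an aversion to the vague urn itself.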

Anchoring

Anchoring, also known as “focalism,” refers to the tendency to heavily weigh initial information as a reference point against information presented later.

This effect has been demonstrated numerous times in an exercise where researchers show an arbitrary number to participants and then ask them to estimate the percentage of African countries in the United Nations. When prompted to provide their estimate, participants demonstrably “anchored” their responses to the initial arbitrary number, which significantly predicted their estimates (Tversky & Kahneman, 1974;

Lee et al., 2016). The anchoring bias has been studied extensively, as has its impact on various fields, including general knowledge, probability estimates,

legal judgments, valuations or purchasing decisions, forecasting, negotiation, and self-efficacy (Furnham & Boo, 2011). The anchoring bias has been related to other thinking errors as well, including the confirmation bias—both serve as a reference point to which people conform—and the halo effect (Neal & Grisso,

2014). Its potential impact on legal decisions is particularly alarming and justifies further research, as the use of anchors may arbitrarily sway decision-makers toward harsher or lighter sentences despite relevant arguments made later. Some explain the anchoring bias through the Selective Accessibility Model, which suggests that the anchor number becomes more accessible in the decision-maker’s mind by acting as a suggestion (Mussweiler, Strack, & Pfeiffer,

2000).

Framing

The order or framework in which information is presented changes how it is interpreted. Kahneman presents an example by describing two individuals with the same traits. The order in which we see these traits (e.g., intelligent before stubborn, or stubborn before intelligent) changes our initial evaluation of the person: “the stubbornness of an intelligent person is seen as likely to be justified and may actually evoke respect,” while the opposite may invoke a sense of dangerousness (2011, p. 82). This is especially relevant to forensic experts, particularly in the context of adversarial allegiance (Neal & Grisso, 2014).

Conjunction Fallacy

The conjunction fallacy is a violation of a basic probability rule: the probability of the conjunction of A and B cannot be higher than the probability of A or of B alone

(Sloman, Over, Slovak, & Stibel, 2003; Tversky & Kahneman, 1983). Take the

following for example: if A “represents killing the employee to prevent him from talking to the police, and if B is all potential reasons for killing the employee, then

A cannot equal A + B” because B should be all-encompassing (Neal & Grisso,

2014, pp. 202–203). However, individuals tend to assign a higher probability to the conjunction, thereby inaccurately assessing some situations because of the conjunction fallacy (2014).
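The rule behind the conjunction fallacy can be checked by simulation: because the joint event “A and B” is a subset of each conjunct, its relative frequency can never exceed either one. The events and probabilities below are invented purely for illustration.

```python
import random

random.seed(42)

# Simulate independent events A and B with illustrative probabilities.
# Every joint occurrence of "A and B" is also an occurrence of A and an
# occurrence of B, so the joint count can never exceed either single count.
N = 100_000
a_count = b_count = joint_count = 0
for _ in range(N):
    a = random.random() < 0.10   # P(A) = 0.10 (illustrative)
    b = random.random() < 0.30   # P(B) = 0.30 (illustrative)
    a_count += a
    b_count += b
    joint_count += (a and b)

p_a, p_b, p_joint = a_count / N, b_count / N, joint_count / N
assert p_joint <= min(p_a, p_b)  # the rule the conjunction fallacy violates
```

Judging a conjunction as more probable than one of its conjuncts, as in the quoted homicide example, therefore contradicts a constraint that holds in every possible case, not merely on average.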

Status-Quo, Omission, and Action/Inaction Biases

The status quo bias refers to the preference for inaction that maintains the existing state of affairs (Ritov & Baron, 1992). It denotes both the tendency to do nothing in order to maintain a previous decision and the tendency to avoid taking action. For example, individuals might resist new technology and system implementation because of the costs associated with uncertainty (Kim & Kankanhalli, 2009). Some have theorized the status quo bias to be a result of limited attention: especially in large choice sets, individuals struggle to focus attention and properly consider action over inaction

(Dean, Kibris, & Masatlioglu, 2017). Other biases with which this may be associated include loss aversion, ambiguity aversion, the present bias, anchoring, the endowment effect, and the availability heuristic.

Representativeness Bias

This bias, also called stereotyping, includes the tendency to make judgments while ignoring rules of probability (Lee et al., 2016). Kahneman demonstrates this by providing a description of a man as “meek and tidy” who is often identified as a librarian, when in fact he is more likely to be a farmer given the base rate of men in the farming profession compared to the base rate of men who

are librarians (2013, p. 88). This representation is flawed because, as Kahneman points out, there is a much higher likelihood that he is a farmer than a librarian

(2013, p. 88). An element of the representativeness bias is the base-rate fallacy.

This fallacy refers to the tendency to rely on descriptive information and ignore base rates when evaluating probabilities (Lee et al., 2016; Neal & Grisso, 2014).

For example, the tendency to select a diagnosis for a patient despite its rarity based on certain descriptions may influence professional decision-making (Neal

& Grisso, 2014). The representativeness bias is also elicited in the gambler’s fallacy, or the tendency to expect a coin flip to result in tails if the immediately preceding flip resulted in heads (Lee et al., 2016). By ignoring the statistical base rate, these decisions rest on flawed reasoning dominated by easily accessible archetypal information.
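Kahneman’s farmer-versus-librarian example reduces to Bayes’ rule: a description that fits librarians well can still point to a farmer once base rates are taken into account. The base rates and likelihoods below are invented for illustration and are not Kahneman’s figures.

```python
# Bayes' rule: P(librarian | description) is proportional to
# P(description | librarian) * P(librarian). All numbers are illustrative.

base_rate_librarian = 1 / 21   # suppose male farmers outnumber male librarians 20:1
base_rate_farmer = 20 / 21
p_desc_given_librarian = 0.90  # "meek and tidy" fits most librarians...
p_desc_given_farmer = 0.20     # ...and fewer farmers

posterior_librarian = (p_desc_given_librarian * base_rate_librarian) / (
    p_desc_given_librarian * base_rate_librarian
    + p_desc_given_farmer * base_rate_farmer
)
# Roughly 0.18: despite the stereotype, the described man is probably a farmer.
```

The base-rate fallacy amounts to comparing only the two likelihoods (0.90 versus 0.20) and dropping the base-rate terms from the calculation.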

Over the years, the list of biases and heuristics used in the course of “fast” decision-making has grown extensively. Other biases that have been described include disconfirmation bias, diffusion of responsibility, escalation of commitment, in-group favoritism, the ostrich effect, projection bias, self-serving bias, and stereotyping (Scopelliti, Morewedge, McCormick, Min, Lebrecht &

Kassam, 2015). As more biases are labelled, it becomes more difficult to parse them apart, and clearer that the potential influence of these biases on certain decisions is substantial. It may be that some newly identified biases overlap with those identified previously, but the biases reviewed in this section constitute a reasonable summary of what has been identified in the literature to date.

1.2. The Bias Blind Spot: Definitions and Elements

Heuristics provide fast and easy ways to make decisions and prevent cognitive overload. The previous section described several ways in which this approach is flawed and how it can lead to systematic errors if relied upon too heavily. There is a serious obstacle to overcoming these potentially damaging errors: the conviction that we are immune to them while others are susceptible. This meta-bias has been termed the bias blind spot (BBS). Three studies using Stanford undergraduate students (Pronin, Lin, & Ross, 2002) demonstrated that individuals more readily see bias in others than in themselves, though this difference was not as distinct as in an earlier survey that asked students to use the “average American” as the target comparison. This “better-than-average bias,” or the belief that one is usually above average, persisted even when participants were provided information on bias and its impact (2002). When consequences were made obvious, however, participants admitted to viewing themselves as equally or more biased than their peers. Later termed the BBS,

Pronin and colleagues suggested that factors such as naïve realism, the introspection illusion, social desirability, and cognitive availability combine and reinforce each other to maintain these thinking errors after individuals fail to eliminate the BBS (2002). Naïve realism, introspection, and social desirability are outcomes easily framed in discussion. But cognitive availability, or the ease with which information can be retrieved, is distinct from these factors. The following sections describe various elements that may contribute to the BBS.

The Introspection Illusion

When making judgments about another person, one important source of information is observable behavior. However, when assessing our own objectivity, we have many other sources of information and introspection available that can be used in a post-hoc analysis to confirm our belief that we are objective individuals (Pronin, 2009; Pronin et al., 2001; Pronin et al., 2002). Not only do individuals have access to more information about their emotions, thoughts, intentions, and other factors affecting them in any situation, but this information provides a more accurate picture than the behavioral observations of outside parties (Jones & Nisbett, 1972; Pronin, Gilovich, & Ross,

2004). When questioning whether a response is biased or whether a decision is sound, introspections provide affirmation. Pronin and colleagues described this introspection as a “gold standard” to which we hold others—a standard on which others often fall short (2004).

The introspection illusion hypothesis was tested with 247 undergraduates regarding the extent of its contribution to the BBS (Pronin & Kugler, 2007).

Participants reported utilizing different information, but ultimately concluded that they were less susceptible to biases than were their peers (Pronin & Kugler,

2007). During these studies, researchers asked participants to write down all their thoughts while making a decision; participants still perceived less bias in themselves than did observers who had access to these thoughts (2007).

In their last study, researchers attempted to reduce the BBS using 78 undergraduates, recruited for course credit, who were told that they were participating in two unrelated studies; the researchers were actually investigating participants’

reliance on introspections when assessing bias in themselves versus their student peers (Pronin & Kugler, 2007). Participants in the control condition exhibited a

BBS (t(38) = 4.67, p < .001) while participants in the experimental condition did not. Based on these results, Pronin and Kugler concluded that the introspection illusion may be a source of the BBS, but that education may help reduce the BBS

(2007).

Despite the fallibility of introspection, forensic evaluators have continued to rely on introspection as a viable effort to reduce their own biases (Neal &

Brodsky, 2016). This reliance continues to exist among forensic mental health evaluators (Zapf, Kukucka, Kassin, & Dror, 2018). Researchers have also found that professionals conducting forensic mental health evaluations who have more experience in the field exhibit a BBS, which may be compounded by their reliance on introspection (2018).

Naïve Realism

Citing Jones and Nisbett’s (1972) “conceptual analysis of divergent actor-observer attributions,” Pronin, Gilovich, and Ross related these phenomena to naïve realism, a feature which precipitates the belief that the way an individual perceives the world is the world in its “true form” (2004, p. 781). The thought process is as follows: (1) I see the world as it is; (2) reasonable others should

see it the same way; and (3) if they do not, then why? (2004). This thought process can strengthen an inability to accept alternative explanations for events. It may contribute to the BBS by leading to the conclusion that others do not see the world accurately because their views are susceptible to bias or errors in thinking.

Social Desirability

Under certain circumstances, the attempt to appear objective can contribute to dismissing one’s susceptibility to bias, which can actually strengthen the BBS (Hansen, Gerbasi, Todorov, Kruse, & Pronin, 2014). For example, forensic mental health evaluators and judges engage in work in which it is socially desirable to be free from bias. Likewise, when bias is framed to be a negative attribute, individuals are less likely to admit their susceptibility even when they are explicitly asked to complete a task using a specified bias. Hansen and colleagues randomly assigned 85 MTurk users to rate paintings in either an explicitly or implicitly biased task after receiving information on bias. Results revealed that participants in the explicitly biased condition not only claimed that they completed the task objectively but did so more strongly than they had before they were asked to complete the task. Researchers posited that this may be due to the participants’ desire to appear unbiased after being presented with information on bias susceptibility. Therefore, normalizing susceptibility to heuristics and biases may help reduce the BBS, but it is important to consider that social desirability may limit the inclination of some to acknowledge that they are subject to bias and therefore engage in bias-reducing behavior.

Cognitive Availability

While introspection may include thoughts that strengthen and justify individuals’ arguments that they are free from bias, there may be a self-serving exemption mechanism (Frantz, 2006). This depends upon accessibility; thoughts are readily available and can be written down almost in their totality, but the same cannot be said for the process of bias and System 1 thinking, which frequently

operate outside of conscious awareness. By examining such a cognitively inaccessible bias (the liking bias), Frantz (2006) attempted to show that it would both persist and remain outside of awareness when people were prompted to look for it. Results revealed that the BBS was clearly exhibited during conflict: participants perceived those with whom they disagreed as engaging in biased processing, in contrast with their own ostensibly exemplary processing.

Frantz (2006) describes the BBS as a barrier to resolving conflict successfully when there is an assumption that one is not affected by bias. Thus, evidence shows that the BBS can be reduced when conscious thought processes are addressed, but less so when thinking errors are outside of awareness or are seen as highly socially undesirable. The research literature does not yet have an answer to this dilemma.

Unique Enlightenment

An attempt to consider the consequences of influences combining to create bias (Ehrlinger et al., 2005) has identified “unique enlightenment,” or the belief that one’s special knowledge or experience justifies one’s deviation from a more objective viewpoint. The authors provide the following explanation for how individuals can exhibit the BBS without awareness:

Individuals consistently rate themselves above average across a variety of domains (Alicke, Klotz, Breitenbecher, Yurak, & Vredenburg, 1995; Dunning, Meyerowitz, & Holzberg, 1989), take credit for their successes but explain away their failures (Miller & Ross, 1975; Whitley & Frieze, 1985), assume they are more likely than their peers to experience the good things in life and avoid the bad (Weinstein, 1980), and tend to detect more support for their favored beliefs than is objectively warranted (Lord, Ross, & Lepper, 1979).

They described two strategies individuals use to assess for bias within the unique enlightenment construct. The first involves relying on one’s own theory of bias. The second is what Ehrlinger and colleagues described as the “tug of wishful thinking”: individuals’ hope that they will be able to recognize bias interacting with their judgments and decision-making (2005, p. 681). These strategies carry consequences, including failing to detect evidence of one’s own bias and thus concluding that one is free from bias while others are not (2005). In a series of four studies, the researchers demonstrated that a personal connection to a controversial issue was seen as a source of bias in others, but such a personal connection was viewed as more enlightenment than bias in oneself (2005). Experience may, accordingly, not only stand in the way of recognizing one’s own bias, but may actually strengthen it.

Cognitive Ability/Intelligence

Cognitive ability is another factor which has been shown to increase the

BBS (West et al., 2008, 2012). Using a measure of seven different cognitive biases, investigators surveyed undergraduate students and found that students exhibited a significant BBS—but whereas most cognitive biases are negatively related to cognitive sophistication, the BBS was positively and significantly correlated with cognitive abilities (2012). Such cognitive sophistication was measured using the Cognitive Reflection Test, the Scholastic Aptitude Test (SAT) score, and the Need for Cognition Scale (2012).

The existence of the BBS, and the combination of factors through which it manifests, raise the question of whether it can be measured, particularly when aspects are

outside of conscious awareness. Scopelliti and colleagues conducted a series of studies testing the Bias Blind Spot Measure (BBSM; see Appendix A), which asks participants to compare themselves to the “average American.” The series of studies was conducted using different samples of Amazon Mechanical Turk

(MTurk) users, and the studies were separately designed to test the measure’s reliability, factor structure, and discriminant validity in relation to other constructs (e.g., intelligence, cognitive reflection, personality, and decision-making skills). Results indicated that the BBSM has high reliability (α = 0.86), loads onto a single factor in confirmatory factor analysis, and measures the BBS as distinct from constructs such as need for cognition and intelligence (2015).

1.3. The Bias Blind Spot: Efforts to Understand and Mitigate

Researchers have attempted to measure, mitigate, and explain the BBS since it was first described in 2002, using both college students and professionals.

While research strongly supports its existence, efforts to mitigate the BBS in either population have yielded results that are discouraging or do not persist. The present section outlines efforts to reduce the BBS.

Recognizing that previous efforts to intervene against bias resulted in short-lived influence, Morewedge and colleagues tested the longevity of a single training intervention. Their first study targeted BBS, confirmation bias, and the

FAE, while the second targeted the anchoring bias, projection bias, and representativeness bias. Participants were recruited in person and randomly assigned either to view a training video on debiasing or to play a computer game in which players make judgments that test how strongly they exhibit the biases and then receive personalized feedback. Results revealed a significant reduction in

bias, with large immediate effect sizes that persisted for two to three months after the single intervention, with greater efficacy for those who played the video game and received personalized feedback (Morewedge,

Yoon, Scopelliti, Symborsky, Korris, & Kassam, 2015).

In a series of studies conducted among college students, Bessarabova and colleagues (2016) explored the use of video games in mitigating the BBS. By manipulating factors such as duration of gaming, multiplayer versus single-player format, training, repetition, and feedback, the researchers found that such video games successfully reduced the BBS. Two additional important findings were cited: the gamers’ ability to rely on introspection was limited, and the interventions’ effects were maintained for 8 weeks before subsiding (2016). Further research into the use of games or other interactive media that limit an individual’s access to introspection is thus promising (Shaw et al., 2016).

Individuals who are aware of their biases are not necessarily able to correct for them, although the literature on this point has been conflicting (Pronin et al.,

2002; West et al., 2012; Pronin & Kugler, 2007). It remains to be seen whether the BBS can be mitigated via intervention, and what this intervention would involve. Professional fields such as medicine and forensic psychology (Neal &

Brodsky, 2016; Neal, 2016) have an important interest in bias-reducing interventions, as they make decisions that are expected to be as bias-free as possible.

Debiasing interventions have been described in three domains: recognizing and creating awareness of possible biases (educational strategies), intervention during the decision-making process (workplace strategies), and

forcing functions (Croskerry, Singhal, & Mamede, 2013). Educational strategies include simulation training. Workplace strategies involve providing information, affective debiasing, slowing down, increasing skepticism, recalibration, and personal accountability. For example, one method involves providing physicians with a checklist to structure and inform decision-making. Forcing functions were defined as “rules that depend on the clinician consciously applying a metacognitive step and cognitively forcing a necessary consideration of alternatives” (2013, p. ii68). These methods require further empirical testing among both clinicians and community populations.

Mapping bias awareness onto the stages of change model (Prochaska,

DiClemente, & Norcross, 1992), other investigators (Neal & Brodsky, 2016;

Zappala, Reed, Beltrani, Zapf, & Otto, 2018) found that forensic psychologists exhibited the BBS when comparing themselves to their colleagues. When asked to identify strategies that may be effective at reducing bias, forensic psychologists cited those known to be ineffective in doing so. For example, introspection was identified as a key strategy for bias identification, although it is not supported by evidence for effective bias detection and control (Pronin et al., 2002; Pronin &

Kugler, 2007). But one promising result was that for clinicians who identified as forensic psychologists, the motivation to seek debiasing efforts was higher (Neal

& Brodsky, 2016). This may indicate that when professionals value objectivity, they may be more motivated and amenable to bias mitigation strategies despite being susceptible to thinking errors themselves. However, the method of debiasing should be informed by further research, as a recent study has

demonstrated that efforts to educate forensic evaluators about the BBS did not result in a reduction in bias (Zappala et al., 2018).

Encouraging forensic experts to acknowledge their own biases may help them to implement debiasing efforts in the future (Dror, 2018). What remains unclear is whether this striving for objectivity is separate from motivation or a need for cognition. This labelling effect has been observed among professionals but has yet to be measured among general community members. Whether this value on objectivity is a function of professional identity or something else is unclear. If the latter, then there may be an intervention for mitigating bias that could be effective for professionals and the general population alike.

Some debiasing strategies have been proposed (see, e.g., Neal & Brodsky,

2016) but not yet empirically tested. Such strategies include considering the opposite before making a decision (Croskerry et al., 2013) and Linear

Sequential Unmasking (Dror, 2018). Some proposed strategies may apply generally; these include recognizing the limits of memory and hypothesis generation. It has been noted that most debiasing efforts do not require participants to apply information to “real-world” situations, and little is done to ensure that their effects extend beyond the tasks at hand (Lilienfeld, Ammirati, &

Landfield, 2009). Given the current discrepancies in the literature, this is understandable; the field does not yet understand whether, and under what circumstances, debiasing efforts can meaningfully impact decision-making. As it stands, even when prompted to use biased thinking, individuals still claim objectivity (Hansen et al., 2014).

While some debiasing efforts have been implemented among professionals, it is not clear what effect they might have in the general population. Some biases can be mitigated through serious video game training, but the BBS appears to be particularly resistant to attempts at mitigation

(Shaw et al., 2018). Given that the BBS may prevent individuals from improving their cognitive processes, it is important to determine how it can be mitigated.

Efforts to do so may continue to be unfruitful without a better understanding of what contributes to the BBS. By examining the BBS and its contributing factors in a community sample, research may identify targets for interventions to reduce bias. A better understanding of the impact of bias on decision-making in the general population is important for describing its nature and prevalence, as well as for guiding the development of debiasing interventions that are effective both immediately and over time.

2. The Current Study

The current study sought to fill gaps identified in the literature by investigating the BBS construct. We need a better understanding of BBS influence, including its circumstances, impact, and associated factors. The current study was intended to assess the BBS in a community sample, moving beyond the less representative sample of undergraduates that has been largely used to date.

By using an example that has consequences (i.e., receiving a driving citation), the current study aims to contribute to research efforts that improve everyday decision-making in the general population. No research to date appears to have considered decision-making in the context of a law-breaking scenario such as driving above the speed limit. Future research may expand this kind of investigation to encompass decision-making in specific samples (e.g., justice-involved individuals, medical patients, forensic mental health evaluators).

2.1. Research Questions

1. Do questions asked of individuals in the general population relating to a

vignette involving speeding elicit evidence of the presence of BBS?

2. Are social desirability, need for cognition, cognitive reflection, and unique

enlightenment associated with the BBS?

2.2. Hypotheses

1. Participants will demonstrate the BBS by rating themselves as

significantly less likely than they rate the average person to commit

moving traffic violations.

2. Social desirability will be significantly positively correlated to the BBS

size.

3. Need for cognition will be significantly negatively correlated with the

BBS.

4. Cognitive reflection will be significantly negatively correlated with the

BBS.

5. Unique enlightenment will be significantly positively correlated with the

BBS.

6. The BBS as measured in the hypothetical vignette and assigned

likelihoods for committing a traffic violation will be significantly

positively correlated with the Bias Blind Spot Measure.

7. A model accounting for social desirability, cognitive reflection, and need

for cognition will significantly predict BBS size.

Participants received a survey using Amazon Mechanical Turk (MTurk).

The survey asked them to assess the likelihood that they would commit traffic violations while driving. This likelihood was compared to the likelihood they estimated for the average person.

2.3. Participants

Individuals aged 18 or older were invited to participate via MTurk.

The population of interest was a community sample who could identify with the assigned vignette. Participants who had never driven a car were excluded from the study using a screening question at the start of the survey. While some research has indicated that MTurk users do not differ demographically from a community population, there is also evidence that MTurk users endorse clinical symptoms of depression and social anxiety at higher rates than the general population (Arditte et al., 2016). Given the dearth of research in community

samples, MTurk provides a convenient method for data collection and may provide a more representative sample than seen in previous studies.

A power analysis was conducted using G*Power for a one-way ANOVA with 2 groups (having received versus never having received a moving traffic violation) and a small effect size (d = .20). The power analysis revealed that with an alpha coefficient of .05, 209 participants would be needed to detect significant differences. To account for missing data and for participants failing validity checks during the survey, additional participants were recruited; 555 people opened the survey. In previous studies, approximately 4% of participants failed validity checks, and the survey completion rate for MTurk is 91.6% (Paolacci, Chandler, & Ipeirotis, 2010). In the current study, 46% of participants failed at least one of the validity checks.

2.4. Procedure

After providing informed consent, participants read a driving vignette and were asked to rate the likelihood that they themselves, and that the average person, would commit a traffic violation. Participants then completed the measures below, provided demographic information, and answered questions about their experiences with traffic violations. The comparison of interest was whether an individual identifies a discrepancy between their self-reported likelihood of violating a traffic provision and the likelihood they assign to the average person target. Upon completion of the survey, participants were provided with a randomly generated number that they could use to receive their Amazon credit as compensation.

Throughout the survey, two validation checks were used to assess for level of attention and reading comprehension. These validation checks asked

participants to select a specific response, and participants who did not successfully answer both validation checks were excluded from analyses. The estimated time to complete the survey was 15-30 minutes. Analyses were conducted using IBM

SPSS Statistics Version 25 software.

2.5. Measures

Vignette Ratings. Vignette ratings compared the likelihood that participants assigned to the average person target with the likelihood they assigned to themselves when assessing whether they would commit a traffic violation. The difference between these scores indexed the size of a person’s BBS by capturing the asymmetry with which they view themselves versus others. Participants indicated how likely they were to commit the traffic violation as a percentage (0% likely to 100% likely). The vignette questions were designed to convey a situation in which there was clearly a “bad” decision available, but one that did not violate the law to the point of being considered reckless. The use of subtle cues (e.g., a four-way intersection) served two purposes: cues help participants envision themselves within the scenario while providing implicit information that may be interpreted using quick tools or tendencies. Scores were calculated by subtracting the likelihood assigned to the average person target from the likelihood individuals reported for themselves, so a participant who rates themselves as less likely than the average person to commit a violation receives a negative score, with more negative scores reflecting a larger BBS. The two item scores were then summed so that, overall, higher scores indicate that individuals perceive themselves as more

likely than the average person to commit a traffic violation and lower scores indicate the opposite.
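As an illustrative sketch of this scoring arithmetic (the ratings below are made up, and expressing both items on the violation-likelihood scale is an assumption; only the self-minus-average differencing is implied by the negative means reported in the Results):

```python
def item_score(self_violation_pct, average_violation_pct):
    """Difference score for one vignette item, in percentage points.

    Negative values mean the participant rates themselves as less likely
    than the average person to commit the violation (a larger BBS).
    """
    return self_violation_pct - average_violation_pct

# Hypothetical ratings for the two vignette items, both expressed as
# likelihood of committing the violation (0-100%):
total_bbs = item_score(50, 64) + item_score(22, 36)
print(total_bbs)  # -28: self seen as less likely to violate on both items
```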

The Bias Blind Spot Measure (BBSM; Scopelliti et al., 2015). This is a

14-item measure that describes various forms of bias and asks participants to rate how much they are personally affected by each bias, and how much an “average American” is affected by the same bias (Scopelliti et al., 2015). Each item includes a description of a bias. For example:

Many psychological studies have shown that people react to

counterevidence by actually strengthening their beliefs. For example,

when exposed to negative evidence about their favorite political candidate,

people tend to implicitly counter-argue against that evidence, therefore

strengthening their favorable feelings toward the candidate (Scopelliti et

al., 2015, p. 2472).

This item describes the confirmation bias, and then participants rate on a scale of

1 (not at all) to 7 (very much) whether they exhibit that bias and whether the

“average American exhibits that bias” (2015, p. 2471).

The Marlowe-Crowne Social Desirability Scale. This is a 33-item scale

(Crowne & Marlowe, 1960) in which participants are asked to rate whether items are true or false (see Appendix C). Though developed in 1960, the scale remains relevant to concepts of social desirability today and is the most commonly used measure of socially desirable responding (Perinelli &

Gremigni, 2016).

The Cognitive Reflection Test. This three-question test (Frederick, 2005) elicits an intuitive but often incorrect response. This measure will be used to

assess an individual’s ability to resist the first response that comes to mind. The

CRT has been shown to overlap with cognitive ability and rational thinking but to explain a significant amount of variance (11.2%) when predicting performance on heuristics and biases tasks (Toplak, West, & Stanovich, 2011). A longer version of the measure has been developed to assess a wider range of cognitive reflection: the Cognitive Reflection Test-Long (CRT-L; Primi, Morsanyi, Chiesi,

Donati, & Hamilton, 2016). Given that the CRT has become increasingly recognized by participants in research studies (Haigh, 2016), the longer version published by Primi and colleagues will be used (see Appendix D) and participants will be asked whether they recognize any of the questions presented on the CRT-

L.

The Need for Cognition Scale. The NFCS was developed to measure an individual’s engagement and enjoyment of difficult cognitive tasks (Cacioppo &

Petty, 1982; Cacioppo, Petty, & Kao, 1984). This measure contains 34 questions that ask individuals to rate how much they agree with each statement using a

Likert scale from -4 (very strong disagreement) to 4 (very strong agreement) (see

Appendix E).

3. Results

Participants were recruited via MTurk. In total, 555 individuals opened and began the survey. They were eligible to complete the survey if they had ever driven a car and had an MTurk account. Of these, 297 participants had driven a car, passed both validation checks, and were included in the final analyses. Demographic information for participants is presented in Table 1. On average, participants took 16.01 minutes to complete the survey, with completion time ranging from 3.43 to 73.73 minutes (SD = 12.03).

Analyses

Prior to hypothesis testing, descriptive statistics were examined to assess whether the assumptions of each test were violated. Results revealed that assumptions regarding skewness and kurtosis were violated for total CRT scores; scores on the other relevant measures were normally distributed. Information regarding scores on the measures used is reported in Table 2.

Hypothesis 1. Hypothesis 1 was supported. To assess the difference between self-reported likelihood of speeding and the likelihood assigned to the average person, a related-samples t-test was used. Results revealed that for the first vignette

BBS question, participants rated themselves significantly less likely to speed (N =

297; M = 50.71, SD = 29.57) than the average person (M = 64.00, SD = 20.24; t(296) = 29.56, p <.001, one-tailed). Results revealed that for the second vignette

BBS question, participants rated themselves significantly more likely to come to a complete stop at a stop sign (N = 297; M = 78.09, SD = 24.79) than the average person (M = 63.60, SD = 21.53; t(296) = 54.28, p < .001, one-tailed). Both responses indicate that individuals perceived themselves as less likely to commit a

moving traffic violation than the average person target. To assess the differences between the two vignette BBS items, a related-samples t-test was used. Results revealed a significant difference in the BBS size between the first (N = 297; M = -

13.29, SD = 23.93) and second item (M = -14.49, SD = 22.87; t(296) = -9.57, p <

.001, one-tailed).

Hypothesis 2. Hypothesis 2 was not supported. To assess whether social desirability was significantly positively correlated with the vignette BBS size, a one-tailed Pearson correlation was used. Results revealed no significant correlation, n = 264, r = .07, p = .110.

Hypothesis 3. Hypothesis 3 was not supported. To assess whether NFC scores were significantly negatively correlated with the vignette BBS scores, a one-tailed Pearson correlation was conducted. Results revealed no significant correlation, n = 133, r = -.10, p = .117.

Hypothesis 4. Hypothesis 4 was not supported. To assess whether CRT-L scores were significantly negatively related to the vignette BBS scores, a one-tailed Spearman correlation was conducted. A Spearman correlation was used instead of a Pearson correlation given the kurtosis and skewness of the CRT-L scores.

Results revealed that cognitive reflection and vignette BBS scores were not significantly related, n = 292, r = -.08, p = .089.

Hypothesis 5. Hypothesis 5 was not supported. To assess whether individuals who had received a moving traffic violation differed significantly in vignette BBS scores from those who had not, an independent-samples t-test was used. No significant difference was observed (t(295) = -.61, p = .272, one-tailed). Group 1 included those who had never received a traffic violation (n = 139, M = -29.158,

SD = 39.091). Group 2 included those who had received a moving traffic violation (n = 158, M = -26.570, SD = 36.586). Information regarding participants’ driving and crime statistics is presented in Table 3.

Hypothesis 6. Hypothesis 6 was supported. A Pearson correlation was used to assess whether vignette BBS scores were significantly positively correlated to scores on the BBSM. Using a one-tailed Pearson correlation, BBSM and vignette BBS scores were significantly positively related, n = 279, r = .33, p <

.001.
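As with the other analyses, the correlations were computed in SPSS; a stdlib-only Pearson r, shown here purely for illustration with toy data, makes the computation explicit:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance scaled by both standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data: a perfectly linear relationship yields r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```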

Hypothesis 7. Hypothesis 7 was not supported. Hierarchical multiple regression was used to assess the ability of three measures (social desirability, cognitive reflection, and need for cognition) to predict vignette BBS size. Scores on the M-C SDS were entered at Step 1, explaining 0.4% of the variance in the

BBS (F(1, 108) = .46, p = .500). After entry of the CRT and NFC at Step 2, the total variance explained by the model was .07%, F(1, 107) = .33, p = .568. After controlling for social desirability, cognitive reflection and need for cognition explained an additional 1.3% of the variance in the BBS, R-square change = .006. In the final model, none of the factors was statistically significant.

4. Discussion

In the past, research has demonstrated that individuals have an asymmetrical view when reflecting upon their own biases versus those of others.

The current study aimed to investigate whether this asymmetry would present itself in individuals’ decision-making when their decisions may result in legal consequences. The current study also aimed to investigate whether previously theorized factors such as social desirability, need for cognition, and unique enlightenment predicted the degree to which that asymmetry presented. Results revealed that while this asymmetry was present and significantly related to the

BBS, factors that were thought to contribute to the BBS did not significantly predict its size.

Participants viewed themselves asymmetrically and more favorably in their potential decision when comparing themselves to the average person. The size of this asymmetry was significantly and positively related to the BBSM, indicating that the questions captured a potential consequence of the BBS.

However, results also suggest that this asymmetry may be distinct from merely perceiving others as more biased than oneself. The current study also aimed to comprehensively study factors thought to contribute to the BBS, but the hypothesized associations were not supported by the results.

Several limitations to the current study exist, including the notably large number of individuals who did not pass both validation checks, leaving data from fewer than half of the individuals recruited out of the analysis. This may be a result of the cognitive burden and amount of reading participants were required to do. However, even among those who passed the

validation checks and were attending to each question, results revealed a significant effect in the vignette ratings. Future studies might investigate the cognitive engagement of MTurk users given this limitation. The current study also demonstrates that MTurk users are largely highly educated and that samples drawn from MTurk may not be more representative of the community at large than samples consisting of undergraduate students.

A second limitation to the study is the increased popularity of the CRT.

Descriptive analyses of the CRT indicated a skewed response distribution, and researchers have begun to pay attention to whether participants have been exposed to CRT questions in the past. While this study did not ask participants whether they recognized any of the questions, past familiarity with the CRT may explain this result. The CRT is a useful measure for assessing the ability to resist an impulsive but incorrect response, but researchers may consider developing questions that elicit similar responses without being easily recognized from prior publication. Another notable limitation is that, while more representative than a college student sample, the participants in the current study were relatively well educated and may not reflect the general population.

Lastly, it is important to note that the asymmetry measured by the BBS vignettes may reflect a different construct than the BBS. Considering that its relationship with the BBSM accounted for only about 10% of the variance in scores, future studies may consider asking whether participants thought they would get caught or whether they would experience negative consequences. While the vignette asymmetry is a step toward understanding whether individuals view themselves as different from the average person, future research will need to

clarify the nuances of the asymmetry, including errors in executing decisions versus errors in judgment.

Overall, these results provide insight in response to literature that has theorized about factors contributing to the BBS: while these factors did not predict its size, the BBS persists in the way we perceive ourselves versus others. Previous researchers theorized that factors including unique enlightenment (Ehrlinger et al., 2005), social desirability (Hansen et al.,

2014), and cognitive ability (West et al., 2008) contributed to or explained the

BBS. If factors such as unique enlightenment or need for cognition did contribute to the BBS, they may have been fruitful intervention points to mitigate bias.

However, since these were not supported, other theoretical justifications for intervention and efforts to mitigate the BBS must be explored. Three factors that the current study did not explore include naïve realism (Jones & Nisbett, 1972), the introspection illusion (Pronin, 2009), and cognitive availability (Frantz, 2006).

Some research has explored these latter three factors, including the strength of the introspection illusion, and found that it persists (Neal & Brodsky, 2016). Given that the BBS is particularly resistant to mitigation, it is increasingly important to identify these nuances since this blind spot can result in decisions that lead to legal consequences.

The asymmetry with which we view ourselves compared to others has implications for consequential decision-making. In the legal process, people are called upon to judge individuals such as defendants in criminal cases and respondents in civil cases. Given the natural asymmetry between our views of ourselves and of others, judges, juries, and forensic mental health evaluators may be

even more inclined to view themselves differently from those whom they are judging, leaving decision-making processes further vulnerable to extraneous factors.

In a broader legal policy sense, systematic error produced by unchecked biases and heuristics has larger implications. Though judges are no longer required to follow the federal sentencing guidelines, the guidelines remain points of reference in federal sentencing decisions (Stith & Koh, 1993; 2018 Guidelines

Manual). Once a defendant is convicted of a crime, these guidelines provide recommended sentences based on a variety of factors, including the defendant’s criminal history and specific characteristics of the crime

(2018 Guidelines Manual). If left unchecked, this can be an opportunity for the anchoring bias to influence decision-making without individual consideration

(Bennett, 2014). When arbitrary numbers tend to anchor decisions (Lee et al.,

2016), legal policies with little to no scientific justification may have the same impact on decision-makers. Coupled with the tendency to be blind to one’s own biases (Pronin et al., 2002), understanding and mitigating biases and heuristics among decision-makers in the criminal justice system may help counteract the lingering effects of retracted legal policies.

Though many of the proposed hypotheses were not supported, these results underscore the need for further investigation of the BBS and challenge the previous literature on its contributing factors. First, researchers might consider how to study factors such as the introspection illusion and naïve realism, which were not examined in this project. The self-other asymmetry in a hypothetical but practical situation may provide insight into how the BBS can produce real consequences for decision-makers. If additional questions regarding these consequences can be developed, the result may be a measure distinct from, but important to, understanding not only the perception that others are more biased, but also whether individuals act on that perception. Finding measures that accurately reflect the BBS and its consequences is critical, especially as participants increasingly recognize measures such as the CRT; this familiarity threatens bias research by raising such measures' face validity, which may elicit socially desirable responding.

The BBS refers to the asymmetric conviction that others are more susceptible to thinking errors than we are (Pronin et al., 2002). The current study demonstrated that this asymmetry was present in participants' perceptions of their own decision-making. The asymmetry has implications for the criminal justice system and for decision-makers' ability to manage their biases and heuristics, as that system presents many scenarios in which individuals are susceptible to bias. To further the current literature, future research should replicate these observational results using randomization in order to draw more stable conclusions regarding biases, the BBS, and their potential impact on such decisions. By understanding and mitigating the consequences of the BBS, applied research can provide insight into how to increase confidence in the decision-making process.

List of References

2018 Guidelines Manual (November 1, 2018). Retrieved from https://www.ussc.gov/guidelines/2018-guidelines-manual.

Arditte, K.A., Çek, D., Shaw, A.M., & Timpano, K.R. (2016). The importance of assessing clinical phenomena in Mechanical Turk research. Psychological Assessment, 28(6), 684–691. doi: 10.1037/pas0000217

Arkes, H.R. (2013). The consequences of the hindsight bias in medical decision making. Current Directions in Psychological Science, 22(5), 356–360. doi: 10.1177/0963721413489988

Behrend, T.S., Sharek, D.J., Meade, A.W., & Wiebe, E.N. (2011). The viability of crowdsourcing for survey research. Behavior Research Methods, 43, 800–813. doi: 10.3758/s13428-011-0081-0

Bennett, M.W. (2014). Confronting cognitive “anchoring effect” and “blind spot” biases in federal sentencing: A modest solution for reforming a fundamental flaw. Journal of Criminal Law and Criminology, 104(3), 489–534.

Bessarabova, E., Piercy, C.W., King, S., Vincent, C., Dunbar, N.E., Burgoon, J.K., Miller, C.H., . . . Lee, Y. (2016). Mitigating bias blind spot via a serious video game. Computers in Human Behavior, 62, 452–466. doi: 10.1016/j.chb.2016.03.089

Bruine de Bruin, W., Parker, A.M., & Fischhoff, B. (2007). Individual differences in adult decision-making competence. Journal of Personality and Social Psychology, 92(5), 938–956. doi: 10.1037/0022-3514.92.5.938

Cacioppo, J.T., & Petty, R.E. (1982). The Need for Cognition. Journal of Personality and Social Psychology, 42(1), 116–131. doi: 10.1037/0022- 3514.42.1.116

Cacioppo, J.T., Petty, R.E., & Kao, C.F. (1984). The efficient assessment of Need for Cognition. Journal of Personality Assessment, 48, 306–307. doi: 10.1207/s15327752jpa4803_13

Christensen-Szalanski, J.J.J., & Wilham, C. (1991). The Hindsight Bias: A meta- analysis. Organizational Behavior and Human Decision Processes, 48, 147– 168. doi: 10.1016/0749-5978(91)90010-Q

Coombs, T.W., & Holladay, S.J. (2006). Unpacking the halo effect: Reputation and crisis management. Journal of Communication Management, 10(2), 123– 137. doi: 10.1108/13632540610664698


Croskerry, P., Singhal, G., & Mamede, S. (2013). Cognitive debiasing: 2 impediments to and strategies for change. BMJ Quality & Safety, 22, ii65– ii72. doi: 10.1136/bmjqs-2012-001713

Crowne, D.P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24(4), 349–354. doi: 10.1037/h0047358

Dean, M., Kibris, O., & Masatlioglu, Y. (2017). Limited attention and status quo bias. Journal of Economic Theory, 169, 93–127.

Driving Citation Statistics, retrieved from http://www.statisticbrain.com/driving-citation-statistics/.

Dror, I.E. (2018). Biases in forensic experts. Science, 360(6386), 243–244. doi: 10.1126/science.aat8443

Ehrlinger, J., Gilovich, T., & Ross, L. (2005). Peering into the bias blind spot: People’s assessments of bias in themselves and others. Personality and Social Psychology Bulletin, 31(5), 680–692. doi: 10.1177/0146167204271570

Ellsberg, D. (1961). Risk, ambiguity, and the savage axioms. The Quarterly Journal of Economics, 75(4), 643–669.

Epstein, S. (1994). Integration of the cognitive and the psychodynamic unconscious. American Psychologist, 49(8), 709–724.

Fox, C.R., & Weber, M. (2002). Ambiguity aversion, comparative ignorance, and decision context. Organizational Behavior and Human Decision Processes, 88(1), 476–498. doi: 10.1006/obhd.2001.2990

Frantz, C. (2006). I AM being fair: The bias blind spot as a stumbling block to seeing both sides. Basic and Applied Social Psychology, 28(2), 157–167. doi: 10.1207/s15324834basp2802_

Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. doi: 10.1257/089533005775196732

Frederick, S., Loewenstein, G., & O’Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, XL, 351– 401. doi: 10.1257/002205102320161311

Furnham, A., & Boo, H. (2011). A literature review of the anchoring effect. Journal of Socio-Economics, 40(1), 35–42. doi: 10.1016/j.socec.2010.10.008

Haigh, M. (2016). Has the standard Cognitive Reflection Test become a victim of its own success? Advances in Cognitive Psychology, 12(3), 145–149. doi: 10.5709/acp-0193-5

Hansen, K., Gerbasi, M., Todorov, A., Kruse, E., & Pronin, E. (2014). People claim objectivity after knowingly using biased strategies. Personality and Social Psychology Bulletin, 40(6), 691–699. doi: 10.1177/0146167214523476

Jellison, J.M., & Green, J. (1981). A self-presentation approach to the fundamental attribution error: The norm of internality. Journal of Personality and Social Psychology, 40(4), 643–649. doi: 10.1037/0022-3514.40.4.643

Jones, E.E., & Nisbett, R.E. (1972). The actor and the observer: Divergent perceptions of the cause of behavior. In E.E. Jones, D.E. Kanouse, H.H. Kelley, R.E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 79–94). Morristown, NJ: General Learning Press.

Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

Kahneman, D., Knetsch, J.L., & Thaler, R.H. (1991). Anomalies: The endowment effect, loss aversion, and status quo bias. Journal of Economic Perspectives, 5(1), 193–206. doi: 10.1257/jep.5.1.193

Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review, 103(3), 582–591.

Kassin, S.M., Dror, I.E., & Kukucka, J. (2013). The forensic confirmation bias: Problems, perspectives, and proposed solutions. Journal of Applied Research in Memory and Cognition, 2(1), 42–52. doi: 10.1016/j.jarmac.2013.01.001

Kim, H.W., & Kankanhalli, A. (2009). Investigating user resistance to information systems implementation: A status quo bias perspective. MIS Quarterly, 33(3), 567–582.

Kukucka, J., & Kassin, S.M. (2014). Do confessions taint perceptions of handwriting evidence? An empirical test of the forensic confirmation bias. Law and Human Behavior, 38(3), 256–270. doi: 10.1037/lhb0000066

Lewis, M. (2016). The Undoing Project: A friendship that changed our minds. New York, NY: W. W. Norton & Company, Inc.

Lilienfeld, S.O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4(4), 390–398.

Moore, D. (2007). Not so above average after all: When people believe they are worse than average and its implications for theories of bias in social comparison. Organizational Behavior and Human Decision Processes, 102(1), 42–58. doi: 10.1016/j.obhdp.2006.09.005

Morewedge, C.K., Yoon, H., Scopelliti, I., Symborski, C.W., Korris, J.H., & Kassam, K.S. (2015). Debiasing decisions: Improved decision making with a single training intervention. Policy Insights from the Behavioral and Brain Sciences, 2(1), 129–140. doi: 10.1177/2372732215600886

MacLean, C.L., & Dror, I.E. (2016). A primer on the psychology of cognitive bias. In A. Kesselheim & C. Robertson (Eds.), Blinding as a solution to bias (pp. 13–24). Elsevier.

McAdams, R.H. (2011). Present bias and criminal law. University of Illinois Law Review, 2011, 1608–1632.


Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26(9), 1142–1150. doi: 10.1177/01461672002611010

Neal, T.M.S. (2016). Are forensic experts already biased before adversarial legal parties hire them? PLoS ONE, 11(4), 1–13. doi: 10.1371/journal.pone.0154434

Neal, T.M.S., & Brodsky, S.L. (2016). Forensic psychologists’ perceptions of bias and potential correction strategies in forensic mental health evaluations. Psychology, Public Policy, and Law, 22(1), 58–76. doi: 10.1037/law0000077

Nickerson, R.S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. doi: 10.1037/1089-2680.2.2.175

Paolacci, G., Chandler, J., & Ipeirotis, P.G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.

Perinelli, E., & Gremigni, P. (2016). Use of social desirability scales in clinical psychology: A systematic review. Journal of Clinical Psychology, 72(6), 534– 551. doi: 10.1002/jclp.22284.

Primi, C., Morsanyi, K., Chiesi, F., Donati, M.A., & Hamilton, J. (2016). The development and testing of a new version of the Cognitive Reflection Test applying Item Response Theory (IRT). Journal of Behavioral Decision Making, 29, 453–469. doi: 10.1002/bdm.1883.


Prochaska, J.O., DiClemente, C.C., & Norcross, J.C. (1992). In search of how people change: Applications to addictive behaviors. American Psychologist, 47, 1102–1115. doi: 10.1037/0003-066X.47.9.1102

Pronin, E. (2009). The introspection illusion. In M.P. Zanna (Ed.), Advances in experimental social psychology (Vol. 41, pp. 1–67). Burlington, MA: Academic Press.

Pronin, E., Gilovich, T., & Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111(3), 781–799. doi: 10.1037/0033-295X.111.3.781

Pronin, E., Kruger, J., Savitsky, K., & Ross, L. (2001). You don’t know me, but I know you: The illusion of asymmetric insight. Journal of Personality and Social Psychology, 81(4), 639–656. doi: 10.1037/0022-3514.81.4.639

Pronin, E., & Kugler, M.B. (2007). Valuing thought, ignoring behavior: The introspection illusion as a source of the bias blind spot. Journal of Experimental Social Psychology, 43(4), 565–578. doi: 10.1016/j.jesp.2006.05.011

Pronin, E., Lin, D.Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28, 369– 381.

Ritov, I., & Baron, J. (1992). Status-quo and omission biases. Journal of Risk and Uncertainty, 5, 49–61.

Ross, L. (1977). The intuitive psychologist and his shortcomings. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 10, pp. 173–220). New York, NY: Academic Press.

Ross, L., Green, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13(3), 279–301. doi: 10.1016/0022-1031(77)90049-X

Scopelliti, I., Morewedge, C., McCormick, E., Min, H.L., Lebrecht, S., & Kassam, K.S. (2015). Bias blind spot: Structure, measurement, and consequences. Management Science, 61(10), 2468–2486. doi: 10.1287/mnsc.2014.2096

Shaw, A., Kenski, K., Stromer-Galley, J., Martey, R.M., Clegg, B.A., Lewis, J.E., Folkestad, J.E., & Strzalkowski, T. (2018). Serious efforts at bias reduction: The effects of digital games and avatar customisation on three cognitive biases. Journal of Media Psychology, 30(1), 16–28. doi: 10.1027/1864- 1105/a000174

Shaw, A., Kenski, K., Stromer-Galley, J., Martey, R.M., Clegg, B.A., Lewis, J.E., Folkestad, J.E., & Strzalkowski, T. (2016). Serious efforts at bias reduction. Journal of Media Psychology, 1–13. doi: 10.1027/1864-1105/a000174

Sloman, S.A., Over, D., Slovak, L., & Stibel, J.M. (2003). Frequency illusions and other fallacies. Organizational Behavior and Human Decision Processes, 91, 296–309. doi: 10.1016/S0749-5978(03)00021-9

Stith, K.S. & Koh, S.Y. (1993). The politics of sentencing reform: The legislative history of the federal sentencing guidelines, Wake Forest Law Review, 28, 223–290.

Toplak, M.E., West, R.F., & Stanovich, K.E. (2011). The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory and Cognition, 39, 1275–1289. doi: 10.3758/s13421-011-0104-1

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293– 314. doi: 10.1037/0033-295X.90.4.293.

Tversky, A., & Kahneman, D. (1992). Advances in Prospect Theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.

Tversky, A., & Kahneman, D. (2009). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. In J.E. Adler & L.J. Rips (Eds.), Reasoning: Studies of human inference and its foundations (pp. 114–135). New York: Cambridge University Press.

Wagerman, S.E. (2015). A fundamental disagreement about the fundamental attribution error, or: The situation made him write it. Journal of Integrated Social Sciences, 5(1), 58–62.

West, R.F., Meserve, R.J., & Stanovich, K.E. (2012). Cognitive sophistication does not attenuate the bias blind spot. Journal of Personality and Social Psychology, 103(3), 506–519. doi: 10.1037/a0028857

West, R.F., Toplak, M.E., & Stanovich, K.E. (2008). Heuristics and biases as measures of critical thinking: Associations with cognitive ability and thinking dispositions. Journal of Educational Psychology, 100(4), 930– 941. doi: 10.1037/a0012842

Zapf, P.A., Kukucka, J., Kassin, S.M., & Dror, I.E. (2018). Cognitive bias in forensic mental health assessment: Evaluator beliefs about its nature and scope. Psychology, Public Policy, and Law, 24(1).

Zappala, M., Reed, A.L., Beltrani, A., Zapf, P.A., & Otto, R.K. (2018). Anything you can do, I can do better: Bias awareness in forensic evaluators. Journal of Forensic Psychology Research and Practice, 18(1), 45–56. doi: 10.1080/24732850.2017.1413532

Appendix A: The Bias Blind Spot Measure (Scopelliti et al., 2015)

1. Some people show a tendency to judge a harmful action as worse than an equally harmful inaction. For example, this tendency leads to thinking it is worse to falsely testify in court that someone is guilty, than not to testify that someone is innocent.
2. Psychologists have claimed that some people show a tendency to do or believe a thing only because many other people believe or do that thing, to feel safer or to avoid conflict.
3. Many psychological studies have shown that people react to counterevidence by actually strengthening their beliefs. For example, when exposed to negative evidence about their favorite political candidate, people tend to implicitly counter-argue against that evidence, therefore strengthening their favorable feelings toward the candidate.
4. Psychologists have claimed that some people show a “disconfirmation” tendency in the way they evaluate research about potentially dangerous habits. That is, they are more critical in evaluating evidence that an activity is dangerous when they engage in that activity than when they do not.
5. Psychologists have identified an effect called “diffusion of responsibility,” where people tend not to help in an emergency situation when other people are present. This happens because as the number of bystanders increases, a bystander who sees other people standing around is less likely to interpret the incident as a problem, and also is less likely to feel individually responsible for taking action.
6. Research has found that people will make irrational decisions to justify actions they have already taken. For example, when two people engage in a bidding war for an object, they can end up paying much more than the object is worth to justify the initial expenses associated with bidding.
7. Psychologists have claimed that some people show a tendency to make “overly dispositional inferences” in the way they view victims of assault crimes. That is, they are overly inclined to view the victim’s plight as one he or she brought on by carelessness, foolishness, misbehavior, or naiveté.
8. Psychologists have claimed that some people show a “halo” effect in the way they form impressions of attractive people. For instance, when it comes to assessing how nice, interesting, or able someone is, people tend to judge an attractive person more positively than he or she deserves.
9. Extensive psychological research has shown that people possess an unconscious, automatic tendency to be less generous to people of a different race than to people of their race. This tendency has been shown to affect the behavior of everyone from doctors to taxi drivers.
10. Psychologists have identified a tendency called the “ostrich effect,” an aversion to learning about potential losses. For example, people may try to avoid bad news by ignoring it. The name comes from the common (but false) legend that ostriches bury their heads in the sand to avoid danger.
11. Many psychological studies have found that people have the tendency to underestimate the impact or the strength of another person’s feelings. For example, people who have not been victims of discrimination do not really understand a victim’s social suffering and the emotional effects of discrimination.
12. Psychologists have claimed that some people show a “self-interest” effect in the way they view political candidates. That is, people’s assessments of qualifications, and their judgments about the extent to which particular candidates would pursue policies good for the American people as a whole, are influenced by their feelings about whether the candidates’ policies would serve their own particular interests.
13. Psychologists have claimed that some people show a “self-serving” tendency in the way they view their academic or job performance. That is, they tend to take credit for success but deny responsibility for failure. They see their successes as the result of personal qualities, like drive or ability, but their failures as the result of external factors, like unreasonable work requirements or inadequate instructions.
14. Psychologists have argued that gender biases lead people to associate men with technology and women with housework.
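This measure pairs two ratings per item: how susceptible the average person is, and how susceptible the respondent is. One common scoring convention takes the mean other-minus-self difference across items, so positive values indicate seeing others as more biased than oneself. The sketch below illustrates that convention only; the function name and the example 1–7 ratings are illustrative assumptions, not the published materials or scoring code.

```python
# Illustrative scoring sketch for a paired self/other bias-susceptibility
# measure (NOT published scoring code). Each of the 14 items is assumed to
# be rated twice on a 1-7 susceptibility scale: once for "the average
# person" and once for "myself". Under the other-minus-self convention,
# positive scores reflect the classic blind spot.

def bias_blind_spot_score(other_ratings, self_ratings):
    """Return the mean other-minus-self difference across items."""
    if len(other_ratings) != len(self_ratings):
        raise ValueError("ratings must be paired item-for-item")
    diffs = [o - s for o, s in zip(other_ratings, self_ratings)]
    return sum(diffs) / len(diffs)

# Example: a respondent who rates the average person one point more
# susceptible than themselves on every item.
other = [5, 6, 4, 5, 6, 5, 4, 5, 6, 5, 4, 5, 6, 5]
self_ = [4, 5, 3, 4, 5, 4, 3, 4, 5, 4, 3, 4, 5, 4]
print(bias_blind_spot_score(other, self_))  # → 1.0
```

A score of 0 would indicate no self-other asymmetry. Note that the sign convention (other minus self versus self minus other) varies across studies, which matters when interpreting means such as those reported in Table 2.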

Appendix B: Vignettes

1. Imagine yourself in the following situation:

You are about to leave for work in the morning to work on a project with a very tight deadline at the end of the day. On your way out, your neighbor asks for help setting up a ladder so that they can do some housework. Although you are running short on time, you stop to help them. Then you get in your car to drive to work. While you are on the expressway, you see a sign that says the speed limit is 55 miles per hour. Along this particular expressway, there are cameras designed to capture pictures of the license plates of cars that are speeding. The flow of traffic is moving at approximately 65 miles per hour. Despite this, you know that if you drive at 77 miles per hour, you will make it to work just in time for a team meeting that must take place before you can complete your tasks for today's deadline.

Using the sliding scale below, please indicate how likely you are to increase your speed to make it to work.

0% likely ------100% likely

Using the sliding scale below, please indicate how likely the average person is to increase their speed to make it to work.

0% likely ------100% likely

2. Imagine yourself in the following situation:

You are driving home after dropping off a friend in a nearby neighborhood late at night. While you are driving, you come to a four-way intersection with a four-way stop sign. You know that this intersection has been the cause of several accidents recently and that cops often patrol the area. After slowing down to check, from your line of sight there are no other cars around.

Using the sliding scale below, please indicate how likely you are to come to a complete stop in this situation.

0% likely ------100% likely

Using the sliding scale below, please indicate how likely the average person is to come to a complete stop in this situation.

0% likely ------100% likely

Appendix C: Marlowe-Crowne Social Desirability Scale

Personal Reaction Inventory

Listed below are a number of statements concerning personal attitudes and traits. Read each item and decide whether the statement is true or false as it pertains to you personally.

1. Before voting I thoroughly investigate the qualifications of all the candidates.
2. I never hesitate to go out of my way to help someone in trouble.
3. It is sometimes hard for me to go on with my work if I am not encouraged.
4. I have never intensely disliked anyone.
5. On occasion I have had doubts about my ability to succeed in life.
6. I sometimes feel resentful when I don’t get my way.
7. I am always careful about my manner of dress.
8. My table manners at home are as good as when I eat out in a restaurant.
9. If I could get into a movie without paying and be sure I was not seen I would probably do it.
10. On a few occasions, I have given up doing something because I thought too little of my ability.
11. I like to gossip at times.
12. There have been times when I felt like rebelling against people in authority even though I knew they were right.
13. No matter who I’m talking to, I’m always a good listener.
14. I can remember “playing sick” to get out of something.
15. There have been occasions when I took advantage of someone.
16. I’m always willing to admit it when I make a mistake.
17. I always try to practice what I preach.
18. I don’t find it particularly difficult to get along with loud mouthed, obnoxious people.
19. I sometimes try to get even rather than forgive and forget.
20. When I don’t know something I don’t at all mind admitting it.
21. I am always courteous, even to people who are disagreeable.
22. At times I have really insisted on having things my own way.
23. There have been occasions when I felt like smashing things.
24. I would never think of letting someone else be punished for my wrongdoings.
25. I never resent being asked to return a favor.
26. I have never been irked when people expressed ideas very different from my own.
27. I never make a long trip without checking the safety of my car.
28. There have been times when I was quite jealous of the good fortune of others.
29. I have almost never felt the urge to tell someone off.
30. I am sometimes irritated by people who ask favors of me.
31. I have never felt that I was punished without cause.
32. I sometimes think when people have a misfortune they only got what they deserved.
33. I have never deliberately said something that hurt someone’s feelings.

Appendix D: Cognitive Reflection Test-Long (Primi et al., 2016)

1. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
   a. Correct answer = 5 cents
   b. Heuristic answer = 10 cents
2. If it takes 5 minutes for five machines to make five widgets, how long would it take for 100 machines to make 100 widgets?
   a. Correct answer = 5 minutes
   b. Heuristic answer = 100 minutes
3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
   a. Correct answer = 47 days
   b. Heuristic answer = 24 days
4. If three elves can wrap three toys in 1 hour, how many elves are needed to wrap six toys in 2 hours?
   a. Correct answer = 3 elves
   b. Heuristic answer = 6 elves
5. Jerry received both the 15th highest and the 15th lowest mark in the class. How many students are there in the class?
   a. Correct answer = 29 students
   b. Heuristic answer = 30 students
6. In an athletics team, tall members are three times more likely to win a medal than short members. This year the team has won 60 medals so far. How many of these have been won by short athletes?
   a. Correct answer = 15 medals
   b. Heuristic answer = 20 medals
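Each correct answer above follows from one or two lines of arithmetic. A quick sanity check of the answer key (a sketch for illustration, not part of the instrument):

```python
# Verifying the CRT-Long answer key with direct arithmetic.

# 1. Bat and ball: ball + (ball + 1.00) = 1.10  =>  ball = 0.05
ball = (1.10 - 1.00) / 2
assert abs(ball - 0.05) < 1e-9

# 2. Each machine makes 1 widget in 5 minutes, so 100 machines
#    make 100 widgets in the same 5 minutes.
assert (100 / 100) * 5 == 5

# 3. If the lake is fully covered on day 48 and the patch doubles
#    daily, it was half covered one day earlier.
assert 48 - 1 == 47

# 4. Three elves wrap 3 toys/hour, i.e. 1 toy per elf per hour;
#    6 toys in 2 hours therefore needs 6 / 2 = 3 elves.
assert 6 / 2 == 3

# 5. 15th highest and 15th lowest: 14 students above, 14 below, plus Jerry.
assert 14 + 14 + 1 == 29

# 6. Tall members win 3 medals for every 1 short one: short share = 60 / 4.
assert 60 / 4 == 15
```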

Appendix E: Need for Cognition Scale (Cacioppo, Petty, & Kao, 1984)

1. I really enjoy a task that involves coming up with new solutions to problems.
2. I would prefer a task that is intellectual, difficult, and important to one that is somewhat important but does not require much thought.
3. I tend to set goals that can be accomplished only by expending considerable mental effort.
4. I am usually tempted to put more thought into a task than the job minimally requires.
5. Learning new ways to think doesn’t excite me very much.*
6. I am hesitant about making important decisions after thinking about them.*
7. I usually end up deliberating about issues even when they do not affect me personally.
8. I prefer just to let things happen rather than try to understand why they turned out that way.*
9. I have difficulty thinking in new and unfamiliar situations.*
10. The idea of relying on thought to make my way to the top does not appeal to me.*
11. The notion of thinking abstractly is not appealing to me.*
12. I am an intellectual.
13. I only think as hard as I have to.*
14. I don’t reason well under pressure.*
15. I like tasks that require little thought once I’ve learned them.*
16. I prefer to think about small, daily projects to long-term ones.*
17. I would rather do something that requires little thought than something that is sure to challenge my thinking abilities.*
18. I find little satisfaction in deliberating hard and for long hours.*
19. I more often think with other people about the reasons for and possible solutions to international problems than about gossip or tidbits of what famous people are doing.
20. These days, I see little chance for performing well, even in “intellectual” jobs, unless one knows the right people.*
21. More often than not, more thinking just leads to more errors.*
22. I don’t like to have the responsibility of handling a situation that requires a lot of thinking.*
23. I appreciate opportunities to discover the strengths and weaknesses of my own reasoning.
24. I feel relief rather than satisfaction after completing a task that required a lot of mental effort.*
25. Thinking is not my idea of fun.*
26. I try to anticipate and avoid situations where there is a likely chance I will have to think in depth about something.*
27. I prefer watching educational to entertainment programs.
28. I think best when those around me are very intelligent.
29. I prefer my life to be filled with puzzles that I must solve.
30. I would prefer complex to simple problems.
31. Simply knowing the answer rather than understanding the reasons for the answer to a problem is fine with me.*
32. It’s enough for me that something gets the job done, I don’t care how or why it works.*
33. Ignorance is bliss.*
34. I enjoy thinking about an issue even when the results of my thought will have no effect on the outcome of the issue.

* = Item is reverse-scored.
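Scoring typically sums responses after flipping the reverse-keyed (starred) items. Assuming a response scale centered at zero, such as a −4 to +4 agreement format (an assumption consistent with the NFC range reported in Table 2, not stated in the instrument above), reversal is simple negation. A sketch with hypothetical function and variable names:

```python
# Illustrative reverse-scoring sketch for the Need for Cognition items
# (NOT published scoring code). Assumes each of the 34 items is answered
# on a -4..+4 agreement scale; starred items are negated before summing.

REVERSE_ITEMS = {5, 6, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 20, 21, 22,
                 24, 25, 26, 31, 32, 33}  # the 21 starred items above

def total_score(responses):
    """responses: dict mapping item number (1-34) to a rating in -4..+4."""
    return sum(-rating if item in REVERSE_ITEMS else rating
               for item, rating in responses.items())

# Example: agreeing mildly (+1) with every item. The 21 reverse-keyed
# items count against the 13 forward-keyed items: 13 - 21 = -8.
responses = {item: 1 for item in range(1, 35)}
print(total_score(responses))  # → -8
```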

Appendix F: Demographics Questionnaire

1. Have you ever driven a car? (to be asked at the beginning to assess for exclusion criteria) a. Yes b. No
2. Have you ever received a moving traffic violation (e.g., speeding ticket, not coming to a complete stop at a stop sign)? a. Yes b. No
3. Do you have a driver’s license? a. Yes b. No
4. How old are you in years? _____
5. What is your gender? a. Female b. Male c. Non-binary/third gender d. Prefer to self-describe ______ e. Prefer not to say
6. Do you identify as transgender? a. Yes b. No c. Prefer not to say
7. What is your sexual orientation? a. Straight/heterosexual b. Gay or lesbian c. Bisexual d. Prefer to self-describe _____ e. Prefer not to say
8. I identify my race and ethnicity as (check all that apply): a. Asian b. Black/African c. White or Caucasian d. Hispanic/Latinx e. Native American f. Pacific Islander g. Middle Eastern h. Multiracial i. Prefer not to answer j. Prefer to self-describe _____
9. Do you identify as someone with a disability or impairment? a. Yes b. No
10. Which option best describes your level of education? a. No schooling completed b. Nursery school to 8th grade c. Some high school, no diploma d. High school graduate e. High school equivalent (e.g., GED) f. Some college credit, no degree g. Trade/technical/vocational training h. Associate degree i. Bachelor’s degree j. Some graduate school k. Master’s degree l. Professional degree m. Doctorate degree
11. What is your marital status? a. Single, never married b. Married or domestic partnership c. Widowed d. Divorced e. Separated
12. Are you currently . . . a. Employed for wages b. Self-employed c. Out of work and looking for work d. Out of work but not currently looking for work e. A homemaker f. A student g. Military h. Retired i. Unable to work
13. Have you ever been arrested? a. Yes (i. How many times?) b. No
14. Have you ever been convicted of a crime? a. Yes b. No
15. What is your political affiliation? a. Democrat b. Republican c. Independent/Other d. Not registered
16. What is your immigration/work status? a. US Citizen b. Lawful Permanent Resident (green card holder) c. Other (non-LPR) lawful immigration status d. Undocumented/no lawful status e. Unknown
17. What is your household’s estimated yearly income? a. Less than $10,000 b. $10,000 – $14,999 c. $15,000 – $24,999 d. $25,000 – $34,999 e. $35,000 – $49,999 f. $50,000 – $74,999 g. $75,000 – $99,999 h. $100,000 – $149,999 i. $150,000 – $199,999 j. $200,000 or more

Table 1

Participant Characteristics

Characteristic                         n     %      M     SD
Age                                    296          32.0  10.5
Gender
  Male                                 175   58.9
  Female                               122   41.1
  Transgender                          14    4.7
  Prefer not to say                    2     .7
Race
  White or Caucasian                   181   60.9
  Asian                                77    25.9
  Black/African American               18    6.1
  Hispanic/LatinX                      17    5.7
  Native American                      9     3.0
  Pacific Islander                     0     0
  Middle Eastern                       1     .3
  Multiracial                          2     .7
  Prefer not to answer                 2     .7
  Prefer to self-describe              1     .3
Disability/Impairment
  Yes                                  32    10.8
  No                                   261   87.9
  Missing                              4     1.3
Education
  Some high school                     2     .7
  High school                          19    6.4
  High school equivalent               6     2.0
  Some college credit (no degree)      39    13.1
  Trade/technical/vocational training  7     2.4
  Associate degree                     22    7.4
  Bachelor’s degree                    136   45.8
  Some graduate school                 13    4.4
  Master’s degree                      42    14.1
  Professional degree                  7     2.4
  Doctorate degree                     4     1.3
Marital Status
  Single, never married                123   41.4
  Married/domestic partnership         154   51.9
  Widowed                              2     .7
  Divorced                             14    4.7
  Separated                            3     1.0
  Missing                              1     .3

Table 1 (continued)

Employment Status
  Employed                        212   71.4
  Self-employed                   52    17.5
  Out of work and looking         10    3.4
  Out of work and not looking     3     1.0
  Student                         9     3.0
  Military                        1     .3
  Retired                         7     2.4
  Unable to work                  2     .7
  Missing                         1     .3
Immigration/Work Status
  US Citizen                      235   92.6
  Lawful Permanent Resident       18    6.1
  Other lawful status             13    4.4
  Undocumented                    2     .7
  Unknown                         28    9.4
  Missing                         1     .3
Political Affiliation
  Democrat                        117   39.4
  Republican                      86    29.0
  Independent/Other               86    29.0
  Not registered                  7     2.4
  Missing                         1     .3
Estimated Yearly Income
  Less than $10,000               44    14.8
  $10,000 – $14,999               29    9.8
  $15,000 – $24,999               29    9.8
  $25,000 – $34,999               39    13.1
  $35,000 – $49,999               41    13.8
  $50,000 – $74,999               48    16.2
  $75,000 – $99,999               39    13.1
  $100,000 – $149,999             22    7.4
  $150,000 – $199,999             5     1.7
  $200,000 or more                1     .3

Table 2

Measure Scores and Descriptive Statistics

Measure          n     Min     Max    M        SD      1       2       3       4
1. Vignette BBS  297   -74     171    -36.59   27.78   -
2. M-C SDS       264   0       33     16.30    .38     .08     -
3. BBSM          279   -4.43   1.36   -.99     .98     .33**   .03     -
4. NFC           133   -87     95     .53      2.87    -.10    .24**   -.30**  -
5. CRT           292   -6      6      .45      3.37    -.06    -.13*   .02     .04
* = Correlation is significant at the .05 level (1-tailed)
** = Correlation is significant at the .01 level (1-tailed)

Table 3

Driving and Crime Statistics

Question                     n     %
Driver’s license
  Yes                        289   97.3
  No                         7     2.4
  Missing                    1     .3
Moving violation
  Yes                        158   53.2
  No                         139   46.8
Ever been arrested
  Yes                        30    10.1
  No                         266   89.6
  Missing                    1     .3
Ever convicted of a crime
  Yes                        21    7.1
  No                         275   92.6
  Missing                    1     .3