

Cognitive Dissonance, Ego-Involvement, and Motivated Reasoning

Christopher J. Carpenter

Department of Communication

Western Illinois University

[email protected]

In press at the Annals of the International Communication Association. doi: 10.1080/23808985.2018.1564881

Author Acknowledgement: I would like to thank the Editor and the anonymous reviewers for providing thoughtful feedback that I believe substantially strengthened this article.

Abstract

One of the enduring topics for research is motivated reasoning: responding to persuasive messages in ways other than seeking to form an accurate attitude. This essay advances the position that the existing research can be synthesized using the self-concept approach to cognitive dissonance, with ego-involvement added as the key explanatory variable indicating when an issue is likely tied to the individual’s self-concept and thus likely to produce cognitive dissonance. The research on motivated reasoning is reviewed through this theoretical lens, with recommendations on how to use this theory to advance research on motivated reasoning and on reducing its maladaptive forms.

Keywords: cognitive dissonance, ego-involvement, motivated reasoning, persuasion, science communication, political communication, health communication

Cognitive Dissonance, Ego-Involvement, and Motivated Reasoning

“Once a man’s understanding has settled on something (either because it is an accepted belief or because it pleases him), it draws everything else also to support and agree with it. And if it encounters a larger number of more powerful countervailing examples, it either fails to notice them, or disregards them, or makes fine distinctions to dismiss and reject them, and all this with much dangerous prejudice, to preserve the authority of its first conceptions” (p. 43)

-Francis Bacon (1620/2000)

“I took care not to dispute anything he said, for there’s no arguing with an Enthusiast. Better not to take it into one’s head to tell a lover the faults of his mistress, or a litigant the weakness of his cause- or to talk sense to a fanatic.” (p. 5)

-Voltaire “Philosophical Letters” (1732/1961)

The above quotations reveal that thinkers have long known that humans are biased towards their cherished beliefs. For some of our opinions, we are willing to ignore compelling evidence, credible sources, and even basic logic in order to go on maintaining our original opinion. If the arguments presented to us differ more substantially from our established opinions, we cling even more strongly to our own opinions. For example, if we strongly identify with a position on restrictions on firearm ownership, we may reject any argument in favor of substantially greater or weaker restrictions than those we already support.

For some other opinions, we are readily willing to shift our attitudes towards a different position if compelling evidence is presented. The greater the gap between the position presented and our existing opinion, the greater our shift towards the position presented. If a doctor told us that credible research suggests that eating kiwi greatly increases our cancer risk, most of us would give up eating kiwi (no such research exists, to my knowledge).

What these situations illustrate is the differential ways that we tend to react to attitude discrepant messages. A greater gap between what one already believes and what is advocated in a well-evidenced, logical, credible persuasive message can either greatly move us to change our own position or strengthen our resolve to maintain it. A number of variables have been studied to determine the outcomes associated with greater or lesser discrepancy between the message and the audience’s existing attitude (Fink & Cai, 2013). Greater resistance to an attitude discrepant message can be considered a type of motivated reasoning. Kahan defined motivated reasoning as “the tendency of people to conform assessments of information to some goal or end extrinsic to accuracy” (2013, p. 408). That definition will be used for this review. Discovering the moderating variable(s) that predict motivated reasoning is a key problem for persuasion researchers as well as any applied researchers seeking to influence public behavior in areas such as health communication, science communication, risk communication, and advertising.

The position I articulate in the following review is that cognitive dissonance theory can be used to explain and predict motivated reasoning outcomes. In particular, this review will expand the modification of cognitive dissonance proposed by Aronson (1968; 1992) and Rokeach (1968; 1973) labeled the self-concept approach to cognitive dissonance. I will argue this approach to cognitive dissonance can be profitably expanded with the postulate that ego-involvement, as originally described by Sherif and Cantril (1947), can be used to predict the ego-defensive motivations associated with the self-concept approach to cognitive dissonance and thereby motivated reasoning more generally.

This review will begin by describing cognitive dissonance theory. Then I will explain how ego-involvement can serve as an indicator of the key motives in the theory. This section will be followed by an exposition of the various ways ego-involvement and similar variables have been studied.

Then I will explore how cognitive dissonance can be used to understand various types of motivated reasoning. Finally, I will review research on attempts to reduce people’s likelihood of using motivated reasoning before offering closing remarks. Aronson (1992) argued that the social sciences have become saturated with many smaller theories, whereas the self-concept approach to cognitive dissonance could serve as a stronger tool for theoretical integration. This review attempts to continue that integration.

Cognitive Dissonance

Cognitive dissonance, according to Festinger (1957), is produced when a person holds two cognitive elements that are inconsistent with each other. One of the key parts of cognitive dissonance theory is that such inconsistency produces a feeling of negative arousal. People experiencing dissonance are motivated to reduce the inconsistency because they wish to avoid or reduce that negative arousal. They then attempt to reduce the dissonance by altering one of the dissonant cognitive elements. For example, recalling that one wasted time by going for a walk is inconsistent with a belief that one should use one’s time wisely. Dissonance might be reduced by considering that breaks make one more productive after the walk. Festinger also argued people can foresee when dissonance might occur and become motivated to avoid dissonance. The inconsistency between two elements is heightened when one or more of the dissonant elements are more important to the individual. In particular, he claimed that, “The maximum dissonance which could exist would, in such circumstances, be determined by the resistance to admitting that he had been wrong or foolish” (p. 29).

One of the misunderstandings concerning cognitive dissonance theory is that it is only concerned with the effects of behavior on attitudes. This misperception is understandable given that the seminal demonstration of the theory focused on that aspect (Festinger & Carlsmith, 1959) and that much of the cognitive dissonance-inspired communication research has focused on the effects of counterattitudinal advocacy (see S. Y. Kim, Allen, Preiss, & Peterson, 2014, for a meta-analysis of this research). Yet, in the original formulation of the theory, Festinger (1957) argued that exposure to messages inconsistent with one’s own beliefs could produce cognitive dissonance. The belief that an otherwise reasonable individual believes something different than oneself is theoretically predicted to produce cognitive dissonance. He explained further that such dissonance “may be reduced if he can believe that the latter is a stupid, ignorant, unfriendly, and bigoted individual” (p. 183). Expanding on this aspect of cognitive dissonance theory, Aronson, Turner, and Carlsmith (1963) argued that hearing a message that is highly discrepant from one’s own attitudes from a highly credible source would be likely to produce a substantial amount of cognitive dissonance. They noted that the audience can reduce the dissonance created by such a message through many means including changing their own attitudes, trying to change the source’s opinion, finding support for their own opinion from like-minded people, or rejecting the source.

The idea of general cognitive dissonance was narrowed to cognitive dissonance centered on the self-concept by one of Festinger’s students, Elliot Aronson (1968). He argued that dissonance is substantially more likely to be aroused when at least one of the dissonant elements is closely tied to one’s self-concept. Thus, people who feel that their affection for kiwifruit is a key part of their self-identity would experience cognitive dissonance when encountering a message about the long-term dangers of consuming this fruit. On the other hand, someone who has generally enjoyed eating kiwi but does not strongly identify with that habit would not feel cognitive dissonance when encountering a message about its dangers. This approach is consistent with the synthesis of cognitive dissonance and self-perception theory proposed by Fazio, Zanna, and Cooper (1977) that suggested that cognitive dissonance is only induced when messages are strongly attitude discrepant, whereas self-perception processes dominate when the messages are only slightly attitude discrepant. Only messages that substantially challenge one’s self-concept or values tied to one’s self-concept are predicted to be likely to spur the kind of motivated reasoning described here. Such messages are usually very attitude discrepant.

A further development by Aronson (1968) is the claim that self-esteem is likely a strong moderator of these effects as well. He claimed that cognitive dissonance is only likely to occur among those who have at least a moderately healthy level of self-esteem. For, unless one feels good about one’s self-concept, information attacking it would not be dissonant. The low self-esteem person who strongly identifies with his kiwi consumption would not experience cognitive dissonance from the anti-kiwi message because it would not be dissonant for his poor self-concept to be tied to foolish behavior. A pair of studies by Peterson, Haynes, and Olson (2008) were consistent with these predictions concerning the moderating impact of self-esteem on reactions to messages advocating healthy behavior changes.

Other cognitive dissonance theorists of the period also began shifting their attention towards cognitive dissonance theorizing that focused on the self-concept. Zimbardo (1968) wryly observed that someone’s behavior causes dissonance when the justification for the behavior is “not strong enough for him to convince his grandmother why he did such a foolish thing” (p. 445). Aronson, Chase, Helmreich, and Ruhnke (1976) further clarified that cognitive dissonance was most likely to be aroused by any information that indicated one was unintelligent or should feel guilty. Festinger (1957) argued that greater dissonance occurs when a larger number of cognitive elements are inconsistent. He also argued that part of what makes cognitive dissonance associated with the self-concept so strong is the way the self-concept connects to so many other cognitive elements. M. B. Smith (1968) argued that self-esteem maximizing was the key motivator in understanding the magnitude of cognitive dissonance, “Just as Festinger’s version of consistency theory weights the discrepancy between [elements] by their importance, so the self-esteem-maximizing trend would seem also to involve a weighting by importance or centrality to one’s self concept” (p. 368).

One way to further develop the concept of weighting concepts in the dissonance paradigm stems from developments in coherence theory. Thagard and Verbeurgt (1998) explained that to understand human cognition one must consider all of the various cognitive elements, including beliefs, attitudes, and memories of past behavior, as a collection of elements such that each element either fits together with another (coheres) or does not (incoherence). Coherence imposes a positive constraint on the cognitive system such that one must accept both elements. Incoherence between two elements imposes a negative constraint such that one must accept one and reject the other. Each constraint carries a particular weight such that some constraints are more important than others. Generally, if one constraint indicates acceptance of an element and another indicates rejection, humans try to satisfy the constraint with more weight and the element that is consistent with other constraints, though Thagard and Verbeurgt acknowledge that the exact mechanism for maximizing the satisfaction of the most constraints in the system remains unclear. Various researchers have applied coherence theory to cognitive dissonance (Simon & Holyoak, 2002; Shultz & Lepper, 1996).

One way to consider how people will react to persuasive messages with the self-concept approach to cognitive dissonance is to consider the extent to which the audience can accept the persuasive message without violating too many of their existing, heavily weighted constraints.

The self-concept approach predicts that cognitions that are central to the self-concept of the individual carry particularly strong weight. The individual who identifies kiwi consumption as a key part of their identity imposes constraints on any information about kiwi eating. To maintain a stable self-identity, their cognitive system has many constraints involving kiwi beliefs such that they feel that kiwi consumption is safe and perhaps even that eating kiwi is healthy and makes one more attractive. The individual’s values also impose constraints on beliefs consistent with those values.
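To make the constraint-satisfaction idea concrete, the following minimal sketch (in Python, using hypothetical elements and weights built around the kiwi example rather than Thagard and Verbeurgt's own algorithms) enumerates every accept/reject assignment of elements and scores each by the total weight of satisfied constraints. A positive constraint counts as satisfied when its two elements are accepted or rejected together; a negative constraint counts as satisfied when exactly one of the two is accepted.

from itertools import product

# Hypothetical cognitive elements for the kiwi example (illustrative only)
elements = [
    "I am a sensible person",                 # core self-concept
    "Eating kiwi is part of who I am",        # ego-involved identity
    "Kiwi is safe to eat",                    # belief
    "A new study says kiwi is dangerous",     # dissonant message
]

# Constraints: (element_a, element_b, weight, kind); "+" = coherence, "-" = incoherence
constraints = [
    ("I am a sensible person", "Eating kiwi is part of who I am", 3.0, "+"),
    ("Eating kiwi is part of who I am", "Kiwi is safe to eat", 2.0, "+"),
    ("Kiwi is safe to eat", "A new study says kiwi is dangerous", 1.5, "-"),
    ("I am a sensible person", "A new study says kiwi is dangerous", 1.0, "-"),
]

def satisfied_weight(accepted):
    """Total weight of constraints satisfied by a given set of accepted elements."""
    total = 0.0
    for a, b, weight, kind in constraints:
        together = (a in accepted) == (b in accepted)
        if (kind == "+" and together) or (kind == "-" and not together):
            total += weight
    return total

# Brute-force search over all accept/reject assignments (fine for a toy network)
assignments = (
    frozenset(e for e, keep in zip(elements, flags) if keep)
    for flags in product([True, False], repeat=len(elements))
)
best = max(assignments, key=satisfied_weight)
print("Accepted:", sorted(best))
print("Rejected:", sorted(set(elements) - best))
print("Satisfied weight:", satisfied_weight(best))

With these illustrative weights, the assignment that satisfies the most constraint weight keeps the self-concept and kiwi beliefs and rejects the dissonant study, which is the pattern of motivated rejection the self-concept approach predicts; with different weights, different elements become the cheapest to give up.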

At the same time that Aronson (1968) was attempting to center the self-concept in cognitive dissonance, Rokeach (1968, 1973) was working along similar lines. Some of his insights can help clarify the self-concept approach to cognitive dissonance further. Generally, he, like Aronson, centered maintaining a positive self-concept as the driver of cognitive dissonance.

Rokeach argued, “consistency with self-esteem is probably a more compelling consideration than consistency with logic or reality” (1968, pp. 19-20).

But Rokeach (1973) further clarified the self-concept. In his model, similar to other models of the self, people are generally motivated to “maintain and enhance one’s total conception of oneself” (p. 216). But he also included values, “an enduring belief that a specific mode of conduct or end-state of existence is personally and socially preferable to alternative modes of conduct or end-states of existence” (1968, p. 16). He argued that values are usually very strongly tied to the self-concept, whereas particular attitudes are not as closely tied to the self-concept. Thus, when someone realizes that one of their attitudes is inconsistent with one of their values, that person is more likely to change the attitude rather than the value because people are more strongly motivated to maintain the general values that form part of their self-concept than their more peripheral specific attitudes. Furthermore, in this theory, any inconsistencies involving values are also more likely to arouse dissonance because they are likely tied to the self-concept.

Rokeach (1973) argued that even values can be changed if someone can be convinced that one of their values is inconsistent with their core self-concept. Rokeach explained that his interpretation of cognitive dissonance was compatible with Aronson’s self-concept approach.

Where Aronson et al. (1976) pointed to situations where one felt stupid or guilty, Rokeach similarly suggested dissonance is aroused when one feels “incompetent or immoral” (p. 228).

Rokeach described a series of studies that successfully used the importance of values to change attitudes that would normally be strongly defended.

An important study by S. J. Sherman and Gorkin (1980) found support for the self-concept approach. Those researchers measured the extent to which the subjects identified with feminist values. The subjects were then given a logic problem that most students fail such that the solution required them to recognize that doctors can be male or female. Failure to solve the problem was expected to cause the feminist subjects to feel as though they had betrayed their feminist values. Control groups either had to solve a difficult irrelevant problem or no problem at all. Then all the subjects were given an ostensibly unrelated task in which they were asked to evaluate the behavior of a college being sued for sex discrimination in hiring. Those who had identified with feminist values and failed to solve the logic problem engaged in attitude bolstering such that they expressed high disapproval of the college’s possibly discriminatory hiring practices, whereas disapproval was lower for those who were in the control groups or did not identify with feminist values. This study was consistent with Rokeach’s (1973) prediction that one can use people’s values to predict when cognitive dissonance effects will occur.

Another development in cognitive dissonance theory that centered the self-concept is Steele’s (1988) self-affirmation theory. In his formulation “dissonance motivation is stirred by the implication of the inconsistency that one is not adaptively or morally adequate” (p. 278). His key insight was that an alternate route to reducing or preventing dissonance was to motivate people to affirm that they are competent and/or moral people in some other unrelated area. Thus one can withstand ego-threats by reaffirming one’s value in other areas. He argued such self-affirmations contribute to the individual’s overall goal of maintaining a positive self-image. D. K. Sherman and Cohen (2002) reviewed studies indicating that self-affirmation may be a particularly effective way to reduce ego-defensive responses to counterattitudinal information.

Thus, if our strongly self-identified kiwi eater first reflected on his many charitable contributions, he would be better prepared to accept the information that his kiwi consumption needs to be curtailed.

The relationship between cognitive dissonance theory and self-affirmation is controversial. Aronson (1992) claimed that self-affirmation theory is merely an application of his already developed self-concept approach to cognitive dissonance. Aronson thus conceptualized self-affirmation as merely one of many means people may use to reduce or prevent dissonance aroused by threats to the self-concept. Steele and Spencer (1992) disagreed and argued that reducing inconsistency and self-affirmation are different processes. Further research is required to sort out these subtleties. It is worth noting that the research record on self-affirmation contains little research on the nature of its mechanism to clearly distinguish it from cognitive dissonance.

For example, McQueen and Klein (2006) pointed out that this area of research rarely uses induction checks so it is unclear what self-affirmation inductions actually do to people. D. K. Sherman and Cohen (2006) note that no consistent mediators of the effect of self-affirmation on persuasion have been found. It is unclear what additional mechanism self-affirmation theory offers over cognitive dissonance. For the purposes of this review, self-affirmation is treated as consistent with Aronson’s (1992) view that it serves as a useful application of the self-concept approach to cognitive dissonance theory.

Though self-affirmation may have an unclear mechanism, cognitive dissonance theory also retains some ambiguities. One particular issue is the inability of the current theory to predict which method of dissonance reduction an individual will be likely to use (Aronson, 1968; Kunda, 1990). Steiner and Johnson (1964) argued that each method people use to reduce dissonance is additive such that each one reduces dissonance a bit more. Festinger (1957) argued that one goes with the path of least resistance and that therefore one changes whichever cognitive element or elements are easiest to change. Walster, Berscheid, and Barclay (1967) found evidence consistent with their claim that people choose the dissonance reduction method that is both likely to reduce their current dissonance and likely to continue to minimize dissonance in the future.

Rokeach (1973) argued that the further from the self-concept a cognitive element is, the more likely it is to change to reduce dissonance. Thus, our strongly identified kiwi eater might find it easier to believe that the scientific evidence is weak rather than admit he has been acting foolishly because the message threatens his self-concept more than his preference for believing scientific research. On the other hand, if his belief in the value of science is more central to his identity than his kiwi affection, then his kiwi consumption would be expected to change.

Rokeach’s concept of distance from the self-concept may offer a useful avenue for predicting cognitive dissonance reduction methods. A coherence theory perspective (Thagard & Verbeurgt, 1998) might suggest attempting to model all of these constraints on the cognitive system to predict how someone might satisfy as many of them as possible. Yet further research and theoretical development is required to make clearer predictions about how people might reduce dissonance when their self-concept is threatened.

Overall, Aronson (1968, 1992) and Rokeach (1968, 1973) developed the idea that ego-threatening information will be resisted by the audience. They also developed cognitive dissonance by specifying that some attitudes, values, and identities are closer to the core self-concept than others and predicting that inconsistencies involving them would produce stronger cognitive dissonance effects. Yet, these formulations still need a clearly defined variable that can indicate which cognitive elements are closer to the self-concept than others. One potentially effective variable to serve that function is ego-involvement.

Ego-Involvement

Ego-involvement is a construct that has been altered, abused, and even twisted to fit many definitions and theories since its initial presentation. The reader may have any number of definitions in mind, but to define ego-involvement, I go back to one of the earliest conceptualizations. M. Sherif and Cantril explained that ego-involving attitudes were, “attitudes that have been learned, largely as social values; that the individual identifies himself with, and makes a part of himself; and that have affective properties of varying degrees of intensity” (1947, pp. 126-27). Though I expand ego-involvement to values and identities as well as attitudes, that expansion is consistent with their basic conceptualization. Later, M. Sherif and Hovland noted an important aspect of ego-involvement, “When he is asked to appraise relevant stimuli, he may become as personally involved as though his own worth were called into question” (1961, p. 84).

The key idea is that when someone ties an attitude, value, or identity to their concept of themselves, that person is ego-involved with that attitude, value, or identity.

The idea of connecting ego-involvement to the self-concept approach to cognitive dissonance is not entirely novel. M. Sherif and Cantril anticipated the kinds of cognitive defenses described by the self-concept approach to cognitive dissonance when they wrote, “If our ego is injured we resort to all kinds of rationalizations, to protective adjustments, to selective modes of reasoning and behavior in which we manipulate things, persons, memories or ideas, in a selective way” (1947, pp. 100-101). Later, Sereno (1969) argued that ego-involvement can help researchers make better predictions concerning cognitive dissonance because it clarifies that dissonance reduction will be most likely to occur when the self is implicated. He suggested that adding ego-involvement to consistency theory helps predict how people will choose to reduce dissonance, because they will choose the route that best satisfies their self-concept. When ego-involvement in the topic is high, messages that contradict one’s attitude, value, or identity will produce cognitive dissonance, and thus spur motivated reasoning to resist that message. When ego-involvement is low, such messages will not produce dissonance. Ego-involvement can be increased by the attitude being connected to particular values or by the attitude being connected directly to one’s sense of morality or competence. In both cases, ego-involvement indicates that ego-defensive processing to protect the self-concept is likely.

There are a variety of studies examining similar constructs that attempt to predict motivated reasoning of the type predicted by cognitive dissonance in response to persuasive messages. One common one is “attitude importance.” For example, Zuwerink and Devine (1996) argued that attitude importance effectively predicts resistance to persuasion. They indicated that, “attitude importance reflects the degree to which an individual cares about, is concerned about, and attaches personal significance to an attitude” (p. 932). Yet this definition includes heterogeneous aspects. One can care about whether it is raining and one can care about whether the state employs the death penalty. The former is likely to be substantially less ego-involving than the latter because attitudes about the death penalty often involve people’s political values rather than instrumental concerns (Tyler & Weber, 1982). Johnson and Eagly (1989) found that the kind of involvement associated with a personal stake in the outcome of something does not directly cause resistance to persuasion. On the contrary, such involvement is associated with the objective evaluation of information such that strong arguments are more likely to be accepted by an involved audience than by a disinterested audience, even if the message advocacy is strongly counterattitudinal (as per dual process theories such as the elaboration likelihood model of Petty & Cacioppo, 1986). On the other hand, Johnson and Eagly’s meta-analysis found the kind of involvement regarding something associated with one’s core values consistently causes resistance. Many researchers use vague concepts like “personal importance,” but ego-involvement clarifies exactly what kind of importance is meant, and connecting ego-involvement to cognitive dissonance offers a theoretical framework as to why that kind of importance results in resistant cognitions.

One related and prominent theory of the self is Tajfel and Turner’s (1986) social identity theory. The theory indicates that the social groups to which people belong are part of their identities. Furthermore, the theory indicates that people want to maintain positive self-esteem and that part of that striving is derived from trying to maintain a positive view of the groups they perceive themselves to belong to, because it supports their self-esteem. When such groups are attacked, it motivates resistance because they feel as though the self is attacked. Pool, Wood, and Leck (1998) found that when people strongly identified with a group, learning that the group held an attitude different from one of their own temporarily lowered their self-esteem. It appears that social identities can be just as key to the self-concept as values and attitudes.

Yet, there is no clear theoretical reason why the concept of ego-involvement cannot include social identities. Putting social identities under the same umbrella construct as attitudes and values permits a smaller number of variables to cover a wider array of phenomena. Social identity theory findings can certainly be helpful in exploring reactions to ego-threatening persuasive messages, but the theoretical umbrella of cognitive dissonance is likely to be able to make the same predictions, at least in the context of persuasion. For example, D. K. Sherman and H. S. Kim (2005) found self-affirmation inductions can reduce ingroup bias for sports teams. They argued their findings support a motivational view of social identity based on protecting the self-concept. The extent to which the self-concept approach to cognitive dissonance can also be used to understand intergroup conflict where social identity theory is also often employed is, however, beyond the scope of this paper.

Is Ego-Involvement Necessary for Motivated Reasoning?

Although some research suggests that ego-involvement and similar constructs are necessary to spur motivated reasoning, it is far from established in the literature that ego-defense is the only cause. One useful area of investigation is attempting to falsify what I will call the strong ego-involvement hypothesis. To wit, all motivated reasoning occurs in the service of protecting the self-concept from threat. In the absence of ego-involvement, only accuracy motives will exist, i.e. an absence of motivated reasoning. Contrary to this claim, Kunda (1990) argued that people can have a non-accuracy goal for processing a persuasive message without that goal serving self-esteem maintenance. In a direct response to Aronson’s (1992) claim that the self-concept approach to cognitive dissonance covers all motivated reasoning, Kunda (1992) specified a number of areas in which the self-concept approach may not be able to explain outcomes, including situations in which people bias their impressions of people they expect to meet and other social cognitive effects. Both Kunda and Aronson assert that the other’s perspective is a special case of their own and the research record cannot firmly adjudicate their claims yet.

Situations in which people deliberately try to reject information about the negative consequences of their behaviors may present a possible falsification of the strong ego-involvement hypothesis. For example, people enjoy drinking coffee and information that coffee is unhealthful threatens their enjoyment of coffee. Coffee drinkers would be motivated to resist anti-coffee information to secure a utilitarian enjoyment regardless of their self-concept.

Liberman and Chaiken (1992) tested coffee drinkers in this way. They found coffee drinkers were more critical of information indicating coffee consumption was dangerous than information indicating it was not. Non-drinkers did not show this bias.

Yet, some research suggests that even in such utilitarian contexts, the self-concept is heavily involved. Kassarjian and Cohen (1965) asked smokers and nonsmokers to complete the phrase “Adults who smoke are…” and found that smokers were substantially less likely to report negative phrases like “foolish” or “stupid.” Boney-McCoy, Gibbons, and Gerrard (1999) explained that “The belief that one is a good person is not compatible with the knowledge that one engages in unnecessary, dangerous behavior.” In their study, they found that self-esteem moderated how people who engage in risky sexual behavior responded to a fear appeal message about sexually transmitted infections. Those engaging in risky behavior who had high self-esteem were less likely to admit their behavior was risky than those engaging in the same behavior with low self-esteem. As Aronson (1992) noted, the self-concept approach to cognitive dissonance predicts that when self-esteem is low, the knowledge that one does foolish things does not challenge the already weak ego, whereas when self-esteem is high, he argued that one is motivated to protect it from information that one behaves foolishly. Other studies have shown that people engaging in risky behavior are more likely to accept ego-threatening messages encouraging safer behavior when they are self-affirmed (Harris & Napper, 2005; D. K. Sherman, Nelson, & Steele, 2000). Clearly, even utilitarian information about one’s health can spur cognitive dissonance due to the implicit attack on the self-concept implied by information showing that it is foolish to engage in the unhealthful behavior.

Future research might employ a riskier test. In many of the health behaviors studied in the motivated reasoning context, the people engaging in the behavior are likely to have heard before that their behavior was potentially unhealthful (e.g. high coffee consumption, smoking tobacco, risky sex). People who engage in these behaviors are choosing to do so despite the known, potential dangers. Those with moderately healthy self-esteem must deal with the cognitive dissonance caused by the conflict between their self-concept as a reasonable human being and their continued choice to engage in dangerous behavior. If someone were exposed to novel, credible information that a behavior they had been engaging in for years was dangerous without their knowing it, that information would be less ego-threatening. Despite wanting to believe that their preferred behavior was safe, the strong ego-involvement hypothesis suggests they would be likely to accept the message recommendations because their past behavior did not indicate foolishness. On the other hand, if motivated reasoning can occur without any implied ego-threat, then the usual pattern of resistant cognitions would occur and the audience would derogate the message and the speaker in order to continue with its allegedly dangerous behavior. One might even highlight or downplay the ego threat with phrasing such as, “Even though this information is new, people eating kiwi really should have known that eating all those seeds could not have been healthful” or “Up until now, it was generally believed that kiwi were safe to eat, so it’s understandable why they were so popular. But the new research indicates….” Such research would help determine if the strong ego-involvement hypothesis is more tenable than a weak ego-involvement hypothesis indicating that ego-involvement is merely one of several causes of motivated reasoning.

The clearest distinctions between the strong ego-involvement hypothesis concerning persuasion and Kunda’s (1990; 1992) perspective emerge for the effects of self-esteem and ego-involvement. A consistent pattern of motivated reasoning effects in a particular area that are not moderated by self-esteem or ego-involvement would weaken the strong ego-involvement hypothesis. In particular, studies showing motivated reasoning effects for people with low self-esteem or people with low ego-involvement would falsify the strong ego-involvement hypothesis. Attempting to falsify the strong ego-involvement hypothesis may offer a potentially fruitful line of empirical research and theoretical development.

Studying Ego-Involvement

If ego-involvement is to be used as a core variable in exploring cognitive dissonance effects, it needs to be well-measured. Initially, ego-involvement was measured using a variety of methods associated with the social judgment approach (C. W. Sherif, M. Sherif, & Nebergall, 1965). In particular, C. W. Sherif, Kelly, Rodgers, Sarup, and Tittler (1973) found that the various sizes of the individual’s latitudes of acceptance and rejection could indicate ego-involvement. Such latitudes indicated the number of positions on a topic that one would be willing to accept and those one would definitely reject, respectively. Several studies have, on the other hand, found that these measures are not able to accurately measure ego-involvement (Eagly & Telaak, 1972; Wilmont, 1991). Due to such difficulties, the social judgment approach to measuring ego-involvement, despite being strongly associated with the variable, is unlikely to prove fruitful.

In place of the social judgment approach measures, many researchers have employed ad hoc, often single-item, self-report measures of the extent to which people consider a topic important. Yet, such unvalidated measures can only hope to measure ego-involvement as defined here if the topic itself is primarily focused on values or social identities rather than utilitarian concerns. As Johnson and Eagly (1989) noted in their meta-analysis of involvement, there are many occasions in which people consider the topic important due to practical concerns about the outcomes rather than a deep personal connection to the topic. People are likely to consider it pretty important whether or not the lettuce they just purchased at a grocery store is contaminated, but they are unlikely to consider it a key part of their identities.

Other measures have some evidence associated with their construct validity, but are confined to particular contexts. For example, Kahan, Peters, Dawson, and Slovic (2017) found a measure of the extremity of someone’s political identification was associated with motivated reasoning concerning political topics. Similarly, in the area of consumer science, Escalas and Bettman (2003) developed a measure of “self-brand connection” that indicates the extent to which people’s brand choices are connected to their self-concept. Cheng, White, and Chaplin (2012) found this measure was associated with the feeling that negative information about a brand is felt as a personal failure. A self-affirmation induction reduced this effect. Yet, it would be more profitable to search for a measure that can be used in any context.

The Cho and Boster (2005) value-relevant involvement measure may be such a measure.

Following Johnson and Eagly (1989), they distinguished between involvement that was related to obtaining external rewards via positive outcomes (outcome-relevant involvement) and involvement that was tied to one’s personal values (value-relevant involvement). Johnson and Eagly had conceptualized value-relevant involvement as the same concept as ego-involvement, but they changed the name. I use the term ego-involvement to maintain consistency with earlier research and to include other key aspects of the self-concept besides values. The Cho and Boster measure includes seven self-report items with 7-point Likert response scales. Consistent with social judgment approach research, they found value-relevant involvement was positively associated with attitude extremity. Additional validity evidence was found by C. J. Carpenter (2018) such that value-relevant involvement was associated with attitude extremity, attitude certainty, and perceptions that cable news coverage of the focal topic was biased (study 1). That report also found that value-relevant involvement increased the extent to which message discrepancy was associated with perceptions that the message source was biased and unlikeable (study 2).

Although these studies are promising, further validation research is required before one can confidently use the Cho and Boster measure in any motivated reasoning context.

Research is required to determine if these various measures of ego-involvement and allied constructs are in fact all measuring the same construct. Additionally, further development in this area requires research consistently demonstrating which of these measures best captures ego-involvement and can be adapted to a variety of attitudes, values, and identities. In addition to predicting motivated reasoning outcomes, measurement work needs to assess the extent to which measures of ego-involvement can determine which cognitions are closest to the self-concept.

Rokeach predicted that cognitions further from the self-concept were more likely to change when in conflict with those that were closer. An ideal measure of ego-involvement will need to be able to make fairly fine-grained distinctions. Such careful measurement could also contribute to constructing broader networks of cognitive elements to predict which messages can conform to more of an audience’s cognitive constraints.

One alternative to measuring pre-existing levels of ego-involvement is to find methods of experimentally inducing high and low ego-involvement. C. W. Sherif (1979) noted that ego-involvement can vary by situation. Ostrom and Brock (1969) developed a procedure they called “value-bonding” such that participants were asked to draw connections between statements in a message and values that were either central to the self or not. They found less persuasion among those who tied the proposal issue to their important values. They noted that although they did not set out to induce ego-involvement, their induction could be considered an ego-involvement induction. Additional research is required to determine if this method can be used to study cognitive dissonance.

Effects of Ego-Involvement on Motivated Reasoning

Motivated reasoning is a phenomenon whose effects are easier to define than its process.

As defined above by Kahan (2013), motivated reasoning occurs when someone processes information with a goal other than accuracy. Kunda (1990) argued that all processing of information is either motivated by a desire for accuracy or a desire to hold a particular belief or attitude. One would expect the desire for accuracy to prevail in most cases, but as M. Sherif and Sargent explained, “when the ego is involved in any situation, in any capacity, our reactions are not impartial. We become highly selective, accentuating certain aspects, glossing over other aspects to the point of recasting the whole situation to protect or enhance our ego” (1947, p. 10).

The Lord, Ross and Lepper (1979) study is one of the seminal articles demonstrating motivated reasoning in response to a message. They found that the extent to which their subjects found a description of a study about the death penalty convincing was dependent upon their pre-existing attitude, measured a week prior to message exposure. Subjects did not become more opposed to the death penalty in response to the message based on the difference between their pretest and posttest attitudes. But they did self-report believing that they had become more extreme in their beliefs. Aronson (1992) argued that these results could have been predicted by cognitive dissonance theory. The death penalty was a topic on which he argued many subjects might hold attitudes that are strongly tied to their self-concept (highly ego-involved). Cognitive dissonance would predict they would find information uncongenial to their beliefs dissonance arousing. When Lord et al.’s subjects were faced with evidence that was inconsistent with a value that may be closely tied to their self-concept, accepting that evidence would create cognitive dissonance. There were a variety of methods of reducing that dissonance available to them, but Aronson’s interpretation of the Lord et al. study is that the subjects reported little change to their attitudes to avoid the cognitive dissonance from accepting the message.

The expansion of the self-concept approach would predict that had Lord et al. (1979) measured ego-involvement, it is likely they would have found that it moderated their findings such that only those with higher levels of ego-involvement in capital punishment would have been substantially more likely to reject the study they read that contradicted their beliefs. Those whose attitudes about capital punishment were not tied to their self-concept would have theoretically had no motivation to reject the capital punishment research. Many of their subjects were likely to be at least moderately ego-involved in the issue given how often it is and was discussed in the public sphere as an important political issue. Such contentions, however, require replication of their findings with the additional measures included.

Principles of Motivated Reasoning

Based on existing theory and research, I will next propose two principles concerning when ego-involvement is more or less likely to result in motivated reasoning. The first I will call the ambiguity principle. Several researchers have observed that for most people, motivated reasoning can only hold out against the truth so far (Kunda, 1990; Pyszczynski & Greenberg, 1987). When something is unambiguously the case, one cannot reason it away no matter how motivated one is because one must maintain the part of one’s self-concept that one is a reasonable person. Most people consider themselves rational actors who can be persuaded by strong evidence. The upshot is that in some cases people will change even highly ego-involving beliefs or attitudes when they are presented with extremely unambiguous evidence that they are in error. For example, an architect might be resistant to any arguments that he had designed an unstable building. But if it fell over, he would be forced to admit he had failed (though his lawyers would strongly advise that he not admit it in public). Kahan, Hoffman, Evans, Devins, Lucci, and Cheng (2016) found that in their domain of legal issues, judges and lawyers are not usually affected by their political ideology, whereas in other areas they are just as affected as the rest of the public. Their self-concepts as objective legal thinkers push them to see legal cases clearly but not necessarily other matters of public policy.

On the other hand, there are many topics on which people have ego-involved beliefs whose verisimilitudes are not as clear-cut. For many matters of public policy, it is at least somewhat unclear exactly what effects a particular policy might have and many messages about public policy do not contain well-developed arguments. For example, many political messages posted on social network sites are overly simplistic so that ego-involved partisans can either interpret them as well-argued if congenial or as facile attacks if uncongenial (McEwan, C. J. Carpenter, & Hopke, 2018). Advocates and opponents of laws limiting access to firearms both believe that their opponents’ position would increase crime and both can marshal some evidence consistent with their belief. Even in cases where there is clear evidence of scientific consensus (e.g., climate change, vaccines) some people will perceive there is sufficient ambiguity to believe that the scientific consensus is incorrect. The exact strength of the ambiguity principle, however, remains to be seen. It is possible that even when confronted with clear evidence people may find ways to reach their desired conclusion such that they do not have to violate too many cognitive constraints stemming from their belief in themselves as rational actors. One interesting qualitative study by Prasad et al. (2009) interviewed people who believed that the War in Iraq was justified because of Iraq’s ties to Al Qaeda. They were confronted with evidence that President George W. Bush had admitted there was no such tie. One of the responses was what the researchers described as “disputing rationality,” in which supporters insisted they have a right to an opinion and that it can be based on whatever they want. Pushing the limits of the ambiguity principle in cases of extremely high ego-involvement could be a fruitful and interesting area of research.

My second principle is what I will call the utilitarian principle such that the greater the impact on someone’s life an inaccurate belief or attitude can have, the less likely ego-involvement will be to result in motivated reasoning. On the low end of utilitarian impact, Kahan (2013) argued that holding an incorrect political belief has almost no immediate impact on the life of the average voter. People can vote for dangerous policies or inept politicians, but because their votes are, individually, unlikely to affect the outcome of the election, they can vote in a way that only satisfies their ego-involvement in their partisan identity. In a behavioral economics analysis, Hillman (2010) made a similar argument by noting that the material losses from voting poorly are nearly zero.

On the other end of the utilitarian scale, we often cannot afford to act more consistently with our ego-involvements than our actual material interests. Many of our attitudes and beliefs function to help us make practical decisions in our everyday lives (Fazio, 2000). Jussim (2017) argued that accuracy motives tend to dominate most of our attitudes and behavior. We could not successfully navigate everyday life if we acted based only on what we want to be true rather than what was true. People might watch a performance of an extreme sport on television and smugly think, “I could do that.” But very few will ask someone to hold their beer while they actually go out and try to ride their skateboard over a gorge. Perhaps people’s avoidance of negative outcomes exists to serve ego-needs to feel like a reasonable and successful person. Alternatively, perhaps utilitarian concerns overwhelm ego-defense needs via some other mechanism. What’s clear is that in most cases, strong utilitarian concerns do serve as a boundary condition to the self-concept approach to cognitive dissonance. People will usually only engage in motivated reasoning if they do not perceive a clear negative consequence to doing so. However, people sometimes do let their ego-involvement outweigh their safety. M. Sherif and Cantril (1947) noted that Joan of Arc chose to be burnt at the stake rather than recant her beliefs. We may admire such courage but most of us are willing to take a hit to our self-esteem and let go of an ego-involved belief if it makes a material difference in our lives.

Types of Motivated Reasoning

Enhanced critical cognition. One of the most commonly researched types of motivated reasoning is the tendency to be extremely critical of arguments that are highly attitude discrepant and uncritical of arguments that support one’s own position. Ditto, Scepansky, Munro, Apanovitch, and Lockhart (1998) called this the “quantity of processing view of motivated reasoning” (p. 54). When the message position is consistent with their own ego-involved beliefs, people do not carefully evaluate that information, whereas when a message is attitude discrepant, they carefully consider the strength of the evidence, compare it to other information, and search their memory for contradictory information (Klaczynski & Gordon, 1996) because the attitude discrepant message produces cognitive dissonance in those ego-involved in the topic. The audience of an attitude discrepant message will experience cognitive dissonance from that message and they will thus be motivated to counterargue against it (Festinger, 1957). The self-concept approach to cognitive dissonance predicts that tendency will be magnified by higher ego-involvement because people will feel a stronger motive to reject the message and thus maintain attitudes that their self-concept constrains to be consistent with particular values.

In one particularly compelling example, Kahan et al. (2017) measured their subjects’ numeracy skills (a type of mathematics ability related to intelligence). Then they presented the subjects with a simple data table that, if read without carefully considering base rates, would indicate one interpretation. But if the subjects took base rates into account, the data table would indicate the opposite interpretation. When the table described the results of a study on an innocuous topic, the subjects’ numeracy skills were positively associated with their likelihood of interpreting the table correctly. But when the topic was political, they were usually only more likely to get the answer right if the right answer was consistent with their political ideology. Numeracy skills were not associated with correctly interpreting the table when the true results contradicted their political beliefs. This study demonstrates that even aspects of intelligence may not be enough to protect one from distortions stemming from motivated reasoning. People only engaged their numerical reasoning skills if they either had no personal stake in the issue or they wanted to try to retain their original attitudes about the political issue in the face of data that initially seemed to threaten their position. When their personal beliefs were better served by failing to carefully examine the evidence, they tended to do so.
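To illustrate the base-rate logic the task requires, a short Python sketch contrasts the heuristic reading, which compares raw counts, with the correct reading, which compares proportions within each group. The numbers below are hypothetical stand-ins for the kind of 2 x 2 table Kahan et al. used, not their actual stimuli.

# Hypothetical 2 x 2 table: outcomes for people who did vs. did not adopt some treatment or policy
# (illustrative numbers only; not the actual stimuli from Kahan et al., 2017)
adopted     = {"improved": 223, "worsened": 75}
not_adopted = {"improved": 107, "worsened": 21}

# Heuristic reading that neglects base rates: compare raw counts of improvement
naive_says_it_helps = adopted["improved"] > not_adopted["improved"]  # True

# Correct reading: compare the improvement rate within each group
rate_adopted = adopted["improved"] / sum(adopted.values())          # about 0.75
rate_not     = not_adopted["improved"] / sum(not_adopted.values())  # about 0.84

print("Naive count comparison says adoption helps:", naive_says_it_helps)
print(f"Improvement rate with adoption: {rate_adopted:.2f}, without: {rate_not:.2f}")
print("Rate comparison says adoption helps:", rate_adopted > rate_not)  # False

Reading only the raw counts suggests the opposite conclusion from the row-wise rates, which is the kind of flip that let Kahan et al. detect whether subjects deployed their numeracy skills or defaulted to the congenial heuristic answer.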

Biased perceptions of message sources. In addition to critiquing the specific arguments and evidence in a message more carefully, one may also reduce dissonance by rejecting the source of the message. Sereno (1968) found that when his subjects were exposed to a counterattitudinal message, their ego-involvement was negatively associated with perceptions of speaker expertise. Nauroth, Gollwitzer, Kozuchowski, Bender, and Rothmund (2017) found that when a study result was ego-threatening, their subjects reported a poorer evaluation of the study, lower perceived credibility of the author, poorer likely reputation of the author, and that the author’s opinion was unlikely to be shared by other scientists. Interestingly, when I used the authors’ correlation matrix of these impressions to test them for a one-factor model with a confirmatory factor analysis, it fit the data well (Gerbing, 2014). That finding suggests that when people are reacting ego-defensively, they tend to employ these defenses as a group rather than relying on one particular method. People may simply employ all of the dissonance reduction strategies they have available, as long as they are internally consistent with each other. C. W. Sherif and Jackman (1966) summed up this tendency to perceive counterattitudinal sources as biased by writing of their ego-involved subjects, “participants regarded themselves as reasonable men advocating what any honest, right-thinking person must recognize as the truth. They viewed their adversaries as polar opposites of themselves…. Each side viewed the other as unreasonable even irrational” (p. 177).

A similar and well-researched finding in the mass media literature is the hostile media effect (Vallone, Ross, & Lepper, 1985). People with highly partisan positions on a topic tend to believe that the mass media presents the issues in a way that is biased against their own stand on the topic. Any mass media presentation of a politically charged topic will likely be a balanced presentation in countries where balance is the journalistic norm. But such news reports will include information and opinions that produce cognitive dissonance in the highly partisan individual because they will be faced with some claims that are inconsistent with their own beliefs, which may motivate them to believe that the mass media’s balanced portrayal must be biased; otherwise the media source would not suggest both sides have valid points. Choi, Yang, and Chang (2009) argued that the Cho and Boster (2005) value-relevant involvement measure is a better measure of the key motivating variable for the hostile media effect than other measures of partisanship, position extremity, or issue importance. They found value-relevant involvement was correlated with hostile media perceptions as well as the belief that the media is influencing the public away from its own opinion. M. Kim (2016) also found that value-relevant involvement predicts hostile media beliefs. Kim argued that merely being a member of a particular group is substantially less important than having value-relevant involvement in an issue. These studies are consistent with the view that ego-involvement is a primary motivator of motivated reasoning regarding the biases of message sources.

Selective exposure. In the original conceptualization of cognitive dissonance, Festinger (1957) predicted that one method of avoiding cognitive dissonance was to avoid information that might produce dissonance, known as selective exposure. Here cognitive dissonance predicts that people will actively avoid coming into contact with information that might challenge their beliefs and thereby they avoid cognitive dissonance. Generally, selective exposure has been found to be correlated with political extremity (Barnidge et al., 2017). Extremity is conceptualized above to be associated with ego-involvement, and it does tend to be correlated with ego-involvement (C. J. Carpenter, 2018). Frimer, Skitka, and Motyl (2017) found that across a series of studies, when given the choice to view attitude-consistent information or attitude-inconsistent information, subjects were more likely to choose attitude-consistent information, even though they knew they would be in a drawing for a smaller cash prize for choosing the attitude-consistent information than if they viewed the attitude-inconsistent information.

A unique large-scale field study conducted by Ryan and Brader (2017) tested for selective exposure effects on Facebook. They used people’s Facebook profiles to identify users who supported Governor Romney or President Obama in the 2012 election. People tend to self-present accurately on Facebook (Back et al., 2010), so such profile choices may reflect their self-concepts. The researchers then used Facebook to display ads to these partisans for an information website that promised embarrassing economic information about either the Romney or the Obama campaign, i.e. information inconsistent with one of the candidates’ economic policies. They found users were more likely to click on the ad when it promised information congenial to their preferred candidate (i.e. President Obama supporters were more likely to click when the information was critical of Governor Romney than of the President, and vice versa). The researchers argued this finding was consistent with cognitive dissonance predictions concerning selective exposure.

Heterogeneous types of motivated reasoning. Other researchers have produced lists of specific types of cognitions that allow motivated reasoning. For example, Riet and Ruiter (2013) argued that people trying to resist unfavorable health information avoid thinking about it and sometimes embrace fatalistic attitudes about life being full of risk. Also in the health context, McQueen et al. (2013) described strategies such as “self-exemption,” in which subjects choose to believe that they are somehow a special case so the information does not apply to them, and “normalizing the risk,” in which people reason that they cannot do everything right anyway. Good and Abraham (2007) added “religiosity,” in which people believe that the issue is in God’s hands and therefore they need not do anything about it.

It is unclear how profitable it is for researchers to explore these specific types of resistant cognitions. In their review, Riet and Ruiter (2013) argued that more research is needed on the extent to which these varying cognitions differ in how much ego-defense they afford. If these various cognitions have differential effects on cognitive dissonance reduction, it may be profitable to pursue them in order to design interventions that reduce the likelihood of people employing the cognitions that are most effective at helping them avoid unpleasant truths. But it may be more effective for researchers wishing to find strategies for reducing motivated reasoning in these contexts to focus on the broader principles that motivate the use of such strategies. If protecting the self-concept motivates all or most of these strategies, many of the tools described below may be more effective than attempting to prevent particular strategies. Trying to prevent specific strategies may prove analogous to a game of “whack-a-mole,” in which every attempt to prevent a particular strategy fails because a reasonably intelligent and motivated human can always find a different way to rationalize the rejection of an ego-threatening persuasive message.

Motivated reasoning in interpersonal relationships. Although cognitive dissonance is usually considered a theory of persuasion, cognitive dissonance and ego-defense are applicable to some of the research on interpersonal relationships. One would expect the same kinds of motivated reasoning in relationships, and the relationship literature suggests it does occur. Just as we idealize our own political positions, Murray and Holmes (1997) argued, we idealize our romantic partners. They explained, “These positive illusions, including idealized self-perceptions, exaggerated perceptions of control, and unrealistic optimism, appear to function as buffers, protecting self-esteem in the face of the threats posed by negative information” (p. 587).

Zhang and Merolla (2006) argued that people should be reluctant to tell their friends when they dislike their friends’ romantic partners. They surveyed people who disliked one of their friends’ romantic partners and found that disclosing that dislike to the friend often hurt the subjects’ relationships with that friend. If one sees one’s partner as part of the self (Aron, Aron, & Smollan, 1992), then it will arouse cognitive dissonance to hear from a trusted friend that the partner is less than ideal. Secord (1968) argued that people respond to criticisms of people close to them the same way they respond to criticisms of themselves. Beware trying to burst the illusions of those in love.

On the positive side, Donovan and Priester (2017) suggested that motivated reasoning from cognitive dissonance can help with the forgiveness process. They found that when people want to maintain a relationship with someone who has transgressed against them, they engage in a type of motivated reasoning to try to downplay the degree of the transgression. This process reduces the dissonance aroused by remaining close to someone who hurt them. Without this rationalization of the friend or partner’s behavior, it would be difficult for people to forgive transgressors due to the remaining cognitive dissonance stemming from the conflict between the transgressors’ behavior and their own standards for the behavior of others.

Summary. So far, I have argued that cognitive dissonance is produced when any aspect of the self-concept is threatened. I have also argued that ego-involvement is a good predictor of which attitudes, values, and identities are closer to or further from the self-concept. When ego-involvement in a topic is low, the production of cognitive dissonance when encountering a counterattitudinal message is unlikely. When ego-involvement is high, however, cognitive dissonance can easily be spurred by information critical of that attitude, value, or identity. The result of experiencing cognitive dissonance is a reduction in accuracy motives and an increase in motivated reasoning aimed at reducing the dissonance by finding a flaw in the information or its source, avoiding thinking about it, or other means. A number of approaches have been developed to try to prevent that process.

Reducing Motivated Reasoning

Researchers have long been interested in finding methods of discouraging motivated reasoning. The most prominent method of reducing motivated reasoning has been applications of self-affirmation theory (Steele, 1988). A more recent development has been the use of narrative or entertainment messages (e.g. Slater, Rouner, & Long, 2006). There are also various approaches to tailoring the message to the audience’s values (e.g. Braman, Kahan, Slovic, Gastil, & Cohen, 2007). The concepts of the ambiguity principle and the utilitarian principle also suggest remedies that some studies have examined. Other researchers have had some success testing instructions to the audience to process messages differently. Each of these methods offers promise and warrants further research, both singly and in combination.

Self-Affirmation Theory

As explained above, Steele’s (1988) self-affirmation theory indicates that people are less likely to use motivated reasoning if they are first given the opportunity to affirm some positive aspect of their self-concept, usually one of their unrelated values. Although this method has been shown repeatedly to work well in the lab (McQueen & Klein, 2006), more research is needed on employing the method in a widespread intervention. Falk et al. (2015) found that daily affirming text messages that also contained a proexercise message helped keep their sample more active than a sample whose text messages contained only the proexercise message. It may be possible to use regular affirmation to help sustain healthful behavior. It remains to be seen if mass media messages can be constructed that use self-affirmation to pave the way for a subsequent persuasive message that the audience might normally resist due to a perceived ego-threat.

Rokeach (1973) emphasized that some values are closer to the self-concept than others. Improvements in self-affirmation implementation may occur if the self-affirmation inductions deliberately target a value closer to the core of the self-concept than the one threatened by the message advocacy. Researchers might also test if self-affirmation inductions actually reduce measured ego-involvement in other positions. Overall, the self-affirmation method may prove to be a flexible method of reducing motivated reasoning in a variety of contexts.

Narrative Persuasion

Although research on using narratives to persuade has substantially increased in the past two decades, M. Sherif and Sargent suggested the possibility in 1947 when they wrote, “Ego-involvements may be quite general, or they may become personalized and specific. When one enjoys a movie, a radio drama or a novel, one projects himself into the situation and lives it vicariously through a kind of identification” (p. 12). Slater et al. (2006) found that television dramas can help increase support for controversial policies that people might normally resist. They noted that narratives may be persuasive by reducing counterarguing, but they argued that is unlikely to be the only mechanism. One possibility is that narratives serve as a sort of self-affirmation induction such that identifying with a character who lives up to her values causes audience members to also feel as though they have lived up to their own values. The upshot is that the audience is more open to persuasion on ego-threatening topics. Moyer-Gusé and Nabi (2010) found that identification was a particularly important variable in explaining the reduction of resistance to persuasion via narratives in an entertainment-education paradigm. Research on the various mechanisms of narrative persuasion is ongoing and will likely prove fruitful in strengthening this method of reducing motivated reasoning (Moyer-Gusé & Dale, 2017). It may be that narrative persuasion operates differently when accuracy motives are pursued rather than motivated reasoning. Measures of ego-involvement may help determine which process is likely.

Tailoring Messages

A number of researchers have been studying methods of reducing motivated reasoning by tailoring the message to the audience’s values. For example, Braman et al. (2007) argued that traditional proenvironmental messages appeal to the values of environmentalists but not conservatives. They found that conservatives supported combating climate change if the message indicated that such efforts would support the nuclear power industry. They argued that the message appealed to conservative values of growing business and mastering the forces of nature. Feinberg and Willer (2013) had similar success with a proenvironmental message that was tailored to conservatives’ values of purity and authority rather than the usual liberal, care-based messaging.

Wolsko, Ariceaga, and Seiden (2016) pursued a similar strategy and found some evidence that a measure of the extent to which the message reflected the subjects’ ingroup values mediated the effect of the tailoring on support for environmental protection policies. Even subtle value tailoring, such as labeling something a tax vs. an offset, has been shown to affect audience reactions based on political ideology (Hardisty, Johnson, & Weber, 2010).

Rokeach (1973) theorized that values vary in the strength of their connection to the self-concept. On many occasions, a group of people will associate a particular proposal with a value that constrains them to reject that proposal (e.g. some conservatives feel constrained to reject certain climate change abatement policies because they believe such policies conflict with their values concerning free-market capitalism). The self-concept approach to cognitive dissonance suggests that persuaders who wish to shift that group’s attitudes might try to determine which other value or values the proposal could be connected to that would constrain the audience to accept it (e.g. connecting climate change reduction policies to national security). Such values would theoretically have to be closer to the self-concept than the value constraining the original negative evaluation, or perhaps the persuader might succeed by attempting to tie acceptance of the proposal to multiple values that are important to the audience. Successfully tying the proposal to one or more alternative values that constrain acceptance of the proposal may cause the audience to shift their attitudes in order to satisfy more cognitive constraints.

This conceptualization of values may explain the interesting findings from Kahan, Landrum, K. Carpenter, Helft, and Jamieson (2017). They found that those with high scores on a measure of “science curiosity” were more likely to accept climate change science regardless of political orientation, whereas for those with low scores, political orientation strongly predicted climate change science acceptance. From Rokeach’s perspective, science curiosity is likely to be more closely tied to the self-concept than political orientation for those who scored higher on the science curiosity measure. As mentioned above, measurement advances in determining which values are closer to the self-concept than others may aid efforts to continue to explore values-based message tailoring.

Tailoring to values may be another area where the inclusion of coherence theory concepts (Thagard & Verbeurgt, 1998) can assist the self-concept approach in making predictions. Although each individual’s set of constraints is unique, some political ideologies tend to carry a consistent set of constraints. Rather than trying to focus on tailoring to one value at a time, the coherence perspective would suggest looking at the sets of constraints imposed by value systems and trying to find messaging that would be consistent with as many of the constraints in that system as possible. Indicating to the audience that accepting the message would be consistent with more of their core values would increase the pressure to accept the message so that more of their constraints can be satisfied. Another possibility is to use the kinds of coherence simulations used in this research tradition (e.g. Shultz & Lepper, 1996) to assess the impact of accepting various types of messages on the likely value systems of the targeted audience, as in the sketch below.
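
To make the simulation idea concrete, the following is a small Python sketch of a symmetric constraint-satisfaction network in the spirit of the consonance and coherence models cited above (Shultz & Lepper, 1996; Thagard & Verbeurgt, 1998). The unit labels, weights, and update parameters are invented for illustration and are not taken from any published simulation.

    import numpy as np

    # Units: two clamped values assumed to be close to the self-concept and
    # one advocated belief whose acceptance the network settles on.
    labels = ["free-market value", "national-security value", "policy X is desirable"]

    # Symmetric constraint weights: positive = consistency, negative = inconsistency.
    W = np.array([
        [ 0.0, 0.0, -0.4],   # the policy conflicts with the free-market value
        [ 0.0, 0.0,  0.6],   # the tailored message ties the policy to national security
        [-0.4, 0.6,  0.0],
    ])

    activation = np.array([0.9, 0.9, 0.0])  # clamped values start near full acceptance
    clamped = np.array([True, True, False])
    rate = 0.1

    for _ in range(200):  # let the network settle
        net = W @ activation
        # Positive net input pushes a unit toward +1, negative toward -1.
        delta = np.where(net > 0, net * (1.0 - activation), net * (activation + 1.0))
        updated = np.clip(activation + rate * delta, -1.0, 1.0)
        activation = np.where(clamped, activation, updated)

    coherence = float(activation @ W @ activation) / 2.0  # each pair counted once
    for label, value in zip(labels, activation):
        print(f"{label}: {value:+.2f}")
    print("coherence:", round(coherence, 3))

In this toy network the advocated belief settles toward acceptance only because the message ties it to a second value that is also clamped near the self-concept; removing that positive constraint leaves the free-market constraint to pull the belief toward rejection, which is the pattern the tailoring argument above anticipates.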

Another line of research suggests that tailoring the perceived source of the message can greatly alter the likelihood of the audience reacting with motivated reasoning. Three options have appeared in the literature. The first is to use a speaker whose values match the audience’s. Kahan, Slovic, Braman, Gastil, Cohen, and Kysar (2008) found that subtle manipulations of speaker photos and background affected the audience’s likelihood of perceiving that the speaker shared the audience’s values. When the speaker appeared to share the audience’s values, motivated reasoning was reduced. Such inductions may be powerful because they emphasize both values and social identity, which offers multiple routes to someone’s self-concept.

A second option is to emphasize that the speaker is advocating a policy that runs against the speaker’s usual ideological stances. Eagly, Wood, and Chaiken (1978) found that speakers were more persuasive and perceived as less biased when they advocated policies that seemed inconsistent with their ideological orientation. Levy (2017) suggested that a situation in which a member of the American Republican Party advocated for the Democratic Party’s Affordable Care Act might be persuasive to Republicans and Democrats alike.

The third option is more subtle and may be better able to target a wider variety of people. Kahan, Jenkins-Smith, and Braman (2011) showed that if a variety of expert speakers who appear to have a variety of value orientations all advocate a policy, then the audience is cued to the idea that value orientation is not tied to this stance, and they may pursue an accuracy goal in their cognitive processing of the message. It may be that such an induction reduces ego-involvement in the topic, though that hypothesis awaits future research. Determining whether any one of these options is more effective than the others, or whether combinations of source and message tailoring are more effective than only one type of tailoring, also awaits additional research.

Using the Ambiguity Principle

Researchers focusing on increasing people’s belief in the science of climate change have had some success with approaches using the ambiguity principle. Theoretically, if people learn that the overwhelming consensus of scientists working in an area agree on something, it reduces the ambiguity surrounding what the truth of the matter is, and people will change their belief even if it violates their closely held political identity. The self-concept approach to cognitive dissonance predicts that people’s belief in their own reasonableness is closer to the core of their self-concept than their political identity. Lewandowsky, Gignac, and Vaughan (2012) found that adherence to free market ideology was negatively correlated with belief in human-caused climate change. Yet, when the subjects were told there was a 97% consensus about climate change among climate scientists, the free market ideology effect on climate change belief was substantially reduced, such that people high in free market ideology became more likely to accept the evidence than when they did not see consensus information. It may also be that such inductions are less ego-threatening when acceptance of climate science is not tied to accepting any particular policy solution that might more clearly implicate political values. The effects of policy solutions also tend to be more ambiguous because they involve probabilistic predictions about future outcomes on a large scale based on limited evidence. Unfortunately, most controversies do not have 97% agreement among the scientists researching an area. Additional research is needed to determine the threshold at which scientific consensus overwhelms ego-involvement. Another option is to try to directly target the audience’s sense of being reasonable individuals without insulting them at the same time. That is, however, a delicate needle to thread. Research on finding new methods for reducing the ambiguity of the evidence is needed to reduce motivated reasoning.

Using the Utilitarian Principle

It is at least conceptually possible to change perceptions of the benefits and costs of possessing a particular belief or attitude such that people shift from an ego-defensive motive to an accuracy motive. For beliefs and attitudes that are closely tied to the self-concept, it seems likely that, to change such beliefs, the message must describe particularly large benefits and costs that are likely to occur soon and to happen directly to the audience.

On the other hand, it may only take a small bribe. Bullock, Gerber, Hill, and Huber (2015) conducted a pair of studies on the phenomenon of people giving incorrect, partisan answers to factual questions, particularly about the successes or failures of the political party they prefer. They were interested in cases in which people estimated their answers to factual questions based on their political preferences rather than their actual knowledge. Answering that way will generally satisfy ego-needs at no cost to the individual because expressing opinions to a stranger on the phone has no utilitarian costs whatsoever. In the control condition, people’s responses generally supported the policies proposed by the party they identified with, even when those answers were factually incorrect. Yet, in the experimental condition, when people were given a small monetary reward for correct answers, their answers were more accurate and conformed to party ideology substantially less. The researchers argued that it is not the case that people are totally ignorant of the facts. People may just express partisan beliefs in cases where there is no cost to being incorrect. If the costs of inaccurate beliefs can be made real to the audience, they may be more likely to pursue an accuracy goal, even in cases where they would normally use motivated reasoning. Theoretically, a combination of the ambiguity principle and the utilitarian principle suggests that ego-motivated resistance can be overcome by creating an unambiguously strong case that a particular belief or attitude will produce an extremely negative outcome for the audience or those they care about.

Instructions to the Audience

A number of studies have explored instructing the audience to think in particular ways. Although these may be difficult to scale up to an intervention, they are instructive. Nyhan and Reifler (2015) examined methods of debunking false information that was tied to politics. They first provided their subjects with a salacious news story about a politician. For some of the subjects, they then merely stated that the story was false and provided the real facts. They asked other subjects to generate alternative explanations of what actually caused the focal event. Those who did the latter were less likely to have a view of the politician colored by the original false news article. It is possible that generating alternative explanations gives the audience a sense of ownership over the information because they figured out those explanations for themselves.

Another intriguing method was proposed by Fernbach, Rogers, Fox, and Sloman (2013). They argued that people accept partisan policy positions without actually knowing the reasons why those policies are likely to produce the positive effects claimed by the political party. According to the researchers, “We predicted that asking people to explain how a policy works would make them aware of how poorly they understood the policy, which would cause them to subsequently express more moderate attitudes and behaviors” (p. 939). They found some support for this claim, though the effects were somewhat small. The self-concept approach to cognitive dissonance would predict that such inductions reduce ego-involvement based on the ambiguity principle. The audience’s need to perceive themselves as reasonable is closer to their self-concept than the value tied to that particular political belief, so the induction made it harder to firmly maintain the belief while still perceiving themselves as reasonable. Further research measuring the extent to which ego-involvement is reduced by such inductions could prove fruitful. It is also possible that advertisements encouraging these various cognitive strategies would reduce the audience’s likelihood of engaging in motivated reasoning. Like some of the other methods proposed, it remains to be seen if these lab-tested methods can be scaled up to large-scale interventions.

Conclusion

The core of this approach to motivated reasoning is the assumption that people who engage in motivated reasoning do so because they are motivated to protect their self-concept (Aronson, 1992). That motivation creates a set of constraints on the kinds of beliefs they are willing to accept. Beliefs that are constrained by their connection to the individual’s self-concept as a reasonable and moral person are difficult to change because that connection gives them more weight, such that the individual will be motivated to satisfy those constraints on their cognitive network rather than others. Similarly, people’s values are also part of their self-concept, such that they help people maintain their belief that they are reasonable and moral people (Rokeach, 1973). Some values are closer to the self-concept than others, and their closeness to the self-concept increases the weight on the constraints on beliefs connected to those values. Ego-involvement serves as a useful construct for exploring the extent to which people consider particular beliefs and attitudes to be closer to or further from their self-concept. Consistent with the original formulation of cognitive dissonance (Festinger, 1957), any threat to the self-concept produces negative arousal that people will be motivated to reduce or prevent.

This position offers several advantages to researchers interested in motivated reasoning in the persuasion context. First, ego-involvement can potentially be manipulated, or at the very least measured, to predict on which topics a particular audience is likely to resist persuasion attempts. Second, the focus on the self-concept indicates that messages that attack the audience’s self-concept by criticizing the audience are likely to be resisted. Furthermore, most people’s self-concept includes a belief in their own reasonableness, so the ambiguity principle and the utilitarian principle suggest avenues for traditional persuasion using well-supported evidence and qualified sources to show that rejection of the message will produce negative outcomes for the audience.

Persuasive agents may also consider a variety of the audience’s values and attempt to tie the belief or attitude they advocate to as many values as they can that constrain the audience to accept their position rather than the value or values that currently constrain the audience to reject their position. All of these persuasive processes can be derived from the self-concept approach to cognitive dissonance described above and may prove fruitful for advancing motivated reasoning research.

I have attempted to pull together the research on motivated reasoning by positioning it as an application of the self-concept approach to cognitive dissonance. I further argued that ego-involvement could serve as the key moderating variable for when such cognitive dissonance is likely to occur. The many studies reviewed here show that this is an exciting and active area of research. I argue it could benefit from the theoretical integration I propose so that researchers in communication, psychology, political science, advertising, and other interested disciplines could find common principles and variables that explain motivated reasoning across these diverse domains. This review began with the observation that great thinkers have long recognized the dangers of motivated reasoning. But I will conclude by quoting a Victorian thinker who suggests such strong conviction is not always a bad thing, for “…intense convictions make a memory for themselves, and if they can be kept to the truths of which there is good evidence, they give a readiness of intellect, a confidence in action, a consistency in character, which are not to be had without them” (Bagehot, 1871, p. 40).

References

Aron, A., Aron, E. N., & Smollan, D. (1992). Inclusion of other in self scale and the structure of

interpersonal closeness. Journal of Personality and Social Psychology, 63, 596-612. doi:

10.1037//0022-3514.63.4.596

Aronson, E. (1968). Dissonance theory: Progress and problems. In R. P. Abelson, E.

Aronson, W. J. McGuire, T. M. Newcomb, M. J. Rosenberg, & P. H. Tannenbaum (Eds.),

Theories of cognitive consistency: A sourcebook (pp. 5-27). Chicago, IL: Rand McNally.

Aronson, E. (1992). The return of the repressed: Dissonance theory makes a comeback.

Psychological Inquiry, 3, 303-311. doi: 10.1207/s15327965pli0304_1

Aronson, E., Chase, T., Helmreich, R., & Ruhnke, R. (1976). Feeling stupid and feeling

guilty-two aspects of the self-concept which mediate dissonance arousal in a

communication situation. In E. Aronson (Ed.), Readings about the social animal (2nd Ed.;

pp. 159-173). San Francisco, CA: W. H. Freeman and Company.

Aronson, E., Turner, J. A., & Carlsmith, J. M. (1963). Communicator credibility and

communication discrepancy as determinants of opinion change. Journal of Abnormal and

Social Psychology, 67, 31-36. doi: 10.1037/h0045513

Back, M. D. et al. (2010). Facebook profiles reflect actual personality not self-idealization.

Psychological Science, 21, 372-374. doi: 10.1177/0956797609360756

Bacon, F. (1620/2000). The new organon. Cambridge, UK: Cambridge University Press.

Bagehot, W. (1871). On the emotion of conviction. The Contemporary Review, 17, 32-40.

Barnidge, M., Gunther, A. C., Kim, J., Hong, Y., Perryman, M., Tay, S. K., & Knisely, S.

(2017). Politically motivated selective exposure and perceived media bias.

Communication Research. Advance online publication. doi: 10.1177/0093650217713066

Boney-McCoy, S., Gibbons, F. X., & Gerrard, M. (1999). Self-esteem, compensatory self-

enhancement, and the consideration of health risk. Personality and Social Psychology

Bulletin, 25, 954-965. doi: 10.1177/01461672992511004

Braman, D., Kahan, D. M., Slovic, P., Gastil, J., & Cohen, G. L. (2007). The second national risk

and culture study: Making sense of –and making progress in- the American culture war of

fact. G. W. Law Faculty Publications & Other Works. Paper 211.

http://scholarship.law.gwu.edu/faculty_publications/211

Bullock, J. G., Gerber, A. S., Hill, S. J., & Huber, G. A. (2015). Partisan bias in factual beliefs

about politics. Quarterly Journal of Political Science, 10, 519-578. doi:

10.1561/100.00014074

Carpenter, C. J. (2018). An assessment of the measurement validity of the value-relevant

involvement scale. Western Journal of Communication. Advance online publication. doi:

10.1080/10570314.2018.1475680

Cheng, S. Y. Y., White, T. B., & Chaplin, L. N. (2012). The effects of self-brand connections on

responses to brand failure: A new look at the consumer-brand relationship. Journal of

Consumer Psychology, 22, 280-28. doi: 10.1016/j.jcps.2011.05.005

Cho, H., & Boster, F. J. (2005). Development and validation of value-, outcome-, and

impression-relevant involvement scales. Communication Research, 32, 235-264. doi:

10.1177/0093650204273764

Choi, J., Yang, M., & Chang, J. J. C. (2009). Elaboration of the hostile media phenomenon: The

roles of involvement, media skepticism, congruency of perceived media influence, and

perceived opinion climate. Communication Research, 36, 54-75. doi:

10.1177/0093650208326462

Ditto, P. H., Scepansky, J. A., Munro, G. D., Apanovitch, A. M., & Lockhart, L. K. (1998).

Motivated sensitivity to preference-inconsistent information. Journal of Personality and

Social Psychology, 75, 53-69. doi: 10.1037//0022-3514.75.1.53

Donovan, L. A. N., & Priester, J. R. (2017). Exploring the psychological processes underlying

interpersonal forgiveness: The superiority of motivated reasoning over empathy. Journal

of Experimental Social Psychology, 71, 16-30. doi: 10.1016/j.jesp.2017.02.005

Eagly, A. H., & Telaak, K. (1972). Width of the latitude of acceptance as a determinant of

attitude change. Journal of Personality and Social Psychology, 23, 388-397. doi:

10.1037/h0033161

Eagly, A. H., Wood, W., & Chaiken, S. (1978). Causal inferences about communicators and their

effect on opinion change. Journal of Personality and Social Psychology, 36, 424-435.

doi: 10.1037//0022-3514.36.4.424

Escalas, J. E., & Bettman, J. R. (2003). You are what they eat: The influence of reference groups

on consumers’ connections to brands. Journal of Consumer Psychology, 13, 339-348.

doi: 10.1207/s15327663jcp1303_14

Falk, E. B. et al. (2015). Self-affirmation alters the brain’s response to health messages and

subsequent behavior change. PNAS, 112, 1977-1982. doi: 10.1073/pnas.1500247112

Fazio, R. H. (2000). Accessible attitudes as tools for object appraisal: Their costs and benefits. In

G. R. Maio & J. M. Olson (Eds.), Why we evaluate: Functions of attitudes (pp. 1-36).

Mahwah, NJ: Erlbaum.

Fazio, R. H., Zanna, M. P., & Cooper, J. (1977). Dissonance and self-perception: An integrative

view of each theory’s proper domain of application. Journal of Experimental Social

Psychology, 13, 464-479. doi: 10.1016/0022-1031(77)90031-2

Feinberg, M., & Willer, R. (2013). The moral roots of environmental attitudes. Psychological

Science, 24, 56-62. doi: 10.1177/0956797612449177

Fernbach, P. M., Rogers, T., Fox, C. R., & Sloman, S. A. (2013). Political extremism is

supported by an illusion of understanding. Psychological Science, 24, 939-946. doi:

10.1177/0956797612464058

Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Festinger, L., & Carlsmith, J. M. (1957). Cognitive consequences of forced compliance.

Journal of Abnormal and Social Psychology, 58, 203-210. doi: 10.1037/h0041593

Fink, E. L., & Cai, D. A. (2013). Discrepancy models of belief change. In J. P. Dillard & L. Shen

(Eds.), The Sage handbook of persuasion: Developments in theory and practice (2nd ed.;

pp. 84-103). Thousand Oaks, CA: Sage.

Frimer, J. A., Skitka, L. J., & Motyl, M. (2017). Liberals and conservatives are similarly

motivated to avoid exposure to one another’s opinions. Journal of Experimental Social

Psychology, 72, 1-22. doi: 10.31234/osf.io/mqzue

Gerbing, D. W. (2014). R data analysis without programming. New York, NY: Routledge.

Good, A., & Abraham, C. (2007). Measuring defensive responses to threatening messages: A

meta-analysis of measures. Health Psychology Review, 1, 208-229. doi:

10.1080/17437190802280889

Hardisty, D. J., Johnson, E. J., & Weber, E. U. (2010). A dirty word or a dirty world? Attribute

framing, political affiliation and query theory. Psychological Science, 21, 86-92. doi:

10.1177/0956797609355572

Harris, P. R., & Napper, L. (2005). Self-affirmation and the biased processing of threatening

health-risk information. Personality and Social Psychology Bulletin, 31, 1250-1263. doi:

10.1177/0146167205274694

Hillman, A. L. (2010). Expressive behavior in economics and politics. European Journal of

Political Economy, 26, 403-418. doi: 10.1016/j.ejpoleco.2010.06.004

Johnson, B. T., & Eagly, A. H. (1989). Effects of involvement on persuasion: A meta-analysis.

Psychological Bulletin, 106, 290-314. doi: 10.1037//0033-2909.106.2.290

Jussim, L. (2017). Précis of social perception and social reality: Why accuracy dominates bias

and self-fulfilling prophecy. Behavioral and Brain Sciences, 40, 1-65. doi:

10.1017/s0140525x1500062x

Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and

Decision Making, 8, 407-424. doi: 10.2139/ssrn.2182588

Kahan, D. M., Hoffman, D., Evans, D., Devins, N., Lucci, E., & Cheng, K. (2016). “Ideology” or

“situation sense”? An experimental investigation of motivated reasoning and professional

judgment. University of Pennsylvania Law Review, 164, 349-439.

Kahan, D. M., Jenkins-Smith, H., & Braman, D. (2011). Cultural cognition of scientific

consensus. Journal of Risk Research, 14, 147-174. doi: 10.1080/13669877.2010.511246

Kahan, D. M., Landrum, A., Carpenter, K., Helft, L., & Jamieson, K. H. (2017). Science

curiosity and political information processing. Advances in Political Psychology, 38, 179-

199. doi: 10.1111/pops.12396

Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. (2017). Motivated numeracy and

enlightened self-government. Behavioural Public Policy, 1, 54-86. doi:

10.1017/bpp.2016.2

Kahan, D. M., Slovic, P., Braman, D., Gastil, J., Cohen, G., & Kysar, D. (2008). Biased

assimilation, polarization, and cultural credibility: An experimental study of

nanotechnology risk perceptions. Retrieved from the Cultural Cognition Project at Yale

Law School. http://ssrn.com/abstract=1090044

Kassarjian, H. H., & Cohen, J. B. (1965). Cognitive dissonance and consumer behavior:

Reactions to the Surgeon General’s report on smoking and health. California

Management Review, 8, 55-64. doi: 10.2307/41165660

Kim, M. (2016). The role of partisan sources and audiences’ involvement in bias perceptions of

controversial news. Media Psychology, 19, 203-223. doi:

10.1080/15213269.2014.1002941

Kim, S. Y., Allen, M., Preiss, R. W., & Peterson, B. (2014). Meta-analysis of counterattitudinal

advocacy data: Evidence for an additive cues model. Communication Quarterly, 62, 607-

620. doi: 10.1080/01463373.2014.949385

Klaczynski, P. A., & Gordon, D. H. (1996). Self-serving influences on adolescents’ evaluations

of belief-relevant evidence. Journal of Experimental Child Psychology, 62, 317-339. doi:

10.1006/jecp.1996.0033

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498. doi:

10.1037//0033-2909.108.3.480

Kunda, Z. (1992). Can dissonance theory do it all? Psychological Inquiry, 3, 337-339. doi:

10.1207/s15327965pli0304_11

Levy, N. (2017). Nudges in a post-truth world. Journal of Medical Ethics, 43, 495-500. doi:

10.1136/medethics-2017-104153

Lewandowsky, S., Gignac, G. E., & Vaughan, S. (2012). The pivotal role of perceived scientific

consensus in acceptance of science. Nature Climate Change, 3, 399-404. doi:

10.1038/nclimate1720

Liberman, A., & Chaiken, S. (1992). Defensive processing of personally relevant health

messages. Personality and Social Psychology Bulletin, 18, 669-679. doi:

10.1177/0146167292186002

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The

effects of prior theories on subsequently considered evidence. Journal of Personality and

Social Psychology, 37, 2098-2109. doi: 10.1037//0022-3514.37.11.2098

McEwan, B., Carpenter, C. J., & Hopke, J. E. (2018). Mediated skewed diffusion of issues

information: A theory. Social Media + Society. doi: 10.1177/20563051188003

McQueen, A., & Klein, W. M. P. (2006). Experimental manipulations of self-affirmation: A

systematic review. Self and Identity, 5, 289-354. doi: 10.1080/15298860600805325

Moyer-Gusé, E., & Dale, K. (2017). Narrative persuasion theories. In P. Rössler, C. A. Hoffner,

& L. van Zoonen (Eds.), The international encyclopedia of media effects. [Wiley Online

Library.] Retrieved from

http://onlinelibrary.wiley.com/doi/10.1002/9781118783764.wbieme0082/abstract

Moyer-Gusé, E., & Nabi, R. L. (2010). Explaining the effects of narrative in an entertainment

television program: Overcoming resistance to persuasion. Human Communication

Research, 36, 26-52. doi: 10.1111/j.1468-2958.2009.01367.x

Murray, S. L., & Holmes, J. G. (1997). A leap of faith? Positive illusions in romantic

relationships. Personality and Social Psychology Bulletin, 23, 586-604. doi:

10.1177/0146167297236003

Nauroth, P., Gollwitzer, M., Kozuchowski, H., Bender, J., & Rothmund, T. (2017). The effects of social identity threat and

social identity affirmation on laypersons’ perception of scientists. Public Understanding

of Science, 26, 754-770. doi: 10.1177/0963662516631289

Nyhan, B., & Reifler, J. (2015). Displacing misinformation about events: An experimental test of

causal corrections. Journal of Experimental Political Science, 2, 81-93. doi:

10.1017/xps.2014.22

Ostrom, T. M., & Brock, T. C. (1969). Cognitive bonding to central values and resistance to a

communication advocating change in policy orientation. Journal of Experimental

Research in Personality, 4, 42-50.

Peterson, A. A., Haynes, G. A., & Olson, J. M. (2008). Self-esteem differences in the effects of

hypocrisy induction on behavioral intentions in the health domain. Journal of

Personality, 76, 305-322. doi: 10.1111/j.1467-6494.2007.00487.x

Petty, R. E., & Cacioppo, J.T. (1986). Communication and persuasion: Central and

peripheral routes to attitude change. New York, NY: Springer-Verlag.

Pool, G. J., Wood, W., & Leck, K. (1998). The self-esteem motive in social influence:

Agreement with valued majorities and disagreement with derogated minorities. Journal

of Personality and Social Psychology, 75, 967-975. doi: 10.1037//0022-3514.75.4.967

Prasad, M., Perrin, A. J., Bezila, K., Hoffman, S. G., Kindleberger, K., Manturuk, K., & Powers,

A. S. (2009). “There must be a reason”: Osama, Saddam, and inferred justification.

Sociological Inquiry, 79, 142-162. doi: 10.1111/j.1475-682x.2009.00280.x

Pyszczynski, T., & Greenberg, J. (1987). Toward an integration of cognitive and motivational

perspectives on social inference: A biased hypothesis-testing model. Advances in

Experimental Social Psychology, 20, 297-340. doi: 10.1016/s0065-2601(08)60417-7

Riet, J. V., & Ruiter, R. A. C. (2013). Defensive reactions to health-promoting information: An

overview and implications for future research. Health Psychology Review, 7, S104-S136. doi:

10.1080/17437199.2011.606782

Rokeach, M. (1968). A theory of organization and change within value-attitude systems. Journal

of Social Issues, 24, 13-33. doi: 10.1111/j.1540-4560.1968.tb01466.x

Rokeach, M. (1973). The nature of human values. New York, NY: The Free Press.

Ryan, T. J., & Brader, T. (2017). Gaffe appeal: A field experiment on partisan selective exposure

to election messages. Political Science Research and Methods, 5, 667-687. doi: 10.1017/

psrm.2015.62

Shultz, T. R., & Lepper, M. R. (1996). Cognitive dissonance reduction as constraint

satisfaction. Psychological Review, 103, 219-240. doi: 10.1037//0033-295x.103.2.219

Secord, P. F. (1968). Consistency theory and self-referent behavior. In R. P. Abelson, E.

Aronson, W. J. McGuire, T. M. Newcomb, M. J. Rosenberg, & P. H. Tannenbaum (Eds.),

Theories of cognitive consistency: A sourcebook (pp. 349-354). Chicago, IL: Rand

McNally.

Sereno, K. K. (1968). Ego-involvement, high source credibility, and response to a belief-

discrepant communication. Communication Monographs, 35, 476-481. doi:

10.1080/03637756809375597

Sereno, K. K. (1969). Ego-involvement: A neglected variable in speech-communication

research. Quarterly Journal of Speech, 55, 69-77. doi: 10.1080/00335636909382930

Sherif, C. W. (1979). Social values, attitudes, and involvement of the self. In M. M. Page (Ed.),

Nebraska symposium on motivation: Beliefs, attitudes, and values (pp. 1-64). Lincoln,

NE: University of Nebraska Press.

Sherif, C. W., & Jackman, N. R. (1966). Judgments of truth by participants in collective

controversy. Public Opinion Quarterly, 30, 173-186. doi: 10.1086/267398

Sherif, C. W., Kelly, M., Rodgers, H. L., Sarup, G., & Tittler, B. I. (1973). Personal

involvement, social judgment, and action. Journal of Personality and Social Psychology,

27, 311-328. doi: 10.1037/h0034948

Sherif, C. W., Sherif, M., & Nebergall, R. E. (1965). Attitude and attitude change: The social

judgment-involvement approach. Philadelphia, PA: W. B. Saunders Company.

Sherif, M., & Cantril, H. (1947). The psychology of ego-involvements. New York, NY: John

Wiley and Sons.

Sherif, M., & Hovland, C. I. (1961). Social judgment: Assimilation and contrast effects in

communication and attitude change. New Haven, CT: Yale University Press.

Sherif, M., & Sargent, S. S. (1947). Ego-involvement and the mass media. Journal of Social

Issues, 3, 8-16. doi: 10.1111/j.1540-4560.1947.tb02208.x

Sherman, D. K., & Cohen, G. L. (2002). Accepting threatening information: Self-affirmation and

the reduction of defensive biases. Current Directions in Psychological Science, 11, 119-

123. doi: 10.1111/1467-8721.00182

Sherman, D. K., & Cohen, G. L. (2006). The psychology of self-defense: Self-affirmation theory.

Advances in Experimental Social Psychology, 38, 183-242. doi: 10.1016/s0065-

2601(06)38004-5

Sherman, D. K., & Kim, H. S. (2005). Is there an “I” in “Team”? The role of the self in group-

serving judgments. Journal of Personality and Social Psychology, 88, 108-120. doi:

10.1037/0022-3514.88.1.108

Sherman, D. K., Nelson, L. D., & Steele, C. M. (2000). Do messages about health risks threaten

the self? Increasing the acceptance of threatening health messages via self-affirmation.

Personality and Social Psychology Bulletin, 26, 1046-1058. doi:

10.1177/01461672002611003

Sherman, S. J., & Gorkin, L. (1980). Attitude bolstering when behavior is inconsistent with

central attitudes. Journal of Experimental Social Psychology, 16, 388-403. doi:

10.1016/0022-1031(80)90030-x

Simon, D., & Holyoak, K. J. (2002). Structural dynamics of cognition: From consistency theories

to constraint satisfaction. Personality and Social Psychology Review, 6, 283-294. doi:

10.1207/s15327957pspr0604_03

Slater, M. D., Rouner, D., & Long, M. (2006). Television dramas and support for controversial

public policies: Effects and mechanisms. Journal of Communication, 56, 235-252. doi:

10.1111/j.1460-2466.2006.00017.x

Smith, M. B. (1968). The self and cognitive consistency. In R. P. Abelson, E.

Aronson, W. J. McGuire, T. M. Newcomb, M. J. Rosenberg, & P. H. Tannenbaum (Eds.),

Theories of cognitive consistency: A sourcebook (pp. 366-372). Chicago, IL: Rand

McNally.

Steele, C. M. (1988). The psychology of self-affirmation: Sustaining the integrity of the self.

Advances in Experimental Social Psychology, 21, 261-302. doi: 10.1016/s0065-

2601(08)60229-4

Steele, C. M., & Spencer, S. J. (1992). The primacy of self-integrity. Psychological Inquiry, 3,

345-346. doi: 10.1207/s15327965pli0304_14

Steiner, I. D., & Johnson, H. H. (1964). Relationships among dissonance reducing responses.

Journal of Abnormal and Social Psychology, 68, 38-44. doi: 10.1037/h0046751

Tajfel, H., & Turner, J. C. (1986). The social identity theory of intergroup behavior. In S.

Worchel, & W. G. Austin (Eds.), Psychology of intergroup relations (2nd ed.; pp. 7-24).

Chicago, IL: Nelson-Hall.

Thagard, P., & Verbeurgt, K. (1998). Coherence as constraint satisfaction. Cognitive Science, 22,

1-24. doi: 10.1207/s15516709cog2201_1

Tyler, T. R., & Weber, R. (1982). Support for the death penalty: Instrumental response to crime,

or symbolic attitude? Law & Society Review, 17, 21-46. doi: 10.2307/3053531

Vallone, R. P., Ross, L., & Lepper, M. R. (1985). The hostile media phenomenon: Biased

perception and perceptions of media bias in coverage of the Beirut massacre. Journal of

Personality and Social Psychology, 49, 577-585. doi: 10.1037//0022-3514.49.3.577

Voltaire. (1732/1961). Philosophical letters. Indianapolis, IN: Bobbs-Merrill.

Walster, E., Berscheid, E., & Barclay, A. M. (1967). A determinant of preference among modes

of dissonance reduction. Journal of Personality and Social Psychology, 7, 211-216. doi:

10.1037/h0024992

Wilmot, W. W. (1971). A test of the construct and predictive validity of three measures of ego

involvement. Speech Monographs, 38, 217-227. doi: 10.1080/03637757109375713

Wolsko, C., Ariceaga, H., & Seiden, J. (2016). Red, white, and blue enough to be green: Effects

of moral framing on climate change attitudes and conservation behaviors. Journal of

Experimental Social Psychology, 65, 7-19. doi: 10.1016/j.jesp.2016.02.005

Zhang, S., & Merolla, A. J. (2006). Communicating dislike of close friends’ romantic partners.

Communication Research Reports, 23, 179-186. doi: 10.1080/08824090600796393

Zimbardo, P. G. (1968). Cognitive dissonance and the control of human motivation. In R. P.

Abelson, E. Aronson, W. J. McGuire, T. M. Newcomb, M. J. Rosenberg, & P. H.

Tannenbaum (Eds.), Theories of cognitive consistency: A sourcebook (pp. 439-447).

Chicago, IL: Rand McNally.

Zuwerink, J. R., & Devine, P. G. (1996). Attitude importance and resistance to persuasion: It’s

not just the thought that counts. Journal of Personality and Social Psychology, 70, 931-

944. doi: 10.1037//0022-3514.70.5.931