Copyright by

Maxim Victorovich Baryshevtsev 2020

The Dissertation Committee for Maxim Victorovich Baryshevtsev Certifies that this is the approved version of the following dissertation:

Sharing is Not Caring: News Features Predict False News Detection and Diffusion

Committee:

Matthew S. McGlone, Supervisor

René M. Dailey

Jeffrey Hancock

Anita L. Vangelisti

Sharing is Not Caring: News Features Predict False News Detection and Diffusion

by

Maxim Victorovich Baryshevtsev

Dissertation

Presented to the Faculty of the Graduate School of The University of Texas at Austin in Partial Fulfillment

of the Requirements for the Degree of

Doctor of Philosophy

The University of Texas at Austin
May 2020

Dedication

I would like to dedicate this project to my late grandfather Stanislav Pismenny (1940 – 2020). He obtained multiple degrees and certificates in engineering, which helped him build his own house, where my grandmother still lives. He was always a man of reason and innovation, teaching me to attack problems head-on and with an open mind. Without the skills and smarts that he passed down to me, I doubt I would have been able to complete this difficult journey. Thank you, Stas; you will always be in our hearts and minds, pushing us to be better humans every day. Мы тебя любим.

Acknowledgements

First and foremost, I would like to thank my advisor Dr. Matthew McGlone for his guidance and support throughout my studies. He pushed me to be a better scholar, whether it was through holding me to the highest methodological standards, or always being there to bounce ideas off of. His advice will follow me for the rest of my life. I would also like to thank my parents and sisters for always acting excited to hear about my research, even if it made no sense to them. They all taught me valuable lessons on life and its challenges. Thank you to my animals, Kumar, Sarabi, Astro, Sasha, and Lola, for distracting me at the right times (and sometimes the worst times). And last, but certainly not least, I would like to thank my wife Stephanie for being right there next to me the entire time, cheering me on and supporting me in the most difficult moments. She truly is a superhero.


Abstract

Sharing is Not Caring: News Features Predict False News Detection and Diffusion

Maxim Victorovich Baryshevtsev, PhD

The University of Texas at Austin, 2020

Supervisor: Matthew S. McGlone

Misinformation research has identified numerous news story features that predict susceptibility to false news. Four of these features seem to be consistently studied and reported as problematic: belief congruence (false news that matches one's personal beliefs about the world), political congruence (false news that matches one's political orientation), moral-emotional language (words that convey a sense of morality while being emotionally charged), and social consensus (knowing that others also believe the false news). Reported are two different paradigms in which participants were asked to read through a Facebook newsfeed and either choose which posts they would share (diffusion paradigm) or choose which posts were false news (detection paradigm). First, the studies reported below were concerned with determining the effect each of these features had on the detection and diffusion of false news, while accounting for the effects of the other features. Second, the detection paradigm was also used to determine the effect base-rates had on false news detection because, according to Truth-Default Theory, the number of deceptive messages people encounter in the world is directly related to how accurate they will be. All of the news features were found to have at least some effect on the diffusion and detection of false news, with belief congruence (diffusion OR = 2.8, detection OR = 1.4) and political congruence (diffusion OR = 2.4, detection OR = 1.3) having the strongest and most consistent effects. Regarding the effect of base-rates on detection accuracy, the more false news participants encountered, the more accurate they were, indicating the presence of a lie-bias. This contradicts the truth-bias prediction Truth-Default Theory makes and is partly accounted for by the general suspicion people have of online news. Theoretical and practical implications are discussed from the perspective of today's growing problem with online misinformation.

Table of Contents

List of Tables

Chapter 1: Defining False News

Chapter 2: Diffusion of Misinformation
    Why People Believe Misinformation
    Correcting Misinformation Exposure
    Credibility Heuristics and False News
    Heuristics for Credibility Assessment of Online News

Chapter 3: Base-Rates and False News Detection
    Deception Detection and Base-Rates

Chapter 4: Method
    Stimulus False News Stories
    Story Diffusion Feature Coding
        Social Consensus
        Moral-Emotional Language
        Belief Congruence
        Political Congruence
    Control Variables
    Procedure

Chapter 5: Results
    Diffusion of False News
        Participants
        Diffusion and News Features
    Detection of False News
        Participants
        Detection and News Features
        Base-Rate Effect on Detection

Chapter 6: Discussion
    Contributions to Misinformation Literature
    Contributions to Truth-Default Theory
    Practical Contributions
    Limitations
    Future Directions

Appendices
    Appendix A: Facebook Article Example
    Appendix B: Facebook Mock Feed Example
    Appendix C: Meyer's (1988) News Credibility Scale
    Appendix D: Internet Scavenger Hunt

References

List of Tables

Table 1. Belief Items
Table 2. Perceived News Credibility in Diffusion Study
Table 3. News Exposure by Medium in Diffusion Study
Table 4. Social Media Site Use in Diffusion Study
Table 5. Social Media Site Engagement in Diffusion Study
Table 6. Liking GEE Logistic Regression Coefficients
Table 7. Sharing GEE Logistic Regression Coefficients
Table 8. Perceived News Credibility in Detection Study
Table 9. News Exposure by Medium in Detection Study
Table 10. Social Media Site Use in Detection Study
Table 11. Social Media Site Engagement in Detection Study
Table 12. Veracity Judgements GEE Logistic Regression Coefficients
Table 13. Mean Accuracy by Base-Rate Condition

Chapter 1: Defining False News

Although the term "fake news" is relatively new, misinformation with mass circulation surely predates the printing press. Long before newspapers used fact-checking staff, objective and verifiable news stories were hard to come by because the supporting evidence was typically subjective and intangible, relying mostly on the testimony of eyewitnesses and on documents that were easy to fabricate. Sensational claims and visceral responses made false stories easy to spread, which in turn rendered them that much more dangerous when certain groups were targeted. In 15th-century Trento, Italy, a priest claimed that Jews abducted and sacrificed a young boy so they could drink his blood to celebrate Passover. The accusation prompted a city magistrate to arrest and torture members of the Jewish community as retributive "justice." Fifteen citizens were burned at the stake before a papal envoy disputed the accusation and suspended the magistrate's prosecutions. From 15th-century claims of Jewish "blood libel" to 21st-century claims of politician-run child sex rings, fake news has a long history of preying on the anger and gullibility of the public (Soll, 2016).

The term "fake news" has seen a sharp increase in use over the past few years, especially after the 2016 presidential election. Both politicians and voters came out claiming that certain stories were fabricated by the opposing party to smear their preferred candidates. However, the definition of fake news is something that has evaded public attention. The term has been used by scholars in the past to describe satirical publications (Holbert, 2005), but more recently it has been used to refer to misinformation and disinformation (Hernon, 1995; Rubin, Chen, & Conroy, 2015). Before discussing why people believe fake news and how it is related to detection base-rates, I will operationally define "fake news" for the current set of studies.

Deception is typically defined as a deliberate communicative act intended to mislead the recipient about what the speaker believes to be the truth, without forewarning (e.g., DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; Serota, Levine, & Boster, 2010). This definition applies well to the term "fake news" in the context of journalism. Rubin and colleagues (2015) describe three types of fake news: serious fabrications (i.e., fraudulent reporting), hoaxes (large-scale jokes, sometimes meant to deceive), and satire. In information and library sciences, "serious fabrication" refers to disinformation because the intent of the message is to deceive, in contrast with misinformation, which is false but not intended to mislead (Hernon, 1995). When news articles contain misinformation, we may assume it is because the journalist has not thoroughly fact-checked the sources and/or is unwittingly biased, neither of which can be directly related to intent. If the author truly did not intend to deceive, then the article cannot be classified as fake news according to Rubin et al.'s (2015) framework. However, when false information is included deliberately in the article, it counts as disinformation. The issue of intent is crucial when trying to distinguish between mis- and dis-information.

Determining what counts as "genuine" news is not an easy matter. Typically, a news story consists of multiple parts and pieces of evidence, any of which can be called into question based on its specificity and verifiability. Does this mean a story can be described as fake if one quote has been misrepresented? Asked differently, what portion of a story needs to be true for it to be classified as genuine? This problem has been addressed by deception scholars in the past (e.g., G. D. Bond & Speller, 2009), and there seems to be some agreement that most lies consist of both deceptive and honest elements (McCornack, Morrison, Paik, Wisner, & Zhu, 2014). Therefore, fake news should be described as a composition of honest and deceptive elements, and a story should be classified as fake only when its central claims have been intentionally manipulated by its author(s).

The last conceptual issue is the identification of the author's intent. As mentioned, a deceptive message hinges on the intent of the sender, and one way to determine intent is to ask the source directly. However, people's desire to be viewed positively by others often prevents them from disclosing deception and other morally compromising acts (Crowne & Marlowe, 1960). Consequently, how is it possible to identify intent? Deception researchers have grappled with this problem for decades, and most agree that identifying a deceptive message requires the notion of "ground truth" – that is, documented evidence that something actually occurred, which can then be compared to what the sender communicated (e.g., Levine, Blair, & Clare, 2013). The message can then be classified as honest or deceptive, but only in relation to the ground truth. In the case of a news story, it can only be classified as fake when there is evidence the central claims are indeed false. Although this formulation does not solve the issue of identifying intent (because the author may have deviated from the ground truth by mistake), it is the closest researchers have come to classifying news stories as fake without a confession from the author. Instead, some scholars have suggested using the term "false news" because it does not assume the intent of the author and simply describes news that contains misinformation, as opposed to disinformation (Vosoughi, Roy, & Aral, 2018). Therefore, I will use the term "fake news" when referring to news that is intended to deceive and "false news" when referring to news whose author's intent is unknown.


Chapter 2: Diffusion of Misinformation

WHY PEOPLE BELIEVE MISINFORMATION

One straightforward explanation for why people believe false news is that their cognitive default is to initially believe information they encounter. Gilbert (1991) describes this tendency as deriving from effort-dependent processing. To merely comprehend a proposition requires an implicit belief in its contents, although we may elect to withdraw our belief if we encounter other conflicting propositions or evidence. In the context of deception, this default manifests itself as a "truth bias," which predisposes people to believe that most messages they encounter are truthful (Levine, Park, & McCornack, 1999). This bias helps society function because it decreases suspicion between group members and encourages people to rely on one another for information and aid. However, it also compromises our ability to distinguish deception from honesty (Levine, 2014). Specifically, truth bias inflates our accuracy in detecting truths – because when we make detection errors, we are more likely to mistake a lie for a truth than vice versa – and by the same token works against our accuracy in detecting deception (Levine, Kim, Park, & Hughes, 2006). In terms of false news, this means that the news we encounter daily is initially cognitively classified as truthful, and this classification is questioned only when we perceive a motive for the author to be deceptive (Levine, Kim, & Blair, 2010). The truth bias accounts for people's generally mediocre deception detection ability, but it doesn't explain why we believe certain false news stories and not others.


Research on false news is still in its infancy, but substantial work has explored misinformation and its effects. Lewandowsky and colleagues (2012) describe four factors that contribute to the acceptance of misinformation. First, people subscribe to information that confirms their beliefs. There has been a plethora of studies demonstrating how people avoid disconfirming and seek confirming information, even when it is clear the confirming information is incorrect (for a meta-analysis, see Hart et al., 2009). Typically, this search occurs to protect positive conceptions of the self and to avoid conflicting cognitions (Festinger, 1957). As a consequence, people prefer news that confirms their own views and discount news that conflicts with them (Tsfati, 2007; Tsfati & Cohen, 2005).

Second, Lewandowsky and colleagues (2012) identify story coherence as a factor that promotes misinformation believability. They claim that stories are more persuasive and believable when their elements flow well together. Pennington and Hastie (1992) found that jurors were influenced by the degree to which an attorney's story consisted of components that related to one another and flowed together, thereby making the elements easier to comprehend. The easier the story was to comprehend, the more credible jurors deemed the attorney presenting the case. Likewise, the easier a news story is to process, the more likely it is to be accepted as accurate (for a review of processing fluency, see Alter & Oppenheimer, 2009). Unfortunately, ease of processing is a poor proxy for story accuracy. For example, when a tabloid story describes an astronaut's DNA being altered after extended time on the International Space Station because "space is unknown and dangerous," people may find it easier to grasp than the truth: that changes in gene expression are actually common when one is exposed to harsh conditions over time (e.g., camping at high elevation, doing research in Antarctica, etc.).

Third, source credibility can affect belief in a news story, regardless of content. Source credibility is predicated on two separate but related factors, expertise and trustworthiness (Applbaum & Anatol, 1972). Meta-analyses have found source expertise to account for more variance in persuasive effects (e.g., attitude change) than non-expertise manipulations (Wilson & Sherrell, 1993), although this may not always be the case across all contexts. When sources are perceived as knowledgeable and trustworthy enough to discuss a particular topic, they are more persuasive. In false news, sources may be perceived as experts when they share views similar to the reader's, making them more influential. However, the effect of a source's credibility is fleeting: as time passes the source is forgotten, and if the source was initially perceived as not credible, the information may seem more persuasive over time because it is no longer associated with the source. This phenomenon is known as the "sleeper effect" (Kumkale & Albarracín, 2004). So, if a source is not credible to the reader, the information may be discounted for a time (as fake news), but later remembered as accurate.

The last factor is the influence of one's social network. A classic study found that people often change their opinion when it disagrees with the group consensus, even when the consensus view is obviously wrong (Asch, 1956). Outside the laboratory, the effect commonly occurs when a situation is ambiguous and there is consensus among others. When we are unsure of the proper behavior, we will model our behavior on others around us (Rimal & Real, 2005), especially similar others (Burger, Messian, Patel, del Prado, & Anderson, 2004). This tendency may occur because we perceive similar others as more interpersonally attractive, and the more we like others the more we listen to them (Montoya, Horton, & Kirchner, 2008). This preference becomes evident when considering the types of people we get our news from.

We tend to consume news that agrees with our views and is shared online within our social networks. Thus, false news shared on Facebook by our friends is more easily believed, which in turn prompts more sharing and is then more likely to be believed by other friends. Scholars refer to this as an "echo chamber" (Jamieson & Cappella, 2008), and the Internet has allowed people to selectively expose themselves to confirmatory information among their in-group (e.g., Colleoni, Rozza, & Arvidsson, 2014). However, we can be influenced by sources besides close and similar others. Lapinski and colleagues (2012) demonstrated that simply being exposed to a flyer describing the ubiquity of hand washing influenced study participants to engage in more hand washing. Being exposed to an indicator that many others are reading and sharing certain news can likewise provide people with a cue to how credible the news is.

CORRECTING MISINFORMATION EXPOSURE

In an era of internet-based social networking, much of the information we encounter is shared among our peers. Unfortunately, misleading information may be included in what is shared. Chen and Sin (2013) found that people share misinformation on social media to get opinions from others, share their own opinions, and interact with others. Their findings indicate that even when people are aware of the misleading nature of certain information, they still share it with others to verify its validity. This sharing of misinformation becomes problematic when people who are exposed to shared false news cannot distinguish it from genuine stories. To combat misinformation, one's first inclination is to retract it and replace it with a correction.

When people are exposed to misleading information, they form impressions of the parties involved, and those first impressions may be difficult to change after the fact. For example, when people misquote important figures, those exposed to the misquotation attribute certain qualities to the figure. Even after the quote is corrected, the initial attitudes and beliefs are resistant to change (McGlone, 2005). This effect goes beyond misquotations. When politicians convey incorrect information and thereby create misperceptions, corrections tend to have small effects on those perceptions and sometimes even result in a “backfire effect” in which the initial misperceptions are strengthened (Nyhan & Reifler, 2010; Thorson, 2015). Overall, correcting the effects of false news is much more difficult than simply identifying and debunking misinformation.

An intuitive response to these persistent misperceptions is to prevent initial exposure to misinformation. However, to prevent such exposure, one must first be able to detect it. With increased reliance on automation in today's computationally advanced world, many scholars have begun to develop tools to help detect deceptive communication, such as language analysis tools (e.g., Zhou, Burgoon, Twitchell, Qin, & Nunamaker, 2004). Unfortunately, detecting deception using language cues has turned out to be more difficult than originally anticipated, with reliability being inconsistent across contexts and studies (Hauch, Blandon-Gitlin, Masip, & Sporer, 2014). The problem arises from the complexities of language, such as paralinguistic features (rate, pitch, etc.), syntax, pragmatics, and contextual factors. The latter can be accounted for to some degree if detection efforts are confined to false news, but even then automated detection tools are not especially accurate (Conroy, Rubin, & Chen, 2015; Kumar & Geethakumari, 2014). However, there have been attempts to detect social bots (i.e., software programmed to diffuse misinformation online), which have been relatively successful with a detection rate of over 85% (Varol, Ferrara, Davis, Menczer, & Flammini, 2017). Nevertheless, real social media users diffuse more false news than bots (Vosoughi et al., 2018).

Researchers understand that humans are susceptible to false news due to a number of factors, so we must turn to the crux of the problem: how people assess the credibility of online content. Understanding the features people use to determine credibility can inform strategies for discouraging the sharing of misinformation. The next section discusses online credibility assessment and how it relates to false news.


CREDIBILITY HEURISTICS AND FALSE NEWS

Credibility has been investigated for many years, but typically in the context of face-to-face interpersonal interactions. Simply put, credibility is the degree to which information and/or its source is believable, and it comprises perceived expertise and trustworthiness (McGinnies & Ward, 1980). Expertise refers to a source's knowledge and competence. For example, a fully licensed medical doctor is typically perceived as an expert in medicine because of her degree, the assumption being that the degree required the completion of rigorous education and training. Trustworthiness, on the other hand, is related to the honesty and integrity of the source. Nurses (more than doctors) have been perceived as the most trustworthy profession for many years because of the ethical standards they are expected to uphold when interacting with their patients (Brenan, 2017). People typically don't believe that a nurse would lie to them while they are in his or her care.

Scholars have argued that new technological advances online have decreased the need for people to use traditional authority figures and longstanding methods of assessment to determine the credibility of information (Flanagin & Metzger, 2008). Instead, people crowdsource credibility evaluations to others on the Internet, as well as use other heuristic approaches. This outsourcing can decrease information overload and enable faster credibility assessments, but it can also induce overreliance on superficial cues and thereby promote the spread of unreliable information (Sundar, 2008). Heuristics aimed at identifying these superficial cues provide people with a quick and efficient way of assessing the value of information.


The heuristic-systematic model (HSM; S. Chen & Chaiken, 1999; Eagly & Chaiken, 1993) offers a dual-process account of the way people process information in judgment and choice. It posits that heuristic processing requires the presence of heuristic cues, less cognitive effort, and less time than does systematic processing. When people are engaged in heuristic processing, they use cues to inform them of the quality of the information and/or source. Systematic processing, on the other hand, is more effortful and time consuming. It involves scrutinizing a message or source in depth. This scrutiny does not preclude the use of heuristic cues. If the cues are relevant to determining message/source quality, then those cues may be used. However, systematic processing does not rely chiefly on these cues, while heuristic processing does. For example, when engaged in systematic processing, one could use the M.D. diploma hanging on the wall of a doctor's office as a cue to her knowledgeability. The degree is a cue related to determining whether she had a high level of education (e.g., a degree from Stanford University), while also considering her competence during the interaction by analyzing her recommendations.

When engaged in heuristic processing, one may use the M.D. degree as a cue of knowledge, but heuristically the doctor's attractiveness can also be used as a cue to her expertise. In this situation, the patient uses the "beautiful is good" heuristic to make a quick and effortless decision about expertise, something that has been found in studies on attractiveness (Eagly, Ashmore, Makhijani, & Longo, 1991). When engaging in heuristic processing, it is easy to conflate different source cues with source credibility, which has the unfortunate outcome of ascribing undeserved trust or suspicion.

Both types of processing can be activated at the same time, although to varying degrees depending on a person's cognitive motivations. Accuracy is one of three such motivations Chen and Chaiken (1999) describe as antecedents to systematic and heuristic processing. When people are motivated to be accurate in their decisions and possess the necessary ability to process information, they will engage in systematic processing. This happens because accuracy requires gathering all sorts of information from different sources and examining the evidence to determine the best course of action. For example, people engage in less selective exposure and seek out counter-attitudinal information when motivated to be accurate (Hart et al., 2009).

HSM also posits two other types of motivation that can affect information processing: to manage one's impression and to defend one's beliefs. Impression management relates to how people attempt to influence the way they are seen by others (Goffman, 1959). HSM describes impression management as a motivational force that can cause people to engage in either form of processing depending on the resources they possess. The type of processing they engage in depends on the identity they are trying to highlight. Someone who wants to be seen as intelligent will be more likely to engage in systematic processing of news articles before posting them, while someone who identifies as a whistleblower may engage in heuristic processing, skimming articles and relying on cues to identify news that is shocking. The final motivation, defense, is one that drives people to protect their views of themselves and their beliefs. A heightened defense motivation can make false news more attractive when it aligns with one's beliefs, while leading people to ignore anything that discounts their view (Hart et al., 2009).

When it comes to online credibility specifically, heuristics have been identified as important factors in making an assessment. According to Sundar's (2008) MAIN (Modality, Agency, Interactivity, and Navigability) model of online credibility, the mere presence of technological affordances triggers heuristics and allows readers to make snap judgments about credibility, resulting in less systematic processing. Sundar (2008) outlines a number of different heuristics that are triggered by certain affordances; for example, a large screen on a phone triggers a feeling of realism, and if an advertisement on the phone feels real, it is judged more credible and appealing (K. J. Kim & Sundar, 2016).

Unfortunately, when people use website features (e.g., web design, site complexity, etc.) to assess credibility, they typically don't verify the credibility of the website by attending to the site sponsors or cross-referencing the information with other sites (Flanagin & Metzger, 2007). This reliance on irrelevant cues is a large problem for misinformation because people use the heuristics they believe will inform them about a topic, but they don't exert the effort to test their hypotheses. Additionally, the heuristics used in web credibility assessment are typically superficial in nature and aren't especially relevant for quality assurance (for a review, see Metzger, 2007). It is especially difficult to systematically process news when there are many articles online discussing the same topic from different perspectives. People fall into habits online because, on average, they spend little time on most of the sites they visit (Fogg et al., 2003). Those habits/heuristics make it easy for people to make hasty decisions that go uncorrected and perpetuate the use of those heuristics in the future. There is also evidence that increased use of social media can make someone more susceptible to deception online (Vishwanath, 2014; Vishwanath, Harrison, & Ng, 2018), possibly because heavy users have identified a set of heuristics that they use to assess trustworthiness but that are not diagnostically useful.

HEURISTICS FOR CREDIBILITY ASSESSMENT OF ONLINE NEWS

The MAIN model (Sundar, 2008) identifies 29 different heuristics triggered by different technological affordances, but only a few are important for assessing online news. Of the four affordances (modality, agency, interactivity, and navigability), agency cues seem the most relevant. The agent of the information is commonly its source, but online the source is sometimes ambiguous. The source could be the website/application, news outlet, editor, author, or possibly the person being quoted/polled. Consequently, to assess the credibility of the news source, the MAIN model posits two possible heuristics, the authority heuristic and the bandwagon heuristic.

The premise behind the authority heuristic is that people will assign expertise and trust to a source who is perceived as an authority, such as an email that has a company logo or a person wearing a uniform (Milgram, 1963). In the context of false news, the news outlet associated with an article, or even the person sharing the news online, can carry certain authoritative weight or lack thereof. When processing information heuristically, the reader may not attend to the argumentation and content of the article, and instead assume credibility based on the publisher, whom they have learned to (dis)trust. Fogg and colleagues (2003) found that source reputation influences judgements of credibility, such that sources with low reputations are considered untrustworthy or incompetent. There are a number of outlets that have been classified as "fake" or untrustworthy by fact-checkers, which should be informative to news consumers (Allcott & Gentzkow, 2017). However, there is much variance in which outlets people view as reliable sources regardless of what fact-checkers say (Pew Research Center, 2014). This makes identifying certain sources as "reliable" or "trusted" difficult (Vosoughi et al., 2018). Even the search-engine algorithm Google uses to rank search hits relies on human ratings of website trustworthiness, which may be flawed (Nakashima, 2018).

The other relevant heuristic the MAIN model discusses is the bandwagon heuristic. When people experience uncertainty, they look to others for guidance (Asch, 1956). Online, uncertainty can come in the form of information overload; therefore, people will look to the available information to help them decide. On social media sites, this could be the number of people who have liked or shared an article with others. When someone is given a count of the people who responded to a particular article, that count becomes a seemingly valuable cue for credibility. If thousands of people shared an article with others, then there is no way that they could all be wrong, right? Unfortunately, we know that false information is diffused through a social network more frequently and faster than true information (Vosoughi et al., 2018), so social consensus should not be a reliable credibility cue. However, people still use this heuristic to gauge credibility, although they do not consider all opinions equally. When the identity of others is salient, it can change the directional influence of the bandwagon heuristic.

According to the literature on referent informational influence (Hogg & Turner, 1987), an individual in a salient in-group (classmate, coworker, etc.) is more influential than one in an out-group, even in mediated communication (Lee, 2007). Thus, if an in-group member endorses a news article, its perceived credibility should increase relative to no endorsement, and should decrease if it is endorsed by an out-group member. Also, when beliefs and identity are threatened by counter-attitudinal information, people sometimes strengthen their views and attitudes, even if they have been given correct information (Munro, 2010; Nyhan & Reifler, 2010). Therefore, if an oppositional group endorses an article that is counter-attitudinal, the person can "double down" and dismiss the article as false, regardless of its actual veracity. However, if the identity of others is ambiguous (e.g., a ticker that tracks the total number of people who share an article online), then the bandwagon heuristic should lend the article a moderately positive degree of credibility.

Although the MAIN model provides some insight into the heuristics that people use when assessing credibility, there are several others not described by the model. Metzger and colleagues (2010) used focus groups to uncover credibility assessment strategies people use online. In addition to the heuristics discussed above, people also judge credibility via the self-confirmation and expectancy violation heuristics, described below.


According to cognitive dissonance theory, people prefer their beliefs and actions to stay aligned, and when they are not, people will engage in rationalization to realign them (Festinger, 1957). For example, consider a frequent drinker who thinks there are few negative consequences to this behavior. When he encounters an article about medical research indicating that frequent alcohol consumption has serious negative consequences, he experiences dissonance due to the disconnect between his behavior and the research findings. To relieve this uncomfortable state, he could do one of several things: 1) drink less in light of the finding; 2) downplay his behavior as merely moderate drinking, thus diminishing the personal relevance of the finding; or 3) introduce a third belief that allows his behavior and the finding to coexist, such as a suspicion that the research was flawed, biased, or premature. Research on the public's understanding of science has found that people frequently choose the third option and discount scientific evidence if it disagrees with previously held beliefs (Munro, 2010).

The self-confirmation heuristic is a quick way to sift through different perspectives online and choose the pieces of information that can be "trusted." The least cognitively taxing dissonance reduction technique is to introduce a new belief that allows for coexistence. This new belief can come in the form of denial when one is confronted by disconfirming information. One reason people deny the credibility of new information is that discounting it does not require any flexibility in the beliefs one holds. Changing one's behavior requires time and effort that would otherwise be saved, while changing the current belief to coincide with the new evidence is an attack on the self, something that would require amending one's identity. This is not to say that the latter two strategies never occur, but they are harder to implement.

The ubiquity of avoiding disconfirming information can be seen in previous work on confirmation bias and how people seek out confirmatory information to help support their currently held beliefs (Nickerson, 1998). More recently, a meta-analysis found that people will avoid contradictory information to support their current beliefs, unless they are motivated to be accurate (Hart et al., 2009). When accuracy is a salient goal that individuals strive for, they are motivated to protect that view of the self by engaging in accurate decision making. Online, the self-confirmation heuristic is especially relevant because people have little time and motivation to seek out additional information, which leads them to selectively seek out information that confirms their attitudes and beliefs (Fischer, Jonas, Frey, & Schulz-Hardt, 2005).

People read numerous posts on social media and strive to assess this information quickly and efficiently. When they come across information that aligns with their beliefs, using the self-confirmation heuristic is a painless way of judging credibility. However, it does put them at risk of overreliance on misinformation. Much of the reason for accepting misinformation is that it confirms one's beliefs (Lewandowsky et al., 2012), and because people seek out confirming information, they are more likely to share false news with others (Vosoughi et al., 2018).

In a similar vein, there is growing evidence that partisanship is related to susceptibility to misinformation (e.g., Knobloch-Westerwick, Mothes, & Polavin, 2017). Anger seems to motivate individuals to accept information that supports their political party, leaving people especially vulnerable to false news that is meant to tug on the emotions of the public (Weeks, 2015). Although there is some evidence that conservative-leaning individuals are more susceptible to trusting unreliable news sources (Guess, Nagler, & Tucker, 2019; Pennycook & Rand, 2019), partisan bias (i.e., favoring information about one's own party) has been found to be fairly equal across the political spectrum (Ditto et al., 2019). This type of bias is especially concerning because political false news is more difficult to correct than false news on any other subject (Walter & Murphy, 2018). It is therefore important to consider political bias as separate from confirmation bias.

The other news credibility evaluation strategy Metzger and colleagues (2010) discuss is the expectancy violation heuristic. When people have their expectations violated in an interaction, they draw conclusions from those violations (Burgoon, Stern, & Dillman, 1995). For example, you may view people as interpersonally incompetent when they crack jokes that are situationally inappropriate. Although Metzger and colleagues discuss online violations as typically damaging to credibility (e.g., a website that is unexpectedly unresponsive), it is also possible for violations to have a positive effect (e.g., a website that is unexpectedly polished and professional looking). With false news, the violation can be affectively negative, such as a story about a well-liked celebrity being accused of sexual assault, but this negative violation could have a positive effect on diffusion. In fact, high arousal in general, negative or positive, has been found to drive information diffusion online (Berger & Milkman, 2012). Even specific types of arousal can drive diffusion of false news. People share false news with their social networks more frequently when the information is surprising or disgusting (Vosoughi et al., 2018), and emotionally arousing language, such as words that invoke feelings of morality, can drive people to share those messages (Brady, Wills, Jost, Tucker, & Van Bavel, 2017). These affective violations can be powerful drivers of misinformation diffusion. In fact, it has been documented that emotional arousal affects how people perceive misinformation (Weeks, 2015). News can induce affective arousal in a number of ways, one of which is moral-emotional language. Brady and colleagues (2017) found that the presence of moral-emotional words can boost social media post diffusion substantially. If people rely on these heuristics, as research suggests, then these heuristics should predict detection and diffusion.

As research on online credibility and misinformation continues, we are gaining a better understanding of why people believe and share false news. The literature on online credibility assessment points to a number of features of false news, outlined above, that may lead to higher diffusion rates than true news. The proposed set of studies will test the utility of these features in predicting the sharing and detection of misinformation. These features include political and belief congruence, language use, and social consensus.

Formally,

H1: As the frequency of features in a news story that promote diffusion (political congruence, belief congruence, moral and emotional language, and social consensus) increases, a) the greater the likelihood that people will intend to share/like it, and b) the greater the likelihood that they will identify it as true.

Although there is evidence that the aforementioned features have their own effects on credibility and diffusion, their impact relative to one another is unclear. Warranting theory posits that when people evaluate others online, they prioritize evidence that is not easily manipulated by the person being judged (Walther & Parks, 2002). For example, when people assess someone's attractiveness based on a social media profile, features generated by the profile owner, such as personal posts about their interests or activities, are considered less diagnostic than features generated by others, such as positive comments by the owner's friends (Walther, Van Der Heide, Hamel, & Shulman, 2009). People exhibit this preference because self-generated cues can be altered to reflect oneself more positively, while altering other-generated cues is less feasible (although still possible).

Regarding false news, certain features should have more value than others when assessing credibility because they are harder for the author to manipulate. Social consensus could be one of those cues. It would be difficult for a news outlet to artificially inflate the number of retweets, likes, or shares of a story, unless it used tools such as social bots. On the other hand, cues such as moral-emotional language are more easily manipulated by the author and should therefore have a smaller impact on believability. With the audience in mind, an author could choose buzzwords s/he believes will elicit a strong response. While belief and political congruence are technically author-generated cues when it comes to false news, it is unclear whether social media users will view and treat them as such. As this phenomenon has been heretofore unexplored and warranting theory can only predict the effects of some features, the following research question is posed:

RQ: How will story diffusion features differ from one another in their effects on false news sharing and detection?


Chapter 3: Base-Rates and False News Detection

DECEPTION DETECTION AND BASE-RATES

Recent research on false news detection has typically focused on computational approaches, such as machine learning, while research on message features of false news is still in its infancy. Although there is much work currently being done in this area, one crucial issue that has not been explored heretofore is the impact of base-rate (signal frequency in a sample) on false news detection.

In a typical deception detection study, experimenters use a base-rate of 50% – i.e., half of the statements being assessed are truthful and half are deceptive (e.g., Levine et al., 2013). Doing this allows for a simple and interpretable calculation of detection accuracy. Although this may seem trivial, Levine and colleagues (2006) make a compelling argument that a 50% base-rate artificially inflates deception detection rates and deflates truth detection rates. The results are artificial because people do not naturally tell a lie for every truth. A more realistic estimate is that, on average, people tell one lie in every four interactions they have (DePaulo et al., 1996). If the base-rate in an experiment is 50% (as opposed to the more realistic deception base-rate of 25%), then participants will have more opportunities to detect lies and fewer opportunities to detect truths, thus inflating deception detection and deflating truth detection.

Generally, people are only slightly better than chance at detecting deception with a 50% base-rate (54%; C. F. Bond & DePaulo, 2006). Although there are several possible explanations for this, Levine argues that it happens because people (1) rely heavily on unreliable cues and (2) are typically trusting of others (Levine, 2014). The latter argument is contingent upon the concept of truth bias, whereby people have a tendency to believe others because the majority of people are honest most of the time (DePaulo et al., 1996; Serota et al., 2010; Serota & Levine, 2015). The ubiquity of honesty should be unsurprising because a society with frequent deception could not function efficiently amid constant mistrust of others. Imagine a typical day, but you go about it suspicious of everyone. Is the cashier at the grocery store charging you extra? Is your colleague lying about what she had for lunch? Is your partner cheating on you? That would make for an exhausting, depressing existence. Fortunately, most people do not sustain such high levels of suspicion; instead, their default is to trust, and they respond with skepticism only when they perceive a motive to lie (Levine et al., 2010). This default induces what is known as a "veracity effect" in deception detection studies, whereby people are more likely to correctly identify truths than lies (Levine et al., 1999).

Park and Levine (2001) hypothesized that as the number of truths increases relative to the number of lies, a person's ability to discriminate between truths and lies will increase. A test of the Park-Levine probability model found support in interpersonal face-to-face deception (Levine et al., 2006), but the model remains unexplored in mediated communication in general and false news detection in particular. As in interpersonal deception detection, people are not particularly good at identifying misinformation online (Ott, Choi, Cardie, & Hancock, 2011), and although automated techniques are being developed (Conroy et al., 2015), they are in their infancy. It is important to determine whether the Park-Levine probability model can predict false news detection because it could provide scholars with a robust theoretical framework for investigating multimodal deception. Therefore, the following hypothesis is posed:

H2: False news detection accuracy is positively correlated with the true news base-rate, such that as the proportion of true news increases, so will detection accuracy.


Chapter 4: Method

STIMULUS FALSE NEWS STORIES

Stimulus news stories were collected from the fact-checking organizations Snopes.com and FactCheck.org (a project of the Annenberg Public Policy Center at the University of Pennsylvania). Both organizations use a rigorous methodology to investigate the veracity of news articles. They initially contact the source of an article to gather additional information about questionable claims. Subsequently, they consult experts on the topic and review related publications, prioritizing those that are produced by government sources, are non-partisan, and/or are peer-reviewed. All sources used to evaluate a story are identified in an appendix to the fact-checking report so that readers may check the references themselves. Other evidence (e.g., interviews, open-access videos, etc.) is presented within the article itself.

Forty articles identified by fact-checkers as "false" or "true" were chosen as stimuli for both studies. Articles assigned veracity gradations (e.g., "mostly false") were avoided in an effort to decrease the likelihood of ambiguity regarding the chosen articles' veracity status. Only articles with extant webpages were retained. Article images, headlines, and summaries were copied and used as stimuli in the study. The topics for both false and true articles include politics (16), science (12), and health (12).

The topics are in different proportions to allow for an even distribution of political issues across partisanship, while keeping the distribution across the three topics as equal as possible when a subset of 20 articles was chosen for the different conditions. Of the political articles, there was an even split between those favoring liberal and those favoring conservative orientations. For example, an article that falsely claimed that President Trump is a murderer was considered a liberal-leaning story (Newport & Dugan, 2017). The articles were formatted to resemble how they would appear on Facebook, which included an image, article headline, and article summary (see Appendix A for an example).

STORY DIFFUSION FEATURE CODING

Social Consensus

Social consensus is a measure of how many people liked and/or shared a particular news article. Facebook and Twitter provide "tickers" underneath the body of a post that inform viewers how many people have engaged with it. These tickers are dynamic and can vary depending on the visibility of the post. This is the single feature that was manipulated to test its effect. The number of likes, shares, and comments for an article was randomly chosen with a maximum of 1,000 for each to keep engagement in a plausible range, as most people have fewer than 1,000 friends (Smith, 2014).
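A minimal sketch of how such ticker values could be generated is shown below; the uniform draws and the dictionary structure are illustrative assumptions rather than a description of the actual stimulus-construction procedure, which only specifies the 1,000 maximum.

```python
import random

def generate_ticker(max_count: int = 1000) -> dict:
    """Randomly draw like/share/comment counts for one mock Facebook post.

    Sketch only: the study caps each count at 1,000; the uniform
    distribution here is an assumption, not a documented detail.
    """
    return {
        "likes": random.randint(0, max_count),
        "shares": random.randint(0, max_count),
        "comments": random.randint(0, max_count),
    }

print(generate_ticker())  # e.g., {'likes': 412, 'shares': 87, 'comments': 630}
```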

Moral-Emotional Language

Using word lists created by Brady and colleagues (2017), the word counts for moral, emotional, and moral-emotional language were tallied for each article headline and summary. Both moral (e.g., duty, defend, exploit; n = 329) and emotional (e.g., fear, abusive, cheerful; n = 819) words were identified by previous research (Graham, Haidt, & Nosek, 2009; Tausczik & Pennebaker, 2010), while moral-emotional words (e.g., hate, abandon, evil; n = 159) were words that appeared on both lists. All articles, both false and true, had no more than one moral-emotional word within the headline or summary, which meant that moral-emotional words could not be treated as a continuous variable. Instead they were treated dichotomously, with the presence of a moral-emotional word coded as 1 and absence as 0.
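The coding described above amounts to a dictionary count followed by dichotomization. A minimal sketch is below; the word set shown is a tiny stand-in for the Brady et al. (2017) dictionaries, and the simple regex tokenizer is my assumption, not the tool actually used.

```python
import re

# Tiny stand-in excerpt; the actual Brady et al. (2017) lists contain
# 329 moral, 819 emotional, and 159 moral-emotional words.
MORAL_EMOTIONAL = {"hate", "abandon", "evil"}

def has_moral_emotional_word(headline: str, summary: str) -> int:
    """Return 1 if any moral-emotional word appears in the headline or summary, else 0.

    Dichotomous coding mirrors the text: no article contained more than one
    moral-emotional word, so presence/absence is the variable of interest.
    """
    tokens = re.findall(r"[a-z']+", f"{headline} {summary}".lower())
    return int(any(token in MORAL_EMOTIONAL for token in tokens))

print(has_moral_emotional_word("Senator accused of evil cover-up", "Critics respond."))  # 1
```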

Belief Congruence

Independent judges (n = 35) coded belief congruence and political congruence. For belief congruence, I identified the main claim(s) each article posited. For example, an article headline stating "President Trump lied during a speech" was reduced to "President Trump lies often." After all claims were identified and summarized into brief belief statements (e.g., "President Trump lies often"), coders were asked to report the degree to which each article headline and summary was consistent with the identified belief statements on a 7-point scale (-3 = Strongly Inconsistent to 3 = Strongly Consistent). Two to four different claims were generated for each article that coders rated, of which the one receiving the strongest (in)consistency rating was selected for the main study. Note that inconsistency was described to coders as a claim that was the opposite of what the article was positing. For example, if an article headline and summary described how a self-driving car hit and killed a woman in Phoenix, then an inconsistent statement would be "self-driving cars are safe" while a consistent statement would be "self-driving cars are dangerous." These consistency ratings were used to calculate belief congruence in the main studies. A total of 30 belief statements were chosen to measure participants' beliefs (Table 1).

To create the belief congruence scores, each participant rated the battery of 30 belief statements at the end of the main studies to avoid response contamination. Participants were asked to report the degree to which they agreed with the belief statements on a 7-point Likert-type scale (-3 = Strongly Disagree to 3 = Strongly Agree). Each participant belief item was paired with its corresponding article consistency score, allowing for a calculation of congruence. If an article was congruent with a participant's belief, that relationship was coded as 1, and if it was incongruent, it was coded as 0. In addition to this dichotomous measure, a continuous measure of congruence was created by subtracting the participant's belief rating from the article's consistency score. The result was a measure of the degree to which a participant's personal belief was inconsistent with the claim the article was making (0 = Perfectly Consistent to 6 = Strongly Inconsistent).
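As a concrete illustration of the scoring, the sketch below pairs a participant's belief rating with an article's coder-assigned consistency score. Treating the continuous measure as an absolute difference (so that it spans 0 to 6) and using a sign-match rule for the dichotomous code are my assumptions about how the two ratings were combined, not documented implementation details.

```python
def belief_congruence(participant_rating: int, article_consistency: int) -> dict:
    """Score belief congruence for one participant-article pair.

    Both inputs are on the -3..3 scales described in the text. The dichotomous
    code (1 = congruent) and the 0-6 discrepancy are sketched under stated
    assumptions; ratings of 0 are treated as incongruent here (assumption).
    """
    congruent = int(participant_rating * article_consistency > 0)  # same direction -> congruent (assumption)
    discrepancy = abs(article_consistency - participant_rating)    # 0 = perfectly consistent, 6 = strongly inconsistent
    return {"congruent": congruent, "discrepancy": discrepancy}

# A participant who somewhat agrees (+2) that "self-driving cars are dangerous"
# reading an article coders rated strongly consistent (+3) with that belief:
print(belief_congruence(2, 3))  # {'congruent': 1, 'discrepancy': 1}
```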


Table 1. Belief Items

Alternative medicine (e.g., herbal remedies) can cure illnesses. (M = 0.45, SD = 1.49)
Animals can possess human-like traits. (M = 1.69, SD = 1.03)
Barack Obama makes smart decisions. (M = 0.98, SD = 1.24)
Conservatives are moral people. (M = 0.40, SD = 1.37)
Conservatives are rational people. (M = 0.27, SD = 1.36)
Conspiracy theorists are mentally unstable. (M = -0.67, SD = 1.27)
Doctors are moral people. (M = 1.10, SD = 1.09)
Dogs can experience human-like emotions. (M = 1.77, SD = 1.04)
Eating raw fish is dangerous. (M = -0.21, SD = 1.53)
Fluoride is healthy for human consumption. (M = -0.06, SD = 1.48)
Food manufacturers focus on profit over quality. (M = 1.22, SD = 1.19)
Fracking is harmful to the environment. (M = 1.44, SD = 1.27)
Humans can survive without food for months at a time. (M = -1.36, SD = 1.55)
Liberals are moral people. (M = 0.76, SD = 1.17)
Liberals are rational people. (M = 0.58, SD = 1.20)
Medical marijuana use is dangerous. (M = -1.51, SD = 1.37)
National debt reduction is important. (M = 1.56, SD = 1.08)
Non-scientists can do science. (M = 0.58, SD = 1.53)
Politicians are moral people. (M = -0.51, SD = 1.22)
President Trump is a moral person. (M = -1.35, SD = 1.61)
President Trump makes smart decisions. (M = -1.21, SD = 1.68)
Recreational drug use is dangerous. (M = 0.21, SD = 1.63)
Renewable energy is important. (M = 2.31, SD = 0.89)
Scientific research is important. (M = 2.44, SD = 0.82)
Surgeries are dangerous. (M = 0.39, SD = 1.47)
The attracts talent. (M = 1.39, SD = 1.18)
There are several threatening volcanoes in the United States. (M = 0.21, SD = 1.39)
Tick bites are dangerous. (M = 1.16, SD = 1.21)
Vaccines are safe to use. (M = 2.11, SD = 1.15)
Young people show little respect toward others. (M = -0.58, SD = 1.39)

Note. Items were rated on a 7-point Likert-type scale anchored at (-3) Strongly Disagree to (3) Strongly Agree. All participants (n = 526), for both main studies, answered the same battery.

Political Congruence

The same independent coders described above were asked to evaluate the partisanship of each article. Coders were asked, "Which political party members are likely to share the above article with friends and family?" They were given a choice between Democrats, Republicans, Both, and Neither. The most common response for each article (i.e., the mode) was used to determine the partisanship of the article. Next, participants from the main studies were asked to report their own political orientation on a 7-point scale (1 = Strongly Liberal to 7 = Strongly Conservative). Finally, a variable was created that matched the participant partisanship with the article partisanship. For example, if a participant identified as Slightly Liberal, every article that was coded as Democrat or Both was considered politically congruent (coded 1). For the same Slightly Liberal participant, if an article was coded as Republican or Neither, then it was considered politically incongruent (coded 0).

The same procedure was implemented for all points on the political orientation scale except when a participant identified as Neither Liberal nor Conservative. For these participants, if an article was coded as Democrat, Republican, or Both, it was considered incongruent because the individual does not subscribe to either party. However, if the article was coded as Neither, it was considered missing data because it was unclear whether the article and participant shared political beliefs. These missing data were fairly uncommon, making up only 3.5% of the cases across both studies.
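The matching rule reads as a small lookup, sketched below. Mapping the 7-point orientation scale onto liberal/neither/conservative groups at the midpoint and returning None for missing data are my assumptions about implementation details; the congruence rules themselves follow the description above.

```python
from typing import Optional

def political_congruence(orientation: int, article_party: str) -> Optional[int]:
    """Code political congruence for one participant-article pair.

    orientation: 1-7 scale (1 = Strongly Liberal, 4 = Neither, 7 = Strongly Conservative).
    article_party: coders' modal label, one of "Democrat", "Republican", "Both", "Neither".
    Returns 1 (congruent), 0 (incongruent), or None (missing, per the Neither/Neither rule).
    """
    if orientation < 4:   # any liberal-leaning participant
        return int(article_party in {"Democrat", "Both"})
    if orientation > 4:   # any conservative-leaning participant
        return int(article_party in {"Republican", "Both"})
    # Neither Liberal nor Conservative participants
    if article_party == "Neither":
        return None       # treated as missing data
    return 0              # partisan or "Both" articles are incongruent for non-partisans

print(political_congruence(3, "Democrat"))   # 1 (Slightly Liberal x Democrat)
print(political_congruence(4, "Neither"))    # None (missing)
```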

CONTROL VARIABLES

Several measures were used to assess participants' general perceptions of news credibility, social media use, and news consumption. All measures employed a 7-point Likert-type scale. Participants' perceptions of news credibility were assessed with a 5-item measure developed by Meyer (1988) probing the degree to which they view online news as fair, accurate, unbiased, trustworthy, and telling the whole story (α = .85; 1 = Strongly Disagree, 7 = Strongly Agree).

To measure social media use, participants reported their frequency of use for Facebook, Twitter, Reddit, and Instagram (1 = Never, 2 = Once a year, 3 = A few times a year, 4 = A few times a month, 5 = A few times a week, 6 = Once a day, 7 = Multiple times a day). They also reported the frequency of different behaviors they engaged in on social media, including how often they share, like, post, and comment/reply (1 = Never, 7 = Multiple times a day). Finally, they reported where they typically get their news, including television, social media, news apps (e.g., Google News, Apple News), or other sources (1 = Never, 7 = Multiple times a day).

PROCEDURE

Participants were randomly assigned to one of two studies (diffusion or detection evaluation). To test the effects of news features on intended diffusion behavior, one set of instructions asked participants to look through a mock Facebook feed (see Appendix B for an example) that displayed 20 news article posts (half true and half false) and to indicate which articles they would share or like, using the criteria they normally apply when glancing through their own Facebook feed. Allowing participants to like articles as well as share them provides a more comprehensive look at false news diffusion, because Facebook posts that are liked by friends appear on one's newsfeed just as shared posts do. Therefore, both sharing and liking an article are treated as measures of potential diffusion in this study, but they are reported separately to allow comparisons between the two behaviors. The 20 articles were a random subset of the 40 articles collected, with the topics of politics (8), science (6), and health (6) roughly equally represented so that political articles could be split evenly between liberal and conservative perspectives. The order of article posts was randomized to prevent order effects.

Participants assigned to the detection study were randomly assigned to 1 of 11 conditions with varying base-rate combinations of true to false news (i.e., 20:0, 18:2, 16:4, 14:6, 12:8, 10:10, 8:12, 6:14, 4:16, 2:18, 0:20). A procedure analogous to the one Levine et al. (2006) used to select true and false statements was used to randomly select articles for each condition. For example, starting with the all-true condition, two true articles were selected at random to be excluded from the next condition, while two false articles were selected at random to be included, resulting in an 18:2 ratio. The same procedure was repeated for the remaining conditions until the final condition contained only false articles. All participants in a particular condition saw the same articles (e.g., all participants in the 18:2 condition saw the same 18 true and 2 false articles). The same type of mock Facebook feed was used as in the diffusion paradigm, with the ordering of the posts randomized. Participants were instructed to read through the articles and determine whether each was real or false; underneath each article they indicated whether they believed the story to be false or true.

After participants finished reviewing the mock newsfeed for their respective study, they were asked to complete an interpolated task. This task was an internet scavenger hunt consisting of seven search queries (e.g., "find a cheap flight to New York") and was locked for 5 minutes to prevent premature submissions (see Appendix D for all scavenger hunt questions). The interpolated task provided a buffer between the newsfeed task and the measures that followed, including the personal belief statements. The measures included perceived news credibility, news exposure, and social media use/engagement.

Finally, after answering several demographic questions, participants completed the personal belief item battery, were thanked for their participation, and compensated with course extra credit.


Chapter 5: Results

DIFFUSION OF FALSE NEWS

Participants

There were 201 (78.6% female) participants with an average age of 20.29 (SD = 2.1) in the diffusion paradigm. These participants were recruited from several communication studies courses at a large southwestern university for the opportunity to receive extra credit. The average time spent on the sharing task was 4.66 minutes (SD = 8.42). Thirty-five participants were excluded from data analysis because they spent less than 30 seconds or more than 120 minutes on the task. The lower limit of 30 seconds was chosen because a typical person can reasonably be assumed to look through a 20-post Facebook feed with a moderate level of attention in that time, allowing about 1-2 seconds to determine whether a post is relevant and warrants further attention. The upper limit of 120 minutes was chosen to exclude people who were "multitasking" during the study or abandoned it after starting.

As indicated in Tables 2 – 4, participants reported being skeptical of online news articles in general (measured with the news credibility scale), mostly received their news from social media, and used Instagram the most, followed by Snapchat, Facebook,

Twitter, and finally Reddit. Regarding social media engagement (Table 5), they unsurprisingly engaged in observation most often, with liking following close after, while commenting, sharing, and posting were less frequent.


Table 2. Perceived News Credibility in Diffusion Study

Item | Disagree | Unsure | Agree
Fair | 40% | 25% | 35%
Accurate | 35% | 18% | 47%
Unbiased | 83% | 9% | 8%
Tells whole story | 75% | 15% | 10%
Trustworthy | 37% | 30% | 33%
Note. Items were on a 7-point Likert-type scale, but for interpretability the responses on either side of the middle point were collapsed.

Table 3. News Exposure by Medium in Diffusion Study

Medium | Never | Once a year | Multiple times a year | Multiple times a month | Multiple times a week | Once a day | Multiple times a day
Social Media | 4% | 1% | 1% | 7% | 18% | 15% | 54%
TV | 16% | 5% | 25% | 29% | 13% | 7% | 5%
News App | 20% | 3% | 8% | 17% | 17% | 18% | 17%
Newspaper | 50% | 13% | 21% | 9% | 2% | 4% | 2%
Note. Media are in descending order of popularity determined by averaged exposure.


Table 4. Social Media Site Use in Diffusion Study

Platform | Never | Once a year | Multiple times a year | Multiple times a month | Multiple times a week | Once a day | Multiple times a day
Instagram | 6% | 1% | 1% | 3% | 5% | 8% | 77%
Snapchat | 10% | 0% | 2% | 4% | 5% | 9% | 71%
Facebook | 8% | 2% | 2% | 10% | 13% | 23% | 42%
Twitter | 25% | 3% | 7% | 7% | 6% | 6% | 47%
Reddit | 56% | 8% | 9% | 8% | 5% | 4% | 10%
Note. Platforms are in descending order of popularity determined by averaged use.

Table 5. Social Media Site Engagement in Diffusion Study

Behavior | Never | Once a year | Multiple times a year | Multiple times a month | Multiple times a week | Once a day | Multiple times a day
Observation | 1% | 1% | 1% | 2% | 9% | 4% | 84%
Liking | 3% | 0% | 6% | 8% | 13% | 8% | 64%
Comment | 9% | 5% | 12% | 23% | 28% | 10% | 13%
Sharing | 20% | 7% | 11% | 18% | 21% | 6% | 16%
Posting | 19% | 7% | 24% | 23% | 16% | 5% | 6%
Note. Behaviors are in descending order of popularity determined by averaged engagement.

Diffusion and News Features

To test whether news features predict false news diffusion, several repeated measures logistic regressions were conducted. A generalized estimating equation (GEE) approach was used to account for the interdependent responses of the repeated measures design, in which each participant saw 20 articles (Liang & Zeger, 1986), which a traditional logistic regression cannot accommodate. The model was logistic with an independent working correlation matrix.

Based on their responses to the social media engagement items, participants who do not engage online at all (i.e., those who chose Never for every behavior) were excluded from the analysis. This resulted in a total sample size of 197, with only 4 excluded for lack of online engagement. False and true news articles were initially examined together, followed by two more tests in which each was considered separately. Political congruence, belief congruence, moral-emotional language in the summary, and social consensus indicators (i.e., article likes, shares, and comments) were entered as the predictors of interest, with social media news consumption, Facebook use, and perceived news credibility as control variables. Note that headline moral-emotional language was not entered because none of the selected articles had headlines containing moral-emotional words. Additionally, no false news summaries contained moral-emotional language, so that variable was excluded from the model examining only false news articles.
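The models reported below were fit in SPSS. For readers who prefer code, the following is a rough Python/statsmodels sketch of a comparable GEE specification; the data file and column names are hypothetical placeholders, not the original variable names.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x article, with the
# binary outcome (liked), the news-feature predictors, and the control variables.
df = pd.read_csv("diffusion_long.csv")  # hypothetical file

model = smf.gee(
    "liked ~ pol_congruent + belief_congruent + summary_moral_emotional"
    " + article_likes + article_shares + article_comments"
    " + sms_news + facebook_use + news_credibility",
    groups="participant_id",                      # repeated measures per person
    data=df,
    family=sm.families.Binomial(),                # logistic link
    cov_struct=sm.cov_struct.Independence(),      # independent working correlation
)
result = model.fit()

# Odds ratios with 95% CIs (exponentiated coefficients)
odds_ratios = np.exp(result.params)
conf_int = np.exp(result.conf_int())
print(pd.concat([odds_ratios, conf_int], axis=1))

# QIC/QICu for comparing model fit (Pan, 2001); requires a recent statsmodels
print(result.qic())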

All four news features had at least some effect on intended article diffusion. In comparing the model fit for the different tests, the Quasi-likelihood under Independence

Model Criterion (QIC) was smaller for the models that looked at false and true news separately, as opposed to together, indicating better fit (Pan, 2001). For all fit statistics and odds ratios, refer to Table 6 for liking and 7 for sharing. Participants were more prone to liking and sharing an article, regardless of veracity, if it matched their political orientation (ORlike = 1.57, p < .01; ORshare = 2.92, p < .001) and their personal beliefs


(ORlike = 2.47, p < .001; ORshare = 1.54, p < .05). Interestingly, the effect of belief congruence on diffusion (both liking and sharing) seems to apply only to false news articles, because the effect disappears when true news is tested separately. Political congruence is consistent across false and true news for sharing behavior, but for liking it seems to apply only to false news articles. Additionally, if a moral-emotional word was present in a true news article summary, participants were more likely to report liking and sharing it (ORlike = 25.14, p < .001; ORshare = 4.35, p < .01).

Finally, indicators of social consensus had a significant, albeit small, effect on intended diffusion. The more likes an article had, the more likely it was for a participant to like and share a false news article (ORlike = 1.00, p < .01; ORshare = 1.00, p < .05). Note that the odds ratios are significantly different from 1.00, yet still reported as 1.00. This is because the software used to calculate the ratios (i.e., SPSS) rounded down. This also means that the actual estimated odds ratio is slightly above 1.00 (e.g., 1.0002), which translates to a very small effect. It appeared that participants noticed how many likes a false news article had, but did not put much emphasis on this factor when deciding whether to like or share it. On the other hand, the number of comments an article had negatively affected the likelihood it would be liked or shared, but only for true news articles (ORlike = 0.99, p < .001; ORshare = 0.95, p < .01). Number of comments did not have a significant effect on false news. Regarding the control variables, the more often a participant consumed news on social media, the higher the likelihood that they reported liking an article (ORlike = 1.20, p < .05), regardless of its veracity. Also, increased


Facebook use had a negative effect on liking, but only for true news (ORlike = 0.86, p <

.05), while perceived online news credibility had no significant effect on diffusion.
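As a back-of-the-envelope illustration of the point made above about rounded odds ratios, a per-like odds ratio of roughly 1.0002 (the author's example of a value that rounds to 1.00) compounds multiplicatively with the number of likes. The arithmetic below is illustrative only and does not use study data; it shows that the per-unit effect is tiny and becomes appreciable only at very large like counts.

import math

beta = math.log(1.0002)          # coefficient implied by the example odds ratio
for likes in (100, 500, 1000):
    # cumulative multiplicative change in the odds for `likes` additional likes
    print(likes, round(math.exp(beta * likes), 3))
# 100 -> ~1.02, 500 -> ~1.11, 1000 -> ~1.22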

Overall, there was support for the hypothesis that news features influence intended diffusion of false news (H1a). Specifically, political and belief congruence exerted the strongest effects, while social consensus (i.e., article likes) produced the weakest effect. Based on the odds ratio confidence intervals, it is safe to conclude that both types of congruence were better predictors of diffusion than social consensus cues, but there was insufficient evidence to claim that belief congruence had a significantly larger effect than political congruence (RQ). When looking at the confidence intervals of these two variables in Table 6, the lower limit for belief congruence is 2.17, while the upper limit of political congruence is 2.43. Because these overlap at 95% confidence, it cannot be claimed that one effect is stronger than the other. Moral-emotional words had a strong effect on true news diffusion, but unfortunately the effect could not be investigated for false news due to the absence of such words in the articles selected.1

1 Because moral-emotional words were underrepresented in the stimuli, moral words alone were also tested, as they are more common in the article headlines and summaries. Moral words did not exhibit a discernible effect on diffusion. Moreover, including them in the model worsened its fit, so moral-emotional words were retained in the model.

Table 6. Liking GEE Logistic Regression Coefficients

Model fit (QIC) | Overall: 2811.14 | False news only: 1476.82 | True news only: 1278.23

Variable | Overall OR [95% CI] | False news only OR [95% CI] | True news only OR [95% CI]
Political Congruence | 1.57** [1.18, 2.09] | 1.69** [1.18, 2.43] | 1.21 [0.84, 1.75]
Belief Congruence | 2.47*** [1.88, 3.23] | 2.95*** [2.17, 4.01] | 1.30 [0.88, 1.91]
Summary Moral-Emotional Language | 2.52*** [1.68, 3.80] | . | 25.14*** [11.60, 54.48]
Article Likes | 1.00** [1.00, 1.01] | 1.00** [1.00, 1.01] | 1.00*** [1.00, 1.00]
Article Shares | 0.99** [0.99, 1.00] | 1.00 [0.99, 1.00] | 1.00 [1.00, 1.00]
Article Comments | 0.99*** [0.98, 0.99] | 1.00 [0.99, 1.00] | 0.95*** [0.94, 0.96]
SMS News | 1.20* [1.04, 1.40] | 1.20* [1.02, 1.41] | 1.23* [1.04, 1.44]
Facebook Use | 0.89* [0.80, 0.99] | 0.92 [0.83, 1.02] | 0.86* [0.76, 0.97]
News Credibility | 0.96 [0.79, 1.19] | 0.95 [0.76, 1.19] | 1.01 [0.80, 1.26]
Note. If the odds ratio is above 1, then as the predictor variable increases (or dichotomous variable is present), the odds of liking an article increases. If the odds ratio is below 1, then as the predictor variable increases, the odds of liking an article decreases. For QIC, the lower the statistic, the better the model fits the data. For predictors that have a significant odds ratio of 1.00, this means the software rounded down, and should instead be considered slightly above 1. *p < .05. **p < .01. ***p < .001.


Table 7. Sharing GEE Logistic Regression Coefficients

Model fit (QIC) | Overall: 1392.32 | False news only: 686.20 | True news only: 727.28

Variable | Overall OR [95% CI] | False news only OR [95% CI] | True news only OR [95% CI]
Political Congruence | 2.92*** [1.96, 4.34] | 2.40** [1.45, 4.00] | 3.07*** [1.69, 5.57]
Belief Congruence | 1.54* [1.12, 2.13] | 2.83*** [1.67, 4.80] | 0.92 [0.57, 1.51]
Summary Moral-Emotional Language | 1.39 [0.68, 2.84] | . | 4.35** [1.73, 10.93]
Article Likes | 1.00 [1.00, 1.00] | 1.00* [1.00, 1.00] | 1.00 [1.00, 1.00]
Article Shares | 1.00 [1.00, 1.00] | 1.00 [1.00, 1.00] | 1.00 [1.00, 1.00]
Article Comments | 1.00 [0.99, 1.00] | 1.01 [1.00, 1.00] | 0.95** [0.97, 0.99]
SMS News | 1.03 [0.86, 1.23] | 1.10 [0.87, 1.37] | 0.99 [0.83, 1.19]
Facebook Use | 0.90 [0.76, 1.07] | 0.89 [0.71, 1.11] | 0.92 [0.78, 1.08]
News Credibility | 0.97 [0.67, 1.43] | 1.05 [0.64, 1.74] | 0.93 [0.66, 1.30]
Note. If the odds ratio is above 1, then as the predictor variable increases (or dichotomous variable is present), the likelihood of sharing an article post increases. If the odds ratio is below 1, then as the predictor variable increases, the likelihood of sharing an article post decreases. For QIC, the lower the statistic, the better the model fits the data. For predictors that have a significant odds ratio of 1.00, this means the software rounded down, and should instead be considered slightly above 1.00. *p < .05. **p < .01. ***p < .001.


DETECTION OF FALSE NEWS

Participants

Similar to the diffusion paradigm, participants were recruited from communication studies courses with the promise of earning extra credit. However, these participants were told that their task was to determine the veracity of the articles in a mock Facebook feed. The nature of the task could have induced heightened levels of suspicion in the participants, which may have affected their responses. A total of 322 (71.7% female) students, with an average age of 20.13 (SD = 1.98), participated in the study. This sample excludes three individuals who spent too little or too much time on the task; the same time cutoffs were used as in the diffusion paradigm (lower limit = 30 seconds, upper limit = 120 minutes). The average time spent on the detection task was 5.52 minutes (SD = 4.42).

Participants were not as skeptical of online news as those in the diffusion paradigm (Tables 8 – 11), but they did believe that online news is biased (M = 2.54, SD = 1.2) and does not tell the whole story (M = 2.75, SD = 1.36). Most received their news from social media, and they used Instagram and Snapchat the most, followed by Facebook, Twitter, and Reddit. As for social media engagement, the pattern of responses mirrored that of the diffusion paradigm; in descending order from most to least popular behavior: observing, liking, sharing, commenting, and posting.


Table 8. Perceived News Credibility in Detection Study

Item | Disagree | Unsure | Agree
Fair | 39% | 21% | 40%
Accurate | 38% | 17% | 45%
Unbiased | 80% | 10% | 10%
Tells whole story | 77% | 12% | 11%
Trustworthy | 41% | 27% | 32%
Note. Items were on a 7-point Likert-type scale, but for interpretability the responses on either side of the middle point were collapsed.

Table 9. News Exposure by Medium in Detection Study

Medium | Never | Once a year | Multiple times a year | Multiple times a month | Multiple times a week | Once a day | Multiple times a day
Social Media | 4% | 1% | 3% | 8% | 18% | 21% | 46%
News App | 22% | 3% | 8% | 16% | 21% | 16% | 15%
TV | 22% | 3% | 24% | 27% | 16% | 5% | 2%
Newspaper | 58% | 11% | 16% | 8% | 3% | 2% | 3%
Note. Media are in descending order of popularity determined by averaged exposure.


Table 10. Social Media Site Use in Detection Study

Platform | Never | Once a year | Multiple times a year | Multiple times a month | Multiple times a week | Once a day | Multiple times a day
Instagram | 8% | 0% | 2% | 4% | 5% | 10% | 72%
Snapchat | 8% | 1% | 3% | 4% | 5% | 9% | 71%
Facebook | 9% | 2% | 6% | 12% | 16% | 18% | 36%
Twitter | 27% | 3% | 7% | 6% | 6% | 8% | 43%
Reddit | 59% | 6% | 12% | 8% | 3% | 3% | 9%
Note. Platforms are in descending order of popularity determined by averaged use.

Table 11. Social Media Site Engagement in Detection Study

Behavior | Never | Once a year | Multiple times a year | Multiple times a month | Multiple times a week | Once a day | Multiple times a day
Observation | 2% | 0% | 0% | 3% | 6% | 4% | 85%
Liking | 3% | 0% | 5% | 8% | 10% | 7% | 67%
Comment | 8% | 5% | 12% | 26% | 28% | 7% | 14%
Sharing | 23% | 5% | 16% | 14% | 20% | 6% | 17%
Posting | 20% | 8% | 24% | 24% | 13% | 4% | 8%
Note. Behaviors are in descending order of popularity determined by averaged engagement.

For those in the offset control condition (n = 92), in which participants saw half false and half true news articles, detection accuracies were calculated to allow for comparisons with previous deception detection research. Overall, participants were not very accurate at differentiating false from true news, with an average accuracy of 56.46% (SD = 10.53%). After separating false and true news, it became clear that they were significantly more accurate at detecting false news (M = 64.49%, SD = 18.21%) than true news (M = 49.24%, SD = 19.40%), as determined with a paired t-test, t(91) = 4.67, p < .001. Additionally, contrary to previous findings related to the truth-bias (Levine et al., 2006), participants believed that false news was more common (M = 57.28%, SD = 15.71%) than true news (M = 42.72%, SD = 15.71%). A one-sample t-test showed that these estimates differed significantly from 50%, t(91) = 4.45, p < .001. These findings may be attributable to general skepticism of online media, which is explored in the base-rate section below.
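The two significance tests just reported can be reproduced with standard routines. The sketch below uses simulated placeholder accuracy scores (not the study data) purely to show the structure of the paired and one-sample tests.

import numpy as np
from scipy import stats

# Simulated placeholder data for the 50:50 control condition (n = 92);
# values are proportions between 0 and 1 and are NOT the study's data.
rng = np.random.default_rng(0)
false_acc = rng.normal(0.64, 0.18, size=92).clip(0, 1)
true_acc = rng.normal(0.49, 0.19, size=92).clip(0, 1)

# Paired t-test: false-news accuracy vs. true-news accuracy within participants
t_paired, p_paired = stats.ttest_rel(false_acc, true_acc)

# One-sample t-test: proportion of articles judged false vs. the actual 50% rate
judged_false = rng.normal(0.57, 0.16, size=92).clip(0, 1)
t_one, p_one = stats.ttest_1samp(judged_false, popmean=0.5)

print(round(t_paired, 2), round(p_paired, 4))
print(round(t_one, 2), round(p_one, 4))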

Detection and News Features

Just as they affected news diffusion, news features affected false news detection. As in the diffusion analysis, several logistic GEE models using an independent working correlation matrix were tested. Note that, unlike the diffusion paradigm, all 40 articles (20 false and 20 true) were used in at least one condition, so moral-emotional language was present in at least one false and one true article.

The GEE models that separated false from true news allowed for a more nuanced understanding of how news features relate to the detection of false news, as opposed to a general understanding of how features affect judgments of veracity. See Table 12 for all odds ratios and fit statistics. Political congruence (ORfalse news = 1.26, p < .01; ORtrue news = 1.33, p < .01) and belief congruence (ORfalse news = 1.37, p < .001; ORtrue news = 1.23, p < .01) both made it more likely that an article would be judged as true, regardless of the article's veracity. Interestingly, moral-emotional language in a headline made a false article more likely to be judged as false (ORfalse news = 0.20, p < .001), thus making it easier to detect, while the corresponding effect for true articles was nonsignificant (ORtrue news = 0.67, p = .11). On the other hand, moral-emotional language in the summary resulted in the article being judged as true, regardless of veracity (ORoverall = 1.75, p < .001). As for social consensus, article likes and shares both had positive effects on believing an article was true (ORs = 1.00, ps < .01), regardless of article veracity. Just as in the diffusion paradigm, social consensus indicators exerted positive effects on the believability of an article, but the effects were small. Article comments, on the other hand, made false articles more likely to be judged as deceptive (ORfalse news = 0.99, p < .001), while making true articles more likely to be judged as accurate (ORtrue news = 1.00, p < .05).


Table 12. Veracity Judgements GEE Logistic Regression Coefficients

Model fit (QIC) | Overall: 8419.39 | False news only: 4076.22 | True news only: 4171.73

Variable | Overall OR [95% CI] | False news only OR [95% CI] | True news only OR [95% CI]
Political Congruence | 1.25*** [1.12, 1.39] | 1.26** [1.08, 1.47] | 1.33** [1.13, 1.56]
Belief Congruence | 1.28*** [1.15, 1.43] | 1.37*** [1.15, 1.62] | 1.23** [1.06, 1.42]
Headline Moral-Emotional Language | 0.42*** [0.33, 0.55] | 0.20*** [0.14, 0.29] | 0.67 [0.42, 1.10]
Summary Moral-Emotional Language | 1.75*** [1.34, 2.28] | 0.85 [0.49, 1.47] | 1.09 [0.77, 1.54]
Article Likes | 1.00*** [1.00, 1.00] | 1.00 [1.00, 1.00] | 1.00** [1.00, 1.00]
Article Shares | 1.00*** [1.00, 1.00] | 1.00** [1.00, 1.00] | 1.00 [1.00, 1.00]
Article Comments | 0.99*** [0.99, 0.99] | 0.99*** [0.98, 0.99] | 1.00* [1.00, 1.01]
SMS News | 1.07** [1.03, 1.12] | 1.08* [1.01, 1.15] | 1.07* [1.01, 1.12]
Facebook Use | 0.99 [0.95, 1.03] | 1.00 [0.95, 1.06] | 0.97 [0.93, 1.02]
News Credibility | 1.03 [0.96, 1.10] | 1.06 [0.96, 1.16] | 1.01 [0.93, 1.10]
Note. If the odds ratio is above 1, then as the predictor variable increases (or dichotomous variable is present), the likelihood of judging an article post as true increases. If the odds ratio is below 1, then as the predictor variable increases, the likelihood of judging an article post as true decreases. For QIC, the lower the statistic, the better the model fits the data. For predictors that have a significant odds ratio of 1.00, this means the software rounded down, and should instead be considered slightly above 1.00. *p < .05. **p < .01. ***p < .001.


Base-Rate Effect on Detection

To test whether true news proportions are related to detection accuracy, a one-way ANOVA with a linear contrast was conducted, as suggested by Levine and colleagues (2006). Specifically, the prediction was that the ratio of true to false news would have a positive, linear relationship with detection accuracy, such that the more true news participants were presented with in their mock Facebook feeds, the better they would be at correctly differentiating false from true news. The linear contrast was significant, which supports the veracity effect, albeit in the opposite direction from what Levine and colleagues (2006) found, F(1, 311) = 9.83, p < .01. Although there was some deviation from linearity, it was not significant (F[9, 311] = 0.44, p = .91), providing no evidence of a curvilinear relationship. Additionally, the correlation between base-rate and detection accuracy was weakly negative but significant, r = -.17, p < .01. See Table 13 for the different conditions and their respective detection accuracies.
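For readers interested in the mechanics of the linear-contrast test, the following sketch computes the contrast F statistic from group means and a pooled error term. The file and column names are hypothetical placeholders; the contrast weights simply encode a straight-line trend across the 11 ordered base-rate conditions.

import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical input: one row per participant with the proportion of true news
# in their condition (base_rate) and their overall detection accuracy.
df = pd.read_csv("detection_accuracy.csv")     # columns: base_rate, accuracy

grouped = df.groupby("base_rate")["accuracy"]  # conditions sorted in ascending order
means = grouped.mean()
ns = grouped.count()
k = len(means)

# Pooled within-group error term (MSE) with N - k degrees of freedom
mse = (grouped.var(ddof=1) * (ns - 1)).sum() / (len(df) - k)

# Centered linear contrast weights (-5, ..., 5) for the 11 ordered conditions
weights = np.arange(k) - (k - 1) / 2
contrast = float(np.dot(weights, means.to_numpy()))
se = np.sqrt(mse * np.sum(weights ** 2 / ns.to_numpy()))
F = (contrast / se) ** 2                       # distributed as F(1, N - k)
p = stats.f.sf(F, 1, len(df) - k)
print(round(F, 2), round(p, 4))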


Table 13. Mean Accuracy by Base-Rate Condition

Base-Rate | Mean Accuracy | SD | 95% CI | Min | Max
0% | .6165 | .1589 | [.5442, .6889] | .37 | .89
10% | .6000 | .1586 | [.5345, .6655] | .32 | .95
20% | .5811 | .1227 | [.5293, .6329] | .37 | .74
30% | .5373 | .1308 | [.4821, .5925] | .26 | .79
40% | .5558 | .1385 | [.4986, .6130] | .37 | .79
50% | .5646 | .1053 | [.5428, .5865] | .32 | .84
60% | .5717 | .1126 | [.5230, .6204] | .40 | .80
70% | .5435 | .1343 | [.4854, .6015] | .25 | .85
80% | .5333 | .0796 | [.4971, .5696] | .40 | .70
90% | .5348 | .0775 | [.5013, .5683] | .40 | .75
100% | .5214 | .1393 | [.4580, .5848] | .20 | .75
Note. Base-rates indicate the proportion of the stimuli that are true news within that condition.

To determine if online news suspicion (measured by the news credibility scale) had an effect on this apparent lie-bias, the data were split between participants who reported being suspicious of online news and those who did not. Specifically, participants were split based on how they viewed news outlet fairness, accuracy, bias, trustworthiness, and willingness to tell the whole story. Across these variables, the negative linear effect of base-rates was not significant (p > .10) for participants who view online news as credible, while the lie-bias remained intact for those suspicious of online news. To examine how true and false news accuracy in the offset control condition (50:50) was influenced by suspicion of online news, zero-order correlations were calculated between true news accuracy and the different facets of news credibility. None of the correlations were statistically significant. One final regression was conducted to determine if the different facets interact with the base-rates to predict detection accuracy. Only perceived fairness in the news had a significant interaction with base-rates. In the first step of the regression, detection accuracy was regressed on base-rate (centered) and perceived news fairness (centered), which was statistically significant, R² = .035, F(2, 319) = 5.81, p < .01. Base-rate was negatively related to detection accuracy (β = -0.17, p < .01), which is consistent with the ANOVA linear contrast reported earlier, while news fairness had no significant effect (β = -0.08, p = .18). In the next step, the interaction term (base-rate × fairness) was entered, which produced a statistically significant change in variance explained (ΔR² = .015, F[1, 318] = 4.93, p < .05). The interaction term was statistically significant (β = 0.12, p < .05), and inspection of the scatterplot suggests that participants who viewed online news as more fair did not experience the same negative effect of base-rate on detection accuracy as more skeptical participants. Overall, there is fairly strong support that, in the context of false news detection, there is a lie-bias (i.e., a default perception that news is false) that resulted in an "inverted" veracity effect, in contrast to the predicted truth-bias.
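The two-step moderated regression can be expressed compactly as follows. The data file and column names are hypothetical, and mean-centering precedes the interaction term as described above.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: accuracy (proportion correct), base_rate (proportion of
# true news in the participant's condition), fairness (perceived news fairness).
df = pd.read_csv("detection_accuracy.csv")

# Center the predictors before forming the interaction term
df["base_rate_c"] = df["base_rate"] - df["base_rate"].mean()
df["fairness_c"] = df["fairness"] - df["fairness"].mean()

step1 = smf.ols("accuracy ~ base_rate_c + fairness_c", data=df).fit()
step2 = smf.ols("accuracy ~ base_rate_c * fairness_c", data=df).fit()

print(step1.rsquared, step2.rsquared)          # change in R^2 across steps
print(step2.params["base_rate_c:fairness_c"])  # interaction coefficient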


Chapter 6: Discussion

Misinformation has become a major concern in today’s technology-dependent society. Information is accessed and shared at a rapid pace, creating an environment where information is rarely verified and false information is often accepted as truth. This environment not only affects the beliefs people hold about the world around them but can also exert detrimental effects on society and its institutions. Financial organizations must fend off misinformation that could damage potential earnings, health professionals fight the influence of medical hoaxes that could spread disease, and political leaders attempt to save face by disconfirming slander. The social institution that is arguably the most adversely affected is news media.

Skepticism of the news has made it difficult for journalists to retain the trust of their audiences, and due to malicious groups and an ambiguous standard for truth, their work is constantly under question. This skepticism is one of the key findings of the current project, illustrating a growing problem in mediated communication. However, skepticism is not the only mechanism that drives perceptions of the news. The stories themselves have certain features that cue heuristics social media users employ to decide whether they should trust what they are reading.

The current project focused on validating theoretical claims about why the public believes misinformation, as well as testing theoretical propositions related to deception detection in the context of false news. Specifically, the studies reported here investigated the effects of news story features (political congruence, belief congruence, social consensus, and moral-emotional language) on the detection and diffusion of false news. Additionally, they attempted to identify the presence of a truth-bias in false news detection and a potential relationship between base-rates and detection accuracy (Levine et al., 2006).

As predicted by several misinformation and online credibility theorists (Brady et al., 2017; Lewandowsky et al., 2012; Metzger & Flanagin, 2013), the four features exerted significant effects on both diffusion and detection. News stories that were politically consistent with the reader's partisanship were more likely to motivate intentions to share them as well as to be identified as "true" news. Similarly, stories that were consistent with the reader's specific personal beliefs (e.g., that vaccination is important) had a higher likelihood of being perceived as true, but when it came to intended sharing, belief congruence had an effect only on false news. That is, compared to true news, false news was more likely to be diffused when it matched a person's beliefs. Thus, it appears that false news interacts with personal beliefs in a unique way to promote diffusion on social media, perhaps by exploiting beliefs that are strongly held by the public.

Moral-emotional language and social consensus appeared to exert some influence on diffusion and detection, but their effects were inconsistent and small, respectively. Moral-emotional language was fairly sparse in the stimuli, which could contribute to the inconsistency. Even investigating the broader category of moral words yielded nonsignificant effects. However, even with its limited appearance, moral-emotional language induced perceptions of trustworthiness, increasing the likelihood that an article would be shared and labeled as true. This effect depended on the location of these words, causing suspicion when present in the article headline but inducing trust when in the summary.

Social consensus cues were more consistent and comport with prior findings on the bandwagon heuristic (Metzger & Flanagin, 2013). However, the magnitude of the effects produced by these different features has rarely been quantified and compared. Of the four cues investigated, social consensus cues exerted the smallest effect on diffusion and detection. The number of article likes appeared to increase the likelihood that an article would be shared or trusted, while article comments had the opposite effect. This discrepancy might have arisen because participants perceived likes on social media as endorsements, while comments may have been interpreted as an indication of public suspicion and further warrant for discussion. However, the observed effects were very small and would likely have no practical impact on false news detection or diffusion.

Of all the features investigated, political and belief congruence exerted the most consistent and largest effects on false news diffusion and detection. As suggested by misinformation scholars (e.g., Lewandowsky et al., 2012) and a longstanding body of literature on confirmation bias and selective attention, participants in this study tended to value information consistent with their views and had a difficult time separating fact from fiction as a result.

The final hypothesis tested pertained to the possibility of a truth-bias in the context of online misinformation. In contrast to previous research and theory on deception detection (Levine et al., 2006), participants in the current study exhibited a lie-bias in false news detection. As false news increased in frequency, detection accuracy increased. The potential cause of this finding and its relation to skepticism online is discussed further below.

CONTRIBUTIONS TO MISINFORMATION LITERATURE

Scholars have identified several reasons why people fall victim to misinformation. The most commonly cited is that people are more inclined to believe information that confirms their own beliefs (Flanagin & Metzger, 2007; Lewandowsky et al., 2012; Metzger & Flanagin, 2013). Confirmation bias has been studied and discussed for decades and has been of particular interest to those studying false news (Knobloch-Westerwick et al., 2017).

Belief congruence generally renders all online news trustworthy, but in the current study it predicted the sharing of false news only. This is an important distinction because it means that false news in particular induces a feeling of trust that makes people want to share it with others. Thus, it is possible that false news covers events and opinions about which people hold very strong beliefs, as opposed to true news, which simply reflects whatever events are actually occurring in the world. Consequently, when a false news story describes an event related to one's strongest beliefs (and thus relevant to one's personal identity), it needs to be validated by sharing it with one's social network (Bosson & Swann, 1999).


As for the relation between belief congruence and deception detection, processing fluency could be a driving mechanism for trusting false news. When information is easier to process, people assign more favorable attributes to it, such as truth or accuracy (e.g.,

Alter & Oppenheimer, 2009). In the context of false news, when people process information that is consistent with their own beliefs, they process that information more quickly and easily than inconsistent information (Ajzen & Sexton, 1999). Belief congruent articles, therefore, would potentially be processed more easily than incongruent articles, inducing trust and favoritism.

Political congruence also makes news seem trustworthy and worthy of being shared with one's network. It has been well established that partisanship affects acceptance of information, and more recently it has been linked to acceptance of misinformation (Weeks, 2015). The current set of studies further supports the claim that people are more receptive to false news aligned with their own beliefs, whether derived from partisanship or from more specific worldviews. Research on partisan bias has found that politically congruent information is typically favored (Knobloch-Westerwick et al., 2017), possibly because it allows people to feel more confident in their own partisanship, which in turn provides additional evidence for a confirmation bias (Nickerson, 1998). Thus, it is important to consider the interaction between partisanship and online content when assessing the threat of misinformation. Notably, the effect political congruence had on false news in the current studies was irrespective of the actual news topic. Even articles focused on science or health were assessed in relation to the political group with which they were aligned. Some of those articles were unrelated to either party or related to both. Regardless of the article topic, as long as it was consistent with the beliefs of a particular party, it was considered trustworthy, which provides further evidence that partisanship is deeply ingrained in the acceptance of misinformation.

Moral-emotional language appeared to exert an effect on diffusion and detection, albeit an inconsistent one, perhaps because such language is relatively rare in news headlines and summaries. However, when it was present, it appeared to induce trust while also promoting diffusion. On social media, moral-emotional language has been found to boost diffusion because it plays on emotions, as well as the human need to prioritize moral beliefs (Brady et al., 2017). Morality has been linked to emotional processing (Rozin, Lowery, Imada, & Haidt, 1999), while emotions have been found to affect the spread of information throughout a social network (Kramer, Guillory, & Hancock, 2014). Paired together, the two play on human tendencies to react to emotionally charged stimuli and to spread visceral reactions to others.

The current studies also found some evidence for an emotional contagion effect on news diffusion, such that participants were more willing to share articles containing moral-emotional words than articles without such words. When moral-emotional language was present in the summary, articles were deemed more trustworthy, but its presence in a headline made the article seem more suspicious. This conditional effect could be attributable to the fact that this type of language is not very common in true or false news; the lack of variation across news articles makes it difficult to claim the results are generalizable. On the other hand, the effect could have occurred because this language is more salient in a headline, signaling to the reader the article's intent to capture attention. People typically do not welcome what they perceive as attempts to influence their attitudes and behaviors, resulting in counter-arguing and resistance to persuasion

(Banas & Rains, 2010).

Social consensus indicators, on the other hand, exerted consistent effects on diffusion and detection. These indicators seem to cue the bandwagon heuristic, a common rule of thumb when assessing credibility online (Metzger & Flanagin, 2013).

However, the sizes of the effects observed in the current studies were very small and thus unlikely to have a discernible impact on practical predictions. Nevertheless, the effects were large enough to be measured consistently across two separate studies. Interestingly, comments had a negative effect on trustworthiness, putting an article's accuracy into question. People may use comments as an estimate of the amount of disagreement associated with an article. Therefore, it appears people use online social consensus indicators not only to determine credibility, but also to determine whether something is untrustworthy.

Heuristic processing is generally more likely when someone is not engaging with information systematically. When the goal is detecting false news, a more systematic approach would theoretically be unconcerned with one's personal beliefs, because such beliefs should be unrelated to an article's veracity. Instead, the data show that both political and belief congruence influenced perceived article veracity, which should not be the case, because one's personal beliefs cannot influence the outcome of the news unless the person writing the news shapes it to coincide with his or her beliefs. Therefore, it seems that people engage in heuristic processing when oriented to detect false news.

On the other hand, personal beliefs can be used systematically when determining whether to share news with one's social network. If one is motivated by impression management, then sharing news that matches the beliefs of one's social network could be viewed as a strategic means of increasing social capital (S. Chen & Chaiken, 1999).

In this instance, the individual could be engaged in systematic processing. When systematically processing information, people focus more on relevant information that would help make an informed decision, taking their time to adequately consider their choices. Although concern for one’s social capital could induce systematic processing, participants in the reported study of diffusion spent very little time on the mock Facebook feed, which is more indicative of heuristic processing (Vishwanath, Herath, Chen, Wang,

& Rao, 2011). The quick processing of stimuli paired with consistent findings across both types of studies (detection and diffusion) supports the possibility that participants were indeed processing online news heuristically when trying to determine credibility

(Metzger & Flanagin, 2013).

CONTRIBUTIONS TO TRUTH-DEFAULT THEORY

In the context of unaided deception detection, truth-default theory (TDT) posits that humans tend to trust others in most interpersonal situations, which affects their ability to accurately detect deception (Levine, 2014). Additionally, TDT claims that most deception detection studies find low accuracy because the base-rate of deceptive stimuli used (50/50) overestimates the amount of deception people are accustomed to encountering. Because people are biased toward perceiving honesty, they overestimate the number of honest statements they see, and so the increased frequency of deception leads to lower accuracy (i.e., the veracity effect; Levine et al., 1999).

However, unlike the stimulus statements people encounter in traditional deception detection studies that find evidence for a truth-bias (e.g., C. F. Bond & DePaulo, 2006;

Park & Levine, 2001), people seem to be fairly suspicious of online news. Participants in both of the current studies were generally suspicious of online news, believing that it is biased and does not tell the whole story. When examining veracity judgments, they tended to label news articles as fake (i.e., a lie-bias) and were more accurate at detecting false news than true news (i.e., an inverted veracity effect). This bias has previously been observed in special discourse circumstances, such as police interrogations and communication between prison inmates (G. D. Bond, Malloy, Arias,

Nunn, & Thompson, 2005; Meissner & Kassin, 2002). The reported studies provide some initial evidence that the lie-bias can operate during online news consumption.

The base-rate manipulation further supported a lie-bias: as false news increased in frequency in the mock Facebook feeds, detection performance became more accurate. One possibility is that suspicion of the media contributed to the lie-bias. The data partially support this possibility. Those who reported generally trusting online news did not display a lie-bias, but there was no truth-bias for them either. On the other hand, indifferent and skeptical individuals exhibited a consistent lie-bias. Therefore, the lie-bias observed here derived at least in part from skepticism of online news. Interestingly, however, even after taking skepticism into account, the veracity effect that has been documented in deception detection studies did not emerge.

An implication of the project findings is that TDT should delineate the contexts in which a truth- or lie-bias is most likely to occur. Although the truth-bias seems to be fairly consistent across deception detection studies (Levine, 2014), there is evidence it is not ubiquitous (G. D. Bond et al., 2005; Meissner & Kassin, 2002). The findings reported are of particular importance given the frequency with which people seek and receive their news from online sources, especially social networks. One encouraging aspect of the study findings is that participants were more accurate at detecting false news than traditional deception detection research would lead us to believe. However, the problem then shifts to growing suspicion of online news, which can have unhealthy effects on trust online. The reason people exhibit a truth-bias to begin with is that trustworthiness has adaptive significance for relationships and societies. When that trust starts to erode, society might suffer adverse consequences. At present, there are only a handful of contexts in which a lie-bias seems to prevail, such as communication between prison inmates or warring adversaries (Bond et al., 2005). However, if perceptions of news are starting to lean toward a default of doubt, then trust in the public sphere might suffer.


PRACTICAL CONTRIBUTIONS

Understanding why people fall victim to false news is important practically as well as theoretically. Relying on political congruence to decide whether to trust online news creates an environment ripe for tribalism and polarization (DiResta et al., 2018). In a scenario where people shut themselves off from contradictory information, echo chambers emerge, and a cycle of in-group bias can lead to discrimination. There is some hope, however, because several scholars argue that echo chambers themselves are not the problem, shifting the focus instead to disinformation campaigns, in which malicious groups attempt to use partisan bias and out-group conflicts to create perceived divides in society (Garrett, 2017).

Much of the research on misinformation focuses on individual difference predictors or detection techniques. Understanding the effects news features have on false news perception can inform ways of defending against such deception. Knowing that false news is enticing because it agrees with one's beliefs or partisanship can provide social media platforms with the information needed to curb the diffusion of misinformation. One such path involves creating forewarnings tailored to belief-congruent and politically congruent articles. Instead of trying to change the minds of those exposed to misinformation, which sometimes has the unintended effect of strengthening false beliefs (Nyhan & Reifler, 2010; Walter & Murphy, 2018), it might be more fruitful to inform the public about the strategies disinformation campaigns use. By focusing educational campaigns on the deceptive strategies, as opposed to the arguments themselves, people's personal beliefs might be less threatened, reducing the chance of counter-arguing. One recent study found initial support for this approach by testing a video game that teaches participants how to recognize misinformation (Roozenbeek & Linden, 2019).

Knowledge of a media lie-bias paints a different, more pessimistic picture. People are beginning to show general, active mistrust of news media, which needs to be addressed to ensure that credible sources are trusted. If this trend continues, people might be more inclined to seek fringe outlets that propagate false news stories aligned with their personal and political beliefs. News outlets would be well advised to start taking information integrity more seriously, focusing less on viewership and more on accuracy. Although not a perfect solution, building trust with the public could restore some confidence in the media while alienating fake news outlets. One promising approach could be a bipartisan panel of experts brought together to identify credible and non-credible outlets. Having an independent group verify trustworthy outlets could increase trust in the media.

LIMITATIONS

There are several limitations of the reported studies. First and foremost, participants were placed in an artificial social media environment that might have affected how they perceived the news articles. It is possible that participants, when using their own social media profiles, would interact with the same online news articles differently. The current approach attempted to mitigate this concern by presenting the articles in a mock newsfeed resembling Facebook, which hopefully created a more naturalistic setting; however, without access to participants' actual social media profiles it is difficult to guarantee generalizable results.


Additionally, moral-emotional language was rare in the stimuli, which could have contributed to its inconsistent effects on detection and diffusion. However, because the news articles were collected from fact-checking websites rather than being generated by the researcher, the language in the headlines and summaries reflects what appears in extant news stories. Also, even though the presence of moral-emotional language was limited, it was notable that the presence of even one such word increased the likelihood that an article would be perceived as credible.

Finally, the most common concern with social scientific research is the representativeness of the sample. The current studies used college participants, most of whom were female and liberal. A nationally representative sample could potentially yield different results. However, it is not immediately obvious how a sample with different gender, educational, and political orientation characteristics would contrast with the current one in general diffusion and detection processes, above and beyond the impact of belief congruence on these processes. Nevertheless, the limited generalizability of the sample must be acknowledged.

FUTURE DIRECTIONS

Future research on deception detection should take context into account when testing various hypotheses. The current study is the first to show the presence of a lie-bias outside of limited contexts that only a small subset of the population is exposed to (e.g., police interrogations, prison, and war). There is some evidence that the truth-bias exists in computer-mediated interactions (Hancock, Woodworth, & Goorha, 2009), but the context explored did not go beyond instant messaging in a laboratory setting. There is no shortage of computer-mediated deception research being undertaken right now, but very few studies have addressed the issue of skepticism online.

An important next step for misinformation research is to experimentally manipulate all four features to establish a causal link to belief and diffusion, because there may have been confounding variables not accounted for in the current analyses. That said, some causal inferences can be drawn from the current studies, because the features of the news articles were established before participants were exposed to them and are therefore independent of study responses. A future conceptual replication that manipulates the news features would reinforce the current findings.

Finally, as mentioned previously, future research should consider possible mitigation techniques that use the current findings as a starting point. Research on inoculation theory has been extensive but has mostly focused on counter-attitudinal arguments (Banas & Rains, 2010). Only limited research has examined inoculation against misinformation, which is mostly effective because it is pro-attitudinal, and even those studies are limited in scope (Cook, Lewandowsky, & Ecker, 2017). Instead of focusing efforts on informing the public about specific faulty arguments, time might be better spent developing mitigation techniques that are more generalizable, such as feature-based news assessment education that informs people of the different techniques false news might use to misinform (e.g., Roozenbeek & Linden, 2019). Awareness of such tricks will better equip the public to resist all types of false news stories rather than only those about specific topics.


Appendices

APPENDIX A: FACEBOOK ARTICLE EXAMPLE


APPENDIX B: FACEBOOK MOCK FEED EXAMPLE


APPENDIX C: MEYER'S (1988) NEWS CREDIBILITY SCALE

In general, when you think about news articles online, do you think they:
1. "are fair"
2. "are accurate"
3. "are unbiased"
4. "tell the whole story"
5. "can be trusted"


APPENDIX D: INTERNET SCAVENGER HUNT

1. What kind of local precipitation can you expect today? Google weather for the answer. Provide the chance of precipitation.

2. World Holidays: Select a holiday celebrated today somewhere in the world. (Not one in your country.)

3. Select a Quote for the Day. You may choose any that appears on the page, and it does not have to be today's quote. Write the quote and source here.

4. Briefly describe the photo at Photo of the Day

5. Go to Google.com and search for “Johnny Depp movies”. Click on the link to IMDb and find which 1998 movie he appeared in. Write the name of the movie here.

6. Go to Kayak.com and find the cheapest flight to New York City that leaves 13 March and returns 27 March. Write the price here.

7. Go to Craigslist.com and find your local version. Click on the link for “Free” in the “For Sale” section. Write the name of the free item here.

Source: https://www.wssd.k12.pa.us/AlwaysNEWInternetScavengerHunt.aspx http://scavenger-hunt.org/internet-scavenger-hunt-for-adults/


REFERENCES Ajzen, I., & Sexton, J. (1999). Depth of processing, belief congruence, and attitude- behavior correspondence. In S. Chaiken & Y. Trope, Dual-Process Theories in Social Psychology (pp. 117–138). New York: Guilford Press.

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. doi:10.1257/jep.31.2.211

Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a metacognitive nation. Personality and Social Psychology Review, 13(3), 219–235. doi:10.1177/1088868309341564

Applbaum, R., & Anatol, K. W. E. (1972). The factor structure of source credibility as a function of the speaking situation. Speech Monographs, 39(3), 216–222. doi:10.1080/03637757209375760

Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1–70.

Banas, J. A., & Rains, S. A. (2010). A meta-analysis of research on inoculation theory. Communication Monographs, 77(3), 281–311. doi:10.1080/03637751003758193

Berger, J. A., & Milkman, K. L. (2012). What makes online content viral? Journal of Marketing Research, 49(2), 192–205. doi:10.1509/jmr.10.0353

Bond, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214–234.

Bond, G. D., & Speller, L. F. (2009). Gray area messages. In M. S. McGlone & M. Knapp, The Interplay of Truth and Deception (pp. 35–53). New York, NY: Routledge.

Bond, G. D., Malloy, D. M., Arias, E. A., Nunn, S. N., & Thompson, L. A. (2005). Lie‐biased decision making in prison. Communication Reports, 18, 9–19. doi:10.1080/08934210500084180

Bosson, J. K., & Swann, W. B. (1999). Self-liking, self-competence, and the quest for self-verification. Personality and Social Psychology Bulletin, 25(10), 1230–1241. doi:10.1177/0146167299258005


Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28), 7313–7318. doi:10.1073/pnas.1618923114

Brenan, M. (2017, December 26). Nurses keep healthy lead as most honest, ethical profession. Retrieved July 5, 2018, from https://news.gallup.com/poll/224639/nurses-keep-healthy-lead-honest-ethical-profession.aspx

Burger, J. M., Messian, N., Patel, S., del Prado, A., & Anderson, C. (2004). What a coincidence! The effects of incidental similarity on compliance. Personality and Social Psychology Bulletin, 30(1), 35–43. doi:10.1177/0146167203258838

Burgoon, J. K., Stern, L. A., & Dillman, L. (1995). Interpersonal adaptation. Cambridge, U.K.: Cambridge University Press.

Chen, S., & Chaiken, S. (1999). The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope, Dual-Process Theories in Social Psychology (pp. 73–96). New York, NY: Guilford Press.

Chen, X., & Sin, S. C. (2013). “Misinformation? What of it?” Motivations and individual differences in misinformation sharing on social media. Presented at the Association for Information Science and Technology, Montreal.

Colleoni, E., Rozza, A., & Arvidsson, A. (2014). Echo chamber or public sphere? Predicting political orientation and measuring political homophily in Twitter using big data. Journal of Communication, 64(2), 317–332. doi:10.1111/jcom.12084

Conroy, N. J., Rubin, V. L., & Chen, Y. (2015). Automatic deception detection: Methods for finding fake news. Presented at the Association for Information Science and Technology, St. Louis, MO.

Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS ONE, 12(5), e0175799–21. doi:10.1371/journal.pone.0175799

Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24(4), 349–354. doi:10.1037/h0047358


DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70(5), 1– 17.

DiResta, R., Shaffer, K., Ruppel, B., Sullivan, D., Matney, R., Fox, R., et al. (2018). The tactics & tropes of the Internet Research Agency (pp. 1–101). Retrieved from https://cdn2.hubspot.net/hubfs/4326998/ira-report-rebrand_FinalJ14.pdf

Ditto, P. H., Liu, B. S., Clark, C. J., Wojcik, S. P., Chen, E. E., Grady, R. H., et al. (2019). At least bias is bipartisan: A meta-analytic comparison of partisan bias in liberals and conservatives. Perspectives on Psychological Science, 14(2), 273–291. doi:10.1177/1745691617746796

Eagly, A., Ashmore, R. D., Makhijani, M. G., & Longo, L. C. (1991). What is beautiful is good, but...: A meta-analytic review of research on the physical attractiveness stereotype. Psychological Bulletin, 110, 109–128.

Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford: Stanford University Press.

Fischer, P., Jonas, E., Frey, D., & Schulz-Hardt, S. (2005). Selective exposure to information: The impact of information limits. European Journal of Social Psychology, 35(4), 469–492. doi:10.1002/ejsp.264

Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user attributes, and information verification behaviors on the perceived credibility of web-based information. New Media & Society, 9(2), 319–342. doi:10.1177/1461444807075015

Flanagin, A. J., & Metzger, M. J. (2008). Digital media and youth: Unparalleled opportunity and unprecedented responsibility. In M. J. Metzger & A. J. Flanagin, Digital Media, Youth, and Credibility (pp. 5–28). Cambridge, MA: MIT Press. doi:10.1162/dmal.9780262562324.005

Fogg, B. J., Soohoo, C., Danielson, D. R., Marable, L., Stanford, J., & Tauber, E. R. (2003). How do users evaluate the credibility of Web sites? A study with over 2,500 participants (pp. 1–15). Presented at the Proceedings of the 2003 Conference on Designing for User Experiences, New York: ACM Press. doi:10.1145/997078.997097

Garrett, R. K. (2017). The “echo chamber” distraction: Disinformation campaigns are the problem, not audience fragmentation. Journal of Applied Research in Memory and Cognition, 6(4), 370–376. doi:10.1016/j.jarmac.2017.09.011

Gilbert, D. T. (1991). How mental systems believe. American Psychologist, 46(2), 1–13.

Goffman, E. (1959). The presentation of self in everyday life. New York: Anchor Books.

Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046. doi:10.1037/a0015141

Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5, 1–9.

Hancock, J. T., Woodworth, M. T., & Goorha, S. (2009). See no evil: The effect of communication medium and motivation on deception detection. Group Decision and Negotiation, 19(4), 327–343. doi:10.1007/s10726-009-9169-7

Hart, W., Albarracín, D., Eagly, A., Brechan, I., Lindberg, M., & Merrill, L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135(4), 555–588. doi:10.1037/a0015701

Hauch, V., Blandon-Gitlin, I., Masip, J., & Sporer, S. L. (2014). Are computers effective lie detectors? A meta-analysis of linguistic cues to deception. Personality and Social Psychology Review, 19(4), 307–342. doi:10.1177/1088868314556539

Hernon, P. (1995). Disinformation and misinformation through the Internet: Findings of an exploratory study. Government Information Quarterly, 12(2), 133–139.

Hogg, M. A., & Turner, J. C. (1987). Intergroup behaviour, self-stereotyping and the salience of social categories. British Journal of Social Psychology, 26(4), 325–340. doi:10.1111/j.2044-8309.1987.tb00795.x

Holbert, R. L. (2005). A typology for the study of entertainment television and politics. American Behavioral Scientist, 49(3), 436–453. doi:10.1177/0002764205279419

Jamieson, K. H., & Cappella, J. N. (2008). Echo chamber: Rush Limbaugh and the conservative media establishment. New York, NY: Oxford University Press.

Kim, K. J., & Sundar, S. S. (2016). Mobile persuasion: Can screen size and presentation mode make a difference to trust? Human Communication Research, 42, 45–70. doi:10.1111/hcre.12064

Knobloch-Westerwick, S., Mothes, C., & Polavin, N. (2017). Confirmation bias, ingroup bias, and negativity bias in selective exposure to political information. Communication Research, 1–21. doi:10.1177/0093650217719596

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(29), 8788–8790. doi:10.1073/pnas.1412469111

Kumar, K. K., & Geethakumari, G. (2014). Detecting misinformation in online social networks using cognitive psychology. Human-Centric Computing and Information Sciences, 4(14), 1–22.

Kumkale, G. T., & Albarracín, D. (2004). The sleeper effect in persuasion: A meta-analytic review. Psychological Bulletin, 130, 143–172. doi:10.1037/0033-2909.130.1.143

Lapinski, M. K., Maloney, E. K., Braz, M., & Shulman, H. C. (2012). Testing the effects of social norms and behavioral privacy on hand washing: A field experiment. Human Communication Research, 39, 21–46. doi:10.1111/j.1468-2958.2012.01441.x

Lee, E. (2007). Character-based team identification and referent informational influence in computer-mediated communication. Media Psychology, 9, 135–155. doi:10.1080/15213260709336806

Levine, T. R. (2014). Truth-Default Theory (TDT): A theory of human deception and deception detection. Journal of Language and Social Psychology, 33(4), 378–392. doi:10.1177/0261927X14535916

Levine, T. R., Blair, J. P., & Clare, D. D. (2013). Diagnostic utility: Experimental demonstrations and replications of powerful question effects in high-stakes deception detection. Human Communication Research, 40(2), 262–289. doi:10.1111/hcre.12021

Levine, T. R., Kim, R. K., & Blair, J. P. (2010). (In)accuracy at detecting true and false confessions and denials: An initial test of a projected motive model of veracity judgments. Human Communication Research, 36(1), 82–102. doi:10.1111/j.1468-2958.2009.01369.x

Levine, T. R., Kim, R. K., Park, H. S., & Hughes, M. (2006). Deception detection accuracy is a predictable linear function of message veracity base-rate: A formal test of Park and Levine's Probability Model. Communication Monographs, 73(3), 243–260. doi:10.1080/03637750600873736

Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66(2), 125–144. doi:10.1080/03637759909376468

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. doi:10.1177/1529100612451018

Liang, K.-Y., & Zeger, S. L. (1986). Longitudinal data analysis using generalized linear models. Biometrika, 73(1), 13–22.

McCornack, S. A., Morrison, K., Paik, J. E., Wisner, A. M., & Zhu, X. (2014). Information manipulation theory 2: A propositional theory of deceptive discourse production. Journal of Language and Social Psychology, 33(4), 348–377. doi:10.1177/0261927X14534656

McGinnies, E., & Ward, C. D. (1980). Better liked than right: Trustworthiness and expertise as factors in credibility. Personality and Social Psychology Bulletin, 6(3), 467–472.

McGlone, M. S. (2005). Contextomy: The art of quoting out of context. Media, Culture & Society, 27(4), 511–522. doi:10.1177/0163443705053974

Meissner, C. A., & Kassin, S. M. (2002). “He’s guilty!”: Investigator bias in judgments of truth and deception. Law and Human Behavior, 26(5), 469–480.

Metzger, M. J. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078–2091. doi:10.1002/asi.20672

Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210–220. doi:10.1016/j.pragma.2013.07.012

Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413–439. doi:10.1111/j.1460-2466.2010.01488.x

Meyer, P. (1988). Defining and measuring credibility of newspapers: Developing an index. Journalism Quarterly, 65(3), 567–588.

Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67(4), 371–378.

Montoya, R. M., Horton, R. S., & Kirchner, J. (2008). Is actual similarity necessary for attraction? A meta-analysis of actual and perceived similarity. Journal of Social and Personal Relationships, 25(6), 889–922. doi:10.1177/0265407508096700

Munro, G. D. (2010). The scientific impotence excuse: Discounting belief-threatening scientific abstracts. Journal of Applied Social Psychology, 40(3), 579–600.

Nakashima, R. (2018, August 28). How Google search results work. Retrieved September 7, 2018, from https://apnews.com/693f55e3781a4c53a390a1e3b917c76e

Newport, F., & Dugan, A. (2017, August 3). Partisan differences growing on a number of issues. Retrieved September 5, 2018, from https://news.gallup.com/opinion/polling-matters/215210/partisan-differences-growing-number-issues.aspx

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

Ott, M., Choi, Y., Cardie, C., & Hancock, J. T. (2011). Finding deceptive opinion spam by any stretch of the imagination (pp. 309–319). Presented at the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR.

Pan, W. (2001). Akaike's information criterion in generalized estimating equations. Biometrics, 57, 120–125.

Park, H. S., & Levine, T. (2001). A probability model of accuracy in deception detection experiments. Communication Monographs, 68(2), 201–210. doi:10.1080/03637750128059

Pennington, N., & Hastie, R. (1992). Explaining the evidence: Tests of the Story Model for juror decision making. Journal of Personality and Social Psychology, 62(2), 189–206. doi:10.1037/0022-3514.62.2.189

Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences, 116(7), 2521–2526. doi:10.1073/pnas.1806781116

Pew Research Center. (2014). Political Polarization & Media Habits. Retrieved from http://www.journalism.org/2014/10/21/political-polarization-media-habits/

Rimal, R. N., & Real, K. (2005). How behaviors are influenced by perceived norms: A test of the theory of normative social behavior. Communication Research, 32(3), 389–414. doi:10.1177/0093650205275385

Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 1–10. doi:10.1057/s41599-019-0279-9

Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD Triad Hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity). Journal of Personality and Social Psychology, 76(4), 574–586.

Rubin, V. L., Chen, Y., & Conroy, N. J. (2015). Deception detection for news: Three types of fakes. Presented at the Association for Information Science and Technology, St. Louis, MO.

Serota, K. B., & Levine, T. R. (2015). A few prolific liars: Variation in the prevalence of lying. Journal of Language and Social Psychology, 34(2), 138–157. doi:10.1177/0261927X14528804

Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three studies of self-reported lies. Human Communication Research, 36(1), 2–25. doi:10.1111/j.1468-2958.2009.01366.x

Smith, A. (2014, February 3). What people like and dislike about Facebook. Retrieved September 5, 2018, from http://www.pewresearch.org/fact-tank/2014/02/03/what-people-like-dislike-about-facebook/

Soll, J. (2016, December 18). The long and brutal history of fake news. Retrieved September 7, 2018, from https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535

Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin, Digital Media, Youth, and Credibility (pp. 73–100). Cambridge, MA: MIT Press. doi:10.1162/dmal.9780262562324.073

Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24–54. doi:10.1177/0261927X09351676

Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Fort Worth, TX: Harcourt Brace Jovanovich.

Thorson, E. A. (2015). Belief echoes: The persistent effects of corrected misinformation. Political Communication, 33(3), 460–480. doi:10.1080/10584609.2015.1102187

Tsfati, Y. (2007). Hostile media perceptions, presumed media influence, and minority alienation: The case of Arabs in Israel. Journal of Communication, 57(4), 632–651. doi:10.1111/j.1460-2466.2007.00361.x

Tsfati, Y., & Cohen, J. (2005). Democratic consequences of hostile media perceptions: The case of Gaza settlers. Harvard International Journal of Press/Politics, 10(4), 28–51. doi:10.1177/1081180X05280776

Varol, O., Ferrara, E., Davis, C., Menczer, F., & Flammini, A. (2017). Online human-bot interactions: Detection, estimation, and characterization (pp. 1–11). Presented at the 11th International Conference on Web and Social Media, Montreal.

Vishwanath, A., Harrison, B., & Ng, Y. J. (2018). Suspicion, cognition, and automaticity model of phishing susceptibility. Communication Research, 45(8), 1146–1166. doi:10.1177/0093650215627483

Vishwanath, A., Herath, T., Chen, R., Wang, J., & Rao, H. R. (2011). Why do people get phished? Testing individual differences in phishing vulnerability within an integrated, information processing model. Decision Support Systems, 51(3), 576–586. doi:10.1016/j.dss.2011.03.002

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. doi:10.1126/science.aap9559

Walter, N., & Murphy, S. T. (2018). How to unring the bell: A meta-analytic approach to correction of misinformation. Communication Monographs, 85(3), 423–441. doi:10.1080/03637751.2018.1467564

Walther, J. B., & Parks, M. R. (2002). Cues filtered out, cues filtered in: Computer-mediated communication and relationships. In M. L. Knapp & J. A. Daly, Handbook of Interpersonal Communication (3rd ed., pp. 529–563). Thousand Oaks, CA: Sage.

Walther, J. B., Van Der Heide, B., Hamel, L. M., & Shulman, H. C. (2009). Self-generated versus other-generated statements and impressions in computer-mediated communication: A test of Warranting Theory using Facebook. Communication Research, 36(2), 229–253. doi:10.1177/0093650208330251

Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699–719. doi:10.1111/jcom.12164

Wilson, E. J., & Sherrell, D. L. (1993). Source effects in communication and persuasion research: A meta-analysis of effect size. Journal of the Academy of Marketing Science, 21(2), 101–112. doi:10.1007/BF02894421

Zhou, L., Burgoon, J. K., Twitchell, D. P., Qin, T., & Nunamaker, J. F., Jr. (2004). A comparison of classification methods for predicting deception in computer-mediated communication. Journal of Management Information Systems, 20(4), 139–165.
