
Motivated Reasoning Without Partisanship? Fake News in the 2018 Brazilian Elections

Frederico Batista Pereira1, Natália S. Bueno2, Felipe Nunes3, and Nara Pavão4

1 Assistant Professor, University of North Carolina at Charlotte
2 Assistant Professor, Emory University
3 Assistant Professor, Universidade Federal de Minas Gerais
4 Assistant Professor, Universidade Federal de Pernambuco

Abstract

Studies suggest that rumor acceptance is driven by motivated reasoning and that people's desire to reach the conclusions suggested by their partisanship undermines the effectiveness of corrective information. This paper explores this process in a context where party attachments are weaker and less stable than elsewhere. We conducted a survey experiment during the 2018 elections in Brazil to examine the extent of rumor acceptance and the effectiveness of fact-checking corrections to fake news stories disseminated about the country's most important political group. We find that about a third of our respondents believe the rumors used in the study and that, as in developed nations, belief in misinformation in Brazil is correlated with (anti)partisan attitudes. However, fact-checking corrections are particularly weak in Brazil compared to the developed world. While weak partisanship reduces the rates of rumor acceptance in Brazil, ineffective corrections are unlikely to prevent the circulation of fake news in the country.

Please do not quote or cite without permission

Fake news, understood as false information whose purpose is to generate and reinforce misperceptions of reality, is a growing concern in politics, due to its potential to distort public debate and disrupt elections (Lazer et al. 2018). The vast majority of the knowledge we currently have about political misperceptions comes from research conducted in the developed world, particularly in the United States (Walter and Murphy 2018; Nieminen and Rapeli 2019). This growing field shows that misinformation is pervasive, resilient, and largely fueled by partisanship (Nyhan and Reifler 2010; Flynn, Nyhan and Reifler 2017; Berinsky 2017b). Yet, despite growing scholarly interest in the dissemination of misinformation across the Global South, we have limited knowledge about the contours and dynamics that the spread of fake news about politics assumes in such settings. The recent epidemic of fake news in large democracies like Brazil, India, and Mexico has raised concerns regarding the particularly negative implications that this phenomenon may have in the developing world. At the same time, many developing democracies have low levels of partisanship (Dalton and Weldon 2007; Lupu 2015; Samuels and Zucco 2018). For years, many believed that fractured party systems and low levels of party identification loomed over developing democracies, making them unstable. However, with the rise of polarization and partisan-driven misinformation, shallow levels of party identification among the electorate may limit the spread of fake news. We examine the phenomenon of political misperceptions in the context of the 2018 elections in Brazil, which were particularly plagued by the dissemination of misinformation (Resende, Melo, Sousa, Messias, Vasconcelos, Almeida and Benevenuto 2019; Machado et al. 2019). Brazil presents a different set of partisan attitudes and represents a novel setting to test and further develop old and new hypotheses regarding political misbeliefs and their persistence. The country has lower levels of party identification than most developed nations and a strong aversion to political parties, which has created a large contingent of self-identified nonpartisan voters. At the same time, the partisan attitudes that have indeed formed in Brazil are structured around the country's main political party—the Workers' Party (PT) (Samuels 2006; Samuels and Zucco 2018).

An important share of Brazilian voters cultivates either a psychological attachment to or a rejection of this political party (positive and negative partisanship, respectively). Moreover, a growing number of studies show that many self-declared nonpartisans take sides and behave similarly to partisans during elections due to short-term electoral stimuli (Baker and Rennó 2019). In this paper, we examine how widespread beliefs in political fake news are in Brazil, as well as how resilient they are to third-party fact-checking corrections. We also explore the extent to which partisanship drives beliefs in political rumors and moderates the effectiveness of corrective information. To our knowledge, our study is the first of its kind in Latin America and one of the first in the developing world.1 As such, the study has the potential to improve our comparative understanding of the nature of beliefs in political fake news. Our design relies on the manipulation of existing fake news stories about the Workers' Party and its main icon, former president Luiz Inácio Lula da Silva, which represent the political group most targeted by the circulation of false information in Brazil. We conducted data collection using a face-to-face cross-sectional survey with an embedded experiment on a large representative sample of voters from the state of Minas Gerais—which has the country's second-largest contingent of voters—just a few days before the first round of the national elections in 2018. We assess the prevalence of false beliefs about politics as well as the effectiveness of professional fact-checking corrections to six fake news stories that had large circulation in Brazil. We find that about a third of our sample accepts the rumors used in the study, and that rumor acceptance is correlated with partisan and antipartisan attitudes. Our results also show that fact-checking does not increase rumor rejection during the election, even among a large share of self-identified nonpartisan voters.

1 The study by Carey et al. (2020) explores rumors about disease outbreaks in Brazil, which are treated by the literature as nonpolitical rumors. For more details on other studies, see Appendix K in the Supporting Information (SI).

We also show suggestive evidence that corrections may work among "true" nonpartisans, that is, a subset of self-declared nonpartisans who express neutral feelings toward the PT. Therefore, our contribution to the study of misbeliefs in politics is twofold. First, our findings suggest that the process of rumor acceptance operates similarly in less established democracies relative to more advanced ones, where most of the existing evidence has been produced. Second, this pattern suggests that although limited partisanship in Brazil may prevent higher levels of rumor acceptance among the public, fact-checking corrections are ineffective at debunking these rumors and preventing their dissemination.

The remainder of the paper is structured as follows. Next, we review the main contributions of the literature regarding the determinants of political misinformation as well as the ineffectiveness of corrective information. This section also discusses the configuration of partisan attitudes in Brazil and presents the main hypotheses tested. In the following section, we describe the study design and the variables used in the empirical analysis. The results are divided into two parts. The first presents the findings regarding the determinants of beliefs in false information in Brazil, and the second analyzes the effectiveness of corrective information as well as its correlates. Then we put our findings from Brazil in perspective by providing a meta-analysis of political science studies on fake news and discuss how features of our design do not explain the differences we find. The conclusion of the paper discusses the implications of our results for the study of misinformation and politics more broadly.

The Prevalence and Persistence of False Beliefs

Fake news is false information—that is, "distorted signals uncorrelated with the truth" (Allcott and Gentzkow 2017, 212)—whose main purpose is to generate and reinforce misperceptions of reality. Although fake news stories closely follow the shape and form of traditional news media content, they are not produced following the same standard rules and procedures that render accuracy and credibility to information (Lazer et al. 2018).

When manifested in politics, fake news assumes two main features: it is both false and politically motivated. Fake news represents a growing concern for political science. Although empirical evidence documenting its effects is still scarce, fake news is often associated with undesirable political outcomes. By promoting misinformation, fake news is thought to make voters "less informed about the costs and benefits of proposed policies and the performance of politicians in office" (Little 2018, 49), which could distort public debate and have disruptive implications for democracy. Although the evidence is mixed, there is a growing concern that the dissemination of fake news may be tied to political polarization and generalized distrust (Tucker et al. 2018). Finally, despite existing evidence showing political campaigns to have limited capacity to persuade voters, the endorsement of fake news pieces on social media may boost their effects on vote choice (Lazer et al. 2018). As a result of the potential negative effects of this phenomenon, considerable academic effort has been made to understand the factors that lead citizens to hold false beliefs about politics and also what can correct these beliefs. Although individuals' political sophistication (Fridkin, Kenney and Wintersieck 2015) as well as levels of dogmatism and disengagement (Berinsky 2018) are associated with misbeliefs, motivated reasoning seems to be the main driving force behind false beliefs and their persistence (Flynn, Nyhan and Reifler 2017; Berinsky 2017b,a; Nyhan and Reifler 2010). Citizens have the tendency to seek out and accept information that is aligned with or confirms their political preferences, and to avoid and reject information that contradicts them (Taber and Lodge 2006). Partisanship is a key source of directional motivated reasoning behind political fake news (Bolsen, Druckman and Cook 2014; Flynn, Nyhan and Reifler 2017; Bullock et al. 2015; Allcott and Gentzkow 2017). Individuals tend to hold false beliefs about politics when such beliefs portray their preferred political group in a good light. Given the strength of motivated reasoning and the relative stability of dogmatism, political sophistication, and partisanship at the individual level, one of the central questions in the literature is whether these political misperceptions can be corrected in light of new information (Flynn, Nyhan and Reifler 2017).

The overall result seems to be that corrections can indeed be effective against fake news, but there is substantial variation in the extent to which they work (Walter and Murphy 2018). For example, Nyhan and Reifler (2010) show that corrections are not only ineffective at debunking false beliefs, but they can also backfire and reinforce these misbeliefs by increasing individuals' familiarity with false claims. Walter and Murphy (2018) find that pro-attitudinal corrections—fact-checking that corroborates preexisting convictions—are more effective than counter-attitudinal ones—fact-checking that challenges preexisting beliefs. The findings converge on the importance of motivated reasoning, mainly grounded in individuals' political affiliation or party identification, as a key moderator in the process of correcting misbeliefs, since studies have found different reactions among, for example, Democrats and Republicans in the United States (Flynn, Nyhan and Reifler 2017). The empirical evidence comes almost exclusively from the United States and Western Europe. Out of 64 studies on corrections to misinformation examined by Walter and Murphy (2018), 45 studies are about the United States, 11 about Oceania, 6 about Western Europe, and 2 about Eastern Asia (China). None of them was about Latin America or Africa. With the exception of China, the countries covered in the study have comparatively high levels of education and strong partisan identities. The United States, where the overwhelming majority of studies were conducted, has particularly strong partisan attachments to two main political parties. As the scholarship demonstrates, younger and less developed democracies tend to have lower levels of partisanship due to factors such as party system fragmentation, less party discipline, and the timing of democratization (Huber, Kernell and Leoni 2005; Dalton and Weldon 2007; Lupu 2015). Because partisanship is a source of motivated reasoning, the lack of partisan attachments among citizens from less developed democracies could mean that fake news stories are less accepted and more easily corrected by fact-checkers in those contexts. However, there is also evidence that the amount of partisan motivated reasoning in less developed democracies could be similar to what is found among other democracies.

While standard measures of partisan attachments tend to identify fewer self-declared partisans in developing nations, research suggests that nonpartisans also include a large portion of individuals who take sides during the electoral process. First, many voters who do not identify with a party tend to develop antipartisan identities, that is, consistent feelings of rejection towards one or more parties in the political system. Samuels and Zucco (2018) show that the proportion of antipartisans is large even in the electorates of developed democracies and grows over time around the globe. While these antipartisans tend to be less politically engaged and supportive of democracy than partisans, they are similar to partisans in aspects crucial for understanding how they process political information. As Samuels and Zucco (2014) show for Brazil, since antipartisans have parties as reference points in their negative identity, they engage in motivated reasoning in the same way partisans do with respect to their positive identity toward parties. Second, some individuals who self-identify as nonpartisans in standard survey questions may still have political orientations that lean in directions consistent with partisan cleavages. Evidence shows that many nonpartisans tend to occasionally identify with a party (bounded partisans) or to vote systematically for the same party (Brader and Tucker 2001; Baker et al. 2015). Moreover, new studies show that using close-ended instead of filter questions to measure partisanship leads many nonpartisans to declare themselves to be partisans (Baker and Rennó 2019; Cornejo 2019). Partisan leaners differ from partisans in important ways, as their attitudes tend to be more fluid and affected by short-term factors during elections (Michelitch and Utych 2018; Baker and Dorr 2019). Nevertheless, partisan leaners may still engage in the same kind of motivated information processing as partisans during elections, even though the source of their alignment comes from short-term electoral stimuli. We expect that motivated reasoning will increase beliefs in fake news and decrease the effectiveness of fact-checking corrections even in younger democracies due to partisanship, antipartisanship, and partisan leaning.

Hypothesis 1 (H1): Beliefs in political rumors are largely driven by partisan attitudes.

Hypothesis 2 (H2): Third-party fact-checking corrections will be effective in reducing beliefs in political rumors.

Hypothesis 3 (H3): The effectiveness of third-party fact-checking corrections is suppressed by partisanship and partisan leaning for political rumors.

To better understand whether partisan motivated reasoning shapes the effectiveness of fact-checking corrections, we also include in our study nonpolitical fake news stories, which convey a type of information that is less likely to be perceived through partisan lenses. Since corrections of nonpolitical fake news are less likely to be interpreted as politically motivated, if Hypothesis 3 is correct, we should expect fact-checking corrections to be more effective in debunking nonpolitical compared with political fake news.

Fake News in the 2018 Brazilian Elections

Fake news was a central theme during the 2018 elections in Brazil. Studies show that fake news flooded social media applications, such as Facebook and Twitter (Resende, Melo, Sousa, Messias, Vasconcelos, Almeida and Benevenuto 2019). These studies also reveal that the circulation of fake news pieces in Brazil followed a different pattern than that observed in the 2016 US elections, as they were propagated mainly through WhatsApp, used by 120 million people in the country (Resende, Melo, Reis, Vasconcelos, Almeida and Benevenuto 2019; Machado et al. 2019). Although precise estimates are difficult to come by, one survey reports that 67% of voters declared receiving fake news through WhatsApp during the election season, and extensive research on public WhatsApp political groups finds that 36% of all news characterized as "factual" that circulated in those groups during the 2018 elections was misinformation, 53% was misleading or inconclusive, and about 10% was verified as true (Resende, Melo, Sousa, Messias, Vasconcelos, Almeida and Benevenuto 2019). The scholarship analyzing the 2014 elections in Brazil shows that the online environment in the country was prone to misinformation effects, since online discourse used in social media posts and comments on newspaper websites was monological and weak in reasoning strategies (Mendonça and Amaral 2016; Massuchin, Mitozo and de Carvalho 2017; Mitozo, Massuchin and de Carvalho 2017).

Political figures have also noticed the importance of fake news in shaping public (Twitter, Facebook, and other media outlets) and private (WhatsApp, Google searches) conversations during the 2018 elections in Brazil. Supreme Court justices—targets of fake news themselves—and members of Congress have called for investigations and started official inquiries into the funding and spreading of fake news during the elections.

Two features of the spread of fake news in Brazil are noteworthy for our study. First, fake news was particularly salient during the campaign season and not simply a function of major social mobilization. From May 21 to June 2, 2018, Brazil faced an unprecedented national truckers' strike that affected millions of people. Although most images circulated within WhatsApp groups during the truckers' strike were political (Resende, Melo, Sousa, Messias, Vasconcelos, Almeida and Benevenuto 2019),2 only a small share (1%) were fake news. By contrast, during the campaign season, from August 16 through October 28, most images were also political, and about 36% of those were identified as fake news. The volume of searches on Google also suggests that the elections drew attention to fake news: the Google Trends series shown in Figure 1 indicates that private interest in fake news in Brazil grew over time and peaked in October 2018, when the first and second rounds of voting took place. Second, fake news often traveled through images shared on WhatsApp. WhatsApp allows for text, URL, audio, video, and image sharing; images are the most shared form of media (about 10% of all messages).3 Furthermore, these images have the potential of "going viral": in the monitored public WhatsApp groups, a small number of images with misinformation (85 images) appeared in 44% of monitored groups, nearly 6% of all users in those groups shared these images, and they propagated at a much faster rate than other types of content (Resende, Melo, Sousa, Messias, Vasconcelos, Almeida and Benevenuto 2019).

2 Their content was about candidates and parties.
3 See Resende, Melo, Sousa, Messias, Vasconcelos, Almeida and Benevenuto (2019); Machado et al. (2019).

Figure 1: Interest in Fake News Based on Google Trends Data (2010–2019), Brazil
[Line plot of monthly Google Trends search volume (0–100) for "fake news" in Brazil, January 2010 through November 2019, with election periods marked; searches peak in late 2018.]

These two features make our study well suited to investigate fake news in the context of Brazilian politics. As described in detail in the next section, we conducted our study during the campaign season, at a moment in which fake news was at the forefront of political discussion. Consistent with the main format through which fake news was disseminated in Brazil, we used images to present fake news to respondents in our study.

Data and Research Design

We conducted a survey experiment between October 4 and 6, 2018, in the state of Minas Gerais, Brazil. The study was included at the end of the survey questionnaire conducted on a weekly basis by OMITTED about the national- and state-level elections. The study used tablet-based questionnaires with face-to-face interviews and included 2,236 respondents between the ages of 16 and 75.

The sampling procedure used quotas based on age, sex, education, and income according to the proportions in the voting population of the state. The sample was also stratified based on all regions of the state of Minas Gerais. Minas Gerais is a southeastern state with the second-largest population, third-largest gross domestic product, and fourth-largest area in the country. For the first round of the presidential election, the state also had vote proportions that closely matched the national votes for the two main candidates in the race.4

The study consisted of presenting existing and relatively well-known false rumors to respondents and then experimentally manipulating whether respondents received third-party fact-checking information correcting the rumor or no correction at all. Interviewers were instructed to read all rumors out loud while showing respondents the tablet screen displaying the rumors, as well as the image that accompanied that rumor on social media.5 Respondents were then randomly assigned to one of two experimental conditions. In the first condition, interviewers informed subjects that a fact-checking website specialized in investigating internet rumors had checked the rumor and confirmed that it was false. In the second condition, interviewers gave no corrective information.6 For respondents in both conditions, we then measured their beliefs about the story presented in the false rumors. All subjects were debriefed at the end of the study.7

The study used a total of six false rumors—four political and two nonpolitical—that had been fact-checked by several agencies in the country. The four political rumors included information about the Partido dos Trabalhadores (Workers' Party, PT) and its politicians. We used multiple false rumors in the hope that rumor-specific characteristics would not drive our conclusions, even if having multiple rumors (and adjusting for them) implied less precise within-rumor estimates.

4 See SI Appendix C for descriptive statistics.
5 See SI Appendix F for the full list of rumors.

6 These were true statements (no deception).
7 Exposing subjects to fake news raises valid and important ethical concerns that we discuss in detail in SI Appendix A.

Most of the rumors that circulated before and during the electoral period targeted the PT (mostly anti-PT rumors), which validates using the party as the main theme of those rumors.8 To compare how respondents with distinct partisan preferences react to rumors and corrections, we included two false rumors that put the PT in a positive light (pro-PT) and two false rumors that cast a bad light on the PT (anti-PT). The nonpolitical rumors included false rumors about celebrities that are not directly related to Brazilian politics. As we explain in more detail in the following sections, the inclusion of nonpolitical fake news in the study allows us to assess whether fact-checking could be effective against beliefs in (nonpolitical) news that were less likely to be associated with partisan motivated reasoning.9

In addition to the statement that a fact-checking agency had found the story to be false, our vignette also contained a general statement saying that "false rumors like this one are fabricated by people with the intention of distorting facts and disseminating misinformation in social media." The vignette was read by the interviewer. We relied on a simple strategy of presenting a plain and short fact-checking statement refuting the information displayed in the image.10

To measure whether or not respondents believed the rumors, we followed Berinsky (2018) in directly asking "[D]o you believe that [states the rumor in past tense]?" However, we diverged from Berinsky (2018) in the operationalization of the dependent variable. The variable differentiated positive (rumor acceptance) from negative (rumor rejection) responses, but we treated "don't know" (DK) responses as missing.
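As a minimal sketch of this coding rule (the response codes and variable names here are hypothetical, not taken from our replication materials), the dependent variable can be constructed as follows:

```python
import numpy as np
import pandas as pd

# Hypothetical raw responses: 1 = accepts the rumor, 0 = rejects it,
# 9 = "don't know" (DK)
raw = pd.Series([1, 0, 9, 0, 1, 9], name="belief_raw")

# DK responses are treated as missing rather than as "failure to reject"
belief = raw.replace({9: np.nan})

print(belief.value_counts(dropna=False))  # DKs drop out of the analysis sample
```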

8 Three fact-checking agencies (Lupa, Aos Fatos, and Fato ou Fake) reported checking and correcting 123 false rumors during the campaign. Of those, 104 were against the PT, while 19 were against the opponent Jair Bolsonaro (https://congressoemfoco.uol.com.br/eleicoes/das-123-fake-news-encontradas-por-agencias-de-checagem-104-beneficiaram-bolsonaro/).

9 An expanded version of this design was used in a study conducted in May 2018. That study included a total of 11 rumors and other types of corrective information. However, due to evidence of failed randomization, we treated the data as a pilot study that informed the design of the October study.

10 The vignette is in SI Appendix E.

The rates of DK responses in our sample were never higher than the ones in Berinsky (2018), who treats DKs as "failure to reject" a rumor. Additionally, a follow-up question to respondents who initially picked the DK option finds that the majority of those who moved away from the DK response when prompted tended to reject the rumor. Therefore, we followed Thorson (2016) in treating DK responses as indicating ambivalence towards the rumor and, since DKs were a residual category, we did not consider them in the analyses.11

The questionnaire12 also included pre-treatment items that tapped into other factors potentially related to the acceptance of false rumors and the effectiveness of corrections. Given the relevance of motivated reasoning and confirmation bias in the acceptance of false rumors, we used items that measure partisan attitudes in the Brazilian electorate. However, given Brazil's highly fragmented party system, classifying and identifying partisans can be difficult. We followed Samuels and Zucco (2018) in distinguishing partisans (respondents identified with a party), antipartisans (respondents who dislike a party without liking other parties), and nonpartisans (respondents who neither like nor dislike a party). This classification is crucial to unpack a large category of nonpartisans and identify those who dislike a party and behave in ways that reflect such a predisposition (Samuels and Zucco 2018). We measured petismo (identification with the Workers' Party) by selecting those who mention the PT in a standard question asking whether respondents like a party. Antipetistas are those who do not like any party and who mention the PT in response to a standard question asking whether respondents dislike a party. However, pure nonpartisans are still a very large portion of the sample even after we separate antipartisans.

11 We also included branching strategies to further assess belief among DK respondents, as well as the strength of belief among all respondents. The measure of strength of belief did not yield consistent results, while the measure that attempts to reduce DKs produces similar results to the one presented in the paper.

12 See SI Appendix E for our questionnaire.

According to Baker and Rennó (2019), this is partly due to standard survey questions about partisanship producing false negatives. To unpack the variation within nonpartisans, we relied on another pre-treatment item that asked respondents about their feelings toward the PT using a five-point scale. We distinguish between "true" nonpartisans—those who do not self-identify with the PT and declare being neutral in the feeling thermometer question—and leaners—those who did not self-identify with the PT but declared having either positive or negative feelings toward the party. Therefore, we measured partisanship by identifying the four largest groups of voters in Brazilian politics: petistas (those identified with the PT), antipetistas (those who dislike the PT), nonpartisan leaners, and true nonpartisans. Petistas corresponded to 16% of the sample, antipetistas comprised 21%, leaners were 18%, and true nonpartisans comprised 34% of the sample in Minas Gerais. Other partisan groups (other partisans and other antipartisans) comprised approximately 11% of the sample and will not be discussed in the analyses that follow for the sake of simplicity.

Other variables included as pre-treatment questions were dogmatism, disengagement, political interest, education, and other socioeconomic variables. Dogmatism was measured by one of the items used by Berinsky (2018). The item asked respondents which is better: "to remain undecided" or "to take a stand on an issue even if it's wrong." The measure of disengagement also relied on an item from a battery on disengagement developed by Berinsky (2018) that asked the extent to which respondents agreed that "the people think they govern themselves, but they really don't." We measured political interest using an item asking respondents how interested they were in politics, with four response options ranging from "very interested" to "not at all interested." We used standard measures of education, income, and age.
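The four-group partisan typology described above can be summarized in a short sketch; the column names and response codes are hypothetical stand-ins for our survey items:

```python
import numpy as np
import pandas as pd

def classify_partisanship(row):
    """Four-group typology following Samuels and Zucco (2018);
    column names are hypothetical stand-ins for the survey items."""
    if row["likes_party"] == "PT":
        return "petista"                       # identifies with the PT
    if row["dislikes_party"] == "PT" and pd.isna(row["likes_party"]):
        return "antipetista"                   # rejects the PT, likes no party
    if pd.isna(row["likes_party"]) and pd.isna(row["dislikes_party"]):
        # Unpack nonpartisans with the five-point PT feeling item (3 = neutral)
        if row["pt_feeling"] == 3:
            return "true_nonpartisan"
        return "leaner"                        # leans for or against the PT
    return "other"                             # other partisans/antipartisans

df = pd.DataFrame({
    "likes_party":    ["PT", np.nan, np.nan, np.nan],
    "dislikes_party": [np.nan, "PT", np.nan, np.nan],
    "pt_feeling":     [5, 1, 3, 4],
})
df["partisan_group"] = df.apply(classify_partisanship, axis=1)
print(df)
```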

What Drives Beliefs in Rumors?

Before testing Hypothesis 1, we assessed the extent to which individuals accept false rumors in the Brazilian context. Figure 2 shows the rates of belief for the three types of rumors included in the experiment: nonpolitical, pro-PT (political), and anti-PT (political).

The estimates are predicted probabilities from a probit model without controls. The estimates are shown separately for the full sample and for the control group (subjects who did not receive corrective information). The estimates are not statistically different from those of models that include pre-treatment covariates (partisanship, dogmatism, disengagement, college education, political interest, income, sex, and age).
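A minimal sketch of this estimation, assuming a long-format data set with one row per respondent-rumor pair (the file and variable names are hypothetical), might look as follows:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per respondent-rumor pair
df = pd.read_csv("survey_rumors.csv")

# Probit of belief (0/1, DKs dropped) on rumor type, no controls
m = smf.probit("belief ~ C(rumor_type)", data=df).fit()

# Predicted probability of belief for each rumor type
grid = pd.DataFrame({"rumor_type": ["nonpolitical", "anti_pt", "pro_pt"]})
print(m.predict(grid))

# Same quantities estimated on the control group only (no correction shown)
m_ctrl = smf.probit("belief ~ C(rumor_type)",
                    data=df[df["correction"] == 0]).fit()
print(m_ctrl.predict(grid))
```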

Figure 2: Rates of Belief by Type of Fake News, October 2018
[Bar plot of predicted probabilities of belief (0–1) for nonpolitical, anti-PT, and pro-PT rumors, shown for the full sample and for the control group.]

Overall, a majority of respondents do not believe in fake news. Furthermore, there is sizable variability in the extent to which subjects believe in fake news—belief in false rumors ranges from 18% to 57% depending on the rumor.13 Respondents are more likely to believe nonpolitical rumors than political rumors. These rates do not include "don't know" responses (about 13% of responses, treated as missing), which means that overall rates of rumor acceptance are even lower than the ones presented in Figure 2.

According to the scholarship, rumor acceptance is affected by a combination of factors. One of the primary factors associated with rumor acceptance is "motivated reasoning" (Taber and Lodge 2006), which refers to the tendency of individuals to process new information with the underlying motivation of confirming their existing political preferences.

13 See SI Table G1 for rates of belief for specific rumors.

In this sense, Hypothesis 1 states that partisanship is a key driver of rumor acceptance in politics (Berinsky 2017b; Margolin, Hannak and Weber 2018). In addition, other individual dispositions, such as dogmatism (intolerance of ambiguity in thinking) and disengagement (lack of political efficacy), would also make individuals more likely to accept false information—and negative information in particular—about politicians and groups (Berinsky 2018). Finally, more politically sophisticated individuals would be more capable of distinguishing accurate from misleading information about politics (Fridkin, Kenney and Wintersieck 2015).

Figure 3 shows the determinants of acceptance of the three types of rumors (nonpolitical, anti-PT, pro-PT) among respondents in our sample. The top panel of the figure shows the results for nonpolitical and political (anti- and pro-PT combined) rumors, while the bottom panel presents the results for anti- and pro-PT political rumors. We include our indicators of partisanship (petismo), antipartisanship (antipetismo), and partisan leaning14 (with true nonpartisans as the baseline) to assess the role of motivated partisan reasoning in rumor holding.15 We also include measures of dogmatism, disengagement, political interest, an indicator of college education, as well as income, sex, and age as controls. The figure shows the maximum change in the probability of accepting a rumor based on probit models.16

The results in Figure 3 show that partisanship is consistently associated with rumor acceptance, as proposed by Hypothesis 1. Regarding nonpolitical rumors, partisan leaners are more likely than true nonpartisans to accept rumors, but not more or less likely than partisans and antipartisans. No other variable is clearly associated with nonpolitical rumor acceptance.

14 We combine PT and anti-PT leaners because these groups are too small separately for meaningful analysis.

15 The analyses in the paper exclude the small group of other partisans and other antipartisans for the sake of simplicity. The results do not change substantially with the inclusion of those respondents.

16 See SI Table H1 for results.
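One common way to compute a "maximum change in probability" of the kind reported in Figure 3 is to move each covariate from its minimum to its maximum while holding the other variables at their observed values, and average the resulting change in predicted probabilities. The sketch below (our assumption about the computation, with hypothetical file and variable names) illustrates this for the probit specification with controls:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_rumors.csv")  # hypothetical long-format file

# Probit of belief on partisan-group indicators and controls
m = smf.probit(
    "belief ~ petista + antipetista + leaner + dogmatism + disengagement"
    " + college + interest + income + sex + age",
    data=df,
).fit()

def max_change(model, data, var):
    """Average change in Pr(belief) when `var` moves from its minimum to its
    maximum, holding all other covariates at their observed values."""
    lo, hi = data.copy(), data.copy()
    lo[var], hi[var] = data[var].min(), data[var].max()
    return (model.predict(hi) - model.predict(lo)).mean()

for var in ["petista", "antipetista", "leaner", "dogmatism", "age"]:
    print(var, round(max_change(m, df, var), 3))
```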

Figure 3: Correlates of Belief by Type of Rumor (Maximum Change in Probability of Belief), October 2018
[Four coefficient plots (panels: Nonpolitical Rumors, Political Rumors, Anti-PT Rumors, Pro-PT Rumors) showing, for each covariate (petista, antipetista, leaner, dogmatism, disengagement, college degree, interest, income, sex, age), the maximum change in the probability of belief, on a scale from -.4 to .4.]

For pro-PT and anti-PT rumors combined, petistas and antipetistas are more likely than nonpartisans (but not than leaners) to accept false information. Notably, and corroborating Hypothesis 1, the bottom two plots in the figure show that petistas are more likely than antipetistas and nonpartisans to believe that a positive rumor about the PT is true, while antipetistas are more likely than petistas and nonpartisans to believe that a negative rumor about the PT is true. The remaining variables included in the models do not show consistent relationships with rumor acceptance, with only college degree and age having a negative relationship with believing positive false information about the PT.

Do Corrections Work?

What kills a rumor in the Brazilian electoral context? In this section, we test whether third-party fact-checking corrections increase rumor rejection among the public (Hypothesis 2) and whether partisan attitudes moderate those effects (Hypothesis 3). To further test Hypothesis 3 and better understand what is behind the effectiveness of third-party fact-checking corrections, we include the results for nonpolitical fake news. The comparison between political and nonpolitical fake news serves two purposes. First, it helps assess whether the effectiveness of fact-checking corrections is a function of perceived political biases. Fact-checking corrections are more likely to be perceived as politically motivated when they deal with political rather than nonpolitical fake news. Therefore, finding that fact-checking corrections are more effective against the latter type of false information, which is politically neutral, would suggest that perceived political biases in fact-checking undermine their effectiveness.

Second, to test Hypothesis 3 we rely on a nonexperimental comparison between treatment effects for partisan and nonpartisan respondents—a conditional average treatment effect. It is possible that other factors associated with partisanship, rather than partisanship itself, are driving any differences we might find. The comparison between political and nonpolitical fake news helps us interpret whether partisan attitudes drive the effectiveness of third-party fact-checking corrections. If we find that partisanship changes fact-checking effectiveness for political but not for nonpolitical fake news, we could interpret it as suggestive evidence that partisanship—and not unobserved covariates—is driving this relationship.

We present results from linear probability models estimating the effects of corrections on rumor rejection. All models include pre-treatment covariates that are typically predictive of belief in fake news—disengagement, partisanship, age, sex, education, and interest in politics—as well as rumor fixed effects.17
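A sketch of this specification, under the same hypothetical long-format data as above (the robust-variance choice is our assumption; the paper does not specify it):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_rumors.csv")  # hypothetical long-format file

# Linear probability model: rumor rejection (0/1) on the correction treatment,
# pre-treatment covariates, and rumor fixed effects via C(rumor_id)
m = smf.ols(
    "reject ~ correction + disengagement + petista + antipetista + leaner"
    " + age + sex + college + interest + C(rumor_id)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust SEs for a binary outcome

print(m.params["correction"], m.bse["correction"], m.pvalues["correction"])
```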

17 See SI Appendix G for balance tests. These tests suggest that randomization was successful. SI Appendix I shows that models without controls present very similar results to models including pre-treatment covariates.

Figure 4 examines whether third-party fact-checking increases rumor rejection in the Brazilian context. The figure shows estimates of the effect (marginal effects with Bonferroni correction) of fact-checking corrections separated by type of rumor (nonpolitical and political; anti-PT and pro-PT) for all respondents combined. As the results show, fact-checking corrections do not increase the rejection of misinformation—for neither nonpolitical nor political rumors. Our results do not support Hypothesis 2: fact-checking corrections do not decrease beliefs in political rumors, and they are also ineffective against nonpolitical ones.

Figure 4: Marginal Effect of Fact-Checking Corrections by Type of Rumor, October 2018
[Plot of marginal effects of fact-checking corrections, on a scale from -.3 to .3, for nonpolitical, political, anti-PT, and pro-PT rumors.]

Figure 5 tests whether the effectiveness of corrective information on political and nonpolitical rumors depends on partisan attitudes, per Hypothesis 3. The panels show the marginal effects (with Bonferroni correction) of fact-checking corrections by type of rumor and by partisan group (nonpartisans, leaners, antipetistas, and petistas). Overall, corrections seem to be ineffective across the board. Regarding nonpolitical rumors, corrections do not work against fake news in any of the partisanship groups.
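A sketch of the subgroup analysis: estimating the correction effect within each partisan group and applying a Bonferroni adjustment across the four tests (an equivalent route is a fully interacted model); the file and variable names remain hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("survey_rumors.csv")  # hypothetical long-format file

# Correction effect within each partisan group, with rumor fixed effects
groups = ["nonpartisan", "leaner", "antipetista", "petista"]
effects, pvals = [], []
for g in groups:
    sub = df[df["partisan_group"] == g]
    m_g = smf.ols("reject ~ correction + C(rumor_id)",
                  data=sub).fit(cov_type="HC1")
    effects.append(m_g.params["correction"])
    pvals.append(m_g.pvalues["correction"])

# Bonferroni correction across the four subgroup tests
_, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for g, b, p in zip(groups, effects, p_adj):
    print(f"{g}: effect = {b:.3f}, Bonferroni-adjusted p = {p:.3f}")
```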


Figure 5: Marginal Effect of Fact-Checking Corrections by Type of Rumor and Partisanship, October 2018
[Four panels (Nonpolitical Rumor, Political Rumor, Anti-PT Rumor, Pro-PT Rumor) plotting marginal effects of fact-checking corrections, on a scale from -.4 to .4, for nonpartisans, leaners, antipetistas, and petistas.]

The results for political rumors (aggregated) are only suggestive of the patterns expected in Hypothesis 3. Nonpartisans seem more receptive to corrective information than leaners, partisans, and antipartisans for political rumors in general. However, while the interaction coefficients are statistically significant in the regression model (p < .05), they are not when correcting for multiple comparisons. A similar pattern holds for positive rumors, while correction effects are null for negative rumors. In sum, support for Hypothesis 3 is weak.

All in all, the results in Figure 5 point to a clear conclusion about the effectiveness of third-party fact-checking in the 2018 Brazilian elections: corrective information does not change rumor acceptance. The results are similar across different types of rumors and across partisan groups. Even though the marginal effects tend to be larger among nonpartisans than among leaners, antipetistas, and petistas for political rumors, they are suggestive of moderation at best, since they are not statistically different when we correct for multiple comparisons and the estimates for each partisan group are not statistically different from one another.

Discussion

The findings presented here are partially consistent with previous findings on the correlates of misperceptions. Partisans and antipartisans in Brazil are more likely to believe rumors that confirm their partisan attitudes. Yet, in contrast to previous studies, we do not find evidence that dogmatism and disengagement, or demographic covariates such as sex, age, and education, predict belief in fake news. We also find that corrections do not dissuade Brazilian voters from believing false information. To better understand where these findings stand in the empirical literature, we conducted a meta-analysis of similar experimental studies published in leading political science journals.18 We found that, on average, 43% of subjects in these studies believed in false rumors, while about 30% of our subjects believe in fake news.19 This finding is consistent with the claim that lower levels of partisanship in Brazil reduce belief in false news stories.

Fact-checking corrections in our study are notably weak. Even corrections that confirm subjects' partisan attitudes are disregarded in our study. Additionally, while the distinction between pro-PT and anti-PT rumors in the experimental design is included to allow us to examine partisans and antipartisans symmetrically, literature from social psychology suggests that negative stimuli (news) tend to be stronger than positive stimuli in shaping individuals' perceptions and behaviors (Ito, Larsen, Smith and Cacioppo 1998; Baumeister, Bratslavsky and Finkenauer 2001). Similar findings are often observed regarding political attitudes and information (Soroka and McAdams 2015; Lau et al. 2017). However, we do not observe systematic differences in the effectiveness of fact-checking between negative (anti-PT) and positive (pro-PT) fake news.

18 See SI Appendix K for details.
19 We find a similar (32%) and statistically indistinguishable share if we remove nonpolitical rumors.

Figure 6: Belief in Misinformation Across Studies
[Dot plot of mean belief in fake news (roughly 0.2 to 0.6) across Nyhan and Reifler 2010 (1, 2); Berinsky 2017 (1, 2); Huang 2017 (1, 2); Clayton et al. 2019; Nyhan et al. 2019 (1, 2); Wood and Porter 2019; and AUTHORS 2020.]
Note: Proportion of subjects in the control group who believe in false stories. Studies that used non-dichotomous scales were recoded by splitting at the mid-point.

The results of the meta-analysis presented in Figure 7 show that, unlike what we find in our study, corrections in similar studies tend to work; the overall effect suggests that fact-checking leads to an increase in the rejection of misinformation (r = 0.654, CI [0.516, 0.792]). Other meta-analyses contrasting political and nonpolitical rumors confirm that fact-checking can be effective (Walter et al. Forthcoming), but suggest that the effects of corrections are somewhat smaller (r = 0.29, CI [0.23, 0.36]) than what has been published in leading political science journals. Our evidence suggests that fact-checking corrections are ineffective in Brazil and also statistically different from the combined estimates from previous studies.20

The features of our design do not seem to explain the ineffectiveness of corrections. First, the sample size in our study (2,263) is larger than the average sample size of the studies included in the meta-analysis (1,636). Second, although the literature on the effectiveness of fact-checking suggests that features of corrections can maximize the chances of rumor rejection (Nieminen and Rapeli 2019), a meta-analysis of 20 studies that use fact-checking to dispel misinformation finds that there "was no significant effect of message length on the efficacy of fact-checking" (Walter et al. Forthcoming, p. 16) and that lexical complexity (a series of indicators of complexity of the text) reduces the impact of corrections. Visual aids (logos) do not tend to make messages more effective (Walter et al. Forthcoming, p. 17). All in all, simpler corrective messages were, in fact, more effective than, or at least as effective as, more complex corrections. Finally, while Young et al. (2018) find that corrections in the form of videos can be more effective than other forms of corrections, the literature discussed earlier relies primarily on text corrections (Tucker et al. 2018). Moreover, the combination of text and image was the most common type of media content spread on social media during the 2018 elections in Brazil (Machado et al. 2019). Because we conducted a face-to-face survey in which the interviewer conveyed a simple corrective message directly to the subject, there is no reason to believe that our correction is a priori weaker than the corrections used in other studies.

Moreover, we do not believe that the absence of published studies documenting ineffective corrections is a consequence of publication bias. Although null results are difficult to publish, which could lead the meta-analysis to overestimate the effects, we find no systematic evidence of publication bias. We formally examine this possibility in SI Figure K2 using a p-curve, and we find little evidence of p-hacking and publication bias (no clustering around 0.05) (Simonsohn, Nelson and Simmons 2014). Instead, the p-curve is right-skewed, as one would expect when these are true effects.21
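For readers who want to reproduce the pooling behind Figure 7, the following is a self-contained DerSimonian-Laird random-effects sketch; the input values below are illustrative, not the actual study-level estimates:

```python
import numpy as np

def dersimonian_laird(te, se):
    """Random-effects pooled effect (DerSimonian-Laird tau^2) from
    study-level effects `te` and standard errors `se`."""
    te, se = np.asarray(te), np.asarray(se)
    w = 1 / se**2                                  # fixed-effect weights
    te_fixed = np.sum(w * te) / np.sum(w)
    q = np.sum(w * (te - te_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(te) - 1)) / c)       # between-study variance
    w_re = 1 / (se**2 + tau2)                      # random-effects weights
    pooled = np.sum(w_re * te) / np.sum(w_re)
    se_pooled = np.sqrt(1 / np.sum(w_re))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Toy inputs in the spirit of Figure 7 (illustrative, not the actual data)
est, lo, hi = dersimonian_laird([0.26, 0.30, 0.66, 1.05, 0.05],
                                [0.122, 0.122, 0.071, 0.032, 0.055])
print(f"pooled SMD = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```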

20 In the SI we compare our meta-analysis to published meta-analyses.
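A simplified version of the p-curve's binomial right-skew check is sketched below; the full procedure in Simonsohn, Nelson and Simmons (2014) also uses continuous tests, and the p-values here are illustrative rather than the study-level values from SI Figure K2:

```python
import numpy as np
from scipy.stats import binomtest

def pcurve_right_skew(pvals):
    """Among significant results (p < .05), true effects should pile up at
    small p-values (a right-skewed p-curve), so more than half should fall
    below .025 (Simonsohn, Nelson and Simmons 2014)."""
    sig = np.asarray([p for p in pvals if p < 0.05])
    k = int(np.sum(sig < 0.025))
    test = binomtest(k, n=len(sig), p=0.5, alternative="greater")
    return k, len(sig), test.pvalue

# Illustrative p-values (not the actual values underlying SI Figure K2)
k, n, p = pcurve_right_skew([0.001, 0.003, 0.004, 0.012, 0.020, 0.031, 0.049])
print(f"{k}/{n} significant p-values below .025; binomial p = {p:.3f}")
```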

Figure 7: Meta-Analysis of Fact-Checking in Political Science
[Forest plot of study-level standardized mean differences (SMD) of fact-checking corrections, with 95% confidence intervals and weights, for Huang 2017 (China), Berinsky 2017 (US), Nyhan and Reifler 2010 (US), Nyhan et al. 2019 (US), Clayton et al. 2019 (US), Wood and Porter 2019 (US), and AUTHORS 2020 (Brazil). Study-level SMDs range from -0.08 to 1.68; the AUTHORS 2020 (Brazil) estimate is 0.05 [-0.05; 0.16]. Random effects model: 0.65 [0.52; 0.79]; 95% PI [-0.24; 1.55]; heterogeneity: I² = 99%, τ² = 0.1910, p = 0.]

Therefore, the evidence documenting the ineffectiveness of corrections in Brazil systematically differs from the findings of the existing literature. Reassuringly, our results are consistent with the recent study by Carey et al. (2020) on rumors about disease outbreaks in Brazil, which suggests that our findings are not a fluke.

What, then, explains the ineffectiveness of third-party fact-checking corrections against fake news in the Brazilian electoral context? We systematically examine whether the effectiveness of fact-checking corrections varies by dogmatism, disengagement, education, political interest, age, and sex—factors identified in previous research as influencing subjects' receptiveness to corrective information. However, the analyses included in SI Appendix J show neither substantive nor significant results, which suggests that the effectiveness of fact-checking corrections does not depend on any of these factors.

Unlike most studies included in the meta-analysis, our study took place just a few days before Election Day. Since elections tend to increase the strength of group identities and partisan attitudes (Huddy, Mason and Aaroe 2015), it is possible that these factors reduced the power of corrections. However, results from our meta-analysis presented in SI Figure K3 suggest that the overall treatment effect of studies conducted during elections (r = 0.6, CI [0.22, 0.98]) was larger than the effect found in our study.22 Therefore, the proximity of elections does not appear to completely explain our findings regarding the ineffectiveness of fact-checking corrections.

Another possible explanation for these null findings could be a lack of trust in the media and in fact-checking agencies. Brazil's highly politicized online environment may have undermined the credibility of fact-checking agencies, thus motivating individuals to disregard counter-attitudinal corrections to political fake news (Shin and Thorson 2017).

21 See the SI for additional tests of publication bias.

22 Walter and Murphy (2018) find that the corrective effects of fact-checking are smaller during elections, but not null.

Moreover, previous research seems to have underestimated the extent to which citizens in developing democracies—particularly those who identify as nonpartisans—are capable of taking sides and behaving like partisans during elections (Cornejo 2019; Baker and Rennó 2019). It is also possible that younger and less developed democracies have different online environments and lower levels of media literacy, which could reduce the power of corrections. Unfortunately, we do not have the appropriate data to test these explanations and to provide a clear answer to the question of why fact-checking corrections to fake news lack effectiveness in Brazil. Future research should test these arguments empirically and foster a better understanding of the dynamics of political misinformation across contexts.

Conclusion

Political rumors in the Brazilian context thrive among partisans. We find that partisan attachments are the main determinant of rumor acceptance, with petistas being the most likely to believe positive rumors and antipetistas being the most likely to believe negative rumors about the Workers' Party and Lula. Our findings also show that fact-checking corrections do not promote the rejection of fake news among Brazilians. Corrective information is unable to change beliefs about pro-PT and anti-PT fake news, as well as fake news unrelated to politics.

Overall, we find that, in line with studies conducted in developed nations, belief in misinformation is higher among individuals with partisan attachments. However, unlike the overall combined estimate from previous studies, fact-checking corrections are notably weak in Brazil, as they appear to fail even when the correction confirms subjects' political preferences. We find suggestive evidence that corrective information may work among true nonpartisans, a subset of nonpartisans who do not lean towards or against the PT, although the results are mixed. Furthermore, we fail to find evidence that the effectiveness of fact-checking corrections depends on factors such as dogmatism, disengagement, and political interest, or on socioeconomic and demographic characteristics.

It is possible that, under certain circumstances, fact-checking corrections could have larger effects among a larger group of voters, even during the peak of the electoral period. For example, endorsements from unlikely sources may make corrective information more effective (Berinsky 2017b). However, such endorsements may not be easily available in competitive and polarized elections, when we tend to observe a surge in fake news (Resende, Melo, Sousa, Messias, Vasconcelos, Almeida and Benevenuto 2019; Machado et al. 2019). Our study shows a puzzling combination: beliefs in fake news that are less widespread than in other contexts but equally predicted by partisanship, all in a setting typically described as having low levels of partisanship and intense party aversion. Furthermore, our study offers evidence that these beliefs are more resistant to corrections than in other settings. Our findings indicate that we still lack a good understanding of the role partisanship plays in shaping information processing and attitude formation in developing countries. Our findings also raise the possibility that factors related to the media environment and patterns of news consumption may influence the effectiveness of corrective information. We hope that future studies take up these questions.

References

Allcott, Hunt and Matthew Gentzkow. 2017. "Social Media and Fake News in the 2016 Election." The Journal of Economic Perspectives 31(2):211–235.

Baker, Andy, Barry Ames, Anand E. Sokhey and Lúcio Rennó. 2015. “The Dynamics of Partisan Identification When Party Brands Change: The Case of the Workers Party in Brazil.” The Journal of Politics 78(1):197–213.

Baker, Andy and Dalton Dorr. 2019. Mass Partisanship in Three Latin American Democracies. In Campaigns and Voters in Developing Democracies, ed. Noam Lupu, Virginia Oliveros and Luis Schiumerini. University of Michigan Press, chapter 5.

Baker, Andy and Lúcio Rennó. 2019. "Nonpartisans as False Negatives: The Mismeasurement of Party Identification in Public Opinion Surveys." The Journal of Politics 81(3):906–922.

Baumeister, Roy F., Ellen Bratslavsky and Catrin Finkenauer. 2001. "Bad Is Stronger Than Good." Review of General Psychology 5(4):323–370.

Berinsky, Adam. 2017a. “Telling the Truth about Believing the Lies? Evidence for the Limited Prevalence of Expressive Survey Responding.” The Journal of Politics 80(1):211–224.

Berinsky, Adam. 2018. Rumors, Truths, and Reality: A Study of Political Misinformation. Unpublished manuscript. Under advance contract, Princeton University Press.

Berinsky, Adam J. 2017b. “Rumors and Health Care Reform: Experiments in Political Misinformation.” British Journal of Political Science 47(2):241–262.

Bolsen, Toby, James N. Druckman and Fay Lomax Cook. 2014. “The Influence of Partisan Motivated Reasoning on Public Opinion.” Political Behavior 36(2):235–262.

Brader, Ted and Joshua A. Tucker. 2001. "The Emergence of Mass Partisanship in Russia, 1993-1996." American Journal of Political Science 45(1):69–83.

Bullock, John G., Alan S. Gerber, Seth J. Hill and Gregory A. Huber. 2015. “Partisan Bias in Factual Beliefs about Politics.” Quarterly Journal of Political Science 10(4):519–578.

Carey, John M., Victoria Chi, D. J. Flynn, Brendan Nyhan and Thomas Zeitzoff. 2020. "The effects of corrective information about disease epidemics and outbreaks: Evidence from Zika and yellow fever in Brazil." Science Advances 6(5):1–10.

Cornejo, Rodrigo Castro. 2019. “Partisanship and Question-Wording Effects: Experi- mental Evidence from Latin America.” Public Opinion Quarterly 83(1):26–45.

Dalton, Russell J. and Steven Weldon. 2007. "Partisanship and Party System Institutionalization." Party Politics 13(2):179–186.

Flynn, D.J., Brendan Nyhan and Jason Reifler. 2017. "The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics." Political Psychology 38(S1):127–150.

Fridkin, Kim, Patrick J. Kenney and Amanda Wintersieck. 2015. “Liar, Liar, Pants on Fire: How Fact-Checking Influences Citizens’ Reactions to Negative Advertising.” Political Communication 32(1):127–151.

Huber, John D., Georgia Kernell and Eduardo L. Leoni. 2005. “Institutional Context, Cognitive Resources and Party Attachments Across Democracies.” Political Analysis 13(4):365–386.

Huddy, Leonie, Lilliana Mason and Lene Aarøe. 2015. "Expressive Partisanship: Campaign Involvement, Political Emotion, and Partisan Identity." American Political Science Review 109(1):1–17.

Ito, Tiffany A., Jeff T. Larsen, N. Kyle Smith and John T. Cacioppo. 1998. "Negative Information Weighs More Heavily on the Brain: The Negativity Bias in Evaluative Categorizations." Journal of Personality and Social Psychology 75(4):887–900.

Lau, Richard R., David J. Andersen, Tessa M. Ditonto, Mona S. Kleinberg and David P. Redlawsk. 2017. "Effect of Media Environment Diversity and Advertising Tone on Information Search, Selective Exposure, and Affective Polarization." Political Behavior 39(1):231–255.

Lazer, David M. J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts and Jonathan L. Zittrain. 2018. “The science of fake news.” Science 359(6380):1094–1096.

Little, Andrew T. 2018. “Fake news, propaganda, and lies can be pervasive even if they aren’t persuasive.” Critique 11(1):21–34.

Lupu, Noam. 2015. Partisanship in Latin America. In The Latin American Voter: Pursuing Representation and Accountability in Challenging Contexts, ed. Ryan E. Carlin, Matthew M. Singer and Elizabeth J. Zechmeister. University of Michigan Press.

Machado, Caio, Beatriz Kira, Vidya Narayanan, Bence Kollanyi and Philip N. Howard. 2019. A Study of Misinformation in WhatsApp Groups with a Focus on the Brazilian Presidential Elections. In Proceedings of The Web Conference. WWW'19, San Francisco, USA.

Margolin, Drew B., Aniko Hannak and Ingmar Weber. 2018. "Political Fact-Checking on Twitter: When Do Corrections Have an Effect?" Political Communication 35(2):196–219.

Massuchin, Michele Goulart, Isabele Batista Mitozo and Fernanda Cavassana de Carvalho. 2017. "Eleições e debate político on-line em 2014: os comentários no Facebook do jornal O Estado de São Paulo." Revista Brasileira de Ciência Política 23:295–320.

Mendonça, Ricardo Fabrino and Ernesto F. L. Amaral. 2016. "Racionalidade online: provimento de razões em discussões virtuais." Opinião Pública 22(2):418–445.

Michelitch, Kristin and Stephen M. Utych. 2018. "Electoral Cycle Fluctuations in Partisanship: Global Evidence from Eighty-Six Countries." The Journal of Politics 80(2):413–427.

Mitozo, Isabele Batista, Michele Goulart Massuchin and Fernanda Cavassana de Carvalho. 2017. "Debate político-eleitoral no Facebook: os comentários do público em posts jornalísticos na eleição presidencial de 2014." Opinião Pública 23(2):459–484.

Nieminen, Sakari and Lauri Rapeli. 2019. "Fighting Misperceptions and Doubting Journalists' Objectivity: A Review of Fact-checking Literature." Political Studies Review 17(3):296–309.

Nyhan, Brendan and Jason Reifler. 2010. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior 32(2):303–330.

Resende, Gustavo, Philipe Melo, Hugo Sousa, Johnnatan Messias, Marisa Vasconcelos, Jussara Almeida and Fabricio Benevenuto. 2019. (Mis)Information Dissemination in WhatsApp: Gathering, Analyzing and Countermeasures. In Proceedings of The Web Conference. WWW’19.

Resende, Gustavo, Philipe Melo, Julio C. S. Reis, Marisa Vasconcelos, Jussara Almeida and Fabricio Benevenuto. 2019. Analyzing Textual (Mis)Information Shared in WhatsApp Groups. In Proceedings of the ACM Conference on Web Science. WebSci'19.

Samuels, David. 2006. “Sources of Mass Partisanship in Brazil.” Latin American Politics and Society 48(2):1–27.

Samuels, David and Cesar Zucco. 2014. “The Power of Partisanship in Brazil: Evidence from Survey Experiments.” American Journal of Political Science 58(1):212–225.

Samuels, David and Cesar Zucco. 2018. Partisans, Antipartisans, and Nonpartisans: Voting Behavior in Brazil. Cambridge University Press.

Shin, Jieun and Kjerstin Thorson. 2017. "Partisan Selective Sharing: The Biased Diffusion of Fact-Checking Messages on Social Media." Journal of Communication 67(2):233–255.

Simonsohn, Uri, Leif D. Nelson and Joseph P. Simmons. 2014. "p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results." Perspectives on Psychological Science 9(6):666–681.

Soroka, Stuart and Stephen McAdams. 2015. "News, Politics, and Negativity." Political Communication 32(1):1–22.

Taber, Charles S. and Milton Lodge. 2006. “Motivated Skepticism in the Evaluation of Political Beliefs.” American Journal of Political Science 50(3):755–769.

Thorson, Emily. 2016. "Belief Echoes: The Persistent Effects of Corrected Misinformation." Political Communication 33(3):460–480.

Tucker, Joshua A., Andrew Guess, Pablo Barbera, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal and Brendan Nyhan. 2018. "Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature." Unpublished manuscript. URL: https://www.hewlett.org/wp-content/uploads/2018/03/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf

Walter, Nathan, Jonathan Cohen, R. Lance Holbert and Yasmin Morag. Forthcoming. "Fact-Checking: A Meta-Analysis of What Works and for Whom." Political Communication 37(3):1–26.

Walter, Nathan and Sheila T. Murphy. 2018. “How to unring the bell: A meta-analytic approach to correction of misinformation.” Communication Monographs 85(3):423–441.

Young, Dannagal G., Kathleen Hall Jamieson, Shannon Poulsen and Abigail Goldring. 2018. "Fact-Checking Effectiveness as a Function of Format and Tone: Evaluating FactCheck.org and FlackCheck.org." Journalism & Mass Communication Quarterly 95(1):49–75.

Supporting Information (SI)

Appendix A Addressing Ethical Concerns

Our study exposed subjects to fake news, which raises valid and important ethical concerns. We believe that participating in the study did not cause any harm to subjects for the following reasons: first, the fake news stories used in the study had circulated widely in Brazil and were not fabricated by the researchers; second, we did not share with subjects any leaflets or messages that would have allowed them to further disseminate these stories; and third, all participants were debriefed upon completion of the survey and received the following message: "During this interview, we asked you about a fake news piece about Lula or the PT that circulated on the internet and on social media. We want to make it clear that this news story is false, according to reliable sources that present convincing evidence of such. We asked your opinion on this fake news as part of a study that seeks to understand how people form their opinions." We also instructed interviewers to repeat this explanation to participants, if necessary, to ensure that they understood that the news piece presented to them was false. This type of research design, in which subjects are exposed to either existing or hypothetical false information, is common in studies on misinformation and, as far as we know, has no documented harmful consequences for subjects.

The authors of this paper worked as consultants for the survey company omitted in the design and implementation of the study. Because we did not hire a polling firm to conduct our study and did not have full control over the timing of the implementation of the survey, we did not have enough time to request IRB approval in Brazil. Ethics reviews, particularly for research in the humanities, often take very long to be processed and granted at Brazilian public universities. We did submit our research to the IRBs of both omitted and omitted. They deemed that the project did not require IRB review because of the partnership between the researchers and the survey company omitted, because our experiment was embedded in a larger survey conducted by omitted, and because the researchers had no access to identifiable data.

Appendix B Study Design

Table B1: Study Design (Simple Randomization)

                 Nonpolitical   Pro-PT   Anti-PT
No correction    373            347      374
Correction       363            370      409
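To see why simple randomization produces the unequal cell counts above, consider the following minimal sketch; the seed is illustrative, this is not the authors' assignment code, and only the total sample size is taken from Table C1.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2018)  # illustrative seed
    n = 2236                           # total sample size reported in Table C1

    # Under simple randomization, each respondent is assigned independently,
    # so the six cell counts are close to, but not exactly, n/6.
    rumor = rng.choice(["Nonpolitical", "Pro-PT", "Anti-PT"], size=n)
    correction = rng.choice(["No correction", "Correction"], size=n)
    print(pd.crosstab(correction, rumor))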

Appendix C Descriptive Statistics for Samples

Table C1: Descriptive Statistics for Variables Used in Study

Independent Variable   Mean   SD    Min.  Max.  n
Dogmatism              .60    .49   0     1     2,087
Disengagement          .64    .38   0     1     2,134
Petista                .16    .37   0     1     2,236
Antipetista            .21    .41   0     1     2,236
Leaner                 .18    .38   0     1     2,236
Nonpartisan            .34    .47   0     1     2,236
College Degree*        .17    .37   0     1     2,236
Political Interest     .49    .37   0     1     2,226
Income**               .45    .32   0     1     2,236
Sex (male)             .48    .50   0     1     2,236
Age***                 .44    .26   0     1     2,236

*College degree is measured by an ordinal item asking respondents about the highest level of formal education completed, with four response options: below primary, primary, secondary, and college. **Income is measured by an ordinal item asking respondents about their monthly family income, with eight response categories covering income ranges. ***Age is measured by a standard question asking respondents how old they are.
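The 0-1 ranges above reflect items rescaled to the unit interval. A minimal sketch of that rescaling, assuming a pandas DataFrame df holding the raw ordinal codes (the column names are illustrative, not the authors' variable names):

    import pandas as pd

    def rescale01(s: pd.Series) -> pd.Series:
        # Map an ordinal or numeric item so its minimum is 0 and maximum is 1.
        return (s - s.min()) / (s.max() - s.min())

    # Assumes `df` is a pandas DataFrame with the raw survey items.
    for col in ["income", "age", "political_interest", "disengagement"]:
        df[col] = rescale01(df[col])
    print(df[["income", "age"]].agg(["mean", "std", "min", "max"]))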

Appendix D Rates of Belief in Specific Rumors

Table D1: Rumor Acceptance in the Survey

Rumor | Type | True | False | DK | Belief (DK respondents)
Messi and other soccer players spent 37 thousand Euros in a single party night | Nonpolitical | .21 | .72 | .07 | .40
Actress and model Daniella Cicarelli has six toes on her foot | Nonpolitical | .29 | .52 | .19 | .44
English magazine chooses Lula as the world's most corrupt President | Negative | .32 | .57 | .11 | .14
Senator Fátima Bezerra (PT) wants to allow internet Wi-Fi in Brazilian penitentiaries | Negative | .18 | .71 | .11 | .20
Pope Francis declared that the biggest crime of Lula was to fight hunger in the world | Positive | .57 | .30 | .13 | .62
A child attempts to wipe Lula's tears in a TV set | Positive | .22 | .61 | .17 | .21

Appendix E Instrumentation (administered in Portuguese)

Pre-Treatment

[Q-interesse]: How interested are you in politics? Would you say you are very interested, interested, a little interested, or not at all interested?
• Very interested
• Interested
• A little interested
• Not at all interested
• Does not know (spontaneous)
• Did not answer (spontaneous)

[Q-simpatia]: Do you feel SYMPATHY for any political party? (If so) Which one? (open-ended)

[Q-rejeição]: And is there any political party that you DO NOT LIKE at all? (If so) Which one? (open-ended)

[Q-pt2]: Which of the following sentences best describes how you feel about the PT:
• I detest the PT
• I do not like the PT, but I do not go as far as detesting it
• I neither like nor dislike the PT
• I like the PT, but I do not feel I am a petista, OR
• I am a petista?

[Q-dogmatism1]: Some people say that, when it comes to politics, it is important to remember that there is more than one side to most issues. Other people say that it is better to take positions for or against, without sitting on the fence. What do you think?
• Always remember that there is more than one side to most issues, or
• Take positions for or against without sitting on the fence
• Does not know (spontaneous)
• Did not answer (spontaneous)

[Q-dogmatism2]: Still on issues that are important for Brazil, do you believe it is better to "remain undecided" or to "take a position on the issue even if it is wrong"?
• Remain undecided, or
• Take a position on the issue even if it is wrong
• Does not know (spontaneous)
• Did not answer (spontaneous)

[Q-dogmatism3]: And when it comes to important questions about religion and philosophy, would you say that you "do not mind remaining undecided about them," or do you think that "one has to make up one's mind, one way or the other"?

• Does not mind remaining undecided about them, or
• One has to make up one's mind, one way or the other
• Does not know (spontaneous)
• Did not answer (spontaneous)

I would like you to tell me whether you disagree or agree with each of the statements I am going to read. (IF AGREES, ASK): Do you agree strongly or agree somewhat? (IF DISAGREES, ASK): Do you disagree somewhat or disagree strongly? (NEITHER AGREES NOR DISAGREES IS A SPONTANEOUS ANSWER)

[Q-disengagement1]: Politicians are only interested in winning elections and staying in power.

[Q-disengagement2]: Politicians are disconnected from the real world.

[Q-disengagement3]: Government policies will be the same regardless of who is in power.

Appendix E.1 Experimental Manipulation

Interviewer: The following news story recently circulated on Facebook and WhatsApp. [SHOW THE TABLET SCREEN TO THE RESPONDENT. MAKE SURE THE RESPONDENT CAN SEE THE CONTENT ON THE SCREEN. READ THE NEWS HEADLINE ALOUD.]

Correction Vignette:

1. Fact-Checking: Websites such as UOL have specialized in investigating whether news stories that circulate on the internet and on social media such as Facebook are in fact true. At least one of these websites has stated that this news story is false. Fake news stories like this one are fabricated by people who want to distort reality and spread wrong information on social media.

Appendix E.2 Post-Treatment (distractors)

[Q-read]: How often do you usually read news via WhatsApp and Facebook?

• Always
• Very often
• Sometimes
• Rarely
• Never
• Does not know (spontaneous)
• Did not answer (spontaneous)

[Q-share]: How often do you usually forward or share news via WhatsApp and Facebook?

• Always
• Very often
• Sometimes
• Rarely
• Never
• Does not know (spontaneous)
• Did not answer (spontaneous)

Appendix E.3 Post-Treatment - Belief

[Q-belief-np]: Do you believe that [...]?

• Yes
• No
• Do not know
• Did not answer

Appendix F Fake News Used in the Study

Figure F1: Nonpolitical Rumor I “Messi and other soccer players spend 37,000 euros in one night”

Figure F2: Nonpolitical Rumor II “Actress and model Daniella Cicarelli has six toes”

Figure F3: Anti-PT Rumor I “British magazine elects Lula as the world’s most corrupt president”

Figure F4: Anti-PT Rumor II “Senator Fátima Bezerra (PT) wants to authorize the use of Wi-Fi internet in Brazilian prisons”

Figure F5: Pro-PT Rumor I “Pope Francis said that Lula’s biggest crime was attempting to fight hunger in the world”

Figure F6: Pro-PT Rumor II “A child wipes Lula’s tears on TV”

Appendix G Balance Checks

Table G1: Mean Comparisons (OLS regression models) of Pre-Treatment Covariates Between Treatment Groups by Type of Rumor

Pre-Treatment Covariate   Nonpolitical   Negative     Positive     All Rumors
Income                     .00 (.02)     -.02 (.02)   -.00 (.02)   -.01 (.01)
Education                 -.00 (.08)     -.15 (.08)    .03 (.07)   -.04 (.04)
Sex                       -.02 (.04)      .02 (.04)    .01 (.04)    .00 (.02)
Age                       -.02 (.02)      .02 (.02)   -.02 (.02)   -.01 (.01)
Political Interest         .01 (.03)     -.04 (.03)    .02 (.03)   -.00 (.01)
Dogmatism                 -.03 (.04)     -.02 (.04)    .01 (.04)   -.01 (.02)
Disengagement              .06* (.03)     .02 (.03)    .00 (.03)    .03 (.02)
Petista                   -.03 (.03)      .04 (.03)    .08* (.03)   .03* (.02)
Antipetista               -.07* (.03)     .04 (.03)   -.02 (.03)   -.02 (.02)
Leaner                     .05 (.03)      .00 (.03)   -.04 (.03)    .00 (.02)
Nonpartisan                .07 (.04)     -.05 (.04)   -.09* (.04)  -.02 (.02)
*p<0.05. Robust standard errors in parentheses.

Table G2: Probit Models of Correlates of Treatment by Type of Rumor (Joint Significance Tests)

Pre-Treatment Covariate   Nonpolitical   Negative     Positive     All Rumors
Income                     .00 (.17)      .00 (.17)    .03 (.16)   -.01 (.10)
Education                  .02 (.06)     -.10 (.06)    .03 (.06)   -.01 (.03)
Sex                       -.03 (.11)      .08 (.11)    .05 (.10)    .03 (.06)
Age                       -.25 (.23)     -.15 (.23)   -.22 (.22)   -.17 (.13)
Political Interest         .14 (.15)     -.20 (.15)    .00 (.15)   -.03 (.09)
Dogmatism                  .01 (.11)     -.07 (.11)   -.01 (.11)   -.01 (.06)
Disengagement              .35* (.15)    -.03 (.14)    .11 (.14)    .15 (.08)
Petista                   -.22 (.15)      .21 (.16)    .36* (.15)   .13 (.08)
Antipetista               -.31* (.14)     .30* (.14)  -.00 (.14)   -.01 (.08)
Leaner                     .05 (.15)      .17 (.15)   -.13 (.14)    .01 (.08)
Constant                  -.20 (.22)      .23 (.23)   -.09 (.22)   -.03 (.13)
n                          579            581          614          1,774
Pseudo R2                  .02            .01          .01          .00
Chi-Square (joint sig.)    15.19          9.53         11.86        8.38
*p<0.05. Standard errors in parentheses.
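For readers who want to see the mechanics of these diagnostics, here is a minimal sketch in Python, assuming a pandas DataFrame df with a binary treated indicator and the covariates above; the variable names are illustrative and this is not the authors' code.

    import statsmodels.api as sm

    covariates = ["income", "education", "sex", "age", "interest",
                  "dogmatism", "disengagement", "petista", "antipetista", "leaner"]

    # (1) Mean comparisons, as in Table G1: regress each covariate on the
    # treatment dummy with heteroskedasticity-robust standard errors.
    for cov in covariates:
        fit = sm.OLS(df[cov], sm.add_constant(df["treated"]),
                     missing="drop").fit(cov_type="HC1")
        print(f"{cov}: diff = {fit.params['treated']:.2f} "
              f"(SE = {fit.bse['treated']:.2f})")

    # (2) Joint significance, as in Table G2: probit of treatment on all
    # covariates; the likelihood-ratio chi-square tests whether the
    # covariates jointly predict assignment.
    probit = sm.Probit(df["treated"], sm.add_constant(df[covariates]),
                       missing="drop").fit(disp=0)
    print(f"LR chi2 = {probit.llr:.2f}, p = {probit.llr_pvalue:.3f}")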

Appendix H Model Specifications

Table H1: Probit Models of Effect of Type of Rumor on Rumor Rejection (Figure 2)

Treatment        Full Sample   Control Group
Anti-PT Rumor    .50* (.07)    .55* (.10)
Pro-PT Rumor     .48* (.07)    .50* (.10)
Constant         .07 (.05)     .10 (.07)
n                1,922         960
Pseudo R2        .02           .03
*p<0.05. Standard errors in parentheses.

Table H2: Probit Models of Correlates of Rumor Acceptance by Type of Rumor (Figure 3)

Pre-Treatment Covariate   Nonpolitical   Political    Negative     Positive
Age                       -.39 (.23)     -.30 (.16)   -.20 (.24)   -.50* (.23)
Sex (male)                -.10 (.12)     -.16 (.08)   -.09 (.12)   -.22 (.12)
Income                    -.08 (.19)      .17 (.13)    .25 (.19)    .12 (.19)
Political Interest         .02 (.17)      .00 (.12)   -.09 (.17)    .17 (.17)
College Education          .07 (.16)     -.23* (.11)  -.04 (.17)   -.41* (.17)
Disengagement             -.00 (.15)     -.01 (.11)   -.06 (.16)    .09 (.16)
Dogmatism                  .04 (.12)     -.11 (.09)   -.19 (.13)   -.03 (.12)
Leaner                     .49* (.16)     .05 (.12)    .04 (.17)    .05 (.17)
Antipetista                .28 (.15)      .28* (.11)   .60* (.15)  -.12 (.16)
Petista                    .18 (.16)      .39* (.12)  -.03 (.18)    .65* (.16)
Constant                  -.12 (.21)     -.46* (.15)  -.46* (.21)  -.50* (.22)
n                          498            1,063        520          543
Pseudo R2                  .02            .01          .01          .05
*p<0.05. Standard errors in parentheses.

Table H3: Probit Model of Effect of Corrections on Rumor Rejection by Type of Rumor (Figure 4)

Independent Variable          (1)           (2)           (3)           (4)
Fact-checking Correction      .06 (.04)     .06 (.04)     .06 (.04)     .06 (.04)
Anti-PT Rumor                 .09* (.04)    .04 (.05)
Pro-PT Rumor                  -.08 (.04)    -.11* (.05)
Political Rumor                                           .01 (.04)     -.04 (.04)
Correction*Anti-PT Rumor      -.04 (.05)    -.03 (.06)
Correction*Pro-PT Rumor       -.03 (.05)    -.01 (.06)
Correction*Political Rumor                                -.03 (.04)    -.02 (.05)
Disengagement                               -.01 (.03)                  -.01 (.03)
Leaner                                      -.05 (.03)                  -.06 (.03)
Antipetista                                 -.09* (.03)                 -.09* (.03)
Petista                                     -.11* (.03)                 -.12* (.03)
Dogmatism                                   .02 (.02)                   .03 (.02)
Age                                         .13* (.04)                  .13* (.05)
Sex                                         .04 (.02)                   .04 (.02)
Income                                      -.01 (.04)                  -.01 (.04)
Interest                                    .02 (.03)                   .02 (.03)
College                                     .03 (.03)                   .03 (.03)
Constant                      .71* (.03)    .70* (.05)    .71* (.03)    .70* (.05)
n                             1,922         1,561         1,922         1,561
R2                            .11           .12           .10           .11
*p<0.05. Standard errors in parentheses. Rumor fixed-effects included but not shown.

Table H4: Probit Model of Effect of Corrections on Rumor Rejection by Type of Rumor and Partisan Attitudes (Figure 5)

Independent Variable      Nonpolitical   Political    Negative     Positive
Correction                 .07 (.07)      .12* (.04)   .07 (.06)    .17* (.06)
Leaner                    -.12 (.08)      .07 (.05)    .03 (.08)    .13 (.07)
Antipetista               -.12 (.07)     -.04 (.05)   -.22* (.08)   .14* (.07)
Petista                   -.01 (.08)     -.09 (.06)    .03 (.08)   -.19* (.09)
Correction*Leaner         -.06 (.12)     -.15* (.08)  -.05 (.11)   -.26* (.11)
Correction*Antipetista     .11 (.10)     -.12 (.07)   -.03 (.10)   -.16 (.09)
Correction*Petista        -.05 (.12)     -.12 (.08)   -.09 (.11)   -.10 (.11)
Disengagement             -.04 (.06)     -.00 (.04)   -.01 (.05)   -.04 (.05)
Dogmatism                  .00 (.04)      .02 (.03)    .05 (.04)   -.01 (.04)
Age                        .18* (.08)     .11* (.05)   .08 (.08)    .16* (.08)
Sex                        .03 (.04)      .05 (.03)    .03 (.04)    .06 (.04)
Income                     .06 (.07)     -.06 (.04)   -.08 (.06)   -.03 (.06)
Interest                   .04 (.06)      .01 (.04)    .04 (.06)   -.05 (.05)
College                   -.06 (.05)      .08* (.04)   .03 (.06)    .12* (.05)
Constant                   .28* (.08)     .69* (.06)   .56* (.08)   .67 (.08)
n                          498            1,063        520          543
Pseudo R2                  .20            .06          .09          .11
*p<0.05. Standard errors in parentheses. Rumor fixed-effects included but not shown.

Table H5: Probit Model of Effect of Corrections on Rumor Rejection by Type of Rumor and Partisan Attitudes (Figure 5 without controls)

Independent Variable      Nonpolitical   Political    Negative     Positive
Correction                 .05 (.07)      .11* (.04)   .07 (.05)    .16* (.06)
Leaner                    -.15* (.07)     .08 (.05)    .05 (.07)    .12 (.07)
Antipetista               -.11 (.07)     -.02 (.05)   -.22* (.07)   .15* (.07)
Petista                    .02 (.08)     -.06 (.06)    .05 (.08)   -.17* (.08)
Correction*Leaner          .05 (.11)     -.17* (.07)  -.09 (.10)   -.28* (.11)
Correction*Antipetista     .10 (.10)     -.12 (.07)   -.02 (.10)   -.17 (.09)
Correction*Petista        -.07 (.11)     -.11 (.08)   -.09 (.11)   -.11 (.11)
Constant                   .38* (.05)     .75* (.04)   .64* (.05)   .73 (.05)
n                          549            1,144        559          585
Pseudo R2                  .19            .04          .08          .08
*p<0.05. Standard errors in parentheses. Rumor fixed-effects included but not shown.
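The specifications in Tables H4 and H5 interact the correction indicator with partisan attitudes. A minimal sketch of such a model, assuming a pandas DataFrame df with a rumor-rejection indicator and a rumor identifier for the fixed effects; the variable names are illustrative, and the authors' exact specification may differ.

    import statsmodels.formula.api as smf

    # Assumes `df` holds a binary `reject` outcome, a `correction` dummy,
    # partisan-attitude dummies, the controls below, and a `rumor_id`
    # column for the rumor fixed effects noted under the tables.
    model = smf.probit(
        "reject ~ correction * (leaner + antipetista + petista)"
        " + disengagement + dogmatism + age + sex + income + interest + college"
        " + C(rumor_id)",
        data=df,
    ).fit(disp=0)
    print(model.summary())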

Appendix I Robustness Checks

Figure I1: Marginal Effect of Fact-Checking Corrections by Type of Rumor, October 2018 (without Bonferroni corrections)

[Plot showing marginal effects (vertical axis, -.3 to .3) for the Nonpolitical, Political, Anti-PT, and Pro-PT rumors.]

Figure I2: Marginal Effect of Fact-Checking Corrections by Type of Rumor and Partisanship, October 2018 (without Bonferroni corrections)

[Four panels: Nonpolitical Rumor, Political Rumor, Anti-PT Rumor, and Pro-PT Rumor; marginal effects (vertical axis, -.4 to .4) shown for Nonpartisans, Leaners, Antipetistas, and Petistas.]

Figure I3: Marginal Effect of Fact-Checking Corrections by Type of Rumor, October 2018 (without controls and Bonferroni corrections)

[Plot showing marginal effects (vertical axis, -.3 to .3) for the Nonpolitical, Political, Anti-PT, and Pro-PT rumors.]

Figure I4: Marginal Effect of Fact-Checking Corrections by Type of Rumor and Partisanship, October 2018 (without controls and Bonferroni corrections)

[Four panels: Nonpolitical Rumor, Political Rumor, Anti-PT Rumor, and Pro-PT Rumor; marginal effects (vertical axis, -.4 to .4) shown for Nonpartisans, Leaners, Antipetistas, and Petistas.]

Figure I5: Marginal Effect of Fact-Checking Corrections for All Rumors, October 2018

[Plot showing marginal effects (vertical axis, -.3 to .3) for the Nonpolitical, Political, Anti-PT, and Pro-PT rumors.]

Appendix J Additional Analyses

Table J1: Probit Model of Effect of Corrections on Rumor Rejection by Type of Rumor and Disengagement

Independent Variable        Nonpolitical   Political    Negative     Positive
Correction                   .14 (.09)      .06 (.06)    .03 (.08)    .08 (.08)
Disengagement               -.00 (.07)      .01 (.07)    .01 (.07)   -.02 (.07)
Correction*Disengagement    -.09 (.12)     -.02 (.07)    .02 (.10)   -.02 (.10)
Leaner                      -.16* (.06)     .00 (.04)   -.00 (.05)    .01 (.05)
Antipetista                 -.07 (.05)     -.09* (.04)  -.23* (.05)   .07 (.05)
Petista                     -.03 (.06)     -.15* (.04)  -.02 (.06)   -.24* (.06)
Dogmatism                    .01 (.06)      .02 (.03)    .05 (.04)   -.01 (.04)
Age                          .17* (.08)     .11* (.04)   .09 (.08)    .17* (.07)
Sex                          .03 (.04)      .05 (.03)    .03 (.04)    .06 (.04)
Income                       .07 (.07)     -.05 (.04)   -.08 (.06)   -.03 (.06)
Interest                     .04 (.06)      .01 (.04)    .04 (.06)    .05 (.06)
College                     -.06 (.04)      .08* (.04)   .03 (.06)    .12* (.05)
Constant                     .24* (.08)     .72* (.04)   .58* (.09)   .71* (.08)
n                            498            1,063        520          543
Pseudo R2                    .20            .05          .09          .10
*p<0.05. Standard errors in parentheses. Rumor fixed-effects included but not shown.

Table J2: Probit Model of Effect of Corrections on Rumor Rejection by Type of Rumor and Dogmatism

Independent Variable        Nonpolitical   Political    Negative     Positive
Correction                   .04 (.06)      .05 (.05)    .08 (.07)    .06 (.06)
Dogmatism                   -.02 (.06)      .03 (.04)    .08 (.06)   -.01 (.06)
Correction*Dogmatism         .07 (.08)     -.02 (.06)   -.06 (.09)    .01 (.08)
Disengagement               -.04 (.05)     -.00 (.04)    .01 (.05)   -.03 (.05)
Leaner                      -.15* (.06)     .00 (.04)   -.00 (.05)    .02 (.05)
Antipetista                 -.08 (.05)     -.09* (.04)  -.23* (.05)   .07 (.05)
Petista                     -.04 (.06)     -.15* (.04)  -.01 (.06)   -.24* (.06)
Age                          .18* (.08)     .11* (.05)   .08 (.08)    .17* (.07)
Sex                          .03 (.04)      .05 (.03)    .03 (.04)    .06 (.04)
Income                       .07 (.07)     -.05 (.04)   -.08 (.06)   -.03 (.06)
Interest                     .04 (.06)      .01 (.04)    .04 (.06)   -.05 (.06)
College                     -.05 (.06)      .08* (.04)   .03 (.06)    .12* (.05)
Constant                     .29* (.08)     .72* (.06)   .55* (.09)   .72* (.08)
n                            498            1,063        520          543
Pseudo R2                    .20            .05          .09          .09
*p<0.05. Standard errors in parentheses. Rumor fixed-effects included but not shown.

Table J3: Probit Model of Effect of Corrections on Rumor Rejection by Type of Rumor and College Education

Independent Variable        Nonpolitical   Political    Negative     Positive
Correction                   .09* (.05)     .04 (.03)    .03 (.04)    .07 (.04)
College                      .01 (.08)      .07 (.06)    .02 (.08)    .13 (.08)
Correction*College          -.13 (.11)      .02 (.07)    .03 (.12)   -.03 (.10)
Disengagement               -.03 (.06)     -.00 (.04)    .01 (.05)   -.03 (.05)
Leaner                      -.15* (.06)     .00 (.04)    .00 (.06)    .02 (.05)
Antipetista                 -.07 (.05)     -.09* (.03)  -.23* (.05)   .07 (.05)
Petista                     -.03 (.06)     -.15* (.04)  -.02 (.06)   -.24* (.06)
Dogmatism                    .01 (.04)      .02 (.03)    .05 (.04)   -.01 (.04)
Age                          .17* (.08)     .11* (.05)   .09 (.08)    .17* (.07)
Sex                          .03 (.04)      .05 (.03)    .03 (.04)    .06 (.04)
Income                       .07 (.07)     -.06 (.04)   -.08 (.06)   -.03 (.06)
Interest                     .04 (.06)      .01 (.04)    .04 (.06)   -.05 (.06)
Constant                     .26* (.08)     .72* (.06)   .58* (.08)   .71* (.07)
n                            498            1,063        520          543
Pseudo R2                    .20            .05          .09          .10
*p<0.05. Standard errors in parentheses. Rumor fixed-effects included but not shown.

Table J4: Probit Model of Effect of Corrections on Rumor Rejection by Type of Rumor and Age

Independent Variable        Nonpolitical   Political    Negative     Positive
Correction                   .01 (.08)      .01 (.05)   -.01 (.08)    .04 (.08)
Age                          .10 (.11)      .08 (.08)    .03 (.11)    .14 (.11)
Correction*Age               .16 (.16)      .07 (.11)    .11 (.15)    .07 (.15)
Disengagement               -.04 (.06)     -.00 (.04)    .01 (.05)   -.04 (.05)
Leaner                      -.15* (.06)     .00 (.04)   -.00 (.05)    .02 (.05)
Antipetista                 -.07 (.05)     -.09* (.04)  -.24* (.05)   .07 (.05)
Petista                     -.03 (.06)     -.15* (.04)  -.02 (.06)   -.24* (.06)
Dogmatism                    .01 (.04)      .02 (.03)    .05 (.04)   -.01 (.04)
Sex                          .03 (.04)      .05 (.03)    .03 (.04)    .06 (.04)
Income                       .07 (.07)     -.06 (.04)   -.08 (.06)   -.03 (.06)
Interest                     .04 (.06)      .01 (.04)    .04 (.06)   -.05 (.06)
College                     -.06 (.05)      .08* (.04)   .03 (.06)    .12* (.05)
Constant                     .30* (.08)     .74* (.06)   .60* (.09)   .73* (.08)
n                            498            1,063        520          543
Pseudo R2                    .20            .06          .09          .10
*p<0.05. Standard errors in parentheses. Rumor fixed-effects included but not shown.

Appendix K Meta-Analysis

This meta-analysis includes all papers published in the American Political Science Review, American Journal of Political Science, Journal of Politics, Electoral Studies, Public Opinion Quarterly, Political Behavior, Comparative Political Studies, and British Journal of Political Science that contained the terms “Fake News,” “Rumors,” “Misinformation,” or “Conspiracy Theories.” These papers were then filtered according to the following criteria (see Figure K1 for further information):

• Random assignment of a correction to fake news, rumors, or misinformation (the correction may or may not be fact-checking).
• In studies with multiple treatment arms, we included estimates for each treatment arm (and clustered our standard errors).
• In studies with multiple outcomes, we chose the outcome on which the authors primarily focused.
• In studies with multiple estimators, we chose the linear estimators.
• In studies with multiple specifications, we chose the one with the smallest standard error.

Figure K1: Search Tree

The meta-analysis shows estimates that reflect respondents’ change in belief in correct information (or disbelief in false information) as a result of corrections. We converted all correction effects to standardized mean differences using the pooled standard deviation and the Hedges’ g small-sample correction factor. After rejecting the null of cross-study homogeneity with Cochran’s Q test, we implemented a random-effects meta-analysis.
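The computation just described can be sketched as follows. This is a minimal illustration rather than the authors' replication code, and the example inputs at the bottom are made up.

    import numpy as np

    def hedges_g(m1, s1, n1, m2, s2, n2):
        # Standardized mean difference using the pooled SD, with the
        # Hedges small-sample correction factor applied.
        sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp
        j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)   # correction factor
        var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
        return j * d, j**2 * var_d

    def random_effects(effects, variances):
        # Cochran's Q for cross-study heterogeneity, then a
        # DerSimonian-Laird random-effects pooled estimate.
        g, v = np.asarray(effects), np.asarray(variances)
        w = 1.0 / v
        g_fe = np.sum(w * g) / np.sum(w)
        q = np.sum(w * (g - g_fe)**2)
        tau2 = max(0.0, (q - (len(g) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1.0 / (v + tau2)
        pooled = np.sum(w_re * g) / np.sum(w_re)
        return pooled, np.sqrt(1.0 / np.sum(w_re)), q, tau2

    # Illustrative inputs (not real study data): two correction-vs-control
    # contrasts, converted and then pooled.
    g1, v1 = hedges_g(0.70, 0.45, 180, 0.62, 0.47, 175)
    g2, v2 = hedges_g(0.66, 0.48, 200, 0.55, 0.50, 210)
    pooled, se, q, tau2 = random_effects([g1, g2], [v1, v2])
    print(f"pooled g = {pooled:.2f} (SE = {se:.2f}), Q = {q:.2f}, tau2 = {tau2:.4f}")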

Figure K2: Publication Bias: p-curve

Tables K1 and K2 compare our meta-analysis to the meta-analyses published by Walter and Murphy (2018) and Walter et al. (Forthcoming).

Table K1: Studies in Walter and Murphy (2018)

Study                              In AUTHORS 2020   Latin America?
Aikin et al. 2017                  No                No
Amazeen et al. 2016                No                No
Armstrong et al. 1979              No                No
Berinsky 2012                      No                No
Berinsky 2015                      Yes               No
Bernhardt et al. 1986              No                No
Biener et al. 2007                 No                No
Christiaansen and Ochalek 1983     No                No
Clark et al. 2013                  No                No
Cook et al. 2017                   No                No
Darke et al. 2008                  No                No
Davies 1997                        No                No
Dixon et al. 2015                  No                No
Dyer and Kuehl 1978                No                No
Ecker et al. 2010                  No                No
Ecker et al. 2011                  No                No
Ecker et al. 2014                  No                No
Ecker et al. 2015                  No                No
Garrett and Weeks 2013             No                No
Garrett et al. 2013                No                No
Greitemeyer                        No                No
Huang 2017                         Yes               No
Johnson and Seifert 1994           No                No
Kortenkamp and Basten 2015         No                No
Mazis and Adkinson 1976            No                No
Misra 1992                         No                No
Morgan and Stolman 2002            No                No
Nyhan and Reifler 2010             Yes               No
Nyhan and Reifler 2015             No                No
Nyhan et al. 2016                  No                No
Peter and Koch 2016                No                No
Pingree et al. 2014                No                No
Rapp and Kendeou 2007              No                No
Rich and Zaragoza 2016             No                No
Rich 2013                          No                No
Sawyer and Semenik 1978            No                No
Tangari et al. 2010                No                No
Thorson 2016                       No                No
Weeks 2015                         No                No
Wilkes 1999                        No                No
Wilkes and Leatherbarrow 1988      No                No
Wood 2014                          No                No
Wright 1993                        No                No

Table K2: Studies in Walter et al. (Forthcoming)

Study                              In AUTHORS 2020   Latin America?
Agadjanian et al. 2019             No                No
Amazeen et al. 2018                No                No
Carnahan et al. 2018               No                No
Flynn et al. 2017                  No                No
Fridkin et al. 2015                No                No
Fridkin et al. 2016                No                No
Garrett and Weeks 2013             No                No
Garrett et al. 2013                No                No
Graves et al. 2018                 No                No
Jarman 2014                        No                No
Jarman 2015                        No                No
Jarman 2016                        No                No
Nyhan et al. 2017                  Yes               No
Pennycook et al. 2017              No                No
Swire et al. 2017                  No                No
Thorson 2016                       No                No
Weeks 2015                         No                No
Wintersieck 2015                   No                No
York et al. 2018                   No                No
Young et al. 2018                  No                No

Figure K3: Forest Plot - Studies conducted during elections

Study                     TE     seTE     SMD (Standardised Mean Difference)   95%-CI           Weight
Nyhan et al. 2019 (US)    0.66   0.0710   0.66                                 [ 0.52; 0.80]    31.8%
Nyhan et al. 2019 (US)    0.71   0.0450   0.71                                 [ 0.62; 0.80]    36.4%
Nyhan et al. 2019 (US)    0.42   0.0710   0.42                                 [ 0.28; 0.56]    31.8%
AUTHORS 2020 (Brazil)     0.05   0.0550   0.05                                 [-0.05; 0.16]    0.0%

Random effects model                      0.60                                 [ 0.22; 0.98]    100.0%
95% PI                                                                         [-1.47; 2.68]
Heterogeneity: I² = 83%, τ² = 0.0188, p < 0.01
