
Medical Misinformation in the Covid-19 Pandemic

Sarah Kreps and Doug Kriner
Department of Government, Cornell University

Abstract

The World Health Organization has labeled the omnipresence of misinformation about Covid-19 an “infodemic” that threatens efforts to battle the public health emergency. However, we know surprisingly little about the level of public uptake of medical misinformation and whether and how it affects public preferences and assessments. We conduct a pair of studies that examine the pervasiveness and persuasiveness of misinformation about the novel coronavirus’ origins, effective treatments, and the efficacy of government response. Across categories, we find relatively low levels of true recall of even the prominent fake claims. However, many Americans struggle to distinguish fact from fiction, with many believing false claims and even more failing to believe factual information. An experiment offers some evidence that corrections may succeed in reducing misperceptions, at least in some contexts. Finally, we find little evidence that exposure to misinformation significantly affected a range of policy beliefs and political judgments.

One of the most challenging public health aspects of the Covid-19 pandemic has been the misinformation surrounding the virus. Misinformation in the midst of a pandemic has a long history, dating back at least to the Plague of Athens, when the local population tried to shift blame onto an adversary or far-flung land rather than the local government. What makes the current misinformation context new and potentially threatening is that social media facilitates the transfer of misinformation—defined as “false or misleading information”1—farther and faster than either traditional forms of media or accurate information.2

How pervasive and persuasive is the spread of medical misinformation? Prior studies offer few clues. Rumors about death panels surrounded the Affordable Care Act, showing that the high-stakes domain of public health is not inoculated against misinformation and may be even more susceptible because the life-and-death consequences make people prone to fear and anxiety.3 Beyond that case-specific study, however, the pervasiveness and persuasiveness of medical misinformation is understudied compared to political misinformation, which has received intense scrutiny since the 2016 election. Recent research hints that medical misinformation may be less ubiquitous than political misinformation.

Confronted with rapidly spreading false claims about Covid-19, social media platforms, the major vehicle for the diffusion of misinformation, have enacted unprecedented moderation policies, removing content and users that the platforms deem a public health risk. Because of the exigent threat to public health, the public has tacitly endorsed these draconian measures and entrusted platforms to act as private regulators of the public information domain.4 However, the sheer volume of Covid-19 related content means that misinformation continues to propagate,5 although the degree of public exposure and impact remains unclear. According to one recent study, a small sample of fake claims on Facebook was shared 1.7 million times and viewed an estimated 117 million times as of mid-April 2020.6

In this research, we investigate the extent to which misinformation has percolated into the salient considerations on which Americans draw when thinking about the novel coronavirus. Can Americans faithfully recall Covid-19 misinformation? Can they distinguish factual information from misinformation? How does the spread of misinformation affect public attitudes about the pandemic, trust in government, and perceptions of international adversaries?

We answer these questions with a pair of studies focusing on misinformation about Covid-19, the disease caused by the novel coronavirus. First, we measure public uptake and perceived credibility of misinformation, comparing recall rates and accuracy perceptions across factual information, prominent misinformation about Covid-19, and placebo misinformation that has not appeared widely on social media. Second, we measure the impact of false claims on a range of attitudes, including public policy preferences, evaluation of and trust in government leaders and institutions, and perceptions of foreign competitors. These studies are the first to measure recall of Covid-19 misinformation, as well as the first to assess the efficacy of corrections in reducing the perceived accuracy of misinformation and Americans’ propensity to spread it online.

1 David Lazer, Matthew Baum, Yochai Benkler, Adam Berinsky, Kelly Greenhill, Filippo Menczer, Miriam Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven Sloman, Cass Sunstein, Emily Thorson, Duncan Watts, and Jonathan Zittrain, “The Science of Fake News,” Science, Vol 359, No. 6380, 9 March 2018, 1094-1096, at 1094.
2 Soroush Vosoughi, Deb Roy, and Sinan Aral, “The spread of true and false news online,” Science, Vol 359, No. 6380, 9 March 2018, 1146-1151.
3 Adam Berinsky, “Rumors and Health Care Reform: Experiments in Political Misinformation,” British Journal of Political Science, Vol 47, No. 2 (April 2017), 241-262.
4 Sarah Kreps and Brendan Nyhan, “Coronavirus Fake News Isn’t Like Other Fake News,” Foreign Affairs, 30 March 2020.
5 Ramez Kouzy, Joseph Abi Jaoude, and Khalil Baddour, “Coronavirus Goes Viral: Quantifying the Covid-19 Misinformation Epidemic on Twitter,” Cureus, March 2020, 12(3): e7255.
6 Avaaz, How Facebook Can Flatten the Curve of the Coronavirus Infodemic, 15 April 2020, https://secure.avaaz.org/campaign/en/facebook_coronavirus_misinformation/

We report five main findings. First, while misinformation concerning the pandemic is ubiquitous, our data suggest that uptake and retention of misinformation overall is modest, though it varies by category. We find that true recall of fake headlines about the origins of Covid-19 is modest. However, misinformation about alleged treatments and about the effectiveness of the government response to the virus has gained some traction. Second, we find that many Americans fail to correctly identify fake news as false. Perhaps equally if not more troubling, even more Americans failed to correctly identify factual information as true. This suggests a more indirect, but potentially more dangerous, mechanism through which misinformation threatens public health – not by causing majorities to believe erroneous claims, but by saturating the information environment to an extent that it drowns out accurate information.7 Third, these problems are particularly acute among certain partisan subgroups and among heavy consumers of social media. Fourth, corrections to fake news can counter beliefs in misinformation and reduce Americans’ propensity to contribute to its spread; however, these effects vary across categories of fake claims. Finally, exposure to misinformation had little direct effect on Americans’ policy preferences for responding to the pandemic and on their political judgments.

Medical Misinformation

The democratic dilemma suggests that sound democratic governance hinges on a well-informed citizenry that can meaningfully weigh tradeoffs between policy proposals. Yet most individuals are underinformed about the very policies that they are meant to adjudicate.8 Citizens could become more informed if they took measures to acquire policy-relevant information, but increasingly the marketplace of ideas is crowded and indeed fraught with misinformation that can impede the acquisition of accurate information.

Research on the spread, uptake, and persuasiveness of misinformation has tended to focus on political misinformation, especially since the 2016 election. Some scholars have found that exposure to misinformation does not translate into persuasion, in part because those most exposed are partisans seeking pro-attitudinal information.9 Other studies, however, have shown that individuals do fall prey to misinformation, although not for directionally motivated, partisan reasons. Rather, scholars suggest that individuals are cognitively “lazy” and judge accuracy on the basis of plausibility, which requires some sort of prior about what is reasonable or not.10 Research on medical misinformation has similarly suggested that individuals believe rumors on the basis of cognitive fluency. The more prevalent a rumor, which can arise from partisan political actors frequently trafficking in particular narratives, the more credible it becomes and the harder it is to upend.11

7 The mechanism is similar to the arguments of Berinsky as well as Pennycook and Rand, which suggest that fluency of information, which comes from repeat exposure, increases the plausibility of claims.
8 Arthur Lupia and Mathew McCubbins, The Democratic Dilemma: Can Citizens Learn What They Need to Know? Cambridge University Press, 1998.
9 Andrew Guess, Brendan Nyhan, and Jason Reifler, “Exposure to untrustworthy websites in the 2016 US election,” Nature Human Behaviour (2020), https://www.nature.com/articles/s41562-020-0833-x?proof=trueMay%252F
10 Gordon Pennycook and David Rand, “Lazy, not Biased: Susceptibility to Partisan Fake News is Better Explained by Lack of Reasoning than by Motivated Reasoning,” Cognition (2018).
11 Adam Berinsky, “Rumors and Health Care Reform: Experiments in Political Misinformation,” British Journal of Political Science, Vol 47, No. 2 (April 2017), 241-262.

We investigate the applicability of these findings in the Covid-19 context. Previous research suggests that political misinformation travels faster and farther than false claims related to science.12 One reason may be that, unlike political misinformation, medical misinformation may be more verifiably accurate or inaccurate, making it easier to discern its accuracy. In the area of medicine and health, with life-and-death stakes, we also expect individuals to have keen incentives not to engage in the types of cognitive short-cuts that could lead to mere plausibility serving as the guide for accuracy assessments.

We organize our study around the three main categories of false claims about Covid-19 that have proliferated on social media. The first category comprises conspiracy or at least unsubstantiated claims about the origins of the virus. The second focuses on false claims about treatments or even cures for the coronavirus. The prevalence of such claims has even led the World Health Organization to create a separate page of “mythbusters” that is primarily devoted to debunking fake cures and providing readers with accurate information about scientific efforts to combat the virus. The third comprises false claims about the government’s response to the public health crisis. The second and third categories correspond at least partially to the misinformation campaigns of countries such as China that have increasingly seized on the pandemic to exploit social and political division within the United States and countries in the European Union,13 induce panic, and undermine trust in government institutions.

In the first study, we consider belief and recall of a large sample of Covid-related headlines spanning the three categories of misinformation—treatments, origins, and government response. Within each category, subjects evaluated three types of headlines: factual headlines, prominent misinformation headlines that received widespread fact-checking in high-profile US media outlets, and invented placebo headlines that have not featured prominently on social or mainstream media. A complete list of headlines by category is presented in SI Table 1. In the second study, we examine the efficacy of corrections to misinformation as well as the effects of exposure to misinformation on political assessments. Using an experiment, we randomly expose subjects to either true or false headlines, the latter with and without corrections, to gauge both the perceived credibility of each type of information and their effects on Americans’ policy preferences and political beliefs.

Results

Misinformation concerning Covid-19 is ubiquitous, but uptake is not uniform across the population, nor does exposure to misinformation necessarily change attitudes.

Study 1. Our first study randomly assigned each subject to read 12 headlines about the origins of Covid-19, treatments for the disease, and the nature of the government response to the pandemic. Within each category, subjects evaluated factual headlines that appeared in major media outlets; prominent misinformation headlines discussed extensively on major fact-checking websites and prominent media outlets; and invented “placebo” headlines reporting claims that have not appeared prominently on traditional or social media. Subjects were first asked whether they recalled seeing each headline and then to evaluate its accuracy.

12 Soroush Vosoughi, Deb Roy, and Sinan Aral, “The spread of true and false news online,” Science, Vol 359, No. 6380, 9 March 2018, 1146-1151.
13 Edward Wong, Matthew Rosenberg, and Julian Barnes, “Chinese Agents Helped Spread Messages that Sowed Virus Panic,” New York Times, 22 April 2020.

Figure 1 presents the average percentage of our survey respondents who claimed to remember, and to remember and believe, the headlines in each of our nine categories. A superficial assessment suggests considerable penetration of prominent fake news into the national psyche. Reported recall rates varied from just under 30% in the fake treatments category to 36% in the fake origins category. In the context of a pandemic, the 29.6% who reported recalling headlines describing debunked treatments for the disease may be particularly alarming given the clear adverse health effects of false medical information about methods of treatment.

However, comparisons of reported recall in each category of fake headlines with reported recall of the corresponding placebo headlines – which never actively circulated on traditional or social media – suggest lower estimates of “true” recall. For example, just over 21% of our sample reported recalling the “placebo” fake treatment headlines. Thus, our best estimate of the true recall of fake news about Covid-19 treatments is the difference between the two figures, approximately 8%. Similarly, we estimate that true recall of fake news about the governmental response to the pandemic was about 9% in our survey. Finally, estimated true recall of fake news was lowest in the origins category, just 4%.

<< Figure 1 About Here >>

Taken together, these results suggest that the “true recall” of even some of the most prominent fake news claims about treatments for coronavirus is limited. Of course, this does not mean that Americans were never exposed to misinformation about fake treatments for Covid-19, or that such exposure did not indirectly influence their beliefs and opinions.14 However, our data suggest that most of this false information has been forgotten and is no longer readily accessible and salient in most Americans’ minds.15 Nevertheless, our data suggest that a sizeable percentage of Americans have indeed seen and can truly recall fake stories about debunked treatments and the efficacy of the response to the coronavirus. Moreover, across categories significantly more Americans can truly recall fake information about Covid-19 than were able to recall prominent political misinformation during the 2016 presidential election.16 Finally, the average recall rates in both categories of factual information were significantly higher than those for any category of misinformation. Yet perhaps most troubling, less than half of our sample reported recalling the average factual headline about treatments for Covid-19.

The lower panel of Figure 1 shows the average percentage of respondents who both reported recalling a headline and believed it was true across the nine categories. This metric suggests further limitations on the reach of misinformation prevalent on social media. The percentage recalling and believing fake headlines ranged from 14% in the fake treatments category to 19% in the fake government response category. Moreover, these percentages were statistically indistinguishable from the corresponding figure in the relevant placebo group, except in the government response category (13.9% recalled and believed the placebo headlines vs. 19.4% for the actual fake news headlines).17 More surprising was the failure of individuals to identify factual information, particularly about the efficacy of treatments (or lack thereof). On average, only one in three respondents recalled and believed factual headlines conveying Covid-19 treatment information.

14 Milton Lodge, Marco Steenbergen, and Shawn Brau, “The responsive voter: campaign information and the dynamics of candidate evaluation,” American Political Science Review, Vol 89, No. 2 (1995), 309-326.
15 John Zaller, The Nature and Origins of Mass Opinion. New York: Cambridge University Press, 1992.
16 Hunt Allcott and Matthew Gentzkow, “Social media and fake news in the 2016 election,” Journal of Economic Perspectives, Vol 31, No. 2 (Spring 2017), 211-236, at 227.
17 This difference in means is statistically significant, p < .01 (two-tailed test).

Distinguishing Fact from Fiction

Figure 2 presents the average percentage of all subjects (including those who did not claim to recall seeing the headline) who correctly identified each type of headline as true or false. With respect to claims about the virus’ origins and effective treatments, respondents were much better at identifying fake news as false than at identifying factual headlines as true. For example, more than 60% correctly identified prominent fake claims about effective treatments for Covid-19 as false. More than 50% also correctly judged our placebo misinformation about treatments false. However, many subjects struggled to identify factual information about treatments for the virus as true. Just over 40%, on average, judged headlines in this category to be true, and the remaining 60% were almost evenly divided between identifying the headline as false or acknowledging that they were unsure.

<< Figure 2 About Here >>

Most Americans struggled to identify both real and fake headlines accurately. In the aggregate, our respondents rarely performed much better than a coin flip (across all headlines, subjects selected the correct answer just 54% of the time). However, not all subjects struggled equally. To identify the factors influencing Americans’ ability to separate Covid-19 fact from fiction, we estimate a series of logistic regressions. We use these analyses to examine the effect of three main factors identified in prior work on misinformation generally in the context of Covid-19: political partisanship; social media use vs. reliance on other types of media; and education. In the first model we analyze all headlines in our data. In the final pair of models, we analyze the factors influencing the likelihood of a correct assessment of only fake headlines and only real headlines, respectively. Table 1 presents the results.

<< Table 1 About Here >>

In all three models, we found evidence of a significant partisan divide. Democrats were consistently more likely to correctly identify a Covid-related headline as true or false than were independents or Republicans.18 In the models for which the coefficients for both the Democratic and Republican indicators are positive, Wald tests confirm that the coefficient for Democrats is statistically larger, p < .05 (two-tailed test). This finding contrasts with work by Pennycook et al.19 that finds no evidence of a partisan split in accuracy perceptions of Covid-related information. The results are at least partially consistent with prior work on partisanship and accuracy perceptions of real and fake news in the 2016 presidential election. Allcott and Gentzkow20 found that Democrats were more likely than Republicans to correctly identify fake news as false. However, Republicans were more likely than Democrats to correctly identify accurate articles about the election as true. In the context of Covid-19, Democrats were better able to correctly identify both factual and false claims.

Misinformation about the pandemic has spread most prolifically on social media. As a result, greater reliance on social media as a source of news might logically decrease one’s likelihood of correctly identifying a claim as true or false. Alternately, Americans who are strong consumers of social media might be better equipped to distinguish between real and fake stories.21 Model 1 shows that Americans who rely heavily on social media for news were significantly less likely to correctly identify Covid-19 headlines as true or false in the aggregate.

18 In models 1 and 3, where the coefficients for both the Democrat and Republican indicators are positive (though the coefficient for Republicans in these models is small and statistically insignificant), Wald tests confirm that the Democratic coefficient is statistically larger, p < .01 (two-tailed test). SI Figure 1 shows partisan differences in correctly identifying real versus fake headlines across the origins, treatments, and government response categories.
19 Gordon Pennycook, Jonathon McPhetres, Yunhao Zhang, and David Rand. N.d. “Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy Nudge Intervention.”
20 Hunt Allcott and Matthew Gentzkow, “Social media and fake news in the 2016 election,” Journal of Economic Perspectives, Vol 31, No. 2 (Spring 2017), 211-236, at 227.
21 Sarah Kreps, Miles McCain, and Miles Brundage, “All the News that’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation,” https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3525002

However, models 2 and 3 show that where heavy social media users most struggled was in correctly identifying fake news as fake. Simulations show that the median respondent who does not use social media at all to get their news had an almost two in three chance of correctly identifying the average fake news article as fake. By contrast, if that same median subject relied heavily on social media for news, her probability of correctly identifying a fake story as false was almost no better than 50-50. Social media use did not affect accuracy perceptions of real headlines. By contrast, Americans’ reliance on television and newspapers for news was positively correlated with correctly identifying factual headlines as true. Perhaps somewhat surprisingly, reliance on newspapers for news was negatively associated with correctly identifying misinformation as fake. However, the magnitude of the estimated effect is half that of social media. Given the prominence of fact-checking coverage in most major newspapers, this finding may be consistent with research warning about possible boomerang effects of corrections, in which corrections either backfire or at least increase the plausibility of misinformation by increasing its fluency.22

Finally, we found some evidence that more educated Americans were better equipped to accurately identify claims as true or false. In the aggregate, the effect is positive and statistically significant. However, as shown in models 2 and 3, this relationship is only statistically significant for correct accuracy assessments of factually true headlines. Education does not appear to better equip individuals to correctly identify Covid-19 misinformation as false. Interestingly, this pattern is the opposite of that observed in Allcott and Gentzkow’s23 analysis of political misinformation, in which more educated Americans were better at identifying fake news during the 2016 election, but not at judging real news to be true.
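The simulated probabilities described above could be generated along the following lines. This is a minimal, illustrative sketch rather than the authors’ code: the file name, variable names, and the choice to hold other covariates at their median values are all assumptions.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per respondent-headline evaluation of a fake headline.
fake = pd.read_csv("headline_evaluations.csv").query("veracity == 'fake'")

# Mirrors the model 2 specification reported in Table 1 (fake headlines only).
fit = smf.logit(
    "correct ~ democrat + republican + social_media + tv_news + newspaper"
    " + education + age + female + black + latino",
    data=fake,
).fit()

# Hold covariates at the median respondent's values and vary only social media reliance.
profile = fake.median(numeric_only=True).to_frame().T
low, high = profile.copy(), profile.copy()
low["social_media"] = fake["social_media"].min()    # does not rely on social media at all
high["social_media"] = fake["social_media"].max()   # relies on social media very much
print("P(correct | no social media):   ", float(fit.predict(low).iloc[0]))
print("P(correct | heavy social media):", float(fit.predict(high).iloc[0]))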

Study 2. To examine the persuasiveness of medical misinformation and the efficacy of corrections, we employed a 3x3 experimental design that varied three categories of Covid-19 headlines: headlines about the pandemic’s origins, treatments for the disease, and the efficacy of the US response. Within each category, we manipulated the nature of the information reported: the first condition presented a headline verified as true, the second a headline that has been debunked as false, and the third a correction of the fake headline. In the correction treatments, the fake headline is specifically called out as incorrect with a bold “Fake News Headline Correction” preceding the article headline and a red “X” next to the headline itself (see SI Figure 2). As a control, we also included a fake headline debunked on prominent fact-checking websites that was completely unrelated to Covid-19 and instead discussed the prevalence of derelict windmills in the US.

The Efficacy of Corrections

Subjects were randomly assigned to one of the resulting ten experimental conditions and asked to evaluate the accuracy of the headline they read, as well as whether they would “like” or “report” it on social media.

22 Brendan Nyhan and Jason Reifler, “When Corrections Fail,” Political Behavior, Vol 32 (2010), 303-330; Berinsky 2017, 2. 23 Hunt Allcott and Matthew Gentzkow, “Social media and fake news in the 2016 election,” Journal of Economic Perspectives, Vol 31, No. 2 (Spring 2017), 211-36.

Figure 3 presents the percentage of respondents judging each headline “very” or “somewhat accurate” across conditions.

<< Figure 3 About Here >>

The data provide modest evidence for the efficacy of corrections in two categories and stronger evidence in a third. A little more than a quarter (27%) of subjects who saw the fake claim that the coronavirus originated in a US Army lab judged it very or somewhat accurate. The percentage judging this claim accurate in the correction treatment was only slightly lower (24%). Any correction effect is even smaller in the treatment category, as just over one in five (21%) judged the claim that essential oils are an effective treatment for severely ill Covid-19 patients accurate, versus 20% in the correction treatment. However, the third fake headline concerning the efficacy of the US response – that the US has the highest coronavirus death rate in the industrialized world – was widely believed (81%) by subjects in this condition. In the corresponding correction treatment, the percentage believing the headline was significantly lower, 67%.24 However, a super-majority continued to believe the false headline even after a strong correction.

Figure 4 presents two alternate metrics on which to evaluate the efficacy of corrections: the percentage who would be most likely to “like” or “report” each headline on social media. For all three categories, the correction reduced the percentage who said they would be most likely to “like” the false headline on social media.25 Similarly, across categories the correction increased the percentage of subjects saying they would be most likely to report the false headline, with the biggest effect again coming on the fake efficacy headline.26 While the effects of corrections were variable across types of Covid-19 related misinformation, the pattern of results is broadly consistent with studies asserting the potential power of corrections.27

<< Figure 4 About Here >>
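The difference-in-means comparisons reported above and in the accompanying footnotes could be approximated with simple two-sample proportion tests. The sketch below is illustrative only: it back-calculates counts from the rounded percentages for the fake-efficacy headline (81% of n=209 in the fake condition vs. 67% of n=208 in the correction condition, per the Methods section), and the authors’ exact procedure may differ.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Respondents judging the fake US-death-rate headline "very" or "somewhat accurate".
# Counts are reconstructed from rounded percentages, so the result is only approximate.
successes = np.array([round(0.81 * 209), round(0.67 * 208)])   # fake vs. correction condition
nobs = np.array([209, 208])
z_stat, p_value = proportions_ztest(successes, nobs)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")   # consistent with the reported p < .01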

Evaluating the Consequences of Misinformation

Finally, to assess the effects of misinformation on Americans’ attitudes and policy preferences concerning Covid-19, we estimated an additional set of statistical models. The models examine the effect of exposure to real information and misinformation on Americans’ policy preferences for responding to the pandemic, assessments of political leaders and the federal government as a whole, and attitudes toward China. Specifically, we examine the effects of the three experimental conditions concerning medical treatments for Covid-19 on support for lifting stay-at-home orders; the effects of the three experimental conditions concerning the efficacy of the US response on approval of President Trump’s handling of the pandemic and trust in the federal government and each respondent’s home state government; and the effect of the three origins treatments on beliefs that China has concealed information about Covid-19 and favorability toward China more generally. Table 2 presents the results.

<< Table 2 About Here >>

This final analysis finds no evidence that experimental exposure to misinformation, with or without corrections, systematically affected Americans’ broader political beliefs about the pandemic. Fake news touting essential oils as a cure did not increase support for lifting government-imposed stay-at-home orders.

24 This difference in means is statistically significant, p < .01 (two-tailed test).
25 Difference in means tests show that the effect of the origins correction is statistically significant, p < .05 (two-tailed test), and the effect of the efficacy correction is statistically significant, p < .10 (two-tailed test).
26 Difference in means tests show that the effect of the efficacy correction on reporting is statistically significant, p < .01 (two-tailed test).
27 Katherine Clayton, et al. 2019. “Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Checking Tags in Reducing False Belief in False Stories on Social Media.” Political Behavior, https://doi.org/10.1007/s11109-019-09533-0.

Misinformation denouncing the US response to Covid-19 as resulting in the highest death rate in the industrialized world did not decrease support for President Trump’s handling of the crisis or immediate assessments of respondents’ trust in the federal government or their state government. The false claim that the novel coronavirus originated in a US Army lab, and not in Wuhan, China, had no effect on attitudes toward China.

Discussion

Our analysis suggests that the proportion of the population that both recalls and believes some misinformation about Covid-19 is higher than that observed in previous studies of political misinformation. Many individuals are able to discern the type of misinformation that is prominent on social media and fact-checking sites and to recognize it as patently wrong, but sizable numbers nonetheless believe inaccurate headlines. Evidence points to inundation of information, particularly on social media, as a reason why many individuals lack discernment. The more individuals rely on social media for news, the less likely they are to identify fake news as misinformation. Perhaps more surprising than the percentage of people who believe fake news is the proportion who do not believe accurate information, although there is no correlation between social media usage and the ability to correctly identify factual information.

The analysis has important implications for both theory and policy. In terms of theory, our research speaks directly to the democratic dilemma that affects meaningful public engagement with public policy. The premise of the dilemma is that sound democratic governance hinges on a well-informed citizenry that can meaningfully weigh tradeoffs between policy proposals, yet most individuals are underinformed about the very policies that they are meant to adjudicate.28 Citizens could become more informed if they took measures to acquire policy-relevant information, but the marketplace of ideas is crowded and indeed fraught with misinformation that can impede the information acquisition process. Rumors, conspiracies, and other forms of unsubstantiated information can act as an “insidious political force.”29 Instead of finding accurate information, citizens are confronted with cascades of false information that may or may not percolate into their thinking or be corrected by facts. Our results suggest that corrections may have some success in combatting public misperceptions; however, their efficacy is highly variable.

Similarly, while the impact of misinformation has generated attention since the 2016 election, studies of misinformation have tended to focus on political misinformation. Scholars have found that misinformation abounded, but its sheer existence does not mean that individuals are persuaded. Indeed, scholars have shown that even when individuals are exposed to misinformation, as many were in the context of the 2016 election, it did not fundamentally change their political beliefs, in part because those most exposed were partisans seeking pro-attitudinal information.30 Here again, scholars have sidestepped investigations into whether similar dynamics hold for medical misinformation, despite life-and-death stakes that plausibly create more existential attitudinal consequences.

Lastly, the analysis has important implications for public policy. Coherent policy responses require the ability to mobilize public opinion, which in turn requires that the public be able to find and trust factual information in the marketplace of ideas. We show that inaccurate information may be diluting correct information to the point that people believe neither.

28 Arthur Lupia and Mathew McCubbins, The Democratic Dilemma: Can Citizens Learn What They Need to Know? Cambridge University Press, 1998.
29 Berinsky, “Health Care Reform and Political Misinformation,” 3.
30 Andrew Guess, Brendan Nyhan, and Jason Reifler, “Exposure to untrustworthy websites in the 2016 US election,” Nature Human Behaviour (2020), https://www.nature.com/articles/s41562-020-0833-x?proof=trueMay%252F

The prospect plays into the strategy deployed by foreign actors seeking to influence domestic politics, which is to inundate the public sphere with cacophonous content so that individuals cannot discern the accuracy of either fake or real content. Instead, they operate in a veritable fog of information overload, and rather than sift through it, many Americans simply tune it out.31

Methods

Our research followed relevant ethical regulations. The Cornell University institutional review board approved all study protocols (Protocol ID 2004009569). In both studies, after providing their informed consent and agreeing to participate in the study, subjects were asked to evaluate a series of article headlines about Covid-19. Subjects were told that while much of the news about the pandemic is true, some articles contain false information. We studied three main themes of misinformation that correspond to the most prominent bins of online misinformation: claims about the origins of the novel coronavirus, treatments or antidotes for Covid-19, and government effectiveness in handling the crisis.

Study 1. In the first study, we investigated exposure to and uptake of misinformation across each of these three categories. In the first week of May 2020, we conducted an online survey of 1,050 adult Americans recruited through the online marketplace Lucid. Lucid employs quota sampling to produce samples matched to the US population on age, gender, ethnicity, and geographic region.32 The demographic composition of our sample and comparisons to those of prominent social science surveys and U.S. Census American Community Survey statistics are provided in SI Table 2.

Headline Selection: To identify the headlines, we focused on the three categories of misinformation: origins, treatments, and government response. From a search of fact-checking sites, we identified two prominent headlines involving false claims about the origins of the virus (e.g. Covid-19 is caused by 5G technology) and four prominent headlines involving fake treatments for Covid-19 (e.g. Breathing hot air cures Covid-19; Chlorine dioxide is an effective treatment). Each of these headlines was covered in , with five of the seven also receiving coverage in or USA Today. Each headline was also debunked by either Politifact or Snopes, two of the leading independent fact-checking websites, with five of the seven headlines being debunked on both sites. Finally, three of the claims in these misinformation headlines also featured on the World Health Organization’s Covid-19 “Mythbusters” webpage.33 To provide points of comparison, we conducted similar searches of major news outlets to identify a parallel set of four headlines describing factual information about treatments for the virus (e.g. clinical trials found no evidence that hydroxychloroquine is an effective treatment; a drug originally developed to combat Ebola has shown some promise in treating Covid-19 in early results), as well as a headline presenting factual information about the origins of the virus (i.e. that scientists have strong evidence that the novel coronavirus originated naturally and was not man-made).

31 Sabrina Tavernise and Aidan Gardiner, “’No one believes anything’: Voters worn out by a fog of political news,” New York Times, 18 November 2019.
32 Alexander Coppock and Oliver McClellan, “Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents,” Research and Politics (2019), 1-14.
33 Schwarz et al. employ a similar approach, assessing whether individuals could distinguish between the facts and myths on a Centers for Disease Control flyer about the flu vaccine. See Norbert Schwarz, Lawrence Sanna, Ian Skurnik, and Carolyn Yoon, “Metacognitive experiences and the intricacies of setting people straight: implications for debiasing and public information campaigns,” Advances in Experimental Social Psychology, Vol 39 (2007), 147-148. The WHO “Mythbusters” page is available at https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters

The core information in each of these headlines featured in both the New York Times and Washington Post, in addition to other major US news outlets. We also identified a third category of prominent misinformation concerning Covid-19 having to do with the government’s handling of the public health crisis. A number of foreign actors have spread inaccuracies about the institutional response in order to promote discord across the United States, undermine trust in public institutions, and make it more difficult for the United States to mobilize a coherent response.34 All of the headlines in this category received coverage in major news outlets, and most of the fake stories were also debunked by major fact-checking websites.

To account for potential over-reporting in self-reported recall of misinformation, we followed prior research35 and constructed three parallel sets of “placebo” fake news headlines that we invented. Media searches confirm that the claims advanced in these placebo headlines did not receive widespread media attention in early 2020. Comparing reported recall of prominent misinformation claims with reported recall of the fabricated claims in the placebo group provides a measure of “true recall,” allowing us to generate a more precise estimate of how many prominent fake news stories about Covid-19 subjects have actually seen and remembered.

Measuring recall: Each subject was randomly assigned to read twelve headlines drawn from the various categories. For each headline, subjects were asked whether they recalled seeing the claim about Covid-19 reported or discussed in recent months. Subjects could respond yes, no, or unsure. For each category of headline, we measure “true recall” as the percentage recalling the fake news headlines in that category minus the percentage recalling the corresponding “placebo” stories that did not receive wide media circulation.
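As a concrete illustration, the adjustment amounts to a simple subtraction. The snippet below uses the approximate treatments-category figures reported in the Results (29.6% reported recall of prominent fake headlines; the placebo figure of 21.4% is illustrative, as the text reports only “just over 21%”).

# Illustrative "true recall" calculation for the treatments category.
reported_recall_fake = 0.296      # share recalling prominent fake treatment headlines
reported_recall_placebo = 0.214   # share recalling invented placebo headlines (approximate)
true_recall = reported_recall_fake - reported_recall_placebo
print(f"Estimated true recall: {true_recall:.1%}")   # roughly 8%, as reported in the text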

Measuring credibility: Immediately after being asked if they recalled seeing each headline, subjects were also asked whether they thought each statement was true. Subjects could respond yes, no, or unsure.

Regression analysis: To assess the factors that affect Americans’ ability to correctly identify Covid-19 headlines as true or false, we constructed a series of logistic regressions. Because the same individuals evaluated multiple headlines, all models report robust standard errors clustered on the respondent.

Dependent variable: The dependent variable is coded 1 if the respondent correctly identified a headline as true or false and 0 if they erred or were unsure.
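For concreteness, a sketch of how this coding rule could be implemented; the yes/no/unsure response labels and the function name are hypothetical.

def correct_assessment(response: str, headline_is_true: bool) -> int:
    """Return 1 if the accuracy judgment matches the headline's veracity, else 0.
    'Unsure' responses are coded 0, as are incorrect answers."""
    if response == "unsure":
        return 0
    return int((response == "yes") == headline_is_true)

# Example: answering "no" to a fake headline counts as a correct assessment.
assert correct_assessment("no", headline_is_true=False) == 1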

Independent variables: Political partisanship is measured by the standard Gallup question asking subjects whether they identified as a Democrat, a Republican, or an independent. Media usage was measured by three questions asking subjects how much they relied on social media, newspapers (in print or online), and television to stay up-to-date on the news. All responses were measured on a four-point Likert scale ranging from very much to not at all.

34 For example, see: Edward Wong, Matthew Rosenberg, and Julian Barnes. “Chinese Agents Helped Spread Messages that Sowed Virus Panic in U.S., Officials Say.” New York Times April 22, 2020; Julian Barnes, Matthew Rosenberg, and Edward Wong. “As Virus Spreads, China and Russia See Openings for Disinformation.” New York Times March 28, 2020. 35 Allcott and Gektzkow 2017 and J. Eric Oliver and Thomas Wood, “Conspiracy theories and the paranoid style of mass opinion,” American Journal of Political Science, Vol 58, No. 4 (2014), 952-966.

Educational attainment was measured on an eight-point scale ranging from less than high school to professional degree. Finally, the models also controlled for subjects’ age, gender, and race/ethnicity.
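To make the estimation strategy concrete, the following is a minimal sketch of a Table 1-style model. It is not the authors’ code: the file name, variable names, and the statsmodels implementation are assumptions, though the clustered-standard-error choice mirrors the description above.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per respondent-headline evaluation,
# with 'correct' coded 1 for a correct accuracy assessment and 0 otherwise.
df = pd.read_csv("headline_evaluations.csv")

model = smf.logit(
    "correct ~ democrat + republican + social_media + tv_news + newspaper"
    " + education + age + female + black + latino",
    data=df,
)
# Robust standard errors clustered on the respondent, since each person rated multiple headlines.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["resp_id"]})
print(result.summary())

Restricting the data frame to only fake or only real headlines would yield the analogues of models 2 and 3.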

Study 2. In the second week of May 2020, we conducted an online survey of 2,050 adult Americans recruited through the online marketplace Lucid. Comparative demographics are presented in SI Table 2. In the second study, we carried out a 3x3 experimental design that varied the veracity of information (true headlines; false headlines; and false headlines with a correction explicitly labeling them as false) and topic (origins, treatment, and government effectiveness). This generated nine experimental treatment groups with the following number of observations in each group: Real: origins (n=207); Fake: origins (n=207); Correction: origins (n=208); Real: treatment (n=194); Fake: treatment (n=203); Correction: treatment (n=199); Real: efficacy (n=204); Fake: efficacy (n=209); Correction: efficacy (n=208). We also added a tenth condition, a control group (n=204), in which subjects were asked to read a widely debunked fake headline unrelated to Covid-19 asserting that thousands of abandoned wind turbines litter the United States. Each subject was randomly assigned to one of the ten experimental conditions.

Measuring credibility: Immediately after reading the assigned article, each subject was asked to assess the accuracy of the claim in the headline. Perceived accuracy was measured on a four-point Likert scale: not at all accurate; not very accurate; somewhat accurate; very accurate.

Measuring social media response: After assessing the headline’s accuracy, respondents were asked which of the following they would be most likely to do if they saw the story on social media: share it, like it, or report it. From this question we created two indicator variables coded 1 if the subject selected “like it” or “report it”, respectively.

Regression analysis: To assess the impact of exposure to misinformation on Americans’ policy preferences and political assessments, we constructed a series of logistic regression (models 1 and 5) and ordered logit regression (models 2, 3, 4, and 6) models. Each model focuses on a subset of our sample to examine the effects of a single relevant group of experimental treatments (origins, treatments, or efficacy). In each case, subjects in the control group serve as the omitted baseline category.

Dependent variables: After assessing the accuracy of their assigned headline and answering how they might respond to it on social media, all respondents were asked the same set of questions about the novel coronavirus and the government response to it. These questions provide the dependent variables for the statistical models reported in Table 2. In model 1, the dependent variable is support for ending government-mandated lockdowns to combat the virus. Subjects were asked to read two statements and indicate which comes closer to their own views, even if neither is exactly right. The first dependent variable is coded 1 for those who selected “stay-at-home orders should be lifted to get the economy going again” and 0 for those who selected “most people should stay home until the doctors and public health officials say it is safe.” In model 2, the dependent variable is approval of President Trump’s handling of the coronavirus. This variable was measured on a four-point likert scale ranging from strongly disapprove to strongly approve.

In models 3 and 4, the dependent variables are measures of how much subjects said they trusted either the federal government or their state government to look out for their best interests during the coronavirus outbreak. These variables were measured on a four-point Likert scale ranging from not at all to a great deal. Finally, models 5 and 6 examined Americans’ attitudes toward China, where the outbreak originated and which has engaged in an extensive misinformation campaign to influence world opinion. Subjects were asked whether China hid the coronavirus from the rest of the world after its outbreak in Wuhan and for their overall opinion of China. The order of these questions was randomized. In model 5, the dependent variable is coded 1 if a respondent said yes, China hid the outbreak, and 0 if they replied no or that they were unsure. In model 6, the dependent variable is reported favorability toward China, which was measured on a four-point Likert scale ranging from very unfavorable to very favorable.

Independent variables: Model 1 examines the effect of the real, fake, and correction virus treatment conditions on support for lifting stay-at-home orders. The independent variables of interest are three dummy variables indicating assignment to these three experimental conditions. Models 2, 3, and 4 examine the effect of the real, fake, and correction government efficacy conditions on approval of President Trump’s handling of the pandemic, trust in the federal government, and a respondent’s trust in his or her state government, respectively. The relevant treatment assignment indicator variables are the independent variables of interest. Finally, models 5 and 6 examine the effect of the real, fake, and correction origins conditions on beliefs that China hid information about the virus and favorability toward China more generally. The relevant treatment assignment indicator variables are the independent variables of interest. In each model, the control condition serves as the omitted baseline. In addition to the relevant treatment indicator variables, all models control for respondents’ political partisanship, media usage, educational attainment, age, gender, and race/ethnicity. These variables are defined as described previously for Study 1. Finally, all of the analyses in Table 2 also control for subjects’ factual scientific knowledge. This measure was constructed as an additive index of the number of correct responses to eight true/false science knowledge questions (option choices were true, false, and don’t know) commonly included on the General Social Survey.
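A minimal sketch of how two of the Table 2 specifications could be estimated in statsmodels. The file, column, and indicator names are hypothetical, and only model 1 (logit) and model 2 (ordered logit) are shown; the remaining models follow the same pattern.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("study2.csv")   # hypothetical: one row per Study 2 respondent

controls = ["democrat", "republican", "social_media", "tv_news", "newspaper",
            "science_knowledge", "education", "age", "female", "black", "latino"]

# Model 1: logit of support for lifting stay-at-home orders on the treatment-condition
# dummies, estimated on the treatment-arm respondents plus the control group (omitted baseline).
sub1 = df[df["arm"].isin(["real_treatment", "fake_treatment", "corrected_treatment", "control"])]
m1 = smf.logit(
    "lift_orders ~ real_treatment + fake_treatment + corrected_treatment + " + " + ".join(controls),
    data=sub1,
).fit()

# Model 2: ordered logit of four-point Trump approval on the efficacy-condition dummies.
sub2 = df[df["arm"].isin(["real_efficacy", "fake_efficacy", "corrected_efficacy", "control"])]
exog = sub2[["real_efficacy", "fake_efficacy", "corrected_efficacy"] + controls]
m2 = OrderedModel(sub2["trump_approval"], exog, distr="logit").fit(method="bfgs")

print(m1.summary())
print(m2.summary())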

Data Availability Statement All data files and statistical code to produce the tables and figures reported in the manuscript will be published on the Harvard Dataverse upon acceptance for publication at: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/ZADLCQ.

Figure 1: Percentage that Recalled and Believed Fake and Real Headlines

Note: I-bars indicate 95% confidence intervals around each mean.

Figure 2: Percentage Correctly Identifying Headline as True or False by Category

Note: I-bars indicate 95% confidence intervals around each mean.


Figure 3: Percentage Believing Very or Somewhat Accurate by Treatment

Note: I-bars indicate 95% confidence intervals around each mean.

Figure 4: Percentage Who Would “Like” or “Report” Headline by Treatment

Note: I-bars indicate 95% confidence intervals around each mean.

Table 1: Factors Predicting Correct Belief about Covid-19 Headlines

                        All                Fake               Real
Democrat                 0.32*** (0.07)     0.30*** (0.10)     0.37*** (0.10)
Republican               0.03 (0.07)       -0.02 (0.10)        0.11 (0.10)
Social media use        -0.11*** (0.03)    -0.19*** (0.04)     0.03 (0.04)
TV news use              0.05 (0.03)       -0.04 (0.04)        0.22*** (0.05)
Newspaper use            0.01 (0.03)       -0.09** (0.04)      0.20*** (0.04)
Education                0.04** (0.02)      0.03 (0.03)        0.07*** (0.03)
Age                      0.01*** (0.00)     0.02*** (0.00)     0.00 (0.00)
Female                  -0.14** (0.06)     -0.06 (0.08)       -0.31*** (0.08)
Black                   -0.43*** (0.09)    -0.58*** (0.12)    -0.15 (0.13)
Latino                  -0.16 (0.10)       -0.09 (0.14)       -0.29* (0.16)
Constant                -0.50*** (0.17)     0.07 (0.23)       -1.67*** (0.24)

Observations             12,468             8,308              4,160

Logistic regressions; robust standard errors clustered on respondent in parentheses; all significance tests are two-tailed. *** p<0.01, ** p<0.05, * p<0.10

Table 2: Effects of Misinformation Exposure on Covid-19 Beliefs and Preferences

                        (1) Lift           (2) Trump          (3) Trust          (4) Trust          (5) China          (6) China
                        stay-at-home       Covid approval     fed govt           state govt         hid info           favorability
Real Treatment          -0.01 (0.24)
Fake Treatment          -0.16 (0.24)
Corrected Treatment      0.22 (0.24)
Real Efficacy                              -0.24 (0.19)       -0.26 (0.18)       -0.28 (0.19)
Fake Efficacy                              -0.09 (0.19)        0.17 (0.19)        0.11 (0.19)
Corrected Efficacy                         -0.08 (0.19)        0.14 (0.18)       -0.18 (0.19)
Real Origin                                                                                         -0.18 (0.22)       -0.06 (0.18)
Fake Origin                                                                                         -0.29 (0.22)        0.03 (0.18)
Corrected Origin                                                                                    -0.08 (0.22)       -0.27 (0.19)
Democrat                -0.13 (0.24)       -1.35*** (0.17)    -0.35** (0.17)      0.57*** (0.17)    -0.26 (0.19)       -0.01 (0.17)
Republican               1.00*** (0.21)     1.87*** (0.18)     1.49*** (0.18)     0.79*** (0.17)     0.89*** (0.21)    -0.46*** (0.17)
Social media use        -0.00 (0.09)        0.27*** (0.07)     0.21*** (0.07)     0.12* (0.07)       0.19** (0.08)     -0.00 (0.07)
TV news use             -0.37*** (0.10)     0.12 (0.08)        0.25*** (0.08)     0.34*** (0.08)     0.07 (0.09)        0.09 (0.07)
Newspaper use           -0.12 (0.09)       -0.01 (0.07)        0.18*** (0.07)     0.20*** (0.07)    -0.01 (0.08)        0.26*** (0.07)
Science knowledge       -0.11* (0.06)       0.05 (0.04)        0.10** (0.04)      0.10** (0.04)      0.20*** (0.05)     0.18*** (0.04)
Education               -0.06 (0.06)       -0.12*** (0.04)    -0.09** (0.04)      0.00 (0.04)        0.02 (0.05)        0.08* (0.04)
Age                      0.00 (0.01)       -0.00 (0.00)       -0.01*** (0.00)     0.01** (0.00)      0.01 (0.01)       -0.02*** (0.00)
Female                  -0.41** (0.17)     -0.48*** (0.14)    -0.19 (0.13)       -0.03 (0.13)       -0.32** (0.16)      0.03 (0.13)
Black                   -0.23 (0.31)        0.25 (0.21)        0.39* (0.20)       0.16 (0.21)       -0.19 (0.23)        0.05 (0.20)
Latino                  -0.50 (0.35)       -0.25 (0.23)        0.24 (0.24)       -0.20 (0.24)       -0.23 (0.27)        0.67*** (0.23)
Constant                 0.66 (0.51)                                                                -1.15** (0.49)

Observations             800                825                825                825                821                821

Note: Models 1 and 5 are logistic regressions; models 2, 3, 4, and 6 are ordered logit regressions. The control condition is the omitted baseline category in each model. Standard errors in parentheses. All significance tests are two-tailed. *** p<0.01, ** p<0.05, * p<0.10

SI Figure 1: Percentage Correctly Identifying Headlines as True or False by Party

Note: I-bars indicate 95% confidence intervals around each mean.


SI Figure 2: Example of Correction Story

SI Table 1: Headlines from Study 1 by Category and Veracity

Category     Veracity   Headline
Origins      True       Scientists Have Strong Evidence Coronavirus Originated Naturally: Nothing suggests the virus was ‘man-made,’ experts say
             Fake       5G Syndrome Maps Perfectly with Coronavirus Outbreaks
             Fake       Bill Gates May Have Created Coronavirus to Microchip People
             Fake       COVID-19: Further Evidence the Virus Originated in the US
             Fake       US Army Brought Coronavirus Epidemic to Wuhan
             Placebo    Coronavirus Was a Bioweapon Created by Iran to Punish the West for Crippling Economic Sanctions
             Placebo    Corona Beer Consumption has been Linked to the Spread of Coronavirus in the Southwest

Treatments   True       No Benefit, Higher Death Rate in Patients Taking Hydroxychloroquine for COVID-19
             True       “Such a Simple Thing to Do”: Why Positioning COVID-19 Patients on their Stomachs Can Save Lives
             True       Drug Used to Treat Ebola May Help COVID-19 Patients, Preliminary Results Suggest
             True       Plasma Treatment Being Tested in New York May be Coronavirus “Game Changer”
             Fake*      Advice from Japanese Doctors Treating Coronavirus Cases: Drinking water every 15 minutes reduces your risk of contracting the virus
             Fake       Using a Hair Dryer to Breathe in Hot Air Can Cure COVID-19 and Stop its Spread
             Fake       There is an Expired Patent on the Coronavirus that Causes COVID-19, as well as on a Vaccine that Cures It
             Fake       Good News: Coronavirus Destroyed By Chlorine Dioxide
             Placebo    Acupuncture is Surprisingly Effective at Treating Those with Severe Coronavirus Symptoms
             Placebo    Pharmaceutical Companies are Slowing Clinical Trials to Increase Price of COVID-19 Treatment

Response     True       Apple and Google are Building a Coronavirus Tracking System into iOS and Android
             True       Restrictions Are Slowing Coronavirus Infections, New Data Suggest
             True       Cities That Went All In on Social Distancing in 1918 Emerged Stronger for It
             Fake       HHS Document Released Instructing MN Senator To Overcount COVID-19 Deaths
             Fake       China Accused of Major Coronavirus Cover-up as Chilling Satellite Pics "Show Extent of Corpse Burning in Wuhan"
             Fake       The Chinese Method of Combatting Coronavirus is the Only One that has Proved Successful
             Placebo    is Including Hidden Devices in Select Products to Measure Social Distancing
             Placebo    Wildly Inaccurate Coronavirus Models were Created by Climate Change Activists to Reduce Greenhouse Gas Emissions

*An earlier draft of the manuscript incorrectly stated that this headline was true.

SI Table 2: Comparative Sample Demographics

                            Study 1     Study 2     2017 ANES   2018 GSS    US Census

Demographics
  Black                     13%         13%         9%          16%         13%
  Latino                    8%          9%          11%         6%          18%
  Female                    50%         51%         52%         55%         51%
  % College degree          46%         43%         39%         33%         32%
  Median age                44 years    44 years    49 years    48 years    38 years

Political Characteristics
  Republican                34%         33%         29%         23%
  Democrat                  37%         36%         34%         32%
  Ideology (% moderates)    34%         32%         21%         38%

Note: All Census figures taken from the 2018 American Community Survey.
