The Liar's Dividend: How Misinformation About Misinformation Affects Politician Support and Trust in Media

Kaylyn Jackson Schiff (Ph.D. Student, Department of Political Science, Emory University, kaylyn.jackson.schiff@emory.edu)
Daniel Schiff (Ph.D. Student, School of Public Policy, Georgia Institute of Technology, schiff@gatech.edu)
Natália S. Bueno (Assistant Professor, Department of Political Science, Emory University, [email protected])

This version: October 23, 2020

Abstract

This study addresses the phenomenon of misinformation about misinformation, or politicians "crying wolf" over fake news. While previous work has addressed the direct effects of misinformation, we focus on indirect effects and argue that strategic and false allegations that stories are fake news or deepfakes benefit politicians by helping them maintain support in the face of information damaging to their reputation. We posit that this concept, known as the "liar's dividend," works through two theoretical channels: by injecting informational uncertainty into the media environment that upwardly biases evaluations of the politician, or by providing rhetorical cover which supports motivated reasoning by core supporters. To evaluate these potential impacts of the liar's dividend, we use a survey experiment to randomly assign vignette treatments detailing hypothetical politician responses to real embarrassing or scandalous stories. We employ a 2x2x3 factorial design (politician partisanship x media format x politician response) and assess impacts on belief in the stories and support for the politicians. Our results reveal the extent to which misinformation about misinformation pays off.

Keywords: misinformation, survey experiment, deepfakes, fake news, trust, media

Contents

1 Introduction .................................... 3
2 A Theory of the Liar's Dividend ................... 5
  2.1 Hypotheses .................................... 10
3 Pilot Study ...................................... 11
  3.1 Pilot Results ................................. 13
  3.2 Implications for Main Study ................... 17
4 Experimental Design .............................. 18
  4.1 Study Population and Survey Platform .......... 18
  4.2 Randomization and Treatment Assignment ........ 18
  4.3 Outcome Measures and Additional Demographic Items 20
  4.4 MDE Calculations .............................. 21
5 Analysis Strategy ................................ 22
  5.1 Covariates and Balance ........................ 27
  5.2 Limitations and Threats to Inference .......... 29
6 Conclusion ....................................... 30
References ......................................... 31
7 Appendix A: Main Study Survey Items .............. 34
8 Appendix B: Pilot Study Survey Items ............. 39

1 Introduction

Misinformation in political discourse can have clear and direct harms on political accountability, trust, and social cohesion. Further complicating this issue is the emergence of new methods to produce falsified media, methods that are transforming and extending traditional strategies of producing misinformation. For example, publicly available algorithms now support the semi-autonomous and rapid generation of new text, which can make the creation of fake news stories easier (Schuster et al. 2019). Perhaps of even greater concern are sophisticated new methods to produce digitally altered or altogether fabricated audio, images, or videos, known as "deepfakes." Deepfakes are the result of technological advances in artificial intelligence (AI) that decrease the cost of producing such content through the use of Generative Adversarial Networks (GANs). While these capabilities were previously restricted to professional artists and media studios through time-consuming and expensive efforts, it is increasingly possible for non-sophisticated actors to generate fake video and audio (Schwartz 2018).
For example, deepfake videos of both former president Barack Obama and current president Donald Trump have surfaced, in part serving as public service announcements to convey concerns about the risks of misinformation and election interference. Moreover, during the 2020 election cycle, several digitally altered videos of Joe Biden have circulated and have even been shared by President Trump, representing increased penetration of deepfake-based misinformation into critical political arenas (Mak and Temple-Raston 2020).

While misinformation is a growing concern among the public (Mitchell et al. 2019), there is disagreement about its consequences. For example, the 2016 American presidential election was marked by Russian-financed fabricated news stories intended to manipulate public opinion in favor of Donald Trump. On one hand, this false information was accessed and shared by millions of American adults and found credible by a majority of Americans with aligned partisan views (Allcott and Gentzkow 2017). On the other hand, consumption of misinformation in this case and more generally may be limited depending on individuals' media diets and restricted to those with strong partisan preferences (Guess, Nyhan and Reifler 2020).

The more subtle indirect effects of misinformation could be even more concerning. Since the 2016 election, the phrase "fake news" has been employed to discredit information critical of public figures and political leaders, even when the information is not false. For example, former Spanish Foreign Minister Alfonso Dastis claimed that images of police violence in Catalonia in 2017 were "fake photos" (Oppenheim 2017), and American mayor Jim Fouts called audio tapes of him making derogatory comments toward women and black people "phony, engineered tapes" (Wang 2017), despite expert confirmation of their authenticity.
Authoritarian leaders in Russia, Turkey, Poland, Thailand, China, and elsewhere have adopted this strategy to deny critical media coverage, even when objective observers and experts find the coverage to be credible (Erlanger 2017). This strategy of exploiting a general environment of misinformation and lack of trust has taken a prominent role in shaping partisan polarization and in voiding efforts to maintain a common basis of agreed-upon truth (Spohr 2017).

This study seeks to evaluate these indirect effects of misinformation, or how politicians can leverage an environment of misinformation and distrust in their self-interest by falsely claiming that damaging true information about themselves (e.g., criticisms, scandals) is fake. This concept, known as the "liar's dividend," posits that public figures and politicians can maintain support by falsely claiming that true events and stories are fake news or deepfakes (Chesney and Citron 2018). If such a lie is used successfully, it may provide a benefit (or dividend) to the liar, increasing their reputational standing, authority, reelection prospects, and so on. However, it does so through deception, and risks further undermining public trust in the media and in the informational environment altogether.

Therefore, the primary goal of this study is to understand whether and how the liar's dividend might benefit a politician. We will employ a survey experiment with a 2x2x3 factorial design that randomly assigns treatments to American citizens to assess how strategic allegations of fake news and fake video (i.e., deepfakes) affect evaluations of politicians and the media. The experimental design uses real text and video of political scandals so that the only deception stems from researcher-supplied allegations of misinformation by the politicians.
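The mechanics of a fully crossed 2x2x3 design can be sketched in a few lines of code. The sketch below is illustrative only: the factor names follow the design described in the text (politician partisanship x media format x politician response), but the specific level labels and the number of respondents are hypothetical assumptions, not the study's actual materials.

```python
import itertools
import random

# Hypothetical factor levels for a 2x2x3 factorial design. The factors match
# the design described in the text; the level labels are assumptions.
PARTISANSHIP = ["Democrat", "Republican"]                       # 2 levels
MEDIA_FORMAT = ["text story", "video clip"]                     # 2 levels
RESPONSE = ["no response", "claims 'fake news'", "apologizes"]  # 3 levels

# Fully crossing the factors yields 2 * 2 * 3 = 12 treatment cells.
CELLS = list(itertools.product(PARTISANSHIP, MEDIA_FORMAT, RESPONSE))

def assign_treatment(rng: random.Random) -> tuple:
    """Independently assign one respondent to one of the 12 cells,
    each with equal probability 1/12."""
    return rng.choice(CELLS)

# Simulate assignment for a hypothetical sample of 600 respondents,
# seeded for reproducibility.
rng = random.Random(2020)
assignments = [assign_treatment(rng) for _ in range(600)]
```

With independent equal-probability assignment, each cell receives roughly n/12 respondents in expectation; in practice researchers often use complete (blocked) randomization instead to guarantee exactly balanced cell sizes.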
The factorial design also allows us to assess whether deepfakes represent a more severe (or merely more novel) threat than fake news, as well as whether any impacts are concentrated among core supporters versus independents or the opposition. Using outcome measures related to belief in the story, support for the politician, and trust in media, we will provide causal evidence as to whether the liar's dividend can successfully serve as a reputational buffer for unscrupulous politicians, and we will observationally consider the influence of potentially important moderators, such as media and digital literacy.

In what follows, we describe our theory and hypotheses. We then move to evidence from a pilot survey implemented to test our instruments, choices of treatment, videos, and measurement approach. Finally, we present our experimental design and analysis plan for the main study.

2 A Theory of the Liar's Dividend

Analogous to the minimal effects hypothesis in the context of political campaigns (Kalla and Broockman 2018), some scholars have argued that concerns surrounding the impact of fake news may be overstated (Lazer et al. 2018, Little 2018). According to this perspective, individuals may consume news that merely aligns with prior opinions and can account for and adjust to the bias of news sources (Taber and Lodge 2006). Moreover, isolated fake messages may not be especially persuasive on their own. For all of these reasons, persuasion is often difficult. This minimal effects finding has held even in the case of deepfakes, or sophisticated fake video and audio created by new artificial intelligence techniques. While deepfakes are thought so realistic that their emergence has garnered major concern from governments, researchers, and the public (Agarwal et al. 2019, Chesney and Citron 2019), recent work has found that deepfake political content is not more persuasive than its text-based counterpart (Wittenberg et al. 2020).

There is, however, another
