The Liar’s Dividend: How Misinformation About Misinformation Affects Politician Support and Trust in Media

Kaylyn Jackson Schiff,∗ Daniel Schiff,† and Natália S. Bueno‡

This version: October 23, 2020

Abstract

This study addresses the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. While previous work has addressed the direct effects of misinformation, we focus on indirect effects and argue that strategic and false allegations that stories are fake news or deepfakes benefit politicians by helping them maintain support in the face of information damaging to their reputation. We posit that this concept, known as the “liar’s dividend,” works through two theoretical channels: by injecting informational uncertainty into the media environment that upwardly biases evaluations of the politician, or by providing rhetorical cover which supports motivated reasoning by core supporters. To evaluate these potential impacts of the liar’s dividend, we use a survey experiment to randomly assign vignette treatments detailing hypothetical politician responses to real embarrassing or scandalous stories. We employ a 2x2x3 factorial design (politician partisanship x media format x politician response) and assess impacts on belief in the stories and support for the politicians. Our results reveal the extent to which misinformation about misinformation pays off.

Keywords: misinformation, survey experiment, deepfakes, fake news, trust, media

∗Ph.D. Student, Department of Political Science, Emory University, kaylyn.jackson.schiff@emory.edu
†Ph.D. Student, School of Public Policy, Georgia Institute of Technology, schiff@gatech.edu
‡Assistant Professor, Department of Political Science, Emory University, [email protected]

Contents

1 Introduction

2 A Theory of the Liar’s Dividend
  2.1 Hypotheses

3 Pilot Study
  3.1 Pilot Results
  3.2 Implications for Main Study

4 Experimental Design
  4.1 Study Population and Survey Platform
  4.2 Randomization and Treatment Assignment
  4.3 Outcome Measures and Additional Demographic Items
  4.4 MDE Calculations

5 Analysis Strategy
  5.1 Covariates and Balance
  5.2 Limitations and Threats to Inference

6 Conclusion

References

7 Appendix A: Main Study Survey Items

8 Appendix B: Pilot Study Survey Items

1 Introduction

Misinformation in political discourse can do clear and direct harm to political accountability, trust, and social cohesion. Further complicating this issue is the emergence of new methods to produce falsified media, methods that are transforming and extending traditional strategies of producing misinformation. For example, publicly-available algorithms now support the semi-autonomous and rapid generation of new text, which can make the creation of fake news stories easier (Schuster et al. 2019). Perhaps of even greater concern are new sophisticated methods to produce digitally-altered or altogether fabricated audio, images, or videos, known as “deepfakes.” Deepfakes are the result of technological advances in artificial intelligence (AI) that decrease the cost of producing such content through the use of Generative Adversarial Networks (GANs).

While these capabilities were previously restricted to professional artists and media studios through time-consuming and expensive efforts, it is increasingly possible for non-sophisticated actors to generate fake video and audio (Schwartz 2018). For example, deepfake videos of both former President Barack Obama and current President Donald Trump have surfaced, in part serving as public service announcements to convey concerns about the risks of misinformation and election interference. Moreover, during the 2020 election cycle, several digitally-altered videos of Joe Biden have circulated and have even been shared by President Trump, representing increased penetration of deepfake-based misinformation into critical political arenas (Mak and Temple-Raston 2020).

While misinformation is a growing concern among the public (Mitchell et al. 2019), there is disagreement about the consequences of misinformation. For example, the 2016 American presidential election was marked by Russian-financed fabricated news stories intended to manipulate public opinion in favor of Donald Trump. On one hand, this false information was accessed and shared by millions of American adults, and found credible by a majority of Americans with aligned partisan views (Allcott and Gentzkow 2017). On the other hand, consumption

of misinformation in this case and more generally may be limited depending on individuals’ media diets and restricted to those with strong partisan preferences (Guess, Nyhan and Reifler 2020).

The more subtle indirect effects of misinformation could be even more concerning. Following the 2016 election, the phrase “fake news” has been employed to discredit information critical of public figures and political leaders, even when the information is not false. For example, former Spanish Foreign Minister Alfonso Dastis claimed that images of police violence in Catalonia in 2017 were “fake photos” (Oppenheim 2017) and American Mayor Jim Fouts called audio tapes of him making derogatory comments toward women and black people “phony, engineered tapes” (Wang 2017), despite expert confirmation. Authoritarian leaders in Russia, Turkey, Poland, Thailand, China, and elsewhere have adopted this strategy to deny critical media coverage, even when objective observers and experts find the coverage to be credible (Erlanger 2017). This strategy of exploiting a general environment of misinformation and lack of trust has taken a prominent role in shaping partisan polarization and in voiding efforts to maintain a common basis of agreed-upon truth (Spohr 2017).

This study seeks to evaluate these indirect effects of misinformation, or how politicians can leverage an environment of misinformation and distrust in their self-interest by falsely claiming that damaging true information about themselves (e.g., criticisms, scandals) is fake. This concept, known as the “liar’s dividend,” posits that public figures and politicians can maintain support by falsely claiming that true events and stories are fake news or deepfakes (Chesney and Citron 2018). If such a lie is used successfully, it may provide a benefit (or dividend) to the liar, increasing their reputational standing, authority, reelection prospects, and so on. However, it does so through deception, and risks further undermining public trust in the media and in the informational environment altogether.

Therefore, the primary goal of this study is to understand whether and how the liar’s dividend might benefit a politician. We will employ a survey experiment with a 2x2x3 factorial design

that randomly assigns treatments to American citizens to assess how strategic allegations of fake news and fake video (i.e., deepfakes) affect evaluations of politicians and the media. The experimental design uses real text and video of political scandals so that the only deception stems from researcher-supplied allegations of misinformation by the politicians. The factorial design also allows us to assess whether deepfakes represent a more severe (or just more novel) threat than fake news, as well as whether any impacts are concentrated on core supporters versus independents or the opposition. Using outcome measures related to belief in the story, support for the politician, and trust in media, we will provide causal evidence as to whether the liar’s dividend can successfully serve as a reputational buffer for unscrupulous politicians and we will observationally consider the influence of potentially important moderators, such as media and digital literacy.
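For concreteness, the 2x2x3 random assignment can be sketched as follows. This is a minimal illustration with hypothetical factor labels and respondent IDs, not the study’s actual survey-platform implementation:

```python
import itertools
import random

# The three crossed factors of the 2x2x3 design (labels are illustrative).
PARTY = ["Democrat", "Republican"]        # politician partisanship
FORMAT = ["text", "video"]                # media format (fake news vs. deepfake)
RESPONSE = ["no response",                # politician response
            "informational uncertainty allegation",
            "rhetorical cover allegation"]

CELLS = list(itertools.product(PARTY, FORMAT, RESPONSE))  # 12 treatment cells

def assign(respondents, seed=42):
    """Independently assign each respondent to one of the 12 cells."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    return {r: rng.choice(CELLS) for r in respondents}

assignments = assign(range(1200))
print(len(CELLS))  # 12
```

A real implementation might instead use blocked or complete random assignment to balance cell sizes; simple independent assignment is shown here only to make the factorial structure explicit.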

In what follows, we describe our theory and hypotheses. We then move to evidence from a pilot survey implemented to test our instruments, choices of treatment, videos, and measurement approach. Finally, we present our experimental design and analysis plan for the main study.

2 A Theory of the Liar’s Dividend

Analogous to the minimal effects hypothesis in the context of political campaigns (Kalla and Broockman 2018), some scholars have argued that concerns surrounding the impact of fake news may be overstated (Lazer et al. 2018, Little 2018). According to this perspective, individuals may consume news that merely aligns with prior opinions and can account for and adjust to the bias of news sources (Taber and Lodge 2006). Moreover, isolated fake messages may not be especially persuasive on their own. For all of these reasons, persuasion is often difficult. This minimal effects finding has held even in the case of deepfakes, or sophisticated fake video and audio created by new artificial intelligence techniques. While deepfakes are thought to be so realistic that their emergence has garnered major concern from governments,

researchers, and the public (Agarwal et al. 2019, Chesney and Citron 2019), recent work has found that deepfake political content is not more persuasive than its text-based counterpart (Wittenberg et al. 2020).

There is, however, another pernicious and more subtle form of misinformation that owes its existence (in part) to fake news. On one hand, there are indeed real instances of fake news, such as false stories propagated by Russian state actors in the 2016 U.S. election. On the other hand, the existence of fake news has also opened the door to a modern (but not altogether new)1 type of misinformation: false allegations of fake news, whereby politicians or other public figures claim that real news stories are “fake news” (Chesney and Citron 2018). This strategy can be employed even against mainstream credible news sources, making it all the more valuable for those who wish to avoid scrutiny. While this modern version of alleging fake news has been made prominent by U.S. President Donald Trump, calls of “fake news” have now been echoed by politicians in Russia, China, Turkey, Libya, Poland, Hungary, Thailand, Somalia, Myanmar, Syria, Malaysia, and others. Importantly, this form of misinformation is especially prominent in authoritarian countries that restrict press freedoms, and has been used to target political opponents and the media, as well as to avoid accountability for political abuses (Erlanger 2017).

The systematic usage of this form of misinformation, alleging “fake news” in response to real stories, suggests that certain unscrupulous public figures find it to be persuasive or otherwise beneficial, against the expectations of a minimal effects hypothesis. Of note, such a strategy is employed in an environment already laden with significant misinformation and growing distrust in media sources (Lee 2010). Moreover, it is made more effective by the lowering of barriers to entry for ‘news-making,’ leading to numerous ways in which individuals can create and distribute seemingly authentic and authoritative content online (Meraz 2009). Finally, strategic allegations of fake news may serve to further undermine trust in the informational

1 See Tandoc Jr., Lim and Ling (2018) for a discussion of a variety of scholarly definitions and usages of “fake news.”

environment generally beyond challenging the truth of individual stories (O’Shaughnessy 2004). In sum, this strategy coupled with the current informational environment allows politicians to plausibly cry wolf over fake news. This study thus investigates the following paradox: might allegations of fake news be even more persuasive than fake news itself?

We posit that the answer is yes. The public may find allegations of “fake news” to be persuasive due to uncertainty regarding the truth of signals in a distorted media environment—a channel we term “informational uncertainty.” In addition, individuals may be persuaded by allegations of “fake news” because of a willingness to disregard unflattering information when evaluating their preferred politicians—a channel we term “rhetorical cover.”

We posit that the benefits of the liar’s dividend are concentrated on unscrupulous, or “bad type,” politicians for whom it is especially important to avoid reputational damage and maintain support from the public despite real embarrassing or scandalous news stories. These politicians intentionally and strategically play a role in creating and perpetuating an aura of doubt and confusion regarding the truthfulness of media coverage, including (or especially) coverage from prominent mainstream media outlets. In this way, they seek personal political gain from avoiding scandal despite potentially long-term societal losses in the form of reduced trust in the media, decreased political accountability, and increased partisan polarization. We propose that an allegation of a deepfake or fake news might improve politician support through two potential causal pathways.

First, the allegation of a deepfake or fake news can produce informational uncertainty. After learning of an embarrassing moment or political scandal, a member of the public will be more likely to downgrade their evaluation of the politician or to think that the politician is a “bad type.” However, if the politician then issues a statement disclaiming the story as a deepfake or fake news, then some members of the public may be more uncertain about what to believe. Is the story true, or is the politician’s allegation true and the story merely fake news or a deepfake? As an example of how politicians intentionally and even explicitly


attempt to induce informational uncertainty, Spanish Minister Alfonso Dastis discredited photos of violence in Catalonia by saying, “I’m sure you have seen what you have seen, but I have seen fake photos that date back to 2012. So, I think we have got to be patient, and look at the situation” (Oppenheim 2017).

This uncertainty leaves the citizen unclear about how to update their evaluation of the politician. Compared to a counterfactual where the politician makes no such allegation, we think claims of a deepfake or fake news will result in a unidirectional shift in average evaluations of the politician in the positive direction, although with an associated increased variance (a reflection of increased uncertainty). For those who already greatly dislike the politician and view them as dishonest, an allegation of a deepfake or fake news may not move their already low evaluation any lower. Inversely, for strong supporters, it is unlikely that any information would cause them to drop their already elevated support for the politician, including the original scandalous or embarrassing story. However, for some citizens in the middle, such as swing voters or those less devoted to a particular candidate, the allegation may produce enough uncertainty in their mind to justify a less negative evaluation of the politician.

Overall, this produces a net benefit for the politician. An example is illustrated below using a unidimensional type space for the politician from τ_min, representing a purely “bad type” politician, to τ_max, representing a purely “good type” politician. In addition, τ_pre represents the average evaluation of the politician before the damaging story about the politician is published, τ_¬A represents the average evaluation of the politician after the damaging story is published and there is no allegation of a deepfake or fake news, and τ_A represents the average evaluation of the politician after the damaging story is published and the politician makes an allegation of a deepfake or fake news. The difference between τ_A and τ_¬A represents the boost to evaluations of the politician’s type due to the informational uncertainty channel behind the liar’s dividend.


[Figure: number line ordering the evaluations as τ_min < τ_¬A < τ_A < τ_pre < τ_max]
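The predicted pattern for this channel, a higher mean (τ_A above τ_¬A) together with a higher variance among treated citizens, can be illustrated with toy numbers. The evaluation scores below are invented for illustration and are not study data:

```python
import statistics

# Hypothetical post-story evaluations of the politician on a 0-100 scale.
# Without an allegation, middle citizens downgrade the politician sharply.
no_allegation = [10, 20, 40, 50, 60, 80]
# With an allegation, strong opponents (10, 20) and the strongest supporter (80)
# stay put, while citizens in the middle discount the story and evaluate higher.
with_allegation = [10, 20, 60, 70, 80, 80]

mean_no = statistics.mean(no_allegation)      # analogue of tau_notA
mean_yes = statistics.mean(with_allegation)   # analogue of tau_A
print(mean_yes > mean_no)                     # upward shift in the mean
print(statistics.pvariance(with_allegation) >
      statistics.pvariance(no_allegation))    # wider spread (more uncertainty)
```

Both comparisons print `True` here: the allegation raises the average evaluation while also spreading evaluations out, which is exactly the mean-shift-plus-variance signature the informational uncertainty channel predicts.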

Second, an allegation of a deepfake or fake news can provide rhetorical cover. To avoid cognitive dissonance (Nisbet, Cooper and Garrett 2015), core supporters or strong co-partisans may be looking for an “out” or a motivated reason (Taber and Lodge 2006) to maintain support for their preferred politician or party in the face of a damaging news story. The allegation of a deepfake or fake news can provide just this sort of cover—an excuse or reason for supporters to disregard the negative coverage and preserve their positive evaluations of the politician. A politician who employs this strategy may be explicitly signaling to supporters for these reasons (Huang 2015). We therefore expect this mechanism to be most influential when individuals have strong positive associations with a specific politician, though strong party identification alone may be sufficient to drive these effects, given the connection between partisanship and identity (West and Iyengar 2020).

We also expect this effect to be particularly strong when individuals feel that their preferred politician or party is the target of unfair and hostile treatment by the opposition. In response, allegations of misinformation may strategically make use of a “devil shift” (Sabatier, Hunter and McLaughlin 1987) where politicians signal not only their own innocence, but also the guilt of political opponents and media, allowing supporters to rally against the opposition. As an example of this strategy, American Mayor Jim Fouts alleged that his opponents were attempting to “hijack [the annual MLK Day] ceremony by releasing more vile, vitriolic, phony tapes against me” and that such an “effort...is designed to distract from my efforts of inclusion for all” (Wang 2017). Finally, as core supporters may echo allegations of misinformation and criticisms of mainstream media outlets and political opponents, the rhetorical channel of the liar’s dividend may also have network and feed-forward effects. By reinforcing distrust and

uncertainty in the broader informational environment, the rhetorical strategy creates the conditions for even more false allegations and subsequent dividends.

2.1 Hypotheses

In our experimental tests of these theoretical predictions, we will evaluate three main hypotheses and a couple of subsidiary hypotheses. We expect that when compared to a control group, respondents treated with an allegation of misinformation will exhibit...

H1 Liar’s Dividend Hypothesis: Increased average support for the politician. Simply, we expect that an allegation of a deepfake or fake news will result in less negative evaluations of the politician overall.

H1.1 Informational Uncertainty Hypothesis: Decreased average belief in the story about the politician when primed to think about informational uncertainty. We also expect these effects to be concentrated amongst individuals in the middle of the political spectrum, representing individuals less likely to be strong supporters or opponents of partisan politicians. Moreover, we expect these effects to manifest in higher variance in belief.

H1.2 Rhetorical Cover Hypothesis: Increased average support for the politician when primed to think about politician support in terms of their political friends and foes. We also expect effects to be stronger for strong co-partisans. Compared to other respondents, core supporters are more likely to reward allegations that employ the rhetorical cover mechanism with greater support.

H2 Deepfakes Hypothesis: Smaller improvements in average support for the politician in response to an allegation of a deepfake. That is, we expect that respondents will believe that video is harder to fake than text, so allegations of deepfakes will likely be perceived as less credible, translating into a smaller payoff for politicians.


H3 Trust in Media Hypothesis: Decreased average trust in the media. We expect that this result might be driven by both mechanisms behind the liar’s dividend. For the informational uncertainty mechanism, politicians explicitly invoke distrust and confusion in the informational environment, likely driving individuals to increase their uncertainty over the accuracy of news coverage. For the rhetorical cover mechanism, individuals might be prompted to view the media as a biased, oppositional actor itself or as simply a tool for transmitting opinion-laden attacks by political opponents.

3 Pilot Study

We administered a pilot study in August 2020 to 916 adult American Amazon Mechanical Turk workers. The purpose of the study was to test a set of candidate videos for inclusion in the main study, to evaluate potential wordings of the politician response treatments, and to perform basic manipulation checks (i.e., whether respondents could see and hear the videos and whether they could correctly recall the stated political party of the politician).

First, we wanted to compare six candidate videos of politicians, two Democrat and four Republican, in terms of whether participants found the videos similarly embarrassing, plausibly faked, and familiar. Identifying comparable videos both within and across politician party helps to minimize the possibility that we pick up on effects driven by outliers in terms of video content (particularly egregious scandals) or quality (particularly fuzzy audio or images), rather than the theoretical mechanisms. We also wanted to assess whether respondents were more familiar with some of the politicians or events in our videos. In order to aid in our selection of comparable videos, we compared respondent reactions across videos, across politician party, and across respondent party. See Appendix B for the wording of the relevant outcome questions.

Second, with regard to the hypothetical politician allegation responses, we wanted to evaluate


whether more explicit priming regarding our two theoretical mechanisms of interest, informational uncertainty (IU) and rhetorical cover (RC), leads to stronger treatment effects than a more neutrally-worded politician allegation. The purpose was to help plan the factorial design of our main study: whether to have two distinct politician allegation treatments for each theoretical mechanism along with a politician nonresponse, or a single politician allegation treatment along with a nonresponse where the theoretical mechanisms are evaluated based on differential subgroup effects. See Appendix B for the wordings of the politician allegations that we tested. To assess responses to these potential versions of the politician allegations, we used outcome questions similar to the belief and support measures in our main study.

Finally, we were concerned that the label “fake news” might carry a partisan connotation due to the fact that it has been made mainstream by Donald Trump and used by right-leaning politicians. This could make politician allegations more or less plausible depending on politician party. As an alternative label for misinformation, we considered the phrase “false and misleading” and produced two sets of hypothetical politician allegations that used either “fake news” or “false and misleading.”2 We asked respondents to guess whether the hypothetical politicians making the allegations were Republican or Democrat, thinking that use of the term “fake news” might be an important signal. We also used an open-ended question so that respondents could provide us with their associations with the term “fake news.”

2 For reference, in response to a digitally-altered video, Speaker of the House Nancy Pelosi’s chief of staff stated, “The latest fake video of Speaker Pelosi is deliberately designed to mislead and lie to the American people, and every day that these platforms refuse to take it down is another reminder that they care more about their shareholders’ interests than the public’s interests” (Bekiempis 2020).


3.1 Pilot Results

Manipulation Checks

About 98% of the respondents in our pilot study were able to see and hear the videos, likely aided by our inclusion of subtitles on the videos and prompting that respondents could watch the videos multiple times before moving on (alterations based on a preliminary convenience sample for our pilot study). Moreover, across the politician videos, between 77% and 88% of respondents correctly identified the politician’s party. Based on this result, we decided to include an additional mention of the politician’s party in the video description for our main study. This is especially important, as respondents’ understanding of the politician’s party in reference to their own partisan identity is critical for assessing our theoretical mechanisms and hypotheses.

Reactions to Videos

Table 1 shows that all of the videos were deemed embarrassing (average scores of 2.7 to 4.0, corresponding to “Moderately embarrassing” or “Very embarrassing”). It also reveals that respondents were unsure about whether the videos had been digitally altered and found it plausible that the videos could have been faked (average scores of 4.0 to 4.9, corresponding to “Neither believable nor unbelievable” and “Slightly believable”). This means that any of the candidate videos could invoke negative reactions toward the politician (were embarrassing) and could also generate some uncertainty over authenticity if prompted (were plausibly faked). The videos appear to strike the right balance between being not too obviously faked such that the story itself is not believed in the first place and being not too obviously real such that the politician allegation would not be believed.

Next, Table 2 shows that more than half of respondents were familiar with Jesse Jackson and with the scandal involving Todd Akin. For the other videos, only a minority of respondents recognized the politicians or scandals, and an even smaller subset of those respondents could


correctly provide the politicians’ names. This informed our selection of four videos for our main study: Jesse Jackson (D) and Todd Akin (R) as more familiar videos and John Murtha (D) and Tim James (R) as less familiar videos.

Table 1: Perceptions that Candidate Videos are Embarrassing and Plausibly Faked

            Embarrassing Score   Faked Score
Christine          3.0               4.3
George             2.9               4.2
Jesse              3.5               4.9
John               2.7               4.2
Tim                3.3               4.4
Todd               4.0               4.0

Note: We used a 5-point Likert scale (middle value of 3) to assess whether the videos were deemed embarrassing and a 7-point Likert scale (middle value of 4) to assess whether the videos were deemed plausibly faked.

                                  Christine  George  Jesse  John  Tim  Todd
Familiarity with Politician (%)       33       30      57    20    23    29
Familiarity with Scandal (%)          18       26      20    15    14    51

Table 2: Familiarity with Politicians and Scandals in Candidate Videos

Reactions to Allegation Wordings

Table 3 shows the results of our investigation into whether the explicit primings regarding informational uncertainty and rhetorical cover in the politician allegations had meaningful impacts on the belief and support measures, in comparison to a group of respondents that received allegations without such overt primings. The results reveal that the informational uncertainty priming had a substantially negative impact on belief in the story, as expected, while the rhetorical cover priming did not. Moreover, the rhetorical cover priming had a substantially larger positive impact on support for the politician than the informational uncertainty priming.

That these differences are visible and generally in the anticipated directions suggests that the theoretical mechanisms do operate in distinct ways. For these reasons, we plan to use the separate mechanism primings in the main study. Though the pilot results are not

statistically significant, the fact that we observed notable differences in line with theoretical expectations with such a conservative control group (respondents still received an allegation of misinformation but with a different wording) suggests the potential for picking up on larger impacts in the main study when we use a “true” control group (no response at all from the politician).

                              Belief in Story   Support for Politician
                                    (1)                  (2)
Informational Uncertainty         −0.127               0.012
                                  (0.118)             (0.112)
Rhetorical Cover                  −0.016               0.148
                                  (0.118)             (0.112)
Observations                        880                  880
Controls                            Yes                  Yes

Note: Reference group is respondents who received politician allegations without explicit informational uncertainty or rhetorical cover primings.

Table 3: Responses to Explicit Mechanism Primings in Politician Allegations
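The comparison in Table 3 amounts to regressing each outcome on indicators for the two priming conditions, with the neutral wording as the reference group. A minimal sketch on simulated data is shown below; the data-generating effects are borrowed from the Table 3 point estimates purely for illustration, and the study’s actual estimation (including its controls) may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 880  # pilot analysis sample size

# Three mutually exclusive wording conditions: neutral (reference), IU, RC.
group = rng.integers(0, 3, size=n)
iu = (group == 1).astype(float)
rc = (group == 2).astype(float)

# Simulated belief outcome with effects matching the Table 3 point estimates.
belief = 4.0 - 0.127 * iu - 0.016 * rc + rng.normal(0, 0.5, size=n)

# OLS of belief on [intercept, IU indicator, RC indicator].
X = np.column_stack([np.ones(n), iu, rc])
coef, *_ = np.linalg.lstsq(X, belief, rcond=None)
print(coef[1], coef[2])  # estimates near -0.127 and -0.016
```

Because the treatment indicators are mutually exclusive dummies, the slope coefficients are just differences in group means relative to the neutral-wording reference group, which is exactly how the Table 3 entries should be read.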

In order to assess whether the term “fake news” carries a partisan connotation that could complicate analysis and interpretation in our main study, we performed regressions to assess whether the use of the terms “fake news” and “false and misleading” influenced participant perception of the hypothetical politicians as Republican or Democrat along a 7-point scale. Additionally, we performed text analysis on open-ended responses to the question, “What do you think of when you hear the term fake news?”

Table 4 indicates that respondents were statistically significantly more likely to associate the phrase “fake news” with more right-leaning politicians. In contrast, respondents were not more likely to associate the informational uncertainty or rhetorical cover treatments with partisanship, which suggests that politician use of these strategies is not perceived as inherently partisan.


                        Perceived Politician Partisanship
“Fake News” Wording                  0.266∗∗
                                     (0.125)
Observations                           880
Controls                               Yes

Note: ∗p<0.1; ∗∗p<0.05; ∗∗∗p<0.01. Reference group is respondents who received politician allegations stating the story was “false and misleading”.

Table 4: Effect of “Fake News” on Perceptions of Partisanship

Based on text analysis of 881 open-ended responses, we also observe a clear partisan association between the term fake news and right-leaning politicians and sources. In particular, while only 6.4% of responses mention terms like Clinton, Obama, Democrats, CNN, MSNBC, etc., a full 33.3% of responses mention Trump, Republican, or Fox, and 88% of those responses explicitly mention “Trump.” Most mentions of ‘left’ and ‘right’ keywords are critical in tone, though some are positive. Regardless of direction, however, these findings reinforce the regression results and indicate a clear partisan association with the term fake news.
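A keyword tabulation of this kind can be sketched as follows. The keyword lists and example responses here are illustrative stand-ins, not the study’s actual coding scheme or data:

```python
# Illustrative keyword dictionaries; the study's actual lists may differ.
RIGHT = {"trump", "republican", "fox"}
LEFT = {"clinton", "obama", "democrat", "democrats", "cnn", "msnbc"}

def mentions(text, keywords):
    """Return True if any keyword appears as a word in the response."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    return any(w in keywords for w in words)

# Toy open-ended responses (invented for illustration).
responses = [
    "Donald Trump calling everything negative about him fake news",
    "Stories made up to push an agenda",
    "Lies spread on social media",
]
right_share = sum(mentions(r, RIGHT) for r in responses) / len(responses)
left_share = sum(mentions(r, LEFT) for r in responses) / len(responses)
print(right_share, left_share)  # one of three toy responses mentions a right keyword
```

A dictionary-based pass like this only counts keyword mentions; judging whether a mention is critical or positive in tone, as reported above, requires additional (likely manual) coding.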

Interestingly, when defining fake news, some of the respondents explicitly reported behavior suggestive of the liar’s dividend. For example:

• “Donald Trump responding to negative news about himself”
• “Donald Trump calling everything negative about him fake news”
• “Donald Trump trying to change the narrative and have public not trust in new stories or journalists”

• “Donald Trump. Donald Trump is the person who made that phrase famous, as he just uses it to discredit news organizations that portray him in a negative light.”

Based on these findings from the pilot, we prefer the more neutral phrase “false and misleading” over “fake news” for inclusion in the politician allegations in the main study.


3.2 Implications for Main Study

Table 6 summarizes how the results of the pilot study have informed the design of our main study.

Question: Are informational uncertainty (IU) and rhetorical cover (RC) mechanisms distinct enough to use as separate treatments?
Pilot Result: IU and RC appear to have distinct impacts on outcome measures: IU has a large, negative impact on the belief measure; RC has a large, positive impact on the support measure.
Design Decision: We will use IU and RC as distinct politician response treatments.

Question: What is the best way to measure informational uncertainty?
Pilot Result: Use of a bi-directional uncertainty measure was confusing and did not give us additional information beyond the distribution of the belief measure.
Design Decision: We will use the distribution of the belief measure to evaluate uncertainty. The belief measure scale will be unidirectional to be clearer for participants.

Question: Does use of the term “fake news” carry partisan connotation?
Pilot Result: Yes, “fake news” has a statistically significant association with the Republican party, and is visibly a polarizing term in open-ended responses.
Design Decision: We will use the alternative term “false and misleading” to describe stories in the politician response treatments.

Question: Which video treatments from the candidate set of 6 are best to use?
Pilot Result: All videos were generally perceived as moderately embarrassing and plausibly faked, which makes them comparable and usable in a study of the liar’s dividend. We found respondents were more familiar with two of the politicians/events depicted.
Design Decision: We will use four videos (2 Democrat, 2 Republican). Two are more familiar to respondents and thus serve as a harder test for our theory, given that we expect respondents’ beliefs and support to change. The other two videos are less familiar. Results can be disaggregated or aggregated across videos.

Question: Are respondents able to see/hear videos?
Pilot Result: 98% reported no difficulties.
Design Decision: We include subtitles in videos in case some respondents have trouble hearing them. We include text indicating that respondents may view videos more than once before moving on.

Question: Can respondents correctly identify the politician party provided to them in the video description?
Pilot Result: Between 77% and 88% correctly identify the politician’s party.
Design Decision: We will repeat mention of the politician’s party in the video text description and title.

Table 5: Pilot Study Results that Inform Study Design


4 Experimental Design

4.1 Study Population and Survey Platform

We plan to administer our survey experiment online through NORC with a target sample size of about 2,500 to 3,000 participants. The study population is American adults in NORC’s AmeriSpeak panel, and participants will have previously consented to participate in the panel. With this panel, the sampling frame is US households, and we expect a high degree of external validity, as the AmeriSpeak panel is designed to “provide at least 97 percent sample coverage of the U.S. population” (NORC 2019). Each respondent will view one of four 9- to 22-second videos (or a text transcript), a follow-up paragraph of text, a matrix of 8 outcome questions, additional demographic questions, and a debrief. The estimated time to complete the survey is 3-5 minutes, presenting a minimal time burden for participants.

4.2 Randomization and Treatment Assignment

Because NORC uses a nationally representative panel, we believe that our sample of par- ticipants will contain sufficient balance across key demographics of interest (e.g., political affiliation, gender, and race). Survey respondents will be randomly assigned to one of the twelve treatment groups described below, with an equal probability of assignment to each treatment group.

We are interested in how Americans react to politicians’ allegations of misinformation (claims of fake news or deepfakes) in response to scandalous news stories in both text and video formats. Additionally, we are interested in the potential for differential responses based on partisanship. With a focus on these three variables—partisanship, media format, and allegation of misinformation—we use a 2x2x3 factorial design to vary the presentation of a political news story to participants in our experiment.
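Concretely, the twelve treatment cells and a complete random assignment with equal cell probabilities could be sketched as follows (the `assign` helper and respondent IDs are illustrative, not the survey platform’s actual assignment routine):

```python
# Sketch of the 2x2x3 factorial assignment with equal probability per cell.
import itertools
import random

parties = ["Democrat", "Republican"]
formats = ["Text", "Video"]
responses = ["No Response", "Informational Uncertainty", "Rhetorical Cover"]

# The full factorial yields 2 x 2 x 3 = 12 treatment cells.
cells = list(itertools.product(parties, formats, responses))

def assign(respondent_ids, seed=42):
    """Assign each respondent to one of the 12 cells uniformly at random."""
    rng = random.Random(seed)
    return {rid: rng.choice(cells) for rid in respondent_ids}

assignments = assign(range(2500))
```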


To vary partisanship, the news story presents the scandalous actions of either a Democrat or a Republican politician.3 To vary the media format, we present either a video clip of a news story or a corresponding text-based version of the news story. Finally, to vary the allegation of misinformation, the video/text is accompanied by one of the following: 1) no response from the politician (control), 2) a statement from the politician that the story is a deepfake/fake news, with explicit priming about informational uncertainty, or 3) a similar statement but with explicit priming about rhetorical cover. Table 6 shows this factorial design of our treatments. The specific videos used for our treatments and the wording of our survey items are included in Appendix A.

Politician Party ID   Media Format   Politician Response
Democrat              Text           No Politician Response
Democrat              Text           Informational Uncertainty
Democrat              Text           Rhetorical Cover
Democrat              Video          No Politician Response
Democrat              Video          Informational Uncertainty
Democrat              Video          Rhetorical Cover
Republican            Text           No Politician Response
Republican            Text           Informational Uncertainty
Republican            Text           Rhetorical Cover
Republican            Video          No Politician Response
Republican            Video          Informational Uncertainty
Republican            Video          Rhetorical Cover

Table 6: 2x2x3 Factorial Design of Experiment

The videos that will be shown to or transcribed for participants are real videos of politicians taking actions that are arguably insensitive, embarrassing, or otherwise counter to their message, identity, or agenda. To ensure consistency across the text and video treatments, the text-based news stories are based on the transcripts of the video clips. The videos are of former politicians in order to ensure minimal impacts on current elected officials. Based on pilot results, we identified four videos that our pilot respondents viewed as similarly

3Combined with our knowledge of the partisanship of respondents, this allows us to evaluate responses to news stories about politicians with partisan identities that match or oppose those of respondents.

embarrassing and plausibly digitally faked. Moreover, we selected video clips that are as consistent as possible, given available options, in terms of length, content, and context. Two such videos (one Democrat and one Republican) are of politicians/events that respondents recognized, while the second pair of videos were unfamiliar to respondents. The former pair serves as a conservative test of our hypothesis: does the liar’s dividend pay off even for more well-known scandals?

We think it ethically important, and necessary to appropriately study the liar’s dividend, to use real videos and corresponding transcripts. However, the allegations of misinformation (claims of a deepfake or fake news) will not be based on real communication by the politicians or their agents; instead, they will be statements that we craft based on existing claims by other real politicians regarding deepfakes or fake news. That is, the “lie” component of the liar’s dividend in our experiment will be created by the researchers. Therefore, participants will be debriefed at the end in order to explain the researchers’ role in crafting the response and to provide news media literacy resources on detecting fake news and deepfakes.

4.3 Outcome Measures and Additional Demographic Items

We use a matrix of 8 outcome measures to 1) assess whether respondents believe the story about the politician, 2) measure respondents’ willingness to support the politician, and 3) measure respondents’ trust in media generally. All outcome questions will use a 7-point Likert scale with the ends labeled “Not at all” and “Definitely.” We use multiple questions for each of these three outcome categories of interest in order to build indices. The indices will be constructed following the procedure used by Kling, Liebman and Katz (2007), which involves linearly adding z-scores for the component outcome questions.
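A minimal sketch of this index construction (column names are hypothetical; following Kling, Liebman and Katz, each component is standardized against the control group’s mean and standard deviation before the z-scores are added):

```python
# Build a summary index by z-scoring each component outcome against the
# control group and summing the z-scores (Kling-Liebman-Katz-style index).
import pandas as pd

def klk_index(df, components, control_mask):
    """Return a summary index: sum of control-standardized z-scores."""
    z = pd.DataFrame(index=df.index)
    for col in components:
        mu = df.loc[control_mask, col].mean()
        sd = df.loc[control_mask, col].std()
        z[col] = (df[col] - mu) / sd
    return z.sum(axis=1)

# Hypothetical usage with two belief items and a control indicator:
df = pd.DataFrame({
    "q1": [1.0, 2.0, 3.0, 4.0],
    "q2": [1.0, 3.0, 5.0, 5.0],
    "treat": ["control", "control", "treated", "treated"],
})
df["belief_index"] = klk_index(df, ["q1", "q2"], df["treat"] == "control")
```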

We also include additional demographic questions assessing participants’ news media literacy and digital literacy. The news media literacy question is adopted from three questions used to create a news media literacy index by the Reuters Institute for the Study of Journalism

at Oxford University (Newman et al. 2019). The second item addresses digital literacy, which is important in assessing how participant responses to allegations implicating deepfakes are moderated by their prior knowledge.4 The wording of all survey items is provided in Appendix A.

4.4 MDE Calculations

We use simulations based on the pilot study to calculate minimum detectable effects (MDEs) for a range of possible sample sizes for two of our main outcomes of interest: the belief and support measures. As suggested by DeclareDesign (2019), calculating MDEs from pilot results is an improvement over power calculations because the latter are based on noisy effect estimates. The graphs below show the sample sizes needed to achieve standardized MDEs, based on the standard deviation of the control group outcomes as measured in the pilot and using the conventional 80% power and 5% significance levels.

With an anticipated sample size of 2,500, we should be able to detect standardized effects for our main hypothesis (H1) as small as 0.16 and 0.17 for the support and belief outcomes, respectively. These are relatively small effects, given that our pilot picked up standardized effects of about 0.1 using a much more conservative version of the control vignette. In the pilot, participants in the control group received a politician response with a slightly less strongly worded allegation, but in the main study, the relevant control group will receive no allegation response at all from the politician. We will focus on the marginal effects, pooling across different treatment conditions and estimating the effects of the politician’s response, politician partisanship, and media format on respondents’ evaluations.
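As a rough analytic cross-check of the simulation-based MDEs, the standard two-group closed form can be used; note this approximation ignores covariate adjustment and design details, so it will not match the simulated values exactly (group sizes below are illustrative):

```python
# Closed-form standardized MDE for a two-group comparison:
# MDE = (z_{1-alpha/2} + z_{power}) * sd * sqrt(1/n1 + 1/n2)
from math import sqrt
from statistics import NormalDist

def mde(n1, n2, alpha=0.05, power=0.80, sd=1.0):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    return (z_alpha + z_power) * sd * sqrt(1 / n1 + 1 / n2)

# E.g., two equal arms of 1,250, or pooled allegation arms vs. control:
balanced = mde(1250, 1250)
pooled = mde(1667, 833)
```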

While our experimental design lacks power to test all interactions between treatment condi- tions, the interactions between media format and allegation and between partisanship and

4We have opted to include these demographic questions post-treatment because we are concerned about potential priming effects, and the results of our pilot study suggest that the media literacy and digital literacy questions are not significantly impacted by treatment (p-values from global F-tests: .66 and .18, respectively).


allegation are of theoretical and policy importance to us. Furthermore, given that there is little existing research on the liar’s dividend and deepfakes, the factorial design allows us and other researchers to test a wider range of exploratory hypotheses. Therefore, in our exploratory analyses as well as our main analyses specified in the Analysis section, we will use the Benjamini-Hochberg method to correct for multiple testing.
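The Benjamini-Hochberg step-up procedure can be sketched as follows (statsmodels’ `multipletests(..., method="fdr_bh")` implements the same correction; this standalone version shows the logic):

```python
# Benjamini-Hochberg step-up procedure: reject the k smallest p-values,
# where k is the largest i such that p_(i) <= q * i / m.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean array marking which hypotheses are rejected at FDR q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()  # largest index meeting the threshold
        reject[order[: k + 1]] = True
    return reject
```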

5 Analysis Strategy

Below, we present the estimands of interest for each of our hypotheses, along with the associated regression models. We will use ordinary least squares (OLS) regression with covariate adjustment to estimate average treatment effects (ATEs). Randomization in our experiment allows us to obtain treatment and control groups that are identical, in expectation, before treatment, and therefore eliminates the problem of unobserved heterogeneity. As the only difference, on average, between the groups is their exposure to treatment, we


can identify the average causal effects of the treatments through the differences in outcomes between groups.

H1 Liar’s Dividend Hypothesis

For the Liar’s Dividend Hypothesis, the estimand of interest is an ATE based on comparing groups that received one of the treatments with an allegation of misinformation, pooling across informational uncertainty and rhetorical cover, to the control group that received no response from the politician. The outcome of interest is the support index measure. Note that this ATE is a weighted average across the Democrat and Republican treatments and across the Text and Video treatments. We expect this ATE to be positive.

E[Support(D = allegation) − Support(D = no response)]

The associated regression specification is:

Support = β0 + β1allegation + γX + ε (1)

where β1 is the estimated ATE, allegation is an indicator for receiving either the informational uncertainty or rhetorical cover allegation treatment, X refers to a vector of the covariates described below in this section, and ε refers to the error term. The reference group is the group of participants who did not receive a response message from the politician.
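A numpy sketch of estimating equation (1) with covariate adjustment and Huber-White (HC1) robust standard errors; in practice this could equivalently be run with statsmodels’ `smf.ols(...).fit(cov_type="HC1")`. Variable names and the simulated data are hypothetical:

```python
# Covariate-adjusted OLS estimate of the ATE with HC1 robust SEs.
import numpy as np

def ate_ols(y, d, X):
    """Regress y on an intercept, treatment indicator d, and covariates X;
    return (ATE estimate, HC1 robust standard error)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    Z = np.column_stack([np.ones(n), np.asarray(d, dtype=float), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    bread = np.linalg.inv(Z.T @ Z)
    meat = Z.T @ (Z * (resid ** 2)[:, None])
    V = bread @ meat @ bread * n / (n - Z.shape[1])  # HC1 scaling
    return beta[1], np.sqrt(V[1, 1])

# Simulated check with a known treatment effect of 0.5:
rng = np.random.default_rng(0)
n = 4000
d = rng.integers(0, 2, n)
x = rng.normal(size=(n, 1))
y = 1.0 + 0.5 * d + 0.8 * x[:, 0] + rng.normal(size=n)
ate, se = ate_ols(y, d, x)
```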

H1.1 Informational Uncertainty Hypothesis

For the Informational Uncertainty Hypothesis, we are interested in evaluating a causal pathway for the liar’s dividend through belief in the story and thus focus on the belief index as the relevant outcome measure. The estimand of interest is an ATE comparing responses

between respondents who receive the informational uncertainty allegation and those in the control group. We expect this ATE to be negative.

E[Belief(D = IUallegation) − Belief(D = no response)]

The associated regression specification is:

Belief = β0 + β1IUallegation + β2RCallegation + γX + ε (2)

where β1 is the estimated ATE of interest and the reference group is the politician nonresponse group.

We will also perform three exploratory analyses. First, we expect effects to be stronger for moderates, defined as those who identify as independents, lean Democrats, or lean Republicans. Therefore, we will use a regression specification interacting the IU allegation treatment with a moderate indicator variable to test whether treatment effects are larger for moderates:

Belief = β0 + β1IUallegation + β2moderate + β3IUallegation × moderate + γX + ε (3)

Second, as an alternative measure of uncertainty, we will assess whether there is greater variance in the belief index measure for the IU allegation treatment group compared to the nonresponse control group. We will use the Kolmogorov-Smirnov test to evaluate whether the distribution of the belief index measure is significantly different between the two groups.
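This distributional comparison can be run with SciPy’s two-sample Kolmogorov-Smirnov test (group labels and argument names are hypothetical):

```python
# Two-sample KS test of whether the belief-index distributions differ
# between the IU allegation group and the nonresponse control group.
from scipy.stats import ks_2samp

def compare_belief_distributions(belief_iu, belief_control, alpha=0.05):
    """Return (KS statistic, p-value, reject-null flag at level alpha)."""
    res = ks_2samp(belief_iu, belief_control)
    return res.statistic, res.pvalue, res.pvalue < alpha
```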

Finally, we will compare the effect of the informational uncertainty treatment, β1, to the


effect of the rhetorical cover treatment, β2, on belief using a z-test. This will help us to assess whether the two mechanisms operate in distinct ways.
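One way to sketch this z-test for the difference between two coefficients (since β1 and β2 come from the same regression, the variance of their difference should include their covariance, which is available from the fitted model’s covariance matrix; the helper below is illustrative):

```python
# z-test for H0: beta1 = beta2, given their variances and covariance.
from math import sqrt
from statistics import NormalDist

def coef_diff_ztest(b1, b2, var1, var2, cov12=0.0):
    """Return (z statistic, two-sided p-value) for the difference b1 - b2.
    cov12 should be taken from the model's coefficient covariance matrix;
    setting it to 0 is only appropriate if the estimates are independent."""
    z = (b1 - b2) / sqrt(var1 + var2 - 2 * cov12)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p
```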

H1.2 Rhetorical Cover Hypothesis

For the Rhetorical Cover Hypothesis, we focus again on the support index measure and consider the unique effect of the allegation treatment with rhetorical cover priming. The estimand of interest is an ATE comparing responses between respondents who receive the rhetorical cover allegation and those in the control group. We expect this ATE to be positive.

E[Support(D = RCallegation) − Support(D = no response)]

The associated regression specification is:

Support = β0 + β1IUallegation + β2RCallegation + γX + ε (4)

where β2 is the estimated ATE of interest and the reference group is the politician nonresponse group.

We will also perform two exploratory analyses. First, we expect effects to be stronger for strong partisans who are co-partisans with the politician: those who identify as strong Democrats, Democrats, Republicans, or strong Republicans and match the partisanship of the politician. Therefore, we will use a regression specification interacting the RC allegation treatment with a strong partisan indicator variable to test whether treatment effects are

larger for strong co-partisans:

Support = β0 + β1RCallegation + β2strong + β3RCallegation × strong + γX + ε (5)

Second, we will compare the effect of the rhetorical cover treatment, β2, to the effect of the informational uncertainty treatment, β1, on support using a z-test. In conjunction with our exploratory analysis regarding belief, this will help us shed light on the mechanisms’ potentially distinct causal effects on support, including whether or not they operate through beliefs.

H2 Deepfakes Hypothesis

For the Deepfakes Hypothesis, we consider whether the liar’s dividend is larger in the case of text or video, i.e., with respect to fake news or deepfakes. To do so, we compare the allegation treatment effect on the support outcome for the video format to the allegation treatment effect on the support outcome for the text format. In this case, the treatments are pooled across mechanisms and across politician partisanship. This estimand of interest is the difference between two ATEs and is estimated by the interaction term in the regression specification. We expect this effect to be negative, reflecting our expectation that lies about deepfakes pay off less.

E[Support(D = allegation) − Support(D = no response)|format = video]−

E[Support(D = allegation) − Support(D = no response)|format = text]

The associated regression specification is:

Support = β0 + β1allegation + β2media + β3allegation × media + γX + ε (6)
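Without covariates, the interaction coefficient β3 equals the difference between the two conditional ATEs, which can be computed directly from cell means as a sanity check (column names and codings are hypothetical; `allegation` pools the two allegation arms against no response):

```python
# Difference-in-ATEs for H2 computed from cell means:
# (video ATE of the allegation) - (text ATE of the allegation).
import pandas as pd

def diff_in_ates(df):
    m = df.groupby(["media", "allegation"])["support_index"].mean()
    video_ate = m[("video", 1)] - m[("video", 0)]
    text_ate = m[("text", 1)] - m[("text", 0)]
    return video_ate - text_ate
```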


H3 Trust in Media Hypothesis

For the Trust in Media Hypothesis, the estimand of interest is represented below and is similar to the ATE of interest for the Liar’s Dividend Hypothesis but with a focus on the trust in media index measure. We expect this ATE to be negative.

E[Trust(D = allegation) − Trust(D = no response)]

The associated regression specification is:

Trust = β0 + β1allegation + γX + ε (7)

Further Considerations

We will use Huber-White heteroskedasticity-robust standard errors, and statistical significance will be assessed at the 5% alpha level. However, for hypothesis families with multiple tests, we will use the Benjamini-Hochberg method. The coefficients on covariates in our regression models will allow us to look at possible associations between covariates (such as age, education, race/ethnicity, and income) and attitudes towards deepfakes and fake news. While these coefficients cannot, and will not, be interpreted causally, we may comment on them, indicate possible associations of interest, and suggest possible avenues for future research.

5.1 Covariates and Balance

In our analysis, we will assess covariate balance for the following demographic covariates: partisanship, gender, race/ethnicity, age, education, and household income. These are the covariates that we will include in our regression models; they will be recoded based on demographic data provided by NORC. We also include two additional covariates for news media

literacy and digital literacy, as described in Appendix A.

The covariates will be coded as follows:

• Partisanship will be coded as a factor variable with seven levels: strong Democrat, Democrat, lean Democrat, Independent, lean Republican, Republican, and strong Republican.

• Gender will be coded as a factor variable with male and female as the two levels.

• Age will be coded as a factor variable with five levels based on the Pew Research Center generation age ranges (https://www.pewresearch.org/topics/generations-and-age/).

• Race/ethnicity will likely be coded as a factor variable with White, Black or African American, and Other as the three race/ethnicity categories, though this is subject to adjustment based on the sample demographics.

• Education will be coded as a factor variable with three levels: high school graduate or less, some college or Associate degree, and Bachelor’s degree or higher. We may separate high school graduates from those who did not graduate high school.

• Income will be coded as a factor variable with three levels from low to high income: less than $30,000; $30,000-$74,999; and $75,000+.

• News media literacy will be coded as a binary variable based on whether respondents answer a factual question correctly.

• Digital literacy will be coded numerically based on a 5-point Likert scale from “Not at all” to “A great deal.”
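An illustrative recoding of the income covariate with pandas, using the cut points listed above (the helper name is ours, and the exact level labels are illustrative):

```python
# Recode household income (dollars) into the three factor levels above.
import pandas as pd

def recode_income(income_dollars):
    return pd.cut(
        income_dollars,
        bins=[0, 30_000, 75_000, float("inf")],
        labels=["<$30,000", "$30,000-$74,999", "$75,000+"],
        right=False,  # left-closed bins, so $30,000 falls in the middle level
    )
```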

We will present the proportion of respondents, along with the standard deviation, for each covariate level across the twelve treatment groups. It is not unusual for some covariates to exhibit statistically different values across different conditions by chance, given the number of covariates and conditions. Therefore, we additionally determine balance by performing F-tests of global significance for these covariates for each treatment group. To do so, we

regress each of the twelve treatment group indicator variables on the specified covariates and extract the F-statistic and its associated p-value. The null hypothesis for each F-test is that the coefficients on the covariates are all equal to zero; that is, we expect that none of the covariates should predict treatment. Thus, balance is supported by failing to reject the null at the 5% significance level.
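A minimal numpy/SciPy sketch of one such global F-test, run on a single treatment-cell indicator (in practice this would be repeated for each of the twelve indicators, with the covariates encoded as a design matrix):

```python
# Global F-test: do the covariates jointly predict assignment to one cell?
import numpy as np
from scipy.stats import f as f_dist

def balance_f_test(treat_indicator, X):
    """Regress a 0/1 treatment-cell indicator on covariate matrix X and
    return (F statistic, p-value) for H0: all covariate coefficients = 0."""
    y = np.asarray(treat_indicator, dtype=float)
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    F = ((tss - rss) / k) / (rss / (n - k - 1))
    return F, f_dist.sf(F, k, n - k - 1)
```

Balance is supported when the p-value is large (failing to reject at the 5% level).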

5.2 Limitations and Threats to Inference

With an experimental design, we expect a high degree of internal validity and the ability to identify our causal effects of interest. Nonetheless, it is possible that features of our treatment arms are asymmetrical in ways that affect participant responses. These asymmetries may stem from differences in how the video clips are edited for the Republican and Democrat politicians (for example, the presence of a news station logo), or from differences in the perceived severity of the embarrassing story depicted or the plausibility that the video could have been faked. To mitigate these concerns, we used a pilot study to assess the comparability of several candidate videos along several dimensions, and we have selected videos that are as similar as possible. We have also attempted to minimize perceptions of partisan bias by using the terminology “false and misleading” in politician allegations rather than “fake news.” Further, we average effects across videos and across the partisanship of respondents to minimize the chance that effects are driven by unique video content or by asymmetric partisan responses to co- or anti-partisan politicians.

Should we confront any unexpected issues in the administration of the survey and analysis of results, we will defer to the Standard Operating Procedures for Don Green’s lab at Columbia. These guidelines can be found at the following URL: http://alexandercoppock.com/Green-Lab-SOP/Green_Lab_SOP.pdf.


6 Conclusion

Misinformation is a topic of significant and growing concern around the world. While the direct effects of misinformation have received much attention, the indirect effects of misinformation may be even more concerning. In particular, we argue that unscrupulous politicians can receive a “liar’s dividend” by falsely alleging that scandalous or controversial stories about them are fake news or deepfakes, allowing them to maintain support in the face of damaging information. We have posited that the payoffs from the liar’s dividend work through two theoretical channels: by injecting informational uncertainty into the media environment that upwardly biases evaluations of the politician’s type among non-partisans, or by providing rhetorical cover which increases support among strong co-partisans. Therefore, we employ a 2x2x3 factorial survey experiment (politician partisanship x media format x politician response), randomly assigning vignette treatments detailing real embarrassing or scandalous stories about American politicians to American citizens, to assess impacts on belief in news stories, support for politicians, and trust in media.

This study can help scholars in political science, policy, foreign affairs, communication and media studies, AI and machine learning, and other fields currently invested in issues related to misinformation and deepfakes. To our knowledge, no prior work has quantitatively evaluated the indirect effects of misinformation or the liar’s dividend in particular. There is also a scholarly need to understand the level of impact deepfakes can have, and to contextualize these harms against those of fake news. This proposed study therefore uses experimental methods that can provide compelling causal evidence to address these gaps in knowledge. The study can help strengthen the understanding of policymakers, standards-development organizations, and other stakeholders interested in responding to the impacts of misinformation about misinformation.


References

Agarwal, Shruti, Hany Farid, Yuming Gu, Mingming He, Koki Nagano and Hao Li. 2019. “Protecting World Leaders Against Deep Fakes.” The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops p. 8.

Allcott, Hunt and Matthew Gentzkow. 2017. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31(2):211–236.

Bekiempis, Victoria. 2020. Facebook and Twitter reject Pelosi’s request to remove edited Trump video. The Guardian. URL: https://www.theguardian.com/us-news/2020/feb/09/nancy-pelosi-trump-state-of-the-union-video-twitter-facebook

Chesney, Robert and Danielle Citron. 2019. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” Foreign Affairs 98:147.

Chesney, Robert and Danielle Keats Citron. 2018. Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. SSRN Scholarly Paper ID 3213954. URL: https://papers.ssrn.com/abstract=3213954

DeclareDesign. 2019. “Should a pilot study change your study design decisions?”. URL: https://declaredesign.org/blog/2019-01-23-pilot-studies.html

Erlanger, Steven. 2017. “‘Fake News,’ Trump’s Obsession, Is Now a Cudgel for Strongmen.” The New York Times . URL: https://www.nytimes.com/2017/12/12/world/europe/trump-fake-news-dictators.html

Guess, Andrew M., Brendan Nyhan and Jason Reifler. 2020. “Exposure to untrustworthy websites in the 2016 US election.” Nature Human Behaviour 4(5):472–480. URL: https://doi.org/10.1038/s41562-020-0833-x

Huang, Haifeng. 2015. “Propaganda as Signaling.” Comparative Politics 47(4):419–444.

Kalla, Joshua L and David E Broockman. 2018. “The minimal persuasive effects of campaign contact in general elections: Evidence from 49 field experiments.” American Political Science Review 112(1):148–166.

Kling, Jeffrey R, Jeffrey B Liebman and Lawrence F Katz. 2007. “Experimental analysis of neighborhood effects.” Econometrica 75(1):83–119.


Lazer, David M. J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild and et al. 2018. “The science of fake news.” Science 359(6380):1094–1096.

Lee, Tien-Tsung. 2010. “Why They Don’t Trust the Media: An Examination of Factors Predicting Trust.” American Behavioral Scientist 54(1):8–21.

Little, Andrew T. 2018. “Fake news, propaganda, and lies can be pervasive even if they aren’t persuasive.” Critique 11(1):21–34.

Mak, Tim and Dina Temple-Raston. 2020. Where Are The Deepfakes In This Presidential Election? NPR. URL: https://www.npr.org/2020/10/01/918223033/where-are-the-deepfakes-in-this-presidential-election

Maksl, Adam, Seth Ashley and Stephanie Craft. 2015. “Measuring news media literacy.” Journal of Media Literacy Education 6(3):29–45.

Meraz, Sharon. 2009. “Is There an Elite Hold? Traditional Media to Social Media Agenda Setting Influence in Blog Networks.” Journal of Computer-Mediated Communication 14(3):682–707.

Mitchell, Amy, Jeffrey Gottfried, Galen Stocking, Mason Walker and Sophia Fedeli. 2019. “Many Americans Say Made-Up News Is a Critical Problem That Needs To Be Fixed.” URL: https://www.journalism.org/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/

Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos and Rasmus Nielsen. 2019. Reuters institute digital news report 2019. Vol. 2019 Reuters Institute for the Study of Journalism.

Nisbet, Erik C., Kathryn E. Cooper and R. Kelly Garrett. 2015. “The Partisan Brain: How Dissonant Science Messages Lead Conservatives and Liberals to (Dis)Trust Science.” The Annals of the American Academy of Political and Social Science 658(1):36–66.

NORC. 2019. Technical Overview of the Amerispeak Panel: NORC’s Probability-Based Household Panel. URL: https://amerispeak.norc.org/Documents/Research/AmeriSpeak%20Technical%20Overview%202019%2002%2018.pdf

Oppenheim, Maya. 2017. Catalan referendum: Spanish foreign minister claims photos of police brutality are ‘fake’. The Independent. URL: https://www.independent.co.uk/news/world/europe/catalan-independence-referendum-photos-police-violence-fake-a7978876.html


O’Shaughnessy, Nicholas Jackson. 2004. Politics and propaganda. Manchester University Press.

Sabatier, Paul, Susan Hunter and Susan McLaughlin. 1987. “The Devil Shift: Perceptions and Misperceptions of Opponents.” Western Political Quarterly 40(3):449–476.

Schuster, Tal, Roei Schuster, Darsh J. Shah and Regina Barzilay. 2019. “Are We Safe Yet? The Limitations of Distributional Features for Fake News Detection.” arXiv:1908.09805 [cs] . arXiv: 1908.09805. URL: http://arxiv.org/abs/1908.09805

Schwartz, Oscar. 2018. “You thought fake news was bad? Deep fakes are where truth goes to die.” The Guardian . URL: https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth

Spohr, Dominic. 2017. “Fake news and ideological polarization: Filter bubbles and selective exposure on social media.” Business Information Review 34(3):150–160.

Taber, Charles S and Milton Lodge. 2006. “Motivated skepticism in the evaluation of political beliefs.” American journal of political science 50(3):755–769.

Tandoc Jr, Edson C., Zheng Wei Lim and Richard Ling. 2018. “Defining ‘Fake News’.” Digital Journalism 6(2):137–153.

Wang, Amy B. 2017. A mayor denies it is his voice on lewd, racist tapes. His colleagues say ‘resign.’ The Washington Post. URL: https://www.washingtonpost.com/news/post-nation/wp/2017/01/17/a-mayor-denies-its-his-voice-on-lewd-racist-tapes-his-colleagues-say-resign/

West, Emily A and Shanto Iyengar. 2020. “Partisanship as a social identity: Implications for polarization.” Political Behavior pp. 1–32.

Wittenberg, Chloe, Jonathan Zong, David Rand et al. 2020. “The (Minimal) Persuasive Advantage of Political Video over Text.”.


7 Appendix A: Main Study Survey Items

Text/Video Treatment

Each respondent will randomly receive one of the eight following possible prompts, with 1) one of four politicians (two Republican and two Democrat) and 2) either a video prompt or a text prompt about that politician. This corresponds to the first two factors of our factorial design (politician party ID and media format).

Your local news station has shown the following video clip of Republican politician Tim James. Please [watch the following video clip — read the following story]. You may [watch the video — read the story] as many times as you like. (You will not be able to go back after you go to the next page.)

Republican Tim James Accused of Making Offensive Remarks Embed this video (12 seconds) or text transcript

Your local news station has shown the following video clip of Democrat politician John Murtha. Please [watch the following video clip — read the following story]. You may [watch the video — read the story] as many times as you like. (You will not be able to go back after you go to the next page.)

Democrat John Murtha Accused of Making Offensive Remarks Embed this video (13 seconds) or text transcript


Your local news station has shown the following video clip of Republican politician Todd Akin. Please [watch the following video clip — read the following story]. You may [watch the video — read the story] as many times as you like. (You will not be able to go back after you go to the next page.)

Republican Todd Akin Accused of Making Offensive Remarks Embed this video (12 seconds) or text transcript

Your local news station has shown the following video clip of Democrat politician Jesse Jackson. Please [watch the following video clip — read the following story]. You may [watch the video — read the story] as many times as you like. (You will not be able to go back after you go to the next page.)

Democrat Jesse Jackson Accused of Making Offensive Remarks Embed this video (9 seconds) or text transcript


Politician Response Message

Next, respondents will randomly receive one of the following politician response messages.

No politician response message: The survey will skip ahead to the outcome questions.

Informational Uncertainty treatment: Now, please read the politician’s response to the story carefully before moving on.

[Politician Name] Responds That Story is False and Misleading, People Should Be Skeptical In response to the recent allegations, [Republican — Democrat] [Politician Name] asserted that the story is false and misleading. He claimed that [the video is a deepfake, a computer-edited video that uses fake audio and images — the story is not based on true information]. When asked about the incident, he said that it’s well known that there’s a lot of misleading information, so people should be skeptical about what they hear. [Last Name] stated that “You can’t know what’s true these days with so much misinformation out there.”

Rhetorical Cover treatment: Now, please read the politician’s response to the story carefully before moving on.

[Politician Name] Responds That Story is False and Misleading, Attack by Opponent In response to the recent allegations, [Republican — Democrat] [Politician Name] asserted that the story is false and misleading. He claimed that [the video is a deepfake, a computer-edited video that uses fake audio and images — the story is not based on true information]. When asked about the incident, he said that the story is an attack by the opposition, and that people should not pay attention to it. [Last Name] stated that, “My opponent would say anything to hurt me, but my supporters know who’s really on their side.”


Outcome Measures

Next, respondents will be presented with a matrix of 8 outcome questions. All outcome questions will use a 7-point Likert scale with the ends labeled as “Not at all” and “Definitely.”

1. To what extent do you believe the story about the politician?

2. To what extent do you doubt that the story about the politician is true?

3. To what extent do you support the politician?

4. How likely would you be to defend the politician against critics?

5. How likely would you be to vote for the politician?

6. How likely would you be to donate to the politician?

7. To what extent do you trust the media?

8. To what extent do you believe that the media reports the news fairly?

Outcome questions 1 and 2 will be combined to create an index for belief in the story about the politician. Outcome questions 3–6 will be combined to create an index for politician support. Finally, outcome questions 7 and 8 will be combined to create an index for trust in media. To create the indices, we will follow the procedure used by Kling, Liebman and Katz (2007) and linearly add z-scores for the component outcome questions.
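This index construction can be sketched in a few lines of Python. The Likert responses below are fabricated for illustration, and the reverse-coding of the doubt item (question 2 is worded in the opposite direction from question 1) is our assumption about how the components would be aligned before standardizing.

```python
from statistics import mean, pstdev

def zscores(xs):
    """Standardize one component: (x - mean) / sd."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def zscore_index(components):
    """Linearly add z-scores across components (Kling-Liebman-Katz style)."""
    zs = [zscores(c) for c in components]
    return [sum(vals) for vals in zip(*zs)]

# Hypothetical 7-point Likert responses for the two belief items.
believe = [7, 5, 6, 2, 4]
doubt = [1, 3, 2, 6, 4]
doubt_rev = [8 - x for x in doubt]  # assumed reverse-coding: higher = more belief

belief_index = zscore_index([believe, doubt_rev])
```

Because each component is standardized before summing, the resulting index is mean-zero and weights each question equally regardless of its original variance.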

Additional Demographic Questions

To assess potential protective factors against misinformation, we include items for news media literacy and digital literacy. The media literacy question has a single correct response from a set of four possible answers. This question is adapted from three questions used to create a news media literacy index by the Reuters Institute for the Study of Journalism at Oxford University (Newman et al. 2019).5 We assessed responses to these three questions in our pilot study and identified the single question most correlated with the overall news media literacy index. We plan to use this single question in our main study due to survey length limitations. The second item addresses digital literacy. Possible responses are along a five-point Likert scale from “Not at all” to “A great deal.” We have opted to include these demographic questions post-treatment because we are concerned about potential priming effects, and the results of our pilot study suggest that the media literacy and digital literacy questions are not significantly impacted by treatment.6

5 The three questions measure respondents’ factual knowledge of how news sources are funded, how press releases are produced, and how news on social media is curated. Correct responses are summed to place respondents on a 0-3 scale for news literacy. Two of the three questions were adapted from a measure of news media literacy by Maksl, Ashley and Craft (2015) in the Journal of Media Literacy Education. The Reuters Institute has shown that higher news literacy on the 0-3 scale is correlated with measures important for assessing external validity, such as higher consumption of news stories from newspaper sources, discernment when selecting news stories, and consumption of unbiased credible news sources.
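The item-selection step, picking the single question most correlated with the 0-3 literacy index, can be sketched as follows. The pilot responses here are fabricated for illustration and the item labels are ours.

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical pilot data: three binary literacy items (1 = correct).
items = {
    "funding":       [1, 0, 1, 1, 0, 1],
    "press_release": [1, 0, 1, 0, 0, 1],
    "social_media":  [0, 0, 1, 1, 0, 0],
}

# The 0-3 index is the sum of correct responses across the three items.
index = [sum(t) for t in zip(*items.values())]

# Retain the single item most correlated with the overall index.
best_item = max(items, key=lambda k: pearson(items[k], index))
```

Selecting the item with the highest item-total correlation preserves as much of the full index as a single question can, which is the rationale for using it when survey length is constrained.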

1. Which of the following is typically responsible for writing a press release?

• A reporter for a news organization

• A producer for a news organization

• A lawyer for a news aggregator

• A spokesperson for an organization [correct response]

• Don’t know

2. Computer algorithms can now be used to create ultra-realistic fake video content. How much had you heard about this before today?

Debrief

Finally, all respondents will be shown a debrief paragraph providing information about the survey and clarifying any deception/misinformation:

The information provided to you about the politician is part of a study on false/fake news and “deepfakes,” or digitally altered video, and the impacts that they have on trust in politics and the media. While the video or story presented to you about the politician is real, the reply by the politician was created by a team of researchers and therefore does not represent an actual statement made by the politician.

To learn more about how to identify fake news stories and fake videos, see the following resources from the International Federation of Library Associations and Institutions and the MIT Media Lab Detect DeepFakes Project: https://www.ifla.org/publications/node/11174 and https://www.media.mit.edu/projects/detect-fakes/overview.

6 Based on the pilot, we found that the treatments did not have a statistically significant impact on media literacy and digital literacy, with p-values of 0.66 and 0.18, respectively.


8 Appendix B: Pilot Study Survey Items

Part I: Evaluating Politician Allegation Wordings

Politician Alleges Story is Fake News

Imagine that a negative news story has come out about a politician. The news story claims that the politician has had an affair with a staff member.

In response, the politician asserts that the story is fake news. When asked about the incident, the politician says, “This is fake news.”

Politician Alleges Story is Fake News, People Should Be Skeptical

Imagine that a negative news story has come out about a politician. The news story claims that the politician has had an affair with a staff member.

In response, the politician asserts that the story is fake news. When asked about the incident, the politician says, “This is fake news.”

He says that it’s well known that there’s a lot of misleading information out there, so people should be skeptical about what they hear these days. The politician says, “You can’t know what’s true these days with so much misinformation out there.”

Politician Alleges Story is Fake News, Attack by Opponent

Imagine that a negative news story has come out about a politician. The news story claims that the politician has had an affair with a staff member.

In response, the politician asserts that the story is fake news. When asked about the incident, the politician says, “This is fake news.”

He says that the story is just an attack by the opposition, so people shouldn’t pay any attention to it. The politician says, “My opponent would say anything to hurt me, but my supporters know who’s really on their side.”

Politician Alleges Story is False and Misleading

Imagine that a negative news story has come out about a politician. The news story claims that the politician has had an affair with a staff member.

In response, the politician asserts that the story is false and misleading. When asked about the incident, the politician says, “This is a false and misleading story.”

Politician Alleges Story is False and Misleading, People Should Be Skeptical

Imagine that a negative news story has come out about a politician. The news story claims that the politician has had an affair with a staff member.

In response, the politician asserts that the story is false and misleading. When asked about the incident, the politician says, “This is a false and misleading story.”

He says that it’s well known that there’s a lot of misleading information out there, so people should be skeptical about what they hear these days. The politician says, “You can’t know what’s true these days with so much misinformation out there.”

Politician Alleges Story is False and Misleading, Attack by Opponent

Imagine that a negative news story has come out about a politician. The news story claims that the politician has had an affair with a staff member.

In response, the politician asserts that the story is false and misleading. When asked about the incident, the politician says, “This is a false and misleading story.”

He says that the story is just an attack by the opposition, so people shouldn’t pay any attention to it. The politician says, “My opponent would say anything to hurt me, but my supporters know who’s really on their side.”

Part I Outcome Questions

We used three 7-point Likert scale questions and an open-ended question to evaluate respondents’ reactions to hypothetical politician allegations in response to a hypothetical scandalous story:

• In your opinion, would you find the negative news story about the politician believable? [Extremely unbelievable ... Extremely believable]

• In your opinion, how would you rate the politician? [Extremely negatively ... Extremely positively]

• In your opinion, would you guess that the politician is a Republican or a Democrat? [Strong Democrat ... Strong Republican]

• What comes to mind when you hear the phrase “fake news”?

Part II: Evaluating Candidate Politician Videos

Please watch the following video clip of a Republican politician: https://youtu.be/IArvk5sYtnc

Please watch the following video clip of a Republican politician: https://youtu.be/hiE93Qw1OYo

Please watch the following video clip of a Republican politician: https://youtu.be/WIwu04J6lsc


Please watch the following video clip of a Republican politician: https://youtu.be/Onvy6nzsa1s

Please watch the following video clip of a Democrat politician: https://youtu.be/YkhAAZVza5k

Please watch the following video clip of a Democrat politician: https://youtu.be/3z_ZHHZI-Jg

Part II Outcome Questions

We used the following questions to assess respondents’ impressions of the videos:

• Could you both see and hear the video? [Yes, No, Other]

• Was the politician a Democrat or Republican? [Democrat, Republican, Don’t know]

• Do you recognize the politician in the video? [Yes or No]

– If Yes: What is the politician’s name? You can write “don’t know.”

• Before viewing the video, had you heard about this story? [Yes or No]

• How embarrassing did you find the video for the politician? [Not embarrassing at all ... Extremely embarrassing]

• How much do you think this video could hurt the politician’s reputation? [Not at all ... A great deal]

• Video and audio can sometimes be digitally altered. Do you think it is believable that this video could have been faked? [Extremely unbelievable ... Extremely believable]


Demographic Questions

• Which of the following news outlets does NOT primarily depend on advertising for financial support?

– PBS [correct response]

– New York Times

– USA Today

– Don’t know

• Which of the following is typically responsible for writing a press release?

– A reporter for a news organization

– A producer for a news organization

– A lawyer for a news aggregator

– A spokesperson for an organization [correct response]

– Don’t know

• How are most of the individual decisions about what news stories to show people on Facebook made?

– At random

– By computer analysis of what stories might interest you [correct response]

– By editors and journalists that work for news outlets

– By editors and journalists that work for Facebook

– Don’t know

• What comes to mind when you hear the phrase “deepfake”?

• Computer algorithms can now be used to create ultra-realistic fake video content. How much had you heard about this before today? [None at all ... A great deal]

• What is your gender? [Male or Female]

• Choose the group listed below that best captures what you consider yourself to be. [Black or African American, White, Hispanic or Latino/a, Asian, American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander]

• Generally speaking, do you think of yourself as a Republican, a Democrat, or an Independent? [Strong Republican ... Strong Democrat]


• What is the highest level of education you have completed? [Less than high school degree ... Professional degree]

• In what year were you born? [Born 1928-1945, Born 1946-1964, Born 1965-1980, Born 1981-1996, Born 1997-2001]
