Predicting the Role of Political Trolls in Social Media


Atanas Atanasov (Sofia University, Bulgaria) [email protected] · Gianmarco De Francisci Morales (ISI Foundation, Italy) [email protected] · Preslav Nakov (Qatar Computing Research Institute, HBKU, Qatar) [email protected]

Abstract

We investigate the political roles of “Internet trolls” in social media. Political trolls, such as the ones linked to the Russian Internet Research Agency (IRA), have recently gained enormous attention for their ability to sway public opinion and even influence elections. Analysis of the online traces of trolls has shown different behavioral patterns, which target different slices of the population. However, this analysis is manual and labor-intensive, thus making it impractical as a first-response tool for newly-discovered troll farms. In this paper, we show how to automate this analysis by using machine learning in a realistic setting. In particular, we show how to classify trolls according to their political role —left, news feed, right— by using features extracted from social media, i.e., Twitter, in two scenarios: (i) in a traditional supervised learning scenario, where labels for trolls are available, and (ii) in a distant supervision scenario, where labels for trolls are not available, and we rely on more-commonly-available labels for news outlets mentioned by the trolls. Technically, we leverage the community structure and the text of the messages in the online social network of trolls represented as a graph, from which we extract several types of learned representations, i.e., embeddings, for the trolls. Experiments on the “IRA Russian Troll” dataset show that our methodology improves over the state of the art in the first scenario, while providing a compelling case for the second scenario, which has not been explored in the literature thus far.

1 Introduction

Internet “trolls” are users of an online community who quarrel and upset people, seeking to sow discord by posting inflammatory content. More recently, organized “troll farms” of political opinion manipulation trolls have also emerged. Such farms usually consist of state-sponsored agents who control a set of pseudonymous user accounts and personas, the so-called “sockpuppets”, which disseminate misinformation and propaganda in order to sway opinions, destabilize the society, and even influence elections (Linvill and Warren, 2018).

The behavior of political trolls has been analyzed in different recent circumstances, such as the 2016 US Presidential Elections and the Brexit referendum in the UK (Linvill and Warren, 2018; Llewellyn et al., 2018). However, this kind of analysis requires painstaking and time-consuming manual labor to sift through the data and to categorize the trolls according to their actions. Our goal in the current paper is to automate this process with the help of machine learning (ML). In particular, we focus on the case of the 2016 US Presidential Elections, for which a public dataset from Twitter is available. For this case, we consider only accounts that post content in English, and we wish to divide the trolls into some of the functional categories identified by Linvill and Warren (2018): left troll, right troll, and news feed.

We consider two possible scenarios. The first, prototypical ML scenario is supervised learning, where we want to learn a function from users to categories {left, right, news feed}, and the ground-truth labels for the troll users are available. This scenario has been considered previously in the literature by Kim et al. (2019). Unfortunately, a solution for such a scenario is not directly applicable to a real-world use case. Suppose a new troll farm trying to sway the upcoming European or US elections has just been discovered. While the identities of the accounts might be available, the labels to learn from would not be present. Thus, any supervised machine learning approach would fall short of being a fully automated solution to our initial problem.

A more realistic scenario assumes that labels for troll accounts are not available. In this case, we need to use some external information in order to learn a labeling function. Indeed, we leverage more persistent entities and their labels: news media. We assume a learning scenario with distant supervision, where labels for news media are available. By combining these labels with a citation graph from the troll accounts to news media, we can infer the final labeling on the accounts themselves without any need for manual labeling.

One advantage of using distant supervision is that we can get insights about the behavior of a newly-discovered troll farm quickly and effortlessly. Differently from troll accounts in social media, which usually have a high churn rate, news media accounts in social media are quite stable. Therefore, the latter can be used as an anchor point to understand the behavior of trolls, for which data may not be available.

We rely on embeddings extracted from social media. In particular, we use a combination of embeddings built on the user-to-user mention graph, the user-to-hashtag mention graph, and the text of the tweets of the troll accounts. We further explore several possible approaches using label propagation for the distant supervision scenario.
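To make the pipeline concrete, here is a minimal sketch of the distant-supervision idea described above. It is an illustration under assumptions, not the authors' released code: it substitutes DeepWalk-style random-walk embeddings (via networkx and gensim) for the paper's graph embeddings, uses a simple majority vote over cited media in place of the paper's label-propagation variants, and all data and names (mention_edges, media_labels, citations) are toy placeholders; the BERT tweet-text embeddings the paper concatenates with the graph embeddings are only indicated in a comment.

```python
# Sketch: graph embeddings for troll accounts + distant labels from news media.
# Toy data throughout; not the authors' implementation.
import random
from collections import Counter

import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# 1. User-to-user mention graph (hypothetical accounts).
mention_edges = [("troll_a", "troll_b"), ("troll_b", "troll_c"),
                 ("troll_a", "troll_c"), ("troll_d", "troll_c")]
G = nx.Graph(mention_edges)

def random_walks(graph, num_walks=10, walk_len=8, seed=0):
    """Uniform random walks over the graph, DeepWalk-style."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in graph.nodes:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = list(graph.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Skip-gram over the walks yields one vector per account. The paper's BERT
# embeddings of the tweet text (and the user-to-hashtag graph embeddings)
# would be concatenated to these vectors.
w2v = Word2Vec(sentences=random_walks(G), vector_size=32, window=3,
               min_count=0, sg=1, epochs=20, seed=0, workers=1)
user_vecs = {u: w2v.wv[u] for u in G.nodes}

# 2. Distant labels: project known news-media labels onto the trolls citing them.
media_labels = {"leftnews.example": "left", "rightnews.example": "right",
                "wire.example": "news feed"}
citations = {"troll_a": ["leftnews.example", "wire.example"],
             "troll_b": ["leftnews.example"],
             "troll_c": ["rightnews.example", "rightnews.example"],
             "troll_d": ["wire.example"]}

def distant_label(cited_media):
    """Majority label of the media an account cites (ties break arbitrarily)."""
    votes = Counter(media_labels[m] for m in cited_media if m in media_labels)
    return votes.most_common(1)[0][0] if votes else None

labels = {u: distant_label(c) for u, c in citations.items()}
print(labels)  # e.g. {'troll_a': 'left', 'troll_b': 'left', 'troll_c': 'right', 'troll_d': 'news feed'}

# 3. Train a classifier on the embeddings, using the distant labels as targets.
X = np.stack([user_vecs[u] for u in labels])
y = [labels[u] for u in labels]
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

In the supervised scenario, the same classifier would instead be trained on gold troll labels; in the distant-supervision scenario, the media-derived labels stand in for them or are propagated further through the graph.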
As a result of our approach, we improve the classification accuracy by more than 5 percentage points for the supervised learning scenario. The distant supervision scenario has not previously been considered in the literature, and is one of the main contributions of this paper. We show that even by hiding the labels from the ML algorithm, we can recover 78.5% of the correct labels.

The contributions of this paper can be summarized as follows:

• We predict the political role of Internet trolls (left, news feed, right) in a realistic, unsupervised scenario, where labels for the trolls are not available, and which has not been explored in the literature before.

• We propose a novel distant supervision approach for this scenario, based on graph embeddings, BERT, and label propagation, which projects the more-commonly-available labels for news media onto the trolls who cited these media.

• We improve over the state of the art in the traditional, fully supervised setting, where training labels are available.

2 Related Work

2.1 Trolls and Opinion Manipulation

The promise of social media to democratize content creation (Kaplan and Haenlein, 2010) has been accompanied by many malicious attempts to spread misleading information over this new medium, which quickly got populated by sockpuppets (Kumar et al., 2017), Internet water army (Chen et al., 2013), astroturfers (Ratkiewicz et al., 2011), and seminar users (Darwish et al., 2017). Several studies have shown that trust is an important factor in online relationships (Ho et al., 2012; Ku, 2012; Hsu et al., 2014; Elbeltagi and Agag, 2016; Ha et al., 2016), but building trust is a long-term process and our understanding of it is still in its infancy (Salo and Karjaluoto, 2007). This makes it easy for politicians and companies to manipulate user opinions in community forums (Dellarocas, 2006; Li et al., 2016; Zhuang et al., 2018).

Trolls. Social media have seen the proliferation of fake news and clickbait (Hardalov et al., 2016; Karadzhov et al., 2017a), aggressiveness (Moore et al., 2012), and trolling (Cole, 2015). The latter is often understood to concern malicious online behavior that is intended to disrupt interactions, to aggravate interacting partners, and to lure them into fruitless argumentation in order to disrupt online interactions and communication (Chen et al., 2013). Here we are interested in studying not just any trolls, but those that engage in opinion manipulation (Mihaylov et al., 2015a,b, 2018). This latter definition of troll has also become prominent in the general public discourse recently. Del Vicario et al. (2016) have also suggested that the spreading of misinformation online is fostered by the presence of polarization and echo chambers in social media (Garimella et al., 2016, 2017, 2018).

Trolling behavior is present and has been studied in all kinds of online media: online magazines (Binns, 2012), social networking sites (Cole, 2015), online computer games (Thacker and Griffiths, 2012), online encyclopedias (Shachaf and Hara, 2010), and online newspapers (Ruiz et al., 2011), among others.

Troll detection has been addressed by using domain-adapted sentiment analysis (Seah et al., 2015), various lexico-syntactic features about user writing style and structure (Chen et al., 2012; Mihaylov and Nakov, 2016), and graph-based approaches over signed social networks (Kumar et al., 2014).

Sockpuppet is a related notion, and refers to a person who assumes a false identity in an Internet community and then speaks to or about themselves while pretending to be another person. The term has also been used to refer to opinion manipulation, e.g., in Wikipedia (Solorio et al., 2014). Sockpuppets have been identified by using authorship-identification techniques and link analysis (Bu et al., 2013). It has been also shown [...]

[...] For example, Castillo et al. (2011) leverage user reputation, author writing style, and various time-based features, Canini et al. (2011) analyze the interaction of content and social network structure, and Morris et al. (2012) studied how Twitter users judge truthfulness. Zubiaga et al. (2016) study how people handle rumors in social media, and found that users with higher reputation are more trusted, and thus can spread rumors easily.
Recommended publications
  • How Russia Tried to Start a Race War in the United States
    Michigan Journal of Race and Law, Volume 24 (2019). Virtual Hatred: How Russia Tried to Start a Race War in the United States. William J. Aceves, California Western School of Law. Recommended citation: William J. Aceves, Virtual Hatred: How Russia Tried to Start a Race War in the United States, 24 MICH. J. RACE & L. 177 (2019). Available at: https://repository.law.umich.edu/mjrl/vol24/iss2/2.

    During the 2016 U.S. presidential election, the Russian government engaged in a sophisticated strategy to influence the U.S. political system and manipulate American democracy. While most news reports have focused on the cyber-attacks aimed at Democratic Party leaders and possible contacts between Russian officials and the Trump presidential campaign, a more pernicious intervention took place. Throughout the campaign, Russian operatives created hundreds of fake personas on social media platforms and then posted thousands of advertisements and messages that sought to promote racial divisions in the United States. This was a coordinated propaganda effort.
  • Covert Foreign Money (ASD-Covert-Foreign-Money.pdf)
    Covert Foreign Money: Financial loopholes exploited by authoritarians to fund political interference in democracies. August 2020. Authors: Josh Rudolph (Fellow for Malign Finance) and Thomas Morley (Research Assistant).

    © 2020 The Alliance for Securing Democracy. Please direct inquiries to The Alliance for Securing Democracy at The German Marshall Fund of the United States, 1700 18th Street, NW, Washington, DC 20009; T 1 202 683 2650; E [email protected]. This publication can be downloaded for free at https://securingdemocracy.gmfus.org/covert-foreign-money/. The views expressed in GMF publications and commentary are the views of the authors alone. Cover and map design: Kenny Nguyen. Formatting design: Rachael Worthington.

    The Alliance for Securing Democracy (ASD), a bipartisan initiative housed at the German Marshall Fund of the United States, develops comprehensive strategies to deter, defend against, and raise the costs on authoritarian efforts to undermine and interfere in democratic institutions. ASD brings together experts on disinformation, malign finance, emerging technologies, elections integrity, economic coercion, and cybersecurity, as well as regional experts, to collaborate across traditional stovepipes and develop cross-cutting frameworks.

    Contents: Executive Summary (1); Introduction and Methodology [...]
  • Taming the Trolls: The Need for an International Legal Framework to Regulate State Use of Disinformation on Social Media
    Taming the Trolls: The Need for an International Legal Framework to Regulate State Use of Disinformation on Social Media. Ashley C. Nicolas.*

    Introduction: Consider a hypothetical scenario in which hundreds of agents of the Russian GRU arrive in the United States months prior to a presidential election.1 The Russian agents spend the weeks leading up to the election going door to door in vulnerable communities, spreading false stories intended to manipulate the population into electing a candidate with policies favorable to Russian positions. The agents set up television stations, use radio broadcasts, and usurp the resources of local newspapers to expand their reach and propagate falsehoods. The presence of GRU agents on U.S. soil is an incursion into territorial integrity, a clear invasion of sovereignty.2 At every step, Russia would be required to expend tremendous resources, overcome traditional media barriers, and risk exposure, making this hypothetical grossly unrealistic.

    Compare the hypothetical with the actual actions of the Russians during the 2016 U.S. presidential election. Sitting behind computers in St. Petersburg, without ever setting foot in the United States, Russian agents were able to manipulate the U.S. population in the most sacred of domestic affairs: an election. Russian “trolls” targeted vulnerable populations through social media, reaching millions of users at a minimal cost and without reliance on established media institutions.3 Without using [...]

    * Georgetown Law, J.D. expected 2019; United States Military Academy, B.S. 2009; Loyola Marymount University, M.Ed. 2016. © 2018, Ashley C. Nicolas. The author is a former U.S. Army Intelligence Officer.
  • Recent Trends in Online Foreign Influence Efforts
    Recent Trends in Online Foreign Influence Efforts. Diego A. Martin, Jacob N. Shapiro, Michelle Nedashkovskaya. Woodrow Wilson School of Public and International Affairs, Princeton University, Princeton, New Jersey, United States. E-mail: [email protected], [email protected], [email protected].

    Abstract: Foreign governments have used social media to influence politics in a range of countries by promoting propaganda, advocating controversial viewpoints, and spreading disinformation. We analyze 53 distinct foreign influence efforts (FIEs) targeting 24 different countries from 2013 through 2018. FIEs are defined as (i) coordinated campaigns by one state to impact one or more specific aspects of politics in another state, (ii) through media channels, including social media, (iii) by producing content designed to appear indigenous to the target state. The objective of such campaigns can be quite broad and to date have included influencing political decisions by shaping election outcomes at various levels, shifting the political agenda on topics ranging from health to security, and encouraging political polarization. We draw on more than 460 media reports to identify FIEs, track their progress, and classify their features.

    Introduction: Information and Communications Technologies (ICTs) have changed the way people communicate about politics and access information on a wide range of topics (Foley 2004, Chigona et al. 2009). Social media in particular has transformed communication between leaders and voters by enabling direct politician-to-voter engagement outside traditional avenues, such as speeches and press conferences (Ott 2017). In the 2016 U.S. presidential election, for example, social media platforms were more widely viewed than traditional editorial media and were central to the campaigns of both Democratic candidate Hillary Clinton and Republican candidate Donald Trump (Enli 2017).
  • Freedom on the Net 2016
    FREEDOM ON THE NET 2016: China

    Population: 1.371 billion. Internet penetration 2015 (ITU): 50 percent. Social media/ICT apps blocked: Yes. Political/social content blocked: Yes. Bloggers/ICT users arrested: Yes. Press Freedom 2016 status: Not Free.

    Scores, 2015 / 2016 (0 = most free, 100 = least free):
    • Internet Freedom Status: Not Free / Not Free
    • Obstacles to Access (0-25): 18 / 18
    • Limits on Content (0-35): 30 / 30
    • Violations of User Rights (0-40): 40 / 40
    • TOTAL (0-100): 88 / 88

    Key Developments: June 2015 – May 2016
    • A draft cybersecurity law could step up requirements for internet companies to store data in China, censor information, and shut down services for security reasons, under the auspices of the Cyberspace Administration of China (see Legal Environment).
    • An antiterrorism law passed in December 2015 requires technology companies to cooperate with authorities to decrypt data, and introduced content restrictions that could suppress legitimate speech (see Content Removal and Surveillance, Privacy, and Anonymity).
    • A criminal law amendment effective since November 2015 introduced penalties of up to seven years in prison for posting misinformation on social media (see Legal Environment).
    • Real-name registration requirements were tightened for internet users, with unregistered mobile phone accounts closed in September 2015, and app providers instructed to register and store user data in 2016 (see Surveillance, Privacy, and Anonymity).
    • Websites operated by the South China Morning Post, The Economist and Time magazine were among those newly blocked for reporting perceived as critical of President Xi Jinping (see Blocking and Filtering).

    www.freedomonthenet.org

    Introduction: China was the world’s worst abuser of internet freedom in the 2016 Freedom on the Net survey for the second consecutive year.
  • COVID-19 Effects and Russian Disinformation Campaigns
    Homeland Security Affairs Journal, Special COVID-19 Issue. COVID-19 Effects and Russian Disinformation Campaigns. By Wesley R. Moy and Kacper Gradon.

    Abstract: The effects of the novel coronavirus and its related disease COVID-19 have been far reaching, touching American society and its Western partners’ health and mortality, economics, social relationships, and politics. The effects have highlighted longstanding societal divisions, disproportionately harming minority groups and the socioeconomically disadvantaged, with Black Americans and their communities hit especially hard. The problem being considered is the potential for Russian malign foreign influence to exploit the divides exacerbated by the coronavirus and target the United States and its European allies during the period leading to the 2020 elections and beyond. Possible coronavirus effects are inventoried and categorized into economic, healthcare, and environmental issue areas that may be leveraged and weaponized. The article includes the scenarios of such weaponization, including the description of focal points that may be attacked. We conclude with a set of recommendations to counter malign foreign influence.

    Suggested citation: Moy, Wesley and Kacper Gradon. “COVID-19 Effects and Russian Disinformation,” Homeland Security Affairs 16, Article 8 (December 2020), www.hsaj.org/articles16533.

    Introduction: In late May 2020, the novel coronavirus (SARS-CoV-2) and its related illness COVID-19 had resulted in the deaths of more than 100,000 Americans and infected nearly 1.7 million. By mid-October 2020, deaths had increased to over 210,000 and nearly 8,000,000 had been infected.
  • Disinformation, ‘Fake News’ and Influence Campaigns on Twitter
    Disinformation, ‘Fake News’ and Influence Campaigns on Twitter. October 2018. Matthew Hindman (George Washington University) and Vlad Barash (Graphika).

    Contents: Executive Summary (3); Introduction (7); A Problem Both Old and New (9); Defining Fake News Outlets (13); Bots, Trolls and ‘Cyborgs’ on Twitter (16); Map Methodology (19); Election Data and Maps (22): Election Core Map, Election Periphery Map, Postelection Map; Fake Accounts From Russia’s Most Prominent Troll Farm (33); Disinformation Campaigns on Twitter: Chronotopes (34): #NoDAPL, #WikiLeaks, #SpiritCooking, #SyriaHoax, #SethRich; Conclusion (43); Bibliography (45); Notes (55).

    Executive Summary: This study is one of the largest analyses to date on how fake news spread on Twitter both during and after the 2016 election campaign. Using tools and mapping methods from Graphika, a social media intelligence firm, we study more than 10 million tweets from 700,000 Twitter accounts that linked to more than 600 fake and conspiracy news outlets. Crucially, we study fake and conspiracy news both before and after the election, allowing us to measure how the fake news ecosystem has evolved since November 2016. Much fake news and disinformation is still being spread on Twitter. Consistent with other research, we find more than 6.6 million tweets linking to fake and conspiracy news publishers in the month before the 2016 election. Yet disinformation continues to be a substantial problem postelection, with 4.0 million tweets linking to fake and conspiracy news publishers found in a 30-day period from mid-March to mid-April 2017. Contrary to claims that fake news is a game of “whack-a-mole,” more than 80 percent of the disinformation accounts in our election maps are still active as this report goes to press.
  • Hacks, Leaks and Disruptions | Russian Cyber Strategies
    CHAILLOT PAPER Nº 148 — October 2018. Hacks, Leaks and Disruptions: Russian Cyber Strategies. Edited by Nicu Popescu and Stanislav Secrieru. With contributions from Siim Alatalu, Irina Borogan, Elena Chernenko, Sven Herpig, Oscar Jonsson, Xymena Kurowska, Jarno Limnell, Patryk Pawlak, Piret Pernik, Thomas Reinhold, Anatoly Reshetnikov, Andrei Soldatov and Jean-Baptiste Jeangène Vilmer.

    Disclaimer: The views expressed in this Chaillot Paper are solely those of the authors and do not necessarily reflect the views of the Institute or of the European Union. European Union Institute for Security Studies, Paris. Director: Gustav Lindstrom. © EU Institute for Security Studies, 2018. Reproduction is authorised, provided prior permission is sought from the Institute and the source is acknowledged, save where otherwise stated.

    Contents: Executive summary (5); Introduction: Russia’s cyber prowess – where, how and what for? (9), Nicu Popescu and Stanislav Secrieru. Russia’s cyber posture: 1. Russia’s approach to cyber: the best defence is a good offence (15), Andrei Soldatov and Irina Borogan; 2. Russia’s trolling complex at home and abroad (25), Xymena Kurowska and Anatoly Reshetnikov; 3. Spotting the bear: credible attribution and Russian operations in cyberspace (33), Sven Herpig and Thomas Reinhold; 4. Russia’s cyber diplomacy (43), Elena Chernenko. Case studies of Russian cyberattacks: 5. The early days of cyberattacks: the cases of Estonia, [...]
  • Battling the Internet Water Army: Detection of Hidden Paid Posters
    Battling the Internet Water Army: Detection of Hidden Paid Posters. Cheng Chen, Kui Wu, Venkatesh Srinivasan (Dept. of Computer Science, University of Victoria, Victoria, BC, Canada) and Xudong Zhang (Dept. of Computer Science, Peking University, Beijing, China).

    Abstract: We initiate a systematic study to help distinguish a special group of online users, called hidden paid posters, or termed “Internet water army” in China, from the legitimate ones. On the Internet, the paid posters represent a new type of online job opportunity. They get paid for posting comments and new threads or articles on different online communities and websites for some hidden purposes, e.g., to influence the opinion of other people towards certain social events or business markets. Though an interesting strategy in business marketing, paid posters may create a significant negative effect on the online communities, since the information from paid posters is usually not trustworthy. When two competitive companies hire paid [...]

    [...] on different online communities and websites. Companies are always interested in effective strategies to attract public attention towards their products. The idea of online paid posters is similar to word-of-mouth advertisement. If a company hires enough online users, it would be able to create hot and trending topics designed to gain popularity. Furthermore, the articles or comments from a group of paid posters are also likely to capture the attention of common users and influence their decision. In this way, online paid posters present a powerful and efficient strategy for companies.
  • Breaking the Spin Cycle: Teaching Complexity in the Age of Fake News

    Lane Glisson, “Breaking the Spin Cycle: Teaching Complexity in the Age of Fake News,” portal: Libraries and the Academy 19.3, p. 461.

    Abstract: This article describes a discussion-based approach for teaching college students to identify the characteristics of ethical journalism and scholarly writing, by comparing fake news with credible information in a strategically planned slideshow. Much has been written on the need to instruct our students about disinformation. This librarian shares a lesson plan that engages students’ critical thinking skills by using a blend of humor, analysis, and a compelling visual presentation. The teaching method is contextualized by research on the distrust of the press and scientific evidence since the rise of hyper-partisan cable news, Russian troll farms, and alternative facts.

    Introduction: “Throughout our culture, the old notions of ‘truth’ and ‘knowledge’ are in danger of being replaced by the new ones of ‘opinion,’ ‘perception’ and ‘credibility.’” (Michio Kakutani) “What if truth is not an absolute or a relative, but a skill—a muscle, like memory, that collectively we have neglected so much that we have grown measurably weaker at using it? How might we rebuild it, going from chronic to bionic?” (Kevin Young)

    In 2015, I knew I had a problem. After several years of teaching library instruction classes, I noticed that my conception of factuality and that of my students had diverged. Most students preferred Google and YouTube to do their research. When asked in my classes how they discerned the credibility of a website, most shrugged.
  • H-Diplo | ISSF POLICY Series America and the World—2017 and Beyond
    H-Diplo | ISSF Policy Series, America and the World—2017 and Beyond. Fractured: Trump’s Foreign Policy after Two Years. Essay by David C. Hendrickson, Colorado College. Published on 29 January 2019 | issforum.org. Editor: Diane Labrosse. Web and Production Editor: George Fujii. Shortlink: http://tiny.cc/PR-1-5BN. Permalink: http://issforum.org/roundtables/policy/1-5BN-fractured. PDF URL: http://issforum.org/ISSF/PDF/Policy-Roundtable-1-5BN.pdf.

    The presidency of Donald Trump is the strangest act in American history; unprecedented in form, in style an endless sequence of improvisations and malapropisms.1 But in substance there is continuity, probably much more than is customarily recognized. It is hard to recognize the continuity amid the daily meltdowns (and biennial shutdowns), but it exists. In large measure Trump has been a Republican president, carrying out a Republican agenda. His attack on the regulatory agencies follows a Republican script. His call for a prodigious boost to military spending, combined with sharp cuts in taxes, has been the Republican program since the time of Ronald Reagan’s presidency. His climate skepticism corresponds with that of Republican leaders in Congress. On trade and immigration, Trump has departed most radically from Bush Republicanism, but even in that regard Trump’s policies harken back to older traditions in the Grand Old Party. He is different in character and temperament from every Republican predecessor as president, yet has attached himself to much of the traditional Republican program.2 It is in foreign policy, the subject of this essay, where Trump’s role has been most disorienting, his performance ‘up-ending’ in substance and method.
  • Trends in Online Foreign Influence Efforts
    Trends in Online Foreign Influence Efforts.* Diego A. Martin and Jacob N. Shapiro. Version 1.2, July 8, 2019.

    Abstract: Information and Communications Technologies (ICTs) create novel opportunities for a wide range of political actors. In particular, foreign governments have used social media to influence politics in a range of countries by promoting propaganda, advocating controversial viewpoints, and spreading disinformation. This report describes a new database of 53 such foreign influence efforts (FIE) targeting 24 different countries from 2013 through 2018. FIEs are defined as: (i) coordinated campaigns by one state to impact one or more specific aspects of politics in another state, (ii) through media channels, including social media, by (iii) producing content designed to appear indigenous to the target state. The objective of such campaigns can be quite broad and to date have included influencing political decisions by shaping election outcomes at various levels, shifting the political agenda on topics ranging from health to security, and encouraging political polarization. Our data draw on more than 460 media reports to identify FIEs, track their progress, and classify their features.

    * We are grateful to a range of colleagues including Laura Courchesne, Nick Feamster, Andy Guess, Hans Klein, and Brendan Stewart for helpful comments and feedback on our coding scheme. Will Lowe provided invaluable advice on data structure and data entry. Arya Goel, Janette Lu, Imane Mabrouk, Matthew Merrigan, Kamila Radjabova, and Nina Sheridan provided excellent research assistance. This work was possible thanks to generous funding from the Bertelsmann Foundation and Microsoft. All errors are our own.