Predicting the Role of Political Trolls in Social Media


Predicting the Role of Political Trolls in Social Media

Atanas Atanasov (Sofia University, Bulgaria), Gianmarco De Francisci Morales (ISI Foundation, Italy), Preslav Nakov (Qatar Computing Research Institute, HBKU, Qatar)
[email protected] · [email protected] · [email protected]

arXiv:1910.02001v1 [cs.CL] 4 Oct 2019

Abstract

We investigate the political roles of "Internet trolls" in social media. Political trolls, such as the ones linked to the Russian Internet Research Agency (IRA), have recently gained enormous attention for their ability to sway public opinion and even influence elections. Analysis of the online traces of trolls has shown different behavioral patterns, which target different slices of the population. However, this analysis is manual and labor-intensive, thus making it impractical as a first-response tool for newly-discovered troll farms. In this paper, we show how to automate this analysis by using machine learning in a realistic setting. In particular, we show how to classify trolls according to their political role —left, news feed, right— by using features extracted from social media, i.e., Twitter, in two scenarios: (i) in a traditional supervised learning scenario, where labels for trolls are available, and (ii) in a distant supervision scenario, where labels for trolls are not available, and we rely on more-commonly-available labels for news outlets mentioned by the trolls. Technically, we leverage the community structure and the text of the messages in the online social network of trolls represented as a graph, from which we extract several types of learned representations, i.e., embeddings, for the trolls. Experiments on the "IRA Russian Troll" dataset show that our methodology improves over the state-of-the-art in the first scenario, while providing a compelling case for the second scenario, which has not been explored in the literature thus far.

1 Introduction

Internet "trolls" are users of an online community who quarrel and upset people, seeking to sow discord by posting inflammatory content. More recently, organized "troll farms" of political opinion manipulation trolls have also emerged. Such farms usually consist of state-sponsored agents who control a set of pseudonymous user accounts and personas, the so-called "sockpuppets", which disseminate misinformation and propaganda in order to sway opinions, destabilize the society, and even influence elections (Linvill and Warren, 2018).

The behavior of political trolls has been analyzed in different recent circumstances, such as the 2016 US Presidential Elections and the Brexit referendum in the UK (Linvill and Warren, 2018; Llewellyn et al., 2018). However, this kind of analysis requires painstaking and time-consuming manual labor to sift through the data and to categorize the trolls according to their actions. Our goal in the current paper is to automate this process with the help of machine learning (ML). In particular, we focus on the case of the 2016 US Presidential Elections, for which a public dataset from Twitter is available. For this case, we consider only accounts that post content in English, and we wish to divide the trolls into some of the functional categories identified by Linvill and Warren (2018): left troll, right troll, and news feed.

We consider two possible scenarios. The first, prototypical ML scenario is supervised learning, where we want to learn a function from users to categories {left, right, news feed}, and the ground-truth labels for the troll users are available. This scenario has been considered previously in the literature by Kim et al. (2019). Unfortunately, a solution for such a scenario is not directly applicable to a real-world use case. Suppose a new troll farm trying to sway the upcoming European or US elections has just been discovered. While the identities of the accounts might be available, the labels to learn from would not be present. Thus, any supervised machine learning approach would fall short of being a fully automated solution to our initial problem.

A more realistic scenario assumes that labels for troll accounts are not available. In this case, we need to use some external information in order to learn a labeling function. Indeed, we leverage more persistent entities and their labels: news media. We assume a learning scenario with distant supervision, where labels for news media are available. By combining these labels with a citation graph from the troll accounts to news media, we can infer the final labeling on the accounts themselves without any need for manual labeling.

One advantage of using distant supervision is that we can get insights about the behavior of a newly-discovered troll farm quickly and effortlessly. Differently from troll accounts in social media, which usually have a high churn rate, news media accounts in social media are quite stable. Therefore, the latter can be used as an anchor point to understand the behavior of trolls, for which data may not be available.
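To make the distant-supervision step concrete, the following is a minimal sketch of one way to project media labels onto troll accounts over the citation graph. The paper explores several label-propagation variants; the one-step neighbor averaging below, together with the toy account names, outlets, and label distributions, is an illustrative assumption rather than the authors' exact algorithm.

```python
from collections import defaultdict

# Hypothetical citation graph: troll account -> news outlets it cited.
# Toy edge list for illustration, not taken from the IRA dataset.
citations = {
    "troll_a": ["leftnews.example", "neutralwire.example"],
    "troll_b": ["rightnews.example"],
    "troll_c": ["neutralwire.example", "rightnews.example"],
}

# Distant-supervision signal: labels for news media, expressed as
# distributions over the three troll roles (left, news feed, right).
media_labels = {
    "leftnews.example":    {"left": 1.0, "news": 0.0, "right": 0.0},
    "neutralwire.example": {"left": 0.2, "news": 0.6, "right": 0.2},
    "rightnews.example":   {"left": 0.0, "news": 0.0, "right": 1.0},
}

def propagate(citations, media_labels):
    """One-step label propagation: each troll account inherits the
    average label distribution of the news outlets it cites."""
    troll_labels = {}
    for troll, outlets in citations.items():
        known = [media_labels[o] for o in outlets if o in media_labels]
        if not known:
            continue  # no labeled neighbors: leave this account unlabeled
        scores = defaultdict(float)
        for dist in known:
            for role, p in dist.items():
                scores[role] += p / len(known)
        troll_labels[troll] = max(scores, key=scores.get)
    return troll_labels

print(propagate(citations, media_labels))
# -> {'troll_a': 'left', 'troll_b': 'right', 'troll_c': 'right'}
```

A multi-hop variant would feed the inferred troll labels back into the graph and repeat the averaging until convergence; the single pass above only illustrates how media labels can reach the troll accounts without any manually labeled trolls.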
We rely on embeddings extracted from social media. In particular, we use a combination of embeddings built on the user-to-user mention graph, the user-to-hashtag mention graph, and the text of the tweets of the troll accounts; an illustrative sketch of this pipeline follows the contribution list below. We further explore several possible approaches using label propagation for the distant supervision scenario.

As a result of our approach, we improve the classification accuracy by more than 5 percentage points for the supervised learning scenario. The distant supervision scenario has not previously been considered in the literature, and is one of the main contributions of the paper. We show that even by hiding the labels from the ML algorithm, we can recover 78.5% of the correct labels.

The contributions of this paper can be summarized as follows:

• We predict the political role of Internet trolls (left, news feed, right) in a realistic, unsupervised scenario, where labels for the trolls are not available, and which has not been explored in the literature before.
• We propose a novel distant supervision approach for this scenario, based on graph embeddings, BERT, and label propagation, which projects the more-commonly-available labels for news media onto the trolls who cited these media.
• We improve over the state of the art in the traditional, fully supervised setting, where training labels are available.
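As an illustration of the embedding pipeline in the supervised scenario, the sketch below builds a user-to-user mention graph, learns node embeddings from truncated random walks (a DeepWalk/node2vec-style procedure), and fits a linear classifier on the trolls with known roles. The edge list, labels, and hyperparameters are toy assumptions; the paper's full system also combines user-to-hashtag graph embeddings and BERT embeddings of the tweet text, which are omitted here for brevity.

```python
import random
import networkx as nx
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Hypothetical input: (author, mentioned_user) pairs extracted from tweets.
mention_edges = [("troll_a", "user_1"), ("troll_a", "user_2"),
                 ("troll_b", "user_2"), ("troll_c", "user_3"),
                 ("troll_c", "user_2")]

G = nx.Graph()
G.add_edges_from(mention_edges)

def random_walks(graph, num_walks=10, walk_len=20, seed=0):
    """Truncated random walks over the mention graph, used as
    'sentences' for skip-gram (the DeepWalk idea; node2vec would
    bias the walks instead of sampling neighbors uniformly)."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes():
            walk = [node]
            while len(walk) < walk_len:
                nbrs = list(graph.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

# Learn node embeddings with skip-gram (sg=1) over the walks.
model = Word2Vec(random_walks(G), vector_size=64, window=5,
                 min_count=1, sg=1, epochs=5, seed=0)

# Supervised scenario: train on troll accounts with known roles.
train_users = ["troll_a", "troll_b"]
train_roles = ["left", "right"]  # toy ground-truth labels
X = [model.wv[u] for u in train_users]
clf = LogisticRegression(max_iter=1000).fit(X, train_roles)
print(clf.predict([model.wv["troll_c"]]))  # predicted role for a new troll
```

In the distant-supervision scenario, the ground-truth troll labels above would be replaced by the labels inferred through the propagation step sketched earlier, so that no manual annotation of trolls is needed.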
2 Related Work

2.1 Trolls and Opinion Manipulation

The promise of social media to democratize content creation (Kaplan and Haenlein, 2010) has been accompanied by many malicious attempts to spread misleading information over this new medium, which quickly got populated by sockpuppets (Kumar et al., 2017), Internet water army (Chen et al., 2013), astroturfers (Ratkiewicz et al., 2011), and seminar users (Darwish et al., 2017). Several studies have shown that trust is an important factor in online relationships (Ho et al., 2012; Ku, 2012; Hsu et al., 2014; Elbeltagi and Agag, 2016; Ha et al., 2016), but building trust is a long-term process and our understanding of it is still in its infancy (Salo and Karjaluoto, 2007). This makes it easy for politicians and companies to manipulate user opinions in forums (Dellarocas, 2006; Li et al., 2016; Zhuang et al., 2018).

Trolls. Social media have seen the proliferation of fake news and clickbait (Hardalov et al., 2016; Karadzhov et al., 2017a), aggressiveness (Moore et al., 2012), and trolling (Cole, 2015). The latter is often understood to concern malicious online behavior that is intended to disrupt interactions, to aggravate interacting partners, and to lure them into fruitless argumentation in order to disrupt online interactions and communication (Chen et al., 2013). Here we are interested in studying not just any trolls, but those that engage in opinion manipulation (Mihaylov et al., 2015a,b, 2018). This latter definition of troll has also become prominent in the general public discourse recently. Del Vicario et al. (2016) have also suggested that the spreading of misinformation online is fostered by the presence of polarization and echo chambers in social media (Garimella et al., 2016, 2017, 2018).

Trolling behavior is present and has been studied in all kinds of online media: online magazines (Binns, 2012), social networking sites (Cole, 2015), online computer games (Thacker and Griffiths, 2012), online encyclopedias (Shachaf and Hara, 2010), and online newspapers (Ruiz et al., 2011), among others.

Troll detection was addressed using domain-adapted sentiment analysis (Seah et al., 2015), lexico-syntactic features about writing style and structure (Chen et al., 2012; Mihaylov and Nakov, 2016), and graph-based approaches over signed social networks (Kumar et al., 2014).

Sockpuppet is a related notion, and refers to a person who assumes a false identity in an Internet community and then speaks to or about themselves while pretending to be another person. The term has also been used to refer to opinion manipulation, e.g., in Wikipedia (Solorio et al., 2014). Sockpuppets have been identified by using authorship-identification techniques and link analysis (Bu et al., 2013). It has also been shown that sockpuppets differ from ordinary users in their posting behavior, linguistic traits, and social network structure (Kumar et al., 2017).

For example, Castillo et al. (2011) leverage user reputation, author writing style, and various time-based features, Canini et al. (2011) analyze the interaction of content and social network structure, and Morris et al. (2012) studied how Twitter users judge truthfulness. Zubiaga et al. (2016) study how people handle rumors in social media, and found that users with higher reputation are more trusted, and thus can spread rumors easily. Lukasik et al. (2015) use temporal patterns to detect rumors and to predict their frequency.
Recommended publications
  • How the Chinese Government Fabricates Social Media Posts
    American Political Science Review (2017) 111, 3, 484–501. doi:10.1017/S0003055417000144. © American Political Science Association 2017

    How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument
    GARY KING, Harvard University; JENNIFER PAN, Stanford University; MARGARET E. ROBERTS, University of California, San Diego

    The Chinese government has long been suspected of hiring as many as 2 million people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called 50c party posts vociferously argue for the government's side in political and policy debates. As we show, this is also true of most posts openly accused on social media of being 50c. Yet almost no systematic empirical evidence exists for this claim or, more importantly, for the Chinese regime's strategic objective in pursuing this activity. In the first large-scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime's strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime.
  • Information Systems in Great Power Competition with China
    STRATEGIC STUDIES QUARTERLY - PERSPECTIVE

    Decide, Disrupt, Destroy: Information Systems in Great Power Competition with China
    AINIKKI RIIKONEN

    Abstract
    Technologies for creating and distributing knowledge have impacted international politics and conflict for centuries, and today the infrastructure for communicating knowledge has expanded. These technologies, along with attempts to exploit their vulnerabilities, will shape twenty-first-century great power competition between the US and China. Likewise, great power competition will shape the way China develops and uses these technologies across the whole spectrum of competition to make decisions, disrupt the operational environment, and destroy adversary capabilities.

    *****

    The 2018 US National Defense Strategy (NDS) cites Russia and the People's Republic of China (PRC) as "revisionist powers" that "want to shape a world consistent with their authoritarian model—gaining veto authority over other nations' economic, diplomatic, and security decisions."1 It describes these countries as competitors seeking to use "other areas of competition short of open warfare to achieve their [authoritarian] ends" and to "optimize their targeting of our battle networks and operational concepts."2 The NDS assesses that competition will occur along the entire spectrum of statecraft from peace to open conflict and that Russia and the PRC will align their foreign policies with their models of governance. If this assessment is correct, and if technology plays a significant role in international politics, then technology will affect the whole spectrum of great power competition and conflict. Information architecture—the structures of technology that collect and relay information worldwide—is innately connected to power projection. The PRC has been innovating in this area, and its expanded information capabilities—and risks to US capabilities—will shape competition in the twenty-first century.
  • Freedom on the Net 2016
    FREEDOM ON THE NET 2016: China

    Population: 1.371 billion
    Internet Penetration 2015 (ITU): 50 percent
    Social Media/ICT Apps Blocked: Yes
    Political/Social Content Blocked: Yes
    Bloggers/ICT Users Arrested: Yes
    Press Freedom 2016 Status: Not Free

    Scores, 2015 / 2016 (0 = most free, 100 = least free):
    Internet Freedom Status: Not Free / Not Free
    Obstacles to Access (0-25): 18 / 18
    Limits on Content (0-35): 30 / 30
    Violations of User Rights (0-40): 40 / 40
    TOTAL (0-100): 88 / 88

    Key Developments: June 2015 – May 2016
    • A draft cybersecurity law could step up requirements for internet companies to store data in China, censor information, and shut down services for security reasons, under the auspices of the Cyberspace Administration of China (see Legal Environment).
    • An antiterrorism law passed in December 2015 requires technology companies to cooperate with authorities to decrypt data, and introduced content restrictions that could suppress legitimate speech (see Content Removal and Surveillance, Privacy, and Anonymity).
    • A criminal law amendment effective since November 2015 introduced penalties of up to seven years in prison for posting misinformation on social media (see Legal Environment).
    • Real-name registration requirements were tightened for internet users, with unregistered mobile phone accounts closed in September 2015, and app providers instructed to register and store user data in 2016 (see Surveillance, Privacy, and Anonymity).
    • Websites operated by the South China Morning Post, The Economist and Time magazine were among those newly blocked for reporting perceived as critical of President Xi Jinping (see Blocking and Filtering).

    Introduction
    China was the world's worst abuser of internet freedom in the 2016 Freedom on the Net survey for the second consecutive year.
  • How the Vietnamese State Uses Cyber Troops to Shape Online Discourse
    ISSUE: 2021 No. 22, ISSN 2335-6677. Researchers at ISEAS – Yusof Ishak Institute analyse current events. Singapore | 3 March 2021

    How the Vietnamese State Uses Cyber Troops to Shape Online Discourse
    Dien Nguyen An Luong*

    [Photo caption: A Vietnamese youth checks his mobile phone on the century-old Long Bien Bridge in Hanoi on September 3, 2019. Vietnamese authorities have handled public political criticism - both online and in real life - with a calibrated mixture of toleration, responsiveness and repression. Photo: Manan VATSYAYANA, AFP.]

    * Dien Nguyen An Luong is Visiting Fellow with the Media, Technology and Society Programme of the ISEAS – Yusof Ishak Institute. A journalist with significant experience as managing editor at Vietnam's top newsrooms, his work has also appeared in the New York Times, the Washington Post, the Guardian, South China Morning Post, and other publications.

    EXECUTIVE SUMMARY
    • The operations of Vietnam's public opinion shapers and cyber-troops reveal that the online discourse is manipulated to enforce the Communist Party's line.
    • Vietnamese authorities constantly grapple with the vexing question: how to strike a delicate balance between placating critical public sentiment online while ensuring that it does not spill over into protests against the regime.
    • When it comes to methods, targets and motives, there appears to be significant crossover between public opinion shapers and the government's cyber troops.
    • The Vietnamese state cyber-troops have been encouraged to use real accounts to mass-report content. This helps explain why it is the only Southeast Asian state to publicly acknowledge having a military cyber unit.
  • Chinese Computational Propaganda: Automation, Algorithms and the Manipulation of Information About Chinese Politics on Twitter and Weibo
    This is a repository copy of "Chinese computational propaganda: automation, algorithms and the manipulation of information about Chinese politics on Twitter and Weibo." White Rose Research Online URL: http://eprints.whiterose.ac.uk/136994/ (Version: Accepted Version)

    Article: Bolsover, G (orcid.org/0000-0003-2982-1032) and Howard, P (2019) Chinese computational propaganda: automation, algorithms and the manipulation of information about Chinese politics on Twitter and Weibo. Information, Communication & Society, 22 (14). pp. 2063-2080. ISSN 1369-118X. https://doi.org/10.1080/1369118X.2018.1476576

    © 2018 Informa UK Limited, trading as Taylor & Francis Group. This is an Accepted Manuscript of an article published by Taylor & Francis in Information, Communication & Society on 24 May 2018, available online: https://doi.org/10.1080/1369118X.2018.1476576

    Chinese computational propaganda: automation, algorithms and the manipulation of information about Chinese politics on Twitter and Weibo
    Gillian Bolsover and Philip Howard, Oxford Internet Institute, University of Oxford, Oxford, UK.
  • Forbidden Feeds: Government Controls on Social Media in China
    FORBIDDEN FEEDS: Government Controls on Social Media in China
    March 13, 2018
    © 2018 PEN America. All rights reserved.

    PEN America stands at the intersection of literature and human rights to protect open expression in the United States and worldwide. We champion the freedom to write, recognizing the power of the word to transform the world. Our mission is to unite writers and their allies to celebrate creative expression and defend the liberties that make it possible. Founded in 1922, PEN America is the largest of more than 100 centers of PEN International. Our strength is in our membership—a nationwide community of more than 7,000 novelists, journalists, poets, essayists, playwrights, editors, publishers, translators, agents, and other writing professionals. For more information, visit pen.org.

    Cover Illustration: Badiucao

    CONTENTS
    EXECUTIVE SUMMARY
    INTRODUCTION: AN UNFULFILLED PROMISE
    OUTLINE AND METHODOLOGY
    KEY FINDINGS
    SECTION I: AN OVERVIEW OF THE SYSTEM OF SOCIAL MEDIA CENSORSHIP
    • The Prevalence of Social Media Usage in China
    • Digital Rights—Including the Right to Free Expression—Under International Law
    • China's Control of Online Expression: A Historical Perspective
    • State Control over Social Media: Policy
    • State Control over Social Media: Recent Laws and Regulations
    SECTION II: SOCIAL MEDIA CENSORSHIP IN PRACTICE
    • A Typology of Censored Topics
    • The Corporate Responsibility to Censor its Users
    • The Mechanics of Censorship
    • Tibet and
  • Battling the Internet Water Army: Detection of Hidden Paid Posters
    Battling the Internet Water Army: Detection of Hidden Paid Posters
    Cheng Chen, Kui Wu, Venkatesh Srinivasan (Dept. of Computer Science, University of Victoria, Victoria, BC, Canada); Xudong Zhang (Dept. of Computer Science, Peking University, Beijing, China)

    Abstract—We initiate a systematic study to help distinguish a special group of online users, called hidden paid posters, or termed "Internet water army" in China, from the legitimate ones. On the Internet, the paid posters represent a new type of online job opportunity. They get paid for posting comments and new threads or articles on different online communities and websites for some hidden purposes, e.g., to influence the opinion of other people towards certain social events or business markets. Though an interesting strategy in business marketing, paid posters may create a significant negative effect on the online communities, since the information from paid posters is usually not trustworthy. When two competitive companies hire paid …

    … on different online communities and websites. Companies are always interested in effective strategies to attract public attention towards their products. The idea of online paid posters is similar to word-of-mouth advertisement. If a company hires enough online users, it would be able to create hot and trending topics designed to gain popularity. Furthermore, the articles or comments from a group of paid posters are also likely to capture the attention of common users and influence their decision. In this way, online paid posters present a powerful and efficient strategy for companies.
  • Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China
    Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China
    Rong-Ching Chang, Chun-Ming Lai, Kai-Lai Chang, Chu-Hsing Lin (Tunghai University, Taiwan)
    [email protected] · [email protected] · [email protected] · [email protected]

    ABSTRACT
    The digital media, identified as computational propaganda, provides a pathway for propaganda to expand its reach without limit. State-backed propaganda aims to shape the audiences' cognition toward entities in favor of a certain political party or authority. Furthermore, it has become part of modern information warfare used in order to gain an advantage over opponents. Most of the current studies focus on using machine learning, quantitative, and qualitative methods to distinguish if a certain piece of information on social media is propaganda; they are mainly conducted on English content, and very little research addresses Chinese Mandarin content. From propaganda detection, we want to go one step further to providing more fine-grained information on propaganda techniques that are applied.

    ACM Reference Format: Rong-Ching Chang, Chun-Ming Lai, Kai-Lai Chang, and Chu-Hsing Lin. 2021. Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China. In KDD '21: The Second International MIS2 Workshop: Misinformation and Misbehavior Mining on the Web, Aug 15, 2021, Virtual. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

    1 INTRODUCTION
    Propaganda has the purpose of framing and influencing opinions. With the rise of the internet and social media, propaganda has adopted a powerful tool for its unlimited reach, as well as multiple forms of content that can further drive engagement online and …
  • Digital Authoritarianism and Human Rights in Asia’ (2019) 3 Global Campus Human Rights Journal 269-285
    MAV Ambay III, N Gauchan, M Hasanah & NK Jaiwong, 'Dystopia is now: Digital authoritarianism and human rights in Asia' (2019) 3 Global Campus Human Rights Journal 269-285. https://doi.org/20.500.11825/1575

    Dystopia is now: Digital authoritarianism and human rights in Asia
    Mark Anthony V Ambay III, Neha Gauchan, Mahesti Hasanah and Numfon K Jaiwong

    Abstract: The advent of new information and communication technologies has opened up new economic opportunities, heightened the availability of information, and expanded access to education and health knowledge and services. These technologies have also provided new avenues for political, economic, and social participation, and have presented new opportunities and methods for the advancement of human rights. At the same time, these same technologies can be used to violate human rights. This article queries how exactly states and other actors use digital authoritarianism to limit human rights. The study aims to understand what threats to human rights are presented by the use of new information and communication technologies. The article critically examines available literature on authoritarian practices using information and communication technologies, reports of government and intergovernmental bodies, non-governmental organisations, and various media agencies, as well as first-hand data on samples of digital authoritarianism. The article argues that states and other actors practise digital authoritarianism by invading privacy, denying access to information and spreading misinformation, and limiting expression and participation, all of which violate the rights to freedom of expression, information and participation. Case studies of digital authoritarian practices are presented in the study, drawing on experiences and circumstances in several Asian countries.
  • How Censorship in China Allows Government Criticism but Silences Collective Expression
    American Political Science Review, Page 1 of 18, May 2013. doi:10.1017/S0003055413000014

    How Censorship in China Allows Government Criticism but Silences Collective Expression
    GARY KING, Harvard University; JENNIFER PAN, Harvard University; MARGARET E. ROBERTS, Harvard University

    We offer the first large scale, multiple source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future—and, as such, seem to clearly expose government intent.

    INTRODUCTION
    The size and sophistication of the Chinese government's program to selectively censor the expressed views of the Chinese people is un…

    … Ang 2011, and our interviews with informants, granted anonymity). China overall is tied with Burma at 187th of 197 countries on a scale of press freedom (Freedom House 2012), but the Chinese censorship effort is by far the largest.
  • Chapter 3 Section 5
    SECTION 5: CHINA'S DOMESTIC INFORMATION CONTROLS, GLOBAL MEDIA INFLUENCE, AND CYBER DIPLOMACY

    Key Findings
    • China's current information controls, including the government's new social credit initiative, represent a significant escalation in censorship, surveillance, and invasion of privacy by the authorities.
    • The Chinese state's repression of journalists has expanded to target foreign reporters and their local Chinese staff. It is now much more difficult for all journalists to investigate politically sensitive stories.
    • The investment activities of large, Chinese Communist Party-linked corporations in the U.S. media industry risk undermining the independence of film studios by forcing them to consider self-censorship in order to gain access to the Chinese market.
    • China's overseas influence operations to pressure foreign media have become much more assertive. In some cases, even without direct pressure by Chinese entities, Western media companies now self-censor out of deference to Chinese sensitivity.
    • Beijing is promoting its concept of "Internet sovereignty" to justify restrictions on freedom of expression in China. These policies act as trade barriers to U.S. companies through both censorship and restrictions on cross-border data transfers, and they are fundamental points of disagreement between Washington and Beijing.
    • In its participation in international negotiations on global Internet governance, norms in cyberspace, and cybersecurity, Beijing seeks to ensure continued control of networks and information in China and to reduce the risk of actions by other countries that are not in its interest. Fearing that international law will be used by other countries against China, Beijing is unwilling to agree on specific applications of international law to cyberspace.
  • Producers of Disinformation – Version
    Live research from the digital edges of democracy

    Live Research Review: Producers of Disinformation - Version 1.2
    By Samuel Spies · January 22, 2020

    Table of Contents
    Introduction
    Disinformation in the internet age
    Who produces disinformation, and why?
    Commercial motivations
    Political motivations
    Conclusion
    Avenues for future study

    Spies, Samuel. 2020. "Producers of Disinformation" V1.0. Social Science Research Council, MediaWell. January 22, 2020. https://mediawell.ssrc.org/literature-reviews/producers-of-disinformation/versions/1-0/

    Introduction
    One of the promises of the internet has been that anyone with a device and a data connection can find an audience. Until relatively recently, many social scientists and activists joined technologists in their optimism that social media and digital publishing would connect the world, give a voice to marginalized communities, bridge social and cultural divides, and topple authoritarian regimes. Reality has proven to be far more complicated. Despite that theoretical access to audience, enormous asymmetries and disparities persist in the way information is produced. Elites, authoritarian regimes, and corporations still have greater influence and access to communicative power. Those elites have retained their ability to inject favorable narratives into the massive media conglomerates that still dominate mediascapes in much of the world.

    At the same time, we have observed the darker side of the internet's democratic promise of audience. In a variety of global contexts, we see political extremists, hate groups, and misguided health activists recruiting members, vilifying minorities, and spreading misinformation. These online activities can have very real life-or-death consequences offline, including mass shootings, ethnic strife, disease outbreaks, and social upheaval.