Towards a Deception Detection Framework for Social Media


Bruce Forrester, DRDC – Valcartier Research Centre
Friederike Von Franqué, Von Franqué Consulting

25th ICCRTS, virtual event, 2–6 and 9–13 November 2020
Topic 1: C2 in the Information Age
Paper number: 091
Date of publication from external publisher: December 2020

CAN UNCLASSIFIED. The body of this document does not contain the required security banners according to DND security standards; however, it must be treated as CAN UNCLASSIFIED and protected appropriately based on the terms and conditions specified on the covering page.

Defence Research and Development Canada, External Literature (P), DRDC-RDDC-2020-P228, December 2020

IMPORTANT INFORMATIVE STATEMENTS

This document was reviewed for Controlled Goods by Defence Research and Development Canada using the Schedule to the Defence Production Act.

Disclaimer: This document is not published by the Editorial Office of Defence Research and Development Canada, an agency of the Department of National Defence of Canada, but is to be catalogued in the Canadian Defence Information System (CANDIS), the national repository for Defence S&T documents. Her Majesty the Queen in Right of Canada (Department of National Defence) makes no representations or warranties, expressed or implied, of any kind whatsoever, and assumes no liability for the accuracy, reliability, completeness, currency or usefulness of any information, product, process or material included in this document. Nothing in this document should be interpreted as an endorsement for the specific use of any tool, technique or process examined in it. Any reliance on, or use of, any information, product, process or material included in this document is at the sole risk of the person so using it or relying on it. Canada does not assume any liability in respect of any damages or losses arising out of or in connection with the use of, or reliance on, any information, product, process or material included in this document.

© Her Majesty the Queen in Right of Canada (Department of National Defence), 2020
© Sa Majesté la Reine en droit du Canada (Ministère de la Défense nationale), 2020

25th ICCRTS, 3–5 November 2020, "The Future of Command and Control"

Towards a Deception Detection Framework for Social Media
Paper number: 091
Topic 1: C2 in the Information Age

Bruce Forrester
Defence R&D Canada – Valcartier
2459 Pie-XI North, Quebec, QC, G3J 1X5
Tel.: (418) 844-4000 #4943
[email protected]

Friederike Von Franqué
Von Franqué Consulting
0176-83076104
[email protected]

Abstract

The democratization of communication media has had significant consequences for military command and control. Never before have adversaries had such free and direct access to our local populations, allowing for influence through propaganda, disinformation and deception. In fact, social media platforms can help target messages to the exact demographic desired, while keeping attribution hidden. Commanders have been reluctant to embrace the new communication technologies, which has left them playing catch-up. Meanwhile, our opponents have infiltrated our communication spaces and 'recruited' thousands of followers who spread their messages unwittingly. This paper presents a new research framework for deception that will help overcome the issues of attribution of message originators.
Concentrating on uncovering narratives, methods and intent rather than individuals alleviates many ethical problems of social media analytics within western societies. The framework will help to guide research on deception detection and increase assessment confidence for intelligence analysts and public affairs staff involved in the ongoing information environment clashes of power and politics.

1. Introduction

In warfare, the consequences of deception can be severe and decisive. Not surprisingly, deceiving and influencing the enemy, or one's own population for that matter, is not new. It has existed for thousands of years; as Sun Tzu stated in The Art of War, "the supreme art of war is to subdue the enemy without fighting". Operational and tactical commanders use forms of deception on a regular basis to keep the actual or future adversary misinformed, or just guessing, about military resources, processes or future plans. Influence campaigns designed to sow division, garner support, or simply create chaos in adversary countries have likewise been used before. However, this aspect of the art of war is easier than ever due to the affordances of cyberspace and the democratization of information and communications via social media.

A recent example is the suspected involvement of Russia in the 2016 US presidential election. It seems clear that outside forces were at play and trying to influence voters. For example, Timberg [1] states: "There is no way to know whether the Russian campaign proved decisive in electing Trump, but researchers portray it as part of a broadly effective strategy of sowing distrust in U.S. democracy and its leaders. The tactics included penetrating the computers of election officials in several states and releasing troves of hacked emails that embarrassed Clinton in the final months of her campaign."

Russian cyber and influence activities in Ukraine have been well documented [2-4] during the annexation of Crimea, which was accomplished with almost no fighting. In fact, Berzin [3] states that "the Russian view of modern warfare is based on the idea that the main battle-space is the mind, and as a result, new-generation wars are to be dominated by information and psychological warfare…" (p. 5). Such influence activities indeed point to a new focus of warfare: one conducted in cyberspace, that exploits the use of deception and influence, and that broadcasts messages using social media. Within the cyber domain, Russia's toolkit includes the "weaponization of information, culture and money" [5] in an effort to inject disinformation and confusion and to proliferate falsehoods. This is being accomplished through all available information channels, such as TV news channels (e.g., RT, Sputnik), newspapers, YouTube channels (e.g., RT), blogs and websites [6], as well as state-sponsored trolls [7, 8] who are ever-present on many social media outlets. Most of these means are combined and intermingled to create repetition of alternative narratives in many places, effectively strengthening the perceived authenticity of the message.

One of the most important channels for disseminating influential information operations is social media, whose reach has become expansive. Social media provides access to literally billions of people at a very granular level, where even regionally targeted news can create viral explosions of a scale that has real effect in the physical world.
For example, Pizzagate is the case of Edgar Welch, who took his AR-15 rifle to the Comet Ping Pong pizzeria in Washington to save children from ritualistic child abuse [9]: a convoluted conspiracy theory that originated and spread via social media but ended with Welch actually going to the pizzeria with a rifle.

Different deception techniques use social media as a platform or have been newly developed for this environment. Established propaganda techniques such as lies, misinformation, exaggeration, omission, bluffs, white lies, etc., meet newly available digital deception and propaganda techniques such as deep fakes, phishing and trolling, and each can lead to unique manifestations within social media. It is not easy to spot these deception techniques; on the contrary, it is easy to become overwhelmed and muddled in one's approach to detecting deception. Even fact-checking websites (e.g., Snopes.com, Google Fact Check) that are designed to help people differentiate fact from fiction are being faked [10]. To help researchers and operators deal with this complexity, a comprehensive detection framework is required.

This paper presents an empirically based framework for deception detection in social media. The framework described will allow for directed research, the production of indicators, and the development of algorithms. Single pieces of information are not usually considered sufficient to make solid recommendations and decisions. Once the framework is validated, it will allow triangulation between its parts in order to increase confidence in any declaration of deception detection, thus improving a commander's situational awareness and decision-making capability (a minimal illustration of this idea appears at the end of this section).

1.1 What is Deception?

Many areas of expertise have developed definitions of their understanding of deception and its close relatives: lies, fraud, trickery, delusion, misguidance, or misdirection. In military deception operations, the objectives are often designed to make the opponent believe some falsehood about military strength or to obfuscate future actions. A famous example was Operation Fortitude, embedded in the planning of the invasion of Normandy in 1944: for a non-existent army, stage designers built dummy tanks made of rubber and dummy airports with wooden airplanes, while officers engaged in radio communication, ordered food supplies and discussed fictitious attack plans [11]. While there is no precise overlap in the definitions of deception from domain to domain, the definitions are close to each other and at least four characteristics are common to most [12, 13]:

a. The intent is deliberate;
b. The intent is to mislead;
c. Targets are unaware; and
d. The aim is to transfer a false belief to another person.

The goal of a deception campaign is to get the target or population to do, or not do, something that the deceiver wants, and thus give the deceiver, in a way, control over the targets' actions or general behaviour. It can be to confuse, delay, or waste the resources of an opponent, to put the receiver at a disadvantage, or to hide one's own purpose. Deception can occur to discredit and divide. Deception is an art form and as such will never remain static but will continue to evolve and change.
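The triangulation idea referred to above can be made concrete with a short sketch. The following Python fragment is a minimal, hypothetical illustration, not part of the paper's framework: the indicator names, weights and thresholds are assumptions chosen for the example. It shows how independent deception indicators might be combined so that a detection is only declared when several parts of a framework agree.

```python
# Hypothetical sketch of indicator triangulation: combine several independent
# deception indicators and only report a detection when multiple indicators
# agree. All names, weights and thresholds below are illustrative assumptions,
# not values taken from the paper.
from dataclasses import dataclass

@dataclass
class IndicatorScore:
    name: str      # e.g., "narrative_repetition", "account_coordination"
    score: float   # normalized evidence strength in [0, 1]
    weight: float  # analyst-assigned reliability of this indicator

def triangulated_confidence(indicators: list[IndicatorScore],
                            min_agreeing: int = 2,
                            agree_threshold: float = 0.6) -> float:
    """Return a combined confidence in [0, 1], but only if at least
    `min_agreeing` indicators individually exceed `agree_threshold`;
    otherwise return 0.0 (insufficient triangulation)."""
    agreeing = [i for i in indicators if i.score >= agree_threshold]
    if len(agreeing) < min_agreeing:
        return 0.0
    total_weight = sum(i.weight for i in indicators)
    return sum(i.score * i.weight for i in indicators) / total_weight

if __name__ == "__main__":
    # Example: three hypothetical indicators computed elsewhere in a pipeline.
    scores = [
        IndicatorScore("narrative_repetition", 0.82, 1.0),
        IndicatorScore("account_coordination", 0.71, 0.8),
        IndicatorScore("linguistic_deception_cues", 0.35, 0.5),
    ]
    print(f"confidence: {triangulated_confidence(scores):.2f}")
```

The design choice of returning zero confidence unless several indicators independently agree mirrors the point made above: single pieces of information are not usually considered sufficient for solid recommendations and decisions.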