Detection and Analysis of Water Army Groups on Virtual Community

Guirong Chen1,2, Wandong Cai1, Jiuming Huang3, Huijie Xu1, Rong Wang2, Hua Jiang2, Fengqin Zhang2

1 Department of Computer Science, Northwestern Polytechnical University, 710029 Xi'an, China
2 School of Information and Navigation, Air Force Engineering University of PLA, 710077 Xi'an, China
3 College of Computer, National University of Defense Technology, 410073 Changsha, China

{guirongchen315, rongwang66}@163.com

Abstract. The water army is prevalent in social networks, and it harms public opinion and the security of cyberspace. This paper proposes a novel water army group detection method that consists of four steps. First, we divide the virtual community's activity into a series of time windows and find the suspicious periods in which water army groups are active. Then we build the user cooperative networks of the suspicious periods from users' reply behaviors and cluster the users based on their cosine similarity. After that, we prune the cooperative networks by retaining only the edges whose weight exceeds a given threshold, which yields a set of suspicious user clusters. Finally, we conduct a deeper analysis of the behaviors of the clustered users to determine whether they are water army groups. The experimental results show that our method can identify water army groups on virtual communities efficiently and with high accuracy.

Keywords: Social Networks; Detection of Water Army Groups; Empirical Analysis of Water Army Activities; Virtual Community

1 Introduction

With the maturity and popularization of Web 2.0 technology, interactive platforms such as online forums, blogs, microblogs and other social networks have become more and more popular. According to the report of the China Internet Network Information Center (CNNIC), there are around 618 million Internet users in China, approximately 45.8% of its total population [1]. Not only has the Internet become the platform on which people communicate with each other, it has also become the window through which people observe and supervise the management of society and the country. However, the inherent characteristics of the Internet, such as freedom, openness and anonymity, together with the immaturity of the laws and supervision mechanisms for Internet management, make it possible for malicious actors to abuse it. The water army is one typical example. To attract public attention to their products, to enhance their own reputation, or to attack their competitors', companies may hire many people to 'flood' the Internet with a huge number of purposeful comments and articles. The Internet users who make money by posting such comments and articles on different online communities or websites are called the 'water army' in China.

Sometimes the water army is used as an effective tool in business marketing. For example, before a new movie or TV series is released, the host company may hire a water army to submit articles, posts and comments about the movie or series and its actors and actresses on various online communities, in order to attract people's attention and trigger their curiosity. But sometimes the water army is used in malicious ways. For example, some people or companies may hire a water army to spread negative, false and fabricated information about their competitors.
By initiating and taking part in discussions on the Internet and submitting a great number of comments, the water army can influence other people's opinions of a product or social event. Furthermore, some people or organizations with ulterior motives might even hire a water army to slander the government and the country. In a word, the water army not only harms the interests of ordinary Internet users and legitimate companies, but also affects social stability. Hence, the detection of the water army has become one of the most important scientific issues today.

However, research on the detection and analysis of the water army is still at an initial stage, and few papers on this problem have been published. There are three reasons for this: (1) the water army is a new phenomenon on the Internet, and studies on this problem have only just begun; (2) there is no public dataset in which users have been labeled as water army or not, and it is very difficult to recognize water army accounts by manually reading the comments they submit; (3) it is hard to verify the accuracy of a detection algorithm. Despite these difficulties, researchers at home and abroad have begun to study this problem and have obtained some results.

Chen et al. conduct empirical research on the behavior patterns of the water army by analyzing real data on the '3Q conflict' collected from several online communities and news websites. They reach several meaningful conclusions: (1) the water army prefers submitting new topics to posting replies; (2) the posting time intervals of the water army are short; (3) water army members tend to use different temporary accounts to log in to virtual communities, and most of them discard the accounts when the task is completed; (4) in order to finish the task as quickly as possible, the water army is inclined to submit duplicate or nearly duplicate comments [3]. Fan et al. manually analyze a real dataset from one online forum and find that there are many water army accounts on that forum and that they are well organized [4]. Li et al. try to use text sentiment analysis technology to identify network cheaters who are similar to the water army [5].

Review spammers are people who submit spam reviews or spam opinions on social networks or electronic commerce websites. As the water army and review spammers are similar in many aspects, such as purpose, behavior patterns and organization, the achievements in review spammer detection are highly relevant to water army detection. Mo et al. present an overview of spammer detection technologies and give some suggestions for possible extensions [6]. Professor Liu Bing and his research team conduct in-depth research on the detection of review spam and spammers on electronic commerce websites [7-11]. Paper [7] takes advantage of text classification and pattern matching technology to detect review spam. By analyzing the product and brand rating behaviors of different users, paper [8] finds several abnormal rating patterns and detects review spammers by mining these abnormal rating behaviors. Paper [9] presents a novel review graph to describe the relationships among reviews, reviewers and stores, and proposes an iterative computation model to identify suspicious reviewers on electronic commerce websites.
In view of the fact that spammers on electronic commerce websites usually act in groups, paper [10] manually constructs a dataset containing spammer groups, analyzes the spammers' behaviors at both the individual and group levels, and then proposes a new spammer group detection algorithm. The research findings of Professor Liu Bing and his team have great implications for this paper.

The spam and spammer detection problem has also attracted the attention of other researchers at home and abroad. Husn et al. conduct an empirical analysis of the behavior characteristics of spam botnets [12]. Brendel et al. construct a social network of online users and detect spammers on that network by computing and analyzing the network's properties [15]. Inspired by the PageRank algorithm, Zhou et al. propose a novel ranking algorithm that can distinguish malicious users from normal ones. Like papers [8-9], this algorithm is also based on the rating results of different users. Xu et al. analyze the collaborative behaviors of spammers [15], and Lu et al. propose a spam and spammer detection algorithm using graph models [16]. Lin et al. present a new method to identify spam using social network analysis technology [17].

To cheat online, some Internet users use fake identities, pretending to be totally different people, to create the illusion of support for a product or person. The different online identities of the same person are called 'sock puppets'. Recently, the problem of sock puppet detection has attracted researchers' attention. Bu et al. present a novel sock puppet detection algorithm that combines link analysis and authorship identification techniques [18]. Based on the fact that sock puppets often appear in pairs, Zheng et al. propose a simple method that detects sock puppets by analyzing the relationships between users [19]. Jiang et al. treat water army accounts as outliers and use outlier detection algorithms to find them [20]. In a word, the problem of malicious user detection has attracted wide attention [21].

Although researchers have carried out valuable exploratory work on water army detection, many problems remain to be solved. The findings of Chen et al. can detect water armies whose purpose is to spread specific information widely [3], but they cannot identify other types of water army. The method proposed by Li et al. can find water armies whose comments express intense and consistent emotions or attitudes [5]; but much of the time, in order to create hot online topics, attract people's attention and trigger their curiosity, the water army releases many comments whose emotions are ambiguous or contradict each other, and the existing methods cannot detect this type of water army. The research findings on spammer and sock puppet detection are of great significance to the study of water army detection, but those methods cannot be used directly to detect water army accounts on online forums, for three reasons. First, there is no rating mechanism on forums, so we cannot identify water army accounts by analyzing users' rating behaviors. Second, the content submitted by forum users is very complex and difficult to characterize, so detection algorithms based on text analysis cannot achieve good accuracy when used to detect water army accounts on forums.
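As a rough illustration of steps two and three of the method summarized in the abstract (building a user cooperative network from reply behavior, weighting edges by the cosine similarity of users' thread-reply vectors, and pruning edges below a weight threshold to expose suspicious user clusters), the following minimal Python sketch shows one possible realization. All names (Reply data, build_reply_vectors, WEIGHT_THRESHOLD), the toy data and the threshold value are assumptions made for illustration only; the paper does not publish reference code, and the actual feature construction and threshold selection may differ.

    # Minimal sketch: cooperative-network construction and pruning
    # (illustrative only; names, data and threshold are assumed, not from the paper)
    from collections import defaultdict
    from itertools import combinations
    from math import sqrt

    # (user_id, thread_id) reply pairs observed in a suspicious time window (step 1).
    REPLIES = [
        ("u1", "t1"), ("u1", "t2"), ("u1", "t3"),
        ("u2", "t1"), ("u2", "t2"), ("u2", "t3"),
        ("u3", "t9"),
    ]

    WEIGHT_THRESHOLD = 0.8  # assumed pruning threshold


    def build_reply_vectors(replies):
        """Map each user to a {thread_id: reply_count} vector."""
        vectors = defaultdict(lambda: defaultdict(int))
        for user, thread in replies:
            vectors[user][thread] += 1
        return vectors


    def cosine(vec_a, vec_b):
        """Cosine similarity between two sparse count vectors."""
        shared = set(vec_a) & set(vec_b)
        dot = sum(vec_a[k] * vec_b[k] for k in shared)
        norm_a = sqrt(sum(v * v for v in vec_a.values()))
        norm_b = sqrt(sum(v * v for v in vec_b.values()))
        if norm_a == 0 or norm_b == 0:
            return 0.0
        return dot / (norm_a * norm_b)


    def suspicious_clusters(replies, threshold=WEIGHT_THRESHOLD):
        """Keep only high-similarity edges; connected components = candidate groups."""
        vectors = build_reply_vectors(replies)
        edges = defaultdict(set)
        for a, b in combinations(vectors, 2):
            if cosine(vectors[a], vectors[b]) >= threshold:
                edges[a].add(b)
                edges[b].add(a)
        clusters, seen = [], set()
        for user in edges:
            if user in seen:
                continue
            stack, component = [user], set()
            while stack:
                node = stack.pop()
                if node in component:
                    continue
                component.add(node)
                stack.extend(edges[node] - component)
            seen |= component
            clusters.append(component)
        return clusters


    if __name__ == "__main__":
        print(suspicious_clusters(REPLIES))  # e.g. [{'u1', 'u2'}]

In this sketch, two users who reply to nearly the same set of threads within a suspicious time window are connected by a high-weight edge and fall into the same candidate cluster, which would then be handed to the deeper behavioral analysis of step four.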