Scaling Authoritarian Information Control How China Adjusts the Level of Online Censorship
Article in Political Research Quarterly, January 2021.

Rongbin Han
Department of International Affairs, University of Georgia
Address: 322 Candler Hall, University of Georgia, Athens GA 30602
Phone: 706-542-6705
Email: [email protected]

Li Shao (Corresponding Author)
Department of Political Science, School of Public Affairs, Zhejiang University
Address: 634 School of Public Affairs, Zhejiang U Zijingang, Hangzhou, China 310068
Phone: 86-0592-5633-6986
Email: [email protected]

Abstract

Autocracies can conduct “strategic censorship” online by selectively targeting different types of content and by adjusting the level of information control. While studies have confirmed the state’s selective targeting in censorship, few have empirically examined how autocracies adjust the control level. Using data spanning six years, this paper tests whether the Chinese state scales up control over citizenry complaints in reaction to a series of socio-political events.
The results show that instead of responding to mass protests and major disasters, as previous studies have suggested, the state tends to adjust the control level because of political ceremonies, policy shifts, or leadership changes. The findings help refine the strategic censorship theory and offer a granular understanding of the motives and tactics of authoritarian information control.

Keywords: authoritarian information control, strategic censorship, Internet, China

(Political Research Quarterly, forthcoming)

The literature on censorship in autocracies has explored the selective targeting phenomenon extensively (e.g. Bamman, O’Connor, and Smith 2012; Fu, Chan, and Chau 2013; King, Pan, and Roberts 2013; Qin, Strömberg, and Wu 2017), with a focus on why the government blocks certain types of content but not others. However, the level of censorship may not be constant over a specific type of content. For example, to celebrate the 70th anniversary of the founding of the People’s Republic, Chinese authorities banned “overly entertaining” TV shows for a hundred days from August 1, 2019 onwards, while promoting only 86 selected mainstream propaganda TV series.1 Such a ban, rather than targeting a specific taboo topic, represents a scale-up of the level of information control, albeit for a limited period. How can such a temporary “scale-up of control” over certain content during special periods be explained? Building on the strategic censorship model of Peter Lorentzen (2014), this article provides evidence on how the Chinese state adjusts the level of control over online information over time. More specifically, we address the question: under what circumstances does the Chinese government scale up the level of information control? This article differentiates selective targeting in state censorship, which has been heavily studied, from adjustments in the information control level, and focuses only on the latter.
If selective targeting is about sorting out which types of content are tolerated or censored, and thus concerns primarily the mapping of the boundaries of expression, adjusting the level of control describes the scenario in which the state shrinks or expands the zone of permissible expression. To use a highway patrol metaphor: we are not studying whether traffic police catch and punish drivers exceeding a 65-mile-per-hour speed limit, but when and why the traffic authorities temporarily or permanently lower the speed limit from 65 to 45 miles per hour.

We operationalize adjustments in the control level as changes in the volume of a broad category of online expression: citizenry complaints about local problems (again, not specific cases of such complaints). We then test whether a series of socio-political events caused fluctuations in this category of expression by comparing the daily counts of posts on China’s largest online forum, Tianya.cn, with those on the state-sponsored forum Local Leadership Message Board between July 2009 and July 2015. We find that the Chinese government tends to scale up the control level at politically symbolic moments such as national anniversaries or political meetings, but less so in cases of crisis events like protests, foreign revolutions, disasters, or accidents.

This research adds to studies on information control and authoritarian politics in several ways. First, while scholars argue that the state may adjust the overall control level (Lorentzen 2014), only a few have empirically explored the conditions under which the state makes such adjustments (e.g., Cairns and Carlson 2016; Repnikova 2017; Ruan et al. 2020). This paper contributes to this burgeoning literature by testing whether certain socio-political events cause changes in the overall control level over a broad category of content (citizenry complaints) that includes multiple specific topics.
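The measurement logic here can be illustrated with a small sketch. The snippet below is not the authors’ actual analysis; it uses synthetic data (a hypothetical stand-in for the daily Tianya.cn complaint counts) and a hypothetical ten-day event window to show how a drop in posting volume during a politically symbolic period, relative to the baseline, would register as a scale-up of the control level.

```python
import random

random.seed(42)

# Synthetic daily counts of complaint posts (a hypothetical stand-in
# for the forum data; the paper's real series runs July 2009 - July 2015).
n_days = 365
baseline = [random.gauss(100, 10) for _ in range(n_days)]

# Suppose a political ceremony occupies days 200-209 (assumed window):
# scale counts down to mimic a temporary tightening of control.
event_window = set(range(200, 210))
counts = [c * 0.6 if d in event_window else c for d, c in enumerate(baseline)]

def mean(xs):
    return sum(xs) / len(xs)

inside = mean([counts[d] for d in range(n_days) if d in event_window])
outside = mean([counts[d] for d in range(n_days) if d not in event_window])

# A ratio well below 1 during the event window is the kind of volume
# drop that would be read as a scale-up of the control level.
ratio = inside / outside
print(round(ratio, 2))
```

In the paper’s actual design, the state-sponsored forum serves as a comparison series, so event effects are identified relative to a platform facing different control incentives rather than against a raw baseline as in this toy example.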
Second, the analysis in this article reveals the complicated and mixed motives of authoritarian information control. Adding to studies that highlight the Party-state’s motive to preserve stability (e.g. King, Pan, and Roberts 2013; Lorentzen 2014; Roberts 2018), we find that symbolic concerns and the will of the leadership are strong drivers of information control, in that the state is more likely to adjust the control level in response to political ceremonies and leadership changes than to shocks such as protests and accidents. Finally, by refining the “strategic censorship” model (Lorentzen 2014) and providing a more granular understanding of the motives and tactics of authoritarian information control, the article enriches the literature on authoritarian resilience in the digital age (e.g. Gilley 2003; Nathan 2003; Shambaugh 2008). The fact that the Chinese state not only scales the level of censorship over time but also tailors such adjustments to distinct socio-political events shows that authoritarian information control can be customized by the state for different purposes, revealing the state’s capacity to adapt to digital challenges.

Authoritarian Information Control: Categorizing and Scaling

Though conventional wisdom suggests that authoritarianism is incompatible with free information (e.g. Friedrich and Brzezinski 1965; Gainous, Wagner, and Ziegler 2018; Levitsky and Way 2010; McMillan and Zoido 2004), recent studies show that authoritarian states may not suppress information and media completely, because critical expression can serve as a safety valve (Hassid 2012), provide necessary policy feedback and a check on local agents (Lorentzen 2013), and allow the prediction of events such as protests and corruption charges (Qin, Strömberg, and Wu 2017). Along this line, Peter Lorentzen (2014) argues that authoritarian states may employ “strategic censorship” to benefit from freer information without taking the risk of being overthrown.
Here “strategic censorship” means the government can (1) selectively target critical information, for instance allowing exposure of local scandals but not criticism of top leaders, and (2) adjust the amount of criticism depending on the level of social tension. The notion of “selective targeting” is easy to understand: it boils down to the reasoning and practice of categorizing content into two types, one to be removed and the other kept. Since not all types of critical information are equally threatening, and some may even be beneficial, autocracies like the Chinese government can suppress the more threatening types while tolerating the less harmful ones. As Lorentzen (2014) reasons, the Chinese state may suppress criticism of top leaders while tolerating disclosure of local scandals. Along this line, through analysis of censored online posts and experimental research, King, Pan, and Roberts (2013) find that China prioritizes censoring expression related to collective action over general criticism of the government. Others, while they may contest which types of content are more heavily censored, confirm the selective targeting argument by showing the state prioritizing certain types of content over others when censoring (e.g. Bamman, O’Connor, and Smith 2012; Fu, Chan, and Chau 2013; Qin, Strömberg, and Wu 2017; Shao 2018; Q. Tai 2014). Tai and Fu (2020), examining censored articles on the popular social media platform WeChat, further find that in addition to “sensitive” or “problematic” topics, state censorship may selectively target articles that contain a higher number of specific key terms, especially those signalling conflicts or tensions. In other words, the state may be driven by the logic of suppressing online focal points, i.e. hot-button issues perceived as crises. In addition to selective targeting, scholars increasingly realize that the “sensitiveness” of online expression as well as the censorship system are both contextually contingent