Right-Wing Social Media and Political Unrest

Daniel Karell, Department of Sociology, Yale University
Andrew M. Linke, Geography Department, The University of Utah, and Peace Research Institute Oslo (PRIO)
Edward Holland, Department of Geosciences, University of Arkansas

VERSION 3

First version: May 3, 2021. This version: July 30, 2021.

______

Daniel Karell and Andrew Linke contributed equally to the study design and analysis. Edward Holland, Daniel Karell, and Andrew Linke contributed equally to the writing. We thank Luis Leon Medina for research assistance and Carl Dahlman, Jeffrey Jensen, Andrew Linder, and Abdul Noury for valuable comments. Corresponding author: Daniel Karell ([email protected]).

ABSTRACT

Does right-wing social media use increase right-wing demonstrations and violence? We create a spatial panel dataset measuring 57,505 right-wing social media posts and 1,765 incidents of right-wing unrest in all U.S. counties from January 2020 through January 2021. Using spatial regression analysis with county and month fixed effects, we find that county-level right-wing social media activity in a given month increased the frequency of subsequent right-wing unrest events. Additional analyses using matching, entropy balancing, and fixed-effects instrumental variable methods that account for observable and unobservable confounders yield consistent findings and offer steps toward causal interpretation. A semi-supervised computational analysis of five million right-wing social media comments from January 2020 through January 2021 provides insight into the mechanisms connecting right-wing social media activity to right-wing contentious events. Our study advances research on social media activity and offline political protest by illuminating countrywide patterns of localized instances of unrest. In addition, it helps develop an emerging research agenda on social media and overt violence.

KEY WORDS

Social media; Right-wing mobilization; Protest; Violence; Spatial analysis


INTRODUCTION

Does right-wing social media use increase right-wing demonstrations and violence? In the wake of events like the 2019 Christchurch mosque shootings and the 2021 United States (U.S.) Capitol riot, we are in the early stages of understanding how—or even if—right-wing online activity leads to offline contentious political activity, such as protests and demonstrations, which sometimes turn violent. On one hand, social media influence political attitudes and behavior (Bond, et al. 2012). For example, some social media users express more polarized views after being exposed to tweets from politicians with opposing ideologies (Bail, et al. 2018), reduce their sectarian language if they are sanctioned online (Siegel and Badaan 2020), and regard ethnic others more negatively after deactivating their accounts (Asimovic, et al. 2021). In addition, there is evidence that right-wing social media (RWSM) can lead to physical hate crimes. Anti-refugee sentiment on Facebook predicts attacks on refugees in Germany (Müller and Schwarz 2020a) and hate speech on Twitter correlates with racially and religiously aggravated crimes in London (Williams et al. 2020). In the U.S., counties with greater Twitter use than others have more hate crimes, and Donald Trump's anti-Muslim tweets predicted subsequent hate crimes (Müller and Schwarz 2020b).

On the other hand, we still have an incomplete picture of how social media activity results in offline political unrest. While there is extensive research linking online engagement to online collective action (e.g., Barberá, et al. 2015; González-Bailón and Wang 2016; see also Tufekci and Freelon 2013), the evidence tying it to offline political unrest is more limited. Some studies of social media and civic and political participation, conceived more broadly, find that social media use has little effect on users' opinions and behavior (Theocharis and Lowe 2016; Foos, et al. 2020; see also Boulianne 2015 and Guess, et al. 2020). Research specifically on social media and offline protest has focused on authoritarian contexts (e.g., Steinert-Threlkeld 2017; Tufekci 2017; Weidmann and Rød 2019; Enikolopov, et al. 2020) or large but rare protests, such as the 2015 Charlie Hebdo protests (Larson, et al. 2019) and the 2021 U.S. Capitol riot (Van Dijcke and Wright 2021) (see also Caren, et al. 2020; Hsiao 2021). Furthermore, with respect to the political right in liberal democracies, it does not appear at first glance that using RWSM produces widespread unrest. Millions of people do the former, while the latter is comparatively rare (Marwick and Clancy 2020).

Previously, analyzing the relationship between social media activity and offline political unrest countrywide over many months has been difficult, if not impossible, due to data limitations. Social media data rarely include location information disaggregated below the state level. If detailed location information is available, the data are sampled from the complete collection of observations using undisclosed methodologies (McCormick, et al. 2017; Kim, et al. 2020) and only describe individuals who voluntarily share their location, which can bias inferences (Malik, et al. 2015; Beauchamp 2017; Flores 2017; Steinert-Threlkeld 2017; Mitts 2019).


These limitations have helped focus the literature on high-profile, singular events with many thousands of participants (e.g., Steinert-Threlkeld 2017; Enikolopov, et al. 2020; Van Dijcke and Wright 2021), rather than smaller incidents of political unrest that can happen in communities countrywide.

We examine whether activity on RWSM results in localized political unrest in the U.S. and address data limitations related to this question by using a previously unpublished spatiotemporal record of 57,505 posts during 2020 and early 2021 on the RWSM platform Parler. Parler, which first came online in 2018, has been popular with the U.S. right wing since early 2020 (Isaac and Browning 2020; Aliapoulios, et al. 2021; Nicas and Alba 2021) and was singled out by the 2021 U.S. Senate report on the U.S. Capitol riot for hosting messages inciting the attack (HSAGAC and CRA 2021). We join the observations of Parler activity with a database of thousands of contentious political events, creating a county-month spatial panel dataset of RWSM activity and political unrest in U.S. counties during 2020 and early 2021 (N = 37,704).

Figure 1A shows that right-wing unrest from January 2020 to January 2021 was relatively rare in the U.S. The majority of counties, especially those in the middle portion of the country, experienced no incidents. However, a fifth of all counties had at least one incident and 90 (3%) had five or more. Moreover, right-wing events were not concentrated in counties containing state capitals. Many confrontations occurred in, for example, north-central Washington (Okanogan County), the Upper Peninsula of Michigan (Schoolcraft County), and southwestern Virginia (Smyth County).

Compared to the conflict and protest events, Parler use was more common and more evenly distributed across the country during 2020 and early 2021 (Figure 1B). Eighty percent of counties had at least one instance of activity (and, we presume, numerous instances of unobserved passive engagement, such as users reading others' posts); 1,228 counties (40%) had at least five instances. Similar to the spatial distribution of right-wing events, we see that, when normalized by population, RWSM activity is not concentrated exclusively in major population centers. For example, we observe relatively high rates of activity in northern California (Humboldt and Trinity counties), western Indiana (Parke County), and northeastern Pennsylvania (Pike County).


Figure 1. Right-wing unrest (Panel A) and Parler activity (Panel B) across the United States from January 2020 through January 2021. Weekly temporal trends of activity and events are displayed in Panel C. They are positively correlated (0.52, p < 0.001).

Figure 1C presents the temporal distribution of Parler activity and right-wing unrest. Both rose slowly through the first half of 2020, after which Parler plateaued (then rose again) while unrest declined slowly. The two weekly time series are positively correlated; the bivariate Spearman's rank correlation for non-normally distributed data is 0.52 (p < 0.001). We additionally examine the temporal correlations of Parler activity and unrest events across up to 10-week lags in either direction from a given week (Appendix A, Figure A1). These results indicate positive and significant correlations forwards and backwards in time within windows of roughly 10 and five weeks, respectively. Some overlap backwards in time is possible because people participating in right-wing events may have used Parler at the time, but the correlation fades after roughly one month.

Drawing on the research showing that social media can influence political attitudes, promote participation in large protests, and lead to hate crimes, our core expectation is that RWSM activity will increase confrontations involving right-wing actors. We test this expectation at the county level. The impact of a person using social media could spread throughout their offline community, such as a county, since users of a social media platform are likely to share their online community's rhetoric and ideologies with friends, colleagues, and neighbors who are non-users. Our hypothesis is:

H1. In U.S. counties, Parler activity will lead to an increase in subsequent right-wing unrest.
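As a concrete reference for the descriptive correlations above (Figure 1C; Appendix A, Figure A1), a minimal R sketch is given below. The object and column names (weekly, parler, unrest) are illustrative stand-ins, not our actual code.

```r
# Hedged sketch: 'weekly' is assumed to hold national weekly totals of Parler
# posts ('parler') and right-wing unrest events ('unrest').
cor.test(weekly$parler, weekly$unrest, method = "spearman")

# Correlations at leads and lags of up to 10 weeks (cf. Appendix A, Figure A1):
# positive k pairs Parler in week t with unrest in week t + k.
lag_cors <- sapply(-10:10, function(k) {
  x <- weekly$parler; y <- weekly$unrest; n <- length(x)
  if (k >= 0) cor(head(x, n - k), tail(y, n - k), method = "spearman")
  else        cor(tail(x, n + k), head(y, n + k), method = "spearman")
})
names(lag_cors) <- -10:10
```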

In testing this hypothesis, and particularly by focusing on countrywide, monthly patterns, our study extends previous research on social media activity and major offline political protests, mostly located in authoritarian contexts, by shedding new light on the link between online activity and localized political confrontations.


In addition, it helps advance an emerging literature on social media and physical violence, particularly violence perpetrated by liberal democracies' political right (e.g., Marwick and Clancy 2020; Müller and Schwarz 2020a; Müller and Schwarz 2020b). Understanding this relationship is especially important during a political moment characterized by increasing tolerance of political violence among those in the American right wing (PRRI 2021). Yet, to be clear, we focus on RWSM and right-wing outcomes because of the characteristics of our data; it is possible that left-wing social media activity results in left-wing contentious events.

RESULTS

Our empirical investigation has three parts. We first examine the relationship between RWSM and unrest using spatial regression methods with spatially and temporally lagged variables and county and month fixed effects (FEs). This approach is straightforward and uses our entire dataset. However, it has functional form and causal inference limitations. To address these, the second part comprises additional analyses using matching, entropy balancing, and instrumental variable approaches. These methods offer causal estimates under different assumptions, thereby adding complementary support for our main results. However, the research designs have their own limitations, such as data loss. The third part turns to explaining the findings of the first two parts: it is a semi-supervised computational analysis of five million Parler comments from January 2020 through January 2021 that sheds light on the mechanisms linking RWSM to right-wing contentious events. Due to space limitations, we present this final part in an appendix and discuss the main findings in the Discussion section.

Spatial regression

Our first model uses ordinary least squares (OLS) to regress county-level unrest event rates (logged and population normalized) on rates of RWSM activity (logged and population normalized) during the previous month in the same county, with no additional parameters (Table 1, Model 1). The results show that the basic hypothesized relationship exists (coefficient = 0.007; p < 0.001), and that the following additions to the model's specification only improve model fit. Our second model adds county, state, and month FEs. The results indicate a positive effect of RWSM use on subsequent right-wing events that is statistically significant at conventional levels (p < 0.05) (Table 1, Model 2).
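For reference, a minimal sketch of the Model 1 and Model 2 logic in R might look as follows; the data frame and column names (panel, rw_rate, parler_rate_lag1, county_fips) are illustrative assumptions rather than our production code.

```r
# Model 1: bivariate OLS of the unrest rate on last month's Parler rate
m1 <- lm(rw_rate ~ parler_rate_lag1, data = panel)

# Model 2: add county, state, and month fixed effects as factors
m2 <- lm(rw_rate ~ parler_rate_lag1 + factor(county_fips) +
           factor(state) + factor(month), data = panel)

# Robust standard errors, as reported in Table 1
lmtest::coeftest(m2, vcov. = sandwich::vcovHC(m2))["parler_rate_lag1", ]
```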


However, the occurrence of unrest can be a function of prior and nearby unrest. Our third model accounts for these possible temporal and spillover influences by including month and spatial lags (both logged and population normalized). We again find a positive effect of RWSM activity on later right-wing events (p < 0.05) (Table 1, Model 3). The results also suggest that unrest events in neighboring counties have a positive effect on counties' unrest. In contrast, counties' antecedent events have no statistically significant effect.

While our results indicate that activity on a RWSM platform can lead to subsequent right-wing unrest, the magnitude of the effect is not exceptionally large. A 10% increase in activity on Parler yields a 0.04% increase in the county-level number of events per 100,000 people. Despite the effect's size, two characteristics of social media and contentious events caution against dismissing its importance. First, there are few barriers to using social media, and the membership of social media platforms can grow quickly and exponentially. Consequently, activity on RWSM platforms could increase dramatically in the near future, especially considering Parler's Spring 2021 revival (Nicas 2021) and the launch of a new social media platform by members of Donald Trump's administration (McGraw, et al. 2021). Second, a single incident of right-wing unrest can be highly consequential. For example, during 2020, several protests by the Proud Boys, an extremist right-wing group, resulted in mêlées, numerous hospitalizations, and even fatalities (e.g., Haas, et al. 2020; Herman, et al. 2020; Kirkpatrick and Feuer 2021). Appendix B offers further discussion of why RWSM's effect on right-wing unrest has important consequences.

A set of robustness checks offers confidence in the main finding that RWSM activity is associated with subsequent right-wing unrest (detailed results in Appendix B, Table B1). First, because right-wing contentious event counts exhibit overdispersion, with many low values and relatively few counties experiencing more than several incidents in a given month, we model this distribution using a different regression functional form. A negative binomial generalized linear model (GLM) similar to the main model (i.e., Table 1, Model 3) gives consistent results. We find that the odds of an event occurring increase by about 60% with each Parler post (per 100,000 people) (coefficient = 0.453, p < 0.001).


                                            Model 1      Model 2      Model 3
(Intercept)                                 0.014***    -0.014***    -0.015***
                                           (0.001)      (0.004)      (0.004)
Prior social media activity                 0.007***     0.003*       0.004*
                                           (0.001)      (0.001)      (0.001)
Prior right-wing events                                              -0.014
                                                                     (0.010)
Neighboring counties' right-wing events                               0.077***
                                                                     (0.012)
County fixed effects                        No           Yes          Yes
State fixed effects                         No           Yes          Yes
Month fixed effects                         No           Yes          Yes
R2                                          0.002        0.148        0.149
Adjusted R2                                 0.002        0.071        0.071
N                                           37,704       37,704       37,704

Table 1. The effect of prior right-wing social media activity on right-wing unrest. Table shows the results of OLS regressions of right-wing ACLED events (logged, population normalized) on Parler posts (logged, population normalized). The antecedent terms capture events from the preceding month (logged, population normalized) and the spatial lag captures all events in neighboring counties (logged, population normalized). Robust standard errors in parentheses. *** p < 0.001; ** p < 0.01; * p < 0.05
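To make the coefficient magnitudes in Table 1 concrete, the arithmetic behind the interpretations reported in the text is a simple worked check, not additional estimation:

```r
# Table 1, Model 3 (log-log specification): implied effect of a 10% increase in
# Parler activity on the county-level unrest rate per 100,000 people.
beta <- 0.004
(exp(beta * log(1.10)) - 1) * 100   # ~0.04%

# Negative binomial robustness check (Appendix B): the reported coefficient of
# 0.453 implies roughly a 57% ("about 60%") increase per additional Parler post
# per 100,000 people.
(exp(0.453) - 1) * 100
```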

Second, we code a binary version of the response variable to identify whether Parler activity has a relationship with any right-wing contentious events. Counties with one or more incidents of right-wing unrest in a given month are assigned the value 1 and other counties are assigned 0. We then use this variable to estimate a basic GLM logistic regression similar to our main model. We also use this variable in our third robustness check: a rare-events (RE) GLM logistic regression designed to estimate response variables with many zeros (King and Zeng 2001; Choirat, et al. 2020), which accounts for the fact that many county-months experienced no right-wing contentious events. The results of the basic and RE logistic regressions show that prior Parler activity is a positive and statistically significant predictor of right-wing unrest (logit: coefficient = 0.412, odds ratio = 1.51, p < 0.001; RE logit: coefficient = 0.366, odds ratio = 1.442, p < 0.001). Finally, because the definition of which counties count as neighbors could influence our results, our fourth robustness check re-estimates our main model with an alternative, second-order spatial weights matrix. The effect of Parler activity remains positive and significant (coefficient = 0.003, p < 0.05).
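A hedged sketch of how these robustness checks can be fit in R is shown below; the column names (rw_count, rw_any, parler_rate_lag1) and the covariate sets are illustrative, and the exact specifications follow Appendix B.

```r
library(MASS)   # glm.nb
library(Zelig)  # rare-events logit (Choirat, et al. 2020)

# Negative binomial GLM for the overdispersed monthly event counts
# (county FEs are excluded, as they would perfectly predict all-zero counties)
nb  <- glm.nb(rw_count ~ parler_rate_lag1 + factor(month), data = panel)

# Basic logistic regression on the binary "any right-wing event" outcome
lgt <- glm(rw_any ~ parler_rate_lag1 + factor(month),
           family = binomial(), data = panel)

# Rare-events logit (King and Zeng 2001)
rel <- zelig(rw_any ~ parler_rate_lag1 + factor(month),
             model = "relogit", data = panel)
```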


The results of our main and robustness check models offer support for our hypothesis. Yet, we might be interpreting the empirical evidence incorrectly. To gain confidence in the theoretical model guiding our hypothesis, we conduct two further checks. First, we examine whether the posited causal order is correct—perhaps right-wing unrest results in more activity on Parler? To examine this question of reverse causality, we regress Parler activity on prior right-wing unrest rates, with month and spatial lags of Parler activity and county, state, and month fixed effects (i.e., Model 3 in Table 1 but with the variables reversed). The results indicate that right-wing unrest is not associated with subsequent Parler activity (coefficient = 0.011, standard error = 0.029) (detailed results in Appendix C, Table C1).

Second, is the posited relationship sound? Our conceptualization of this relationship assumes that RWSM activity affects right-wing unrest specifically. The primary challenge to this assumption is that RWSM could in fact increase various kinds of unrest. For example, perhaps greater engagement with Parler strengthens users' interest in a broad range of social and political debates, resulting in a tendency to protest multiple issues, which would not be exclusive to participating in right-wing contentious events. To evaluate this possibility, we regress non-right-wing incidents of unrest on prior Parler activity using a specification otherwise identical to the main model (i.e., Table 1, Model 3). The results indicate that prior RWSM use is not associated with these events (coefficient = -0.003, standard error = 0.003), thereby providing support for the hypothesized relationship between RWSM and right-wing unrest (detailed results in Appendix D, Table D1).

While these last two checks increase confidence in our interpretation of the results, challenges remain. The findings could stem from confounders not accounted for by the models' temporal ordering of variables or by the county, state, and month FEs. In the second part of our study, we address the possibility of such confounding relationships with additional analyses drawing on matching, entropy balancing, and FE instrumental variable regression methods.

Additional analyses

We begin addressing possible missed confounding relationships with a quasi-experimental matching research design. Matching addresses confounding in observational data by approximating randomized assignment into treatment and control conditions. It does so, in brief, by making the treated and control groups alike with regards to observable covariates that could affect selection into the treatment condition (Stuart 2010). Applied to our study, matching allows us to compare counties with prior Parler activity to counties that are similar in terms of characteristics that could lead to Parler use, such as the level of support for Donald Trump and the rate of preceding right-wing political activity, but which had no Parler activity.


Mirroring a randomized experiment in this way has the additional benefit of preventing certain kinds of (potentially very distinctive) counties from interacting with Parler activity to drive the results. Moreover, non-parametrically matching the data, or “preprocessing”, before fitting a regression model with the matching weights and county-level covariates, as we do, reduces the model dependence of our estimates (Ho, et al. 2007).

We first use a stratum approach based on exact covariate value matches. This yields satisfactory balance: after matching, all standardized mean differences for the covariates are below 0.1 (detailed results in Appendix E, Figure E1). Both treated and control units were discarded, so we interpret the effect as conditional on the post-matching sample. After matching, we include the matching weights in linear regression models regressing right-wing unrest on Parler activity during the previous month. The baseline specification of the regression does not include any covariates. The full specification includes a range of covariates capturing the temporal and spatial lags from the first part of the study and counties' demographic characteristics, socio-economic conditions, and prevailing political ideology. In line with our main model, the results, shown in Figure 2, suggest that prior RWSM activity leads to an increase in right-wing unrest (baseline: coefficient = 0.018, p < 0.001; full: coefficient = 0.014, p < 0.001) (detailed results in Appendix E, Table E1).

In addition to discarding observations, exact matching is not ideal for continuous variables. Consequently, we additionally use a distance approach employing 1:1 nearest neighbor matching with replacement based on propensity scores estimated with a generalized linear model. This technique allows us to directly account for many plausible reasons why counties might have Parler activity, including one of the most important, the prevailing pre-treatment political ideology (Isaac and Browning 2020; see also Foos, et al. 2020; Guess, et al. 2020), as well as previous right-wing unrest, demographic characteristics, and socio-economic conditions. We again achieve satisfactory balance; all standardized mean differences for the covariates are below 0.1 and the distributional balance of distances is comparable across the treated and control groups (detailed results in Appendix E, Figure E2 and Figure E3). No treated units are discarded. When incorporating the matching weights into the baseline and fully specified models, we again obtain results indicating that prior Parler activity increases right-wing unrest (baseline: coefficient = 0.015, p < 0.001; full: coefficient = 0.013, p < 0.001) (Figure 2; detailed results in Appendix E, Table E1).
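For illustration, the stratum (exact) matching step and the subsequent weighted regression can be sketched in R as follows; the variable names are illustrative placeholders for the covariates described in the Materials and Methods section.

```r
library(MatchIt)

# Exact matching on prior unrest, neighbors' prior unrest, state, and month
m_exact <- matchit(parler_any ~ prior_unrest + nbr_unrest + state + month,
                   data = panel, method = "exact")
summary(m_exact)   # standardized mean differences before/after matching (Appendix E)

# Weighted outcome regression on the matched sample (baseline specification)
md  <- match.data(m_exact)
fit <- lm(rw_rate ~ parler_any, data = md, weights = weights)
lmtest::coeftest(fit, vcov. = sandwich::vcovHC(fit))   # robust standard errors
```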


Figure 2. The effect of prior right-wing social media activity on right-wing unrest, estimated after preprocessing the data using matching approaches and entropy balancing. The baseline models regress right-wing unrest on preceding Parler activity without covariates. The fully specified models include the temporal and spatial lags and a range of covariates. Bars indicate the 95% confidence intervals based on robust standard errors for stratum matching and entropy balancing, and cluster-robust standard errors for distance matching.

The matching implementation (Ho, et al. 2011) requires the treatment to be binary. To estimate models with a continuous encoding of prior Parler activity, as in the first part of our study, we use a third approach that weights the observations: entropy balancing (Hainmueller 2012). After entropy balancing, the Pearson's correlations between the treatment and covariates are all at or near zero (detailed results in Appendix E, Figure E4), indicating that the weights sufficiently reduce the relationship between treatment and selection covariates (which are the same as those used in the distance matching). No units are discarded, so we interpret the effect of prior RWSM activity as an average treatment effect. We include the weights in the baseline and fully specified models and find that a 10% increase in activity on Parler results in a 0.04% increase in the county-level number of events per 100,000 people (baseline: coefficient = 0.004, p < 0.05; full: coefficient = 0.004, p < 0.01) (Figure 2; detailed results in Appendix E, Table E1). This is the same effect size as estimated in our main model (Table 1, Model 3).

With this matching and entropy balancing analysis, we directly account for plausible observed causes of counties' Parler activity, including the ones we consider likely: counties' prevailing political ideology before Parler's launch, rates of previous right-wing unrest, and social and economic characteristics, such as residents' race, age, education, and employment. Matching and weighting on these variables allows us to estimate a causal effect. However, it is possible that an unobserved reason for counties' Parler activity still remains (although such a reason is not obvious).


To account for unobserved confounding, we conduct a second additional analysis using a fixed-effects instrumental variable (FE-IV) approach (Laidley and Conley 2018). The instrument in the FE-IV analysis is the maximum broadband download speed (MBDS) available in each county, recorded biannually by the Federal Communications Commission (FCC). The instrument's validity depends on MBDS being associated with Parler activity. It also depends on MBDS satisfying the exclusion restriction: counties' MBDS should have no direct effect on right-wing unrest, nor should it influence unrest through any relationship other than Parler activity.

We believe it is safe to assume that broadband providers' offerings do not directly cause people to participate in right-wing protest, but it is possible that MBDS relates to unrest in some way other than Parler use. The primary way this could happen is by increasing activity on other social media platforms, such as Twitter and Facebook. While we think faster MBDS does result in more activity on other social media platforms, there is no evidence of general social media use leading to localized right-wing political unrest. For example, when Facebook temporarily adjusted its news feed algorithm to depress the visibility of right-wing news sources immediately after the U.S. election on November 3, 2020 (Roose, et al. 2020), the rate of right-wing unrest remained unchanged (it was consistently about three events per week from the start of October to the week before Christmas; see Figure 1C), suggesting no relationship between Facebook content and right-wing unrest. In fact, there is evidence suggesting that the opposite is true: Twitter activity lowered the vote share for Donald Trump in the 2016 presidential election (Fujiwara, et al. 2020). More broadly, the volume and diversity of content and users on mainstream platforms—not to mention these platforms' overall liberal slant (e.g., Fujiwara, et al. 2020)—militates against a meaningful relationship with countrywide right-wing activities. Moreover, even if a relationship between mainstream platforms and right-wing unrest does exist in the U.S., aggregate usage rates of these mature platforms are stable. For example, Facebook reported negligible changes in the number of daily users in the U.S. and Canada across quarters in 2020 (Iqbal 2021). As a result, any relationship between these social media platforms and right-wing events is controlled for by the FE-IV models' county and time FEs.

The FCC's MBDS records are biannual, so we create a new dataset for this analysis. We aggregate the monthly county-level observations from the main analysis into two sequential periods covering 2020, resulting in 6,280 county-period observations. Then, we match the observations from the first period to the FCC records from December 2019, and those from the second period to the June 2020 records.


With this dataset, we implement the FE-IV analysis using two pairs of models. The first is a baseline bivariate model pair: an OLS and a two-stage least squares (2SLS) model, neither with FEs nor temporal or spatial lags. The second pair, the main model set, also consists of an OLS and a 2SLS model, each with FEs and temporal and first-order spatial lags. We first find that MBDS is positively associated with Parler activity at conventional significance levels (p < 0.001) in both the baseline and main 2SLS first-stage models. The instrument's F statistics—228 in the baseline model, 13.35 in the main model—exceed the threshold of 10, indicating sufficient strength (Laidley and Conley 2018). Then, using the exogenous variation of Parler activity, and assuming the exclusion restriction holds, we estimate a causal effect of RWSM on right-wing unrest, shown in Figure 3 (complete results in Appendix F, Table F1). The baseline 2SLS model indicates that a 10% increase in Parler activity leads to a 0.6% increase in subsequent right-wing unrest per 100,000 people (coefficient = 0.067, p < 0.01). The result of the main 2SLS model is similar: a 10% increase in Parler activity increases subsequent unrest by about 2% (coefficient = 0.240, p < 0.001). We also see in Figure 3 that the 2SLS results are consistent with both non-instrumented OLS models. Furthermore, given the strong relationship detected across our models, it is little surprise that these results are similar to the main results and the other additional analyses.

Figure 3. The effect of prior right-wing social media activity on right-wing unrest, estimated in the FE-IV analysis. The baseline model set regresses right-wing unrest on preceding Parler activity without fixed effects or temporal or spatial lags. The main model set includes fixed effects and temporal and spatial lags. Each model set includes both an OLS and 2SLS model. Bars indicate the 95% confidence intervals based on robust standard errors.


DISCUSSION

Does right-wing social media use translate into right-wing demonstrations and violence? We find evidence that it does. Our conclusion echoes recent studies showing that social media influence political attitudes, help mobilize large political protests, and increase rates of hate crimes. While this literature, considered together, has suggested a link between RWSM and unrest, previous research has focused on authoritarian contexts and major yet rare demonstrations, and not specifically on the political right in liberal democracies, such as the United States. Moreover, an absence of sufficient data to test this relationship has hindered analyses that are countrywide and fine-grained in temporal and spatial resolution. In contrast, with our dataset containing precise location and date information on RWSM activity and right-wing contentious events, we obtain results indicating that RWSM led to subsequent political unrest in the U.S. during 2020 and early 2021. The combination of a spatial panel regression analysis, analyses using matching and entropy balancing, a FE-IV analysis, several robustness checks, and a placebo test instills confidence in our conclusions.

Still, our study is based on observational data. It provides the usual advantages of observational research, along with the conventional limitations: it examines patterns over a broad spatiotemporal scope, thereby complementing qualitative and experimental research (e.g., Bursztyn, et al. 2020; Jasser, et al. 2021). For example, Bursztyn, et al. (2020) find experimentally that perceiving Donald Trump as locally popular makes xenophobic study participants recruited from one U.S. city more willing to act xenophobically, and our study offers evidence that this effect may have been unfolding across the country during 2020 through exposure to RWSM.

The findings from the first two parts of our study raise the question of why RWSM activity results in localized right-wing unrest throughout the U.S. To provide an initial answer, our study has a third part: a semi-supervised computational analysis of five million Parler comments from January 2020 through January 2021, as well as a sub-group analysis of our main spatial panel dataset. Both are presented in Appendix F. The content analysis does not uncover evidence of Parler users coordinating events, such as protests, suggesting that while RWSM could be used to organize political unrest (e.g., Larson, et al. 2019; Enikolopov, et al. 2020; Müller and Schwarz 2020a; Hsiao 2021), this may not fully explain the positive relationship between right-wing online activity and offline confrontations. Instead, we find evidence suggesting that this relationship is driven by users' shifting perceptions of norms, which also corresponds to the experimental results of Bursztyn, et al. (2020).


Specifically, Parler—and potentially other RWSM—contain a significant amount of language embracing new members, and each time existing users see this kind of content, they gain confidence that whichever attitudes and behaviors are the norm on Parler are in fact widely held and, therefore, likely to be acceptable not only online but also offline.

These initial results point toward a way that social media influences political unrest beyond the well-documented coordination mechanism. Namely, as others have found experimentally (e.g., Foos, et al. 2020; Guess, et al. 2020), people's engagement with RWSM does not introduce them to new ideas or alternative political beliefs; instead, being active on RWSM can transform their understanding of the world around them by shifting their sense of what counts as accepted speech and behavior. Of course, there is nothing particularly “right-wing” about social media affecting perceptions of norms and influencing people's understanding of the world. Instead, the nature of the Parler platform—and the valuable geographic data capturing its use—restrict our focus to right-wing unrest and its underlying mechanisms. New sources of reliably measured data and further work are required to precisely understand the contexts across the political spectrum in which social media affect contentious political action and violence, which specific characteristics of social media shape these outcomes (Cinelli, et al. 2021), and how the effect of social media engagement on unrest can operate differently from that of traditional media (e.g., DellaVigna, et al. 2014; Yanagizawa-Drott 2014; Adena, et al. 2015).

MATERIALS AND METHODS

Right-wing social media activity

Our observations of RWSM activity are from the Parler platform, an online space for the U.S. right-wing community. Parler was founded in the fall of 2018 and became popular in 2020; by the start of 2021, it had approximately 10 million users (Aliapoulios, et al. 2021). Despite Parler's founding chief executive describing the platform as a “neutral town square” (Nicas and Alba 2021), Parler is all but exclusively used by the U.S. right wing. Some well-known right-wing figures, such as Dan Bongino and Rudolph Giuliani, endorsed the platform (Isaac and Browning 2020; Aliapoulios, et al. 2021), and many of the rioters at the U.S. Capitol on January 6, 2021 were using Parler at the time (MacGillis 2021).

We draw on a database of Parler users sharing, or uploading, self-recorded videos in the U.S. during 2020 and early 2021 (Cameron 2021; Van Dijcke and Wright 2021). Each sharing instance (N = 57,505) is associated with metadata, including date, longitude, and latitude.


We mapped the location of these activities, then used a zonal statistic to calculate the total activity within 3,142 county polygons per month. This constituted our county-month unit of analysis (N = 37,704) for the main analysis.
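A hedged sketch of this aggregation step using the sf package is shown below; the file path, column names, and the log-per-100,000 normalization used later are illustrative assumptions rather than a description of our exact pipeline.

```r
library(sf)
library(dplyr)

counties <- st_read("us_counties_2020.shp")   # 3,142 county polygons (illustrative path)

# 'parler_df' is assumed to hold one row per post with lon, lat, and date columns
posts <- st_as_sf(parler_df, coords = c("lon", "lat"), crs = 4326) %>%
  st_transform(st_crs(counties))

# Point-in-polygon join, then count posts per county-month (the "zonal statistic")
parler_counts <- st_join(posts, counties["GEOID"]) %>%
  st_drop_geometry() %>%
  count(GEOID, month = format(date, "%Y-%m"), name = "parler_posts")

# ACLED right-wing events are aggregated the same way and merged on county and
# month; rates are then normalized per 100,000 residents and logged, e.g.:
# panel$parler_rate <- log(1 + panel$parler_posts / panel$pop * 100000)
```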

Right-wing unrest

Our observations of political unrest come from the Armed Conflict Location and Event Data project (ACLED) georeferenced database of contentious political events and violence in the U.S. from January 2020 through January 2021 (Raleigh, et al. 2010). In this database, an event is any congregation of three or more people demonstrating against entities such as government institutions, policies, and protected classes of people. Violence can accompany demonstrations, but not necessarily. Instances of political violence consist of any use of force by a group with a political purpose or motivation. Each event and incident of violence is recorded from public media sources, then associated with a location and date, as well as the actors involved. We used ACLED's descriptions of involved actors to identify which events involved right-wing actors in the U.S. context. We provide details of this classification as well as examples of right-wing unrest events in Appendix A. Using our classification, ACLED recorded 1,765 right-wing unrest events in the United States between the beginning of January 2020 and the end of January 2021. We designated the remaining events (N = 22,375) as “non-right-wing” unrest; we use these in a placebo analysis. As with the Parler posts, we mapped the ACLED event locations, aggregated the number of events within counties per month, and merged these totals with the Parler data to create a spatial panel dataset. Table A1 in Appendix A reports descriptive statistics of the dataset.

Spatial panel dataset summary

The dataset additionally contains a variable for county-level total population, obtained from the 2019 American Community Survey (ACS) administered by the U.S. Census Bureau, which we use to normalize the counts of events (per 100,000). For our main analysis, presented in the “Spatial regression” section, we did not add other county-level attributes, such as median household income and unemployment rates, because these characteristics are unlikely to have varied significantly during 2020 (and the ACS reports data annually). The potential confounding effects of such attributes are accounted for by county FEs in our models. For the additional analyses, we did merge this main spatial panel dataset with variables obtained from the ACS and the MIT Election Data and Science Lab (2018). We explain these changes to the dataset below.


We use counties as units of analysis even though the mechanisms linking social media use and right-wing unrest operate at an individual level. We do this for two reasons: the way the influence of social media engagement spreads, and the fact that counties (or their equivalents) are a meaningful administrative unit of governance in the U.S. Regarding the first reason, people can be exposed to social media content indirectly, through word of mouth, even if they do not participate in an online community. For example, they can hear the online community's rhetoric and ideas from friends, neighbors, and coworkers who do engage with the community. Thus, individuals in a county where Parler use is common could be affected by contextual exposure. Parler users are also active in passive ways; they might read and view content even if they infrequently post or share (the kind of activity our data record). The spatially aggregated count of sharing activity proxies this kind of information consumption.

Regarding the second reason for analyzing counties, county administrative boundaries have practical everyday meaning for most U.S. residents. That is, potential participants in political events likely see the county as a spatial unit that structures their lives: most people not only experience county laws, regulations, and taxes, but also vote in county elections and work and socialize in their counties (especially during officials' “stay close to home” messaging during the Covid-19 pandemic). Additionally, in many parts of the U.S., counties are linked with other forms of administration, like school districts. Therefore, it is reasonable to expect that counties would roughly bound an important portion of individuals' political sentiment, attention, and action.

Spatial regression methods

Our main model (Table 1, Model 3) uses an OLS regression to estimate the logged and population normalized number of right-wing unrest events $y$ in county $i$ and month $t$ as a function of:

$$y_{it} = \beta_0 + \beta_1 x_{i,t-1} + \beta_2 y_{i,t-1} + \rho y^{W}_{it} + \lambda_i + \sigma_o + \tau_t + \varepsilon_{it}$$

with intercept $\beta_0$ and where $\beta_1$ describes the influence of prior Parler activity ($x_{i,t-1}$). To account for the possibility that right-wing events encourage subsequent events, the model controls for these events in the previous month ($\beta_2 y_{i,t-1}$).

A post-estimation Moran's I statistic (Griffith 1987) revealed spatial autocorrelation in the model residuals, which violates the assumption that our variables are independently and identically distributed among counties.
We then compared (a) standard error correction and (b) spatial autoregressive (SAR) remedies for this violation using a Lagrange multiplier test. The Lagrange test statistic suggested the SAR parameter $\rho y^{W}_{it}$ as the optimal remedy; $y^{W}_{it}$ measures the average number of right-wing events that occurred within the first-order neighboring weights matrix $W$. County FEs $\lambda_i$ provide within-unit panel effects that control for unobserved differences between counties, including typical county attributes, such as median household income, as well as characteristics relevant to our study, such as residents' consumption of media (e.g., Fox News) and usage rates of other social media platforms (e.g., Facebook). State FEs $\sigma_o$ further account for unobserved political, economic, and demographic differences within larger territories. The parameter $\tau_t$ controls for unobserved effects across months, such as seasonal weather and waves of the Covid-19 pandemic. Stochastic error is represented as $\varepsilon_{it}$. A Moran's I test of clustering in the residuals of the preferred OLS model with the SAR parameter confirmed that there is no residual spatial autocorrelation. Because of the temporal lag structure, we drop the first month in the dataset (January 2020).

To assess the robustness of the main results, we additionally estimate a series of alternative models, explained in the Results section and presented in detail in Appendix B. First, we estimate a GLM negative binomial model. Second, we code a dichotomous version of the response variable, then use this variable to estimate a GLM logistic regression similar to our main model. Third, we fit an RE logistic regression designed to estimate response variables with many zeros; this estimate also uses the dichotomous coding of the right-wing contentious event response variable. The three GLMs exclude county FEs, which would result in perfect prediction (and thus exclusion) of observations with zero events in all months of the analysis time frame (Beck and Katz 2001; Buhaug 2010). Our fourth and final robustness check reproduces our main model while including a second-order neighboring weights matrix.

In the first of the two checks of our interpretation, we regress Parler activity (logged and population normalized) on prior county-level right-wing unrest rates (logged and population normalized), with month and spatial lags of Parler activity (both logged and population normalized), as well as county, state, and month FEs. The goal is to directly test for reverse causality (i.e., unrest events lead to Parler usage). In the second check, we conduct a placebo outcome test. This test uses the main model's specification with a different response variable: instead of estimating the effect of Parler activity on right-wing unrest, the response variable captures instances of non-right-wing unrest, as discussed in the earlier “Right-wing unrest” section. For this test, we re-created the prior and neighboring-county conflict variables using these observations of non-right-wing contentious events and included them in the model (see Appendix C).
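For concreteness, the spatial components of this specification (the first-order weights matrix, the spatial lag, and the residual diagnostics) can be sketched in R with the sf and spdep packages as below; object and column names are illustrative, and the diagnostics are shown on a single monthly cross-section where the weights matrix and the observations align one-to-one.

```r
library(sf)
library(spdep)
library(dplyr)

# First-order contiguity neighbors and row-standardized weights
nb <- poly2nb(counties, queen = TRUE)
lw <- nb2listw(nb, style = "W", zero.policy = TRUE)

# Spatial lag of unrest computed month by month (assumes every month contains
# all counties in the same order as 'counties')
panel <- panel %>%
  group_by(month) %>%
  mutate(rw_rate_splag = lag.listw(lw, rw_rate, zero.policy = TRUE)) %>%
  ungroup()

# Main specification (Table 1, Model 3)
m3 <- lm(rw_rate ~ parler_rate_lag1 + rw_rate_lag1 + rw_rate_splag +
           factor(county_fips) + factor(state) + factor(month), data = panel)

# Residual diagnostics illustrated on one monthly cross-section
cs   <- filter(panel, month == "2020-06")
m_cs <- lm(rw_rate ~ parler_rate_lag1 + rw_rate_lag1, data = cs)
moran.test(residuals(m_cs), lw, zero.policy = TRUE)   # Moran's I
lm.LMtests(m_cs, lw, test = "all")                    # Lagrange multiplier tests
```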

Methods used in the additional analyses

The first additional analysis uses matching and weighting to preprocess the data before parametrically estimating causal effects (Ho, et al. 2007; Ho, et al. 2011). We begin with two matching approaches, one using a stratum technique and the other using a distance technique. Our implementation of the stratum technique employs exact matching based on counties' values of prior right-wing unrest (logged and population normalized), prior right-wing unrest in neighboring counties (logged and population normalized), U.S. state, and the month the outcome is observed. The exact matching discards both treated and control units, so we interpret the subsequent estimand as an average treatment effect in the treated, conditional on the sample. (For treated units, 5,186 out of 6,261 are retained; for control units, 22,665 out of 30,939 are retained.) We implement the matching using MatchIt version 4.2.0 (Ho, et al. 2011). We then use the matching weights in a linear regression that regresses right-wing unrest (logged and population normalized) on Parler activity during the previous month (binary coding; 1 = at least one Parler post). The baseline specification of the regression did not include any covariates as controls. The full specification controlled for prior right-wing unrest (logged and population normalized), neighboring counties' right-wing unrest (logged and population normalized), the percent of residents identifying as white, median household income, the unemployment rate for residents aged 20 to 64, the proportion of residents between 15 and 44 years old, the proportion of residents between 18 and 24 years old with a high school diploma, the vote margin for Donald Trump in the 2016 election, and a binary variable indicating whether the county contains the state capital. The demographic and socio-economic information comes from the 2019 ACS and the vote data from the MIT Election Data and Science Lab (2018). When we include variables for the U.S. state and month of observation in the post-matching regression, we obtain the same results as when they are excluded. We report robust standard errors. We do not interpret the coefficients of the covariates because the estimates do not correspond to causal effects and they may be significantly confounded (Greifer 2021a).

Our second matching approach is distance matching using 1:1 nearest neighbor matching with replacement based on propensity scores estimated with a generalized linear model. The matching covariates are: prior right-wing unrest (logged and population normalized), prior right-wing unrest in neighboring counties (logged and population normalized), U.S. state, the month the outcome is observed, the percent of residents identifying as white, median household income, the unemployment rate for residents aged 20 to 64, the proportion of residents between 15 and 44 years old, the proportion of residents between 18 and 24 years old with a high school diploma, and the vote margin for Donald Trump in the 2016 election, which we use to measure counties' pre-treatment political ideology, which journalistic reporting on Parler has identified as the primary reason for using the platform (e.g., Isaac and Browning 2020; Nicas and Alba 2021). After matching, all treated units and 4,095 control units are kept. We again implement the matching using MatchIt version 4.2.0 (Ho, et al. 2011). After the distance matching, we again use the matching weights in a linear regression that regresses right-wing unrest (logged and population normalized) on Parler activity during the previous month (binary coding; 1 = at least one Parler post). The baseline and full specifications are the same as those used after the stratum matching. Also as before, when variables for the U.S. state and month of observation are included in the post-matching regression, we obtain the same results. We report cluster-robust standard errors with pair membership as the cluster variable. We again do not interpret the coefficients of the covariates.

MatchIt requires the treatment to be binary. To estimate models using a continuous measurement of Parler activity, we weight the data using entropy balancing (Hainmueller 2012). The weighting variables are: prior right-wing unrest (logged and population normalized), prior right-wing unrest in neighboring counties (logged and population normalized), U.S. state, the month the outcome is observed, the percent of residents identifying as white, median household income, the unemployment rate for residents aged 20 to 64, the proportion of residents between 15 and 44 years old, the proportion of residents between 18 and 24 years old with a high school diploma, the vote margin for Donald Trump in the 2016 election, and a binary variable indicating whether the county contains the state capital. We implement the weighting using WeightIt version 0.12.0 (Greifer 2021b). We then use the covariate weights in a linear regression that regresses right-wing unrest (logged and population normalized) on Parler activity during the previous month (logged and population normalized). As in the preceding analyses based on matching, we fit baseline and fully specified models. The specifications are the same as before, except the fully specified model that we report includes variables for counties' state and the month the outcome is observed. We do not interpret the coefficients of the control variables because they are confounded. We report robust standard errors. We also calculate bootstrapped standard errors with 999 replicates; the overall statistical significance does not change, and the 95% confidence interval of the coefficient estimate for prior Parler activity is 0.001 to 0.008 for both the baseline and fully specified models.
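A hedged sketch of the distance-matching and entropy-balancing steps, using the MatchIt and WeightIt packages named above, is shown below; the covariate names are illustrative stand-ins for the variables listed in the text.

```r
library(MatchIt)
library(WeightIt)

# 1:1 nearest-neighbor matching with replacement on a GLM propensity score
m_dist <- matchit(parler_any ~ prior_unrest + nbr_unrest + state + month +
                    pct_white + med_income + unemp_20_64 + pct_age_15_44 +
                    pct_hs_18_24 + trump_margin_2016,
                  data = panel, method = "nearest", distance = "glm",
                  replace = TRUE, ratio = 1)
summary(m_dist)   # balance statistics summarized in Appendix E

# Entropy balancing for the continuous treatment (logged, normalized Parler rate)
w_eb <- weightit(parler_rate_lag1 ~ prior_unrest + nbr_unrest + state + month +
                   pct_white + med_income + unemp_20_64 + pct_age_15_44 +
                   pct_hs_18_24 + trump_margin_2016 + state_capital,
                 data = panel, method = "ebal")

# Weighted outcome regressions (baseline specifications shown)
fit_dist <- lm(rw_rate ~ parler_any, data = match.data(m_dist), weights = weights)
fit_eb   <- lm(rw_rate ~ parler_rate_lag1, data = panel, weights = w_eb$weights)
```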


After the matching and weighting analysis, we conduct the FE-IV analysis. The instrument, MBDS, is recorded biannually by the FCC, which surveys all broadband providers in each census block. We sifted through the 97,363,598 entries from December 2019 and June 2020—the two records applicable to our study—to identify the maximum MBDS in each county at each time. Because the FCC records are biannual, we create a supplementary dataset for the FE-IV analysis by aggregating the monthly county-level observations used in the main analysis into two periods (N = 6,280; three counties were missing MBDS information). The first period runs from January through June 2020, with January through March forming the lagged sub-period for the outcome sub-period of April through June. The second period runs from June through November 2020, with June through August being the lagged sub-period for the outcome sub-period of September through November. If December 2020 is added to the latter outcome sub-period, the main findings remain unchanged; we omitted it from the primary supplementary analysis to have balanced three-month sub-periods. The main findings also remain unchanged if we use maximum upload speed. MBDS is measured in megabits per second; for the analysis, it is rescaled between 0 and 1.

We implement the FE-IV analysis by estimating two pairs of models. The first is a baseline bivariate model set comprising both an OLS and a 2SLS model. Both of these models regress right-wing unrest in the outcome sub-period (logged and population normalized) on antecedent Parler activity (logged and population normalized) while omitting FEs and temporal and spatial lags. These models (and results) are analogous to Model 1 in Table 1. In the first stage of the 2SLS model, Parler activity is regressed on MBDS. The second model pair is the main model set, comparable to Model 3 in Table 1. It consists of the same OLS and 2SLS models as the first pair, except each now includes FEs and period- and county-level lags of right-wing unrest (both logged and population normalized). The models do not include state-level FEs because they were collinear with the county-level FEs in the first stage of the 2SLS models.
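The text does not name the software used for the 2SLS estimation; one hedged way to set up the baseline and main FE-IV model pairs is with the fixest package, as sketched below (variable names such as period_panel and mbds are illustrative).

```r
library(fixest)

# Baseline 2SLS: unrest regressed on Parler activity, instrumented by the
# maximum broadband download speed (MBDS), without FEs or lags
iv_base <- feols(rw_rate ~ 1 | parler_rate ~ mbds, data = period_panel)

# Main 2SLS: adds county and period fixed effects plus the temporal and spatial
# lags of right-wing unrest
iv_main <- feols(rw_rate ~ rw_rate_lag + rw_rate_splag | county_fips + period |
                   parler_rate ~ mbds, data = period_panel)

summary(iv_main, stage = 1)   # first stage: MBDS -> Parler activity
fitstat(iv_main, "ivf")       # first-stage F statistic (compare to the threshold of 10)
summary(iv_main)              # second stage: instrumented Parler activity -> unrest
```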


REFERENCES

Adena, Maja, Ruben Enikolopov, Maria Petrova, Veronica Santarosa, and Ekaterina Zhuravskaya. 2015. “Radio and the Rise of Nazis in Prewar Germany”, Quarterly Journal of Economics 130(4): 1885- 1939.

Aliapoulios, Max, Emmi Bevensee, Jeremy Blackburn, Barry Bradlyn, Emiliano De Cristofaro, Gianluca Stringhini, and Savvas Zannettou. 2021. “An Early Look at the Parler Online Social Network”, arXiv: 2101.03820v3.

Asimovic, Nejla, Jonathan Nagler, Richard Bonneau, and Joshua Tucker. 2021. “Testing the Effects of Facebook Usage in an Ethnically Polarized Setting”, Proceedings of the National Academy of Sciences 118(25): e2022819118.

Bail, Christopher A., Lisa P. Argyle, Taylor W. Brown, John P. Bumpus, Haohan Chen, M. B. Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, and Alexander Volfovsky. 2018. “Exposure to Opposing Views on Social Media Can Increase Political Polarization”, Proceedings of the National Academy of Sciences 115(37): 9216-9221.

Barberá, Pablo, Ning Wang, Richard Bonneau, John T. Jost, Jonathan Nagler, Joshua Tucker, and Sandra González-Bailón. 2015. “The Critical Periphery in the Growth of Social Protests”, PLOS One 10(11): e0143611.

Beck, Nathaniel, and Jonathan N. Katz. 2001. “Throwing Out the Baby With the Bath Water: A Comment on Green, Kim, and Yoon”, International Organization, 55(2): 487-495.

Beauchamp, Nicholas. 2017. “Predicting and Interpolating State-Level Polls Using Twitter Textual Data”, American Journal of Political Science 61(2): 490-503.

Bond, Robert M., Christopher J. Fariss, Jason J. Jones, Adam D. I. Kramer, Cameron Marlow, Jaime E. Settle, and James H. Fowler. 2012. “A 61-Million-Person Experiment in Social Influence and Political Mobilization”, Nature 489: 295-298.

Boulianne, Shelley. 2015. “Social Media Use and Participation: A Meta-Analysis of Current Research”, Information, Communication & Society 18(5): 524-538.

Buhaug, Halvard. 2010. “Climate Not to Blame for African Civil Wars”, Proceedings of the National Academy of Sciences 107(38): 16477-16482.

Bursztyn, Leonardo, Georgy Egorov, and Stefano Fiorin. 2020. “From Extreme to Mainstream: The Erosion of Social Norms", American Economic Review, 110(11): 3522-3548.

Cameron, Dell. 2021. “Every Deleted Parler Post, Many With Users’ Location Data, Has Been Archived”, Gizmodo. https://gizmodo.com/every-deleted-parler-post-many-with-users-location-dat-1846032466.

Caren, Neal, Kenneth T. Andrews, and Todd Lu. 2020. “Contemporary Social Movements in a Hybrid Media Environment”, Annual Review of Sociology 46: 443-465.


Choirat, Christine, James Honaker, Kosuke Imai, Gary King, and Olivia Lau. 2020. Zelig: Everyone’s Statistical Software, version 5.1.7. Comprehensive R Archive Network. https://cran.r-project.org/web/packages/Zelig/index.html.

Cinelli, Matteo, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini. 2021. “The Echo Chamber Effect on Social Media”, Proceedings of the National Academy of Sciences 118(9): e2023301118.

DellaVigna, Stefano, Ruben Enikolopov, Vera Mironova, Maria Petrova, and Ekaterina Zhuravskaya. 2014. “Cross-Border Media and Nationalism: Evidence from Serbian Radio in Croatia”, American Economic Journal: Applied Economics 6(3): 103-132.

Eggers, Andrew C., Guadalupe Tuñón, and Allan Dafoe. 2021. “Placebo Tests for Causal Inference”. Working Paper available at https://pelg.ucsd.edu/Eggers_2021.pdf.

Enikolopov, Ruben, Alexey Makarin and Maria Petrova. 2020. “Social Media and Protest Participation: Evidence From Russia”, Econometrica 88(4):1479-1514.

Flores, René D. 2017. “Do Anti-Immigrant Laws Shape Public Sentiment? A Study of Arizona’s SB 1070 Using Twitter Data”, American Journal of Sociology 123(2): 333-384.

Foos, Florian, Lyubomir Kostadinov, Nikolay Marinov, and Frank Schimmelfennig. 2020. “Does Social Media Promote Civic Activism? A Field Experiment with a Civic Campaign”, Political Science Research and Methods: 10.1017/psrm.2020.13.

Fujiwara, Thomas, Karsten Müller, and Carlo Schwarz. 2020. “The Effect of Social Media on Elections: Evidence from the United States.” SSRN: 10.2139/ssrn.371998.

González-Bailón, Sandra and Ning Wang. 2016. “Networked Discontent: The Anatomy of Protest Campaigns in Social Media”, Social Networks 44(1): 95-104.

Greifer, Noah. 2021a. “MatchIt: Estimating Effects After Matching”, May 26: https://kosukeimai.github.io/MatchIt/articles/estimating-effects.html.

Greifer, Noah. 2021b. “WeightIt: Weighting for Covariate Balance in Observational Studies”, R package version 0.12.0. https://CRAN.R-project.org/package=WeightIt.

Griffith, Daniel. 1987. Spatial Autocorrelation: A Primer. Resource Publications in Geography, Association of American Geographers.

Guess, Andrew M., Brendan Nyhan, and Jason Reifler. 2020. “Exposure to Untrustworthy Websites in the 2016 US Election”, Nature Human Behavior 4: 472-480.

Haas, Ryan, Sergio Olmos, and Bradley W. Parks. 2020. “Protesters Fight Using Pepper Spray, Baseball Bats in Portland on Saturday”, Oregon Public Broadcasting. August 22: https://www.opb.org/article/2020/08/22/conservative-protesters-plan-rallies-in-downtown-portland/.


Hainmueller, Jens. 2012. “Entropy Balancing for Causal Effects: A Multivariate Reweighting Method to Produce Balanced Samples in Observational Studies”, Political Analysis 20(1): 25-46.

Herman, Peter, Marissa J. Lang, and Clarence Williams. 2020. “Pro-Trump Rally Descends into Chaos as Proud Boys Roam D.C. Looking to Fight”, The Washington Post. December 13: https://www.washingtonpost.com/local/public-safety/proud-boys-protest-stabbing-arrest/2020/12/13/98c0f740-3d3f-11eb-8db8-395dedaaa036_story.html.

Ho, Daniel E., Kosuke Imai, Gary King, and Elizabeth A. Stuart. 2007. “Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference”, Political Analysis 15(3): 199-236.

Ho, Daniel E., Kosuke Imai, Gary King, and Elizabeth A. Stuart. 2011. “MatchIt: Nonparametric Preprocessing for Parametric Causal Inference”, Journal of Statistical Software 42(8): 10.18637/jss.v042.i08.

HSAGAC and CRA. 2021. “Examining the U.S. Capitol Attack: A Review of the Security, Planning, and Response Failures of January 6”, United States Senate Committee on Homeland Security and Governmental Affairs and Committee on Rules and Administration. https://www.rules.senate.gov/imo/media/doc/Jan%206%20HSGAC%20Rules%20Report.pdf.

Hsiao, Yuan. 2021. “Evaluating the Mobilization Effect of Online Political Network Structures: A Comparison between the Black Lives Matter Network and Ideal Type Network Configurations”, Social Forces 99(4): 1547-1574.

Isaac, Mike and Kellen Browning. 2020. “Fact-Checked on Facebook and Twitter, Conservatives Switch Their Apps”, . November 11, Section B.

Iqbal, Mansoor. 2021. “Facebook Revenue and Usage Statistics (2021)”, Business of Apps. May 24: https://www.businessofapps.com/data/facebook-statistics/.

Jasser, Greta, Jordan McSwiney, Ed Pertwee, and Savvas Zannettou. 2021. “‘Welcome to the #GabFam’: Far-Right Virtual Community on Gab”, New Media & Society: 10.1177/1461444821024546.

Kim, Yoonsang, Rachel Nordgren, and Sherry Emery. 2020. “The Story of Goldilocks and Three Twitter APIs: A Pilot Study on Twitter Data Sources and Disclosure”, International Journal of Environmental Research and Public Health 17(3): 864.

King, Gary and Langche Zeng. 2001. “Logistic Regression in Rare Events Data”, Political Analysis 9(2): 137-163.

Kirkpatrick, David D. and Alan Feuer. 2021. “Police Shrugged Off the Proud Boys, Until They Attacked the Capitol”, The New York Times. March 14: https://www.nytimes.com/2021/03/14/us/proud-boys-law-enforcement.html.


Laidley, Thomas and Dalton Conley. 2018. “The Effects of Active and Passive Leisure on Cognition in Children: Evidence from Exogenous Variation in Weather”, Social Forces 97(1): 129-156.

Larson, Jennifer M., Jonathan Nagler, Jonathan Ronen, and Joshua A. Tucker. 2019. “Social Networks and Protest Participation: Evidence from 130 Million Twitter Users”, American Journal of Political Science 63(3): 690-705.

MacGillis, Alec. 2021. “Inside the Capitol Riot: What the Parler Videos Reveal”, ProPublica. January 17, https://www.propublica.org/article/inside-the-capitol-riot-what-the-parler-videos-reveal.

Malik, Momin M., Hemank Lamba, Constantine Nakos, and Jurgen Pfeffer. 2015. “Population Bias in Geotagged Tweets.” 9th International AAAI Conference on Weblogs and Social Media: 18-27.

Marwick, Alice E. and Benjamin Clancy. 2020. “Radicalization: A Literature Review”, Extreme Right Radicalization Workshop, Social Science Research Council. July 28, virtual format.

McCormick, Tyler H., Hedwig Lee, Nina Cesare, Ali Shojaie, and Emma S. Spiro. 2017. “Using Twitter for Demographic and Social Science Research: Tools for Data Collection and Processing”, Sociological Methods & Research 46(3): 390-421.

McGraw, Meridith, Tina Nguyen, and Cristiano Lima. 2021. “Team Trump Quietly Launches New Social Media Platform”, Politico. July 1: https://www.politico.com/news/2021/07/01/gettr-trump-social-media-platform-497606.

MIT Election Data and Science Lab. 2018. “County Presidential Election Returns 2000-2016”, Harvard Dataverse, DOI: 10.7910/DVN/VOQCHQ.

Mitts, Tamar. 2019. “From Isolation to Radicalization: Anti-Muslim Hostility and Support for ISIS in the West”, American Political Science Review 113(1): 173-194.

Müller, Karsten and Carlo Schwarz. 2020a. “Fanning the Flames of Hate: Social Media and Hate Crime”, Journal of the European Economic Association. DOI:10.1093/jeea/jvaa045.

Müller, Karsten and Carlo Schwarz. 2020b. “From Hashtag to Hate Crime: Twitter and Anti-Minority Sentiment”, SSRN: 10.2139/ssrn.3149103.

Nicas, Jack. 2021. “Parler, a Social Network That Attracted Trump Fans, Returns Online”, The New York Times. February 15: https://www.nytimes.com/2021/02/15/technology/parler-back-online.html.

Nicas, Jack and Davey Alba. 2021. “How Parler, a Chosen App of Trump Fans, Became a Test of Free Speech”, The New York Times. January 10, Section A.

PRRI. 2021. “Understanding QAnon’s Connection to American Politics, Religion, and Media Consumption”, Public Religion Research Institute. May 27: https://www.prri.org/research/qanon-conspiracy-american-politics-report/.


Raleigh, Clionadh, Andrew Linke, Håvard Hegre, and Joakim Karlsen. 2010. “Introducing ACLED: An Armed Conflict Location and Event Dataset”, Journal of Peace Research 47(5): 651-660.

Roose, Kevin, Mike Isaac, and Sheera Frenkel. 2020. “Facebook Struggles to Balance Civility and Growth”, The New York Times. November 24: https://www.nytimes.com/2020/11/24/technology/facebook-election-misinformation.html.

Siegel, Alexandra A. and Vivienne Badaan. 2020. “#No2Sectarianism: Experimental Approaches to Reducing Sectarian Hate Speech Online”, American Political Science Review 114(3): 837-855.

Steinert-Threlkeld, Zachary C. 2017. “Spontaneous Collective Action: Peripheral Mobilization During the Arab Spring”, American Political Science Review 111(2): 379-403.

Stuart, Elizabeth A. 2010. “Matching Methods for Causal Inference: A Review and Look Forward”, Statistical Science 25(1): 1-21.

Theocharis, Yannis and Will Lowe. 2016. “Does Facebook Increase Political Participation? Evidence From a Field Experiment”, Information, Communication & Society 19(10): 1465-1486.

Tufekci, Zeynep. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale University Press.

Tufekci, Zeynep and Deen Freelon. 2013. “Introduction to the Special Issue on New Media and Social Unrest”, American Behavioral Scientist 57(7): 843-847.

Van Dijcke, David and Austin L. Wright. 2021. “Profiling Insurrection: Characterizing Collective Action Using Mobile Device Data”, SSRN: 10.2139/ssrn.3776854.

Weidmann, Nils B. and Espen Geelmuyden Rød. 2019. The Internet and Political Protest in Autocracies. New York: Oxford University Press.

Williams, Matthew L., Pete Burnap, Amir Javed, Han Liu, and Sefa Ozalp. 2020. “Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime”, The British Journal of Criminology 60(1): 93-117.

Yanagizawa-Drott, David. 2014. “Propaganda and Conflict: Evidence from the Rwandan Genocide”, Quarterly Journal of Economics 129(4): 1947-1994.
