
This is a contribution from Journal of Language and Politics 17:2 © 2018. John Benjamins Publishing Company

Moral discourse in the Twitterverse
Effects of ideology and political sophistication on language use among U.S. citizens and members of Congress

Joanna Sterling and John T. Jost
Department of Psychology and Woodrow Wilson School, Princeton University / Department of Psychology, New York University

We analyzed Twitter language to explore hypotheses derived from moral foundations theory, which suggests that liberals and conservatives prioritize different values. In Study 1, we captured 11 million tweets from nearly 25,000 U.S. residents and observed that liberals expressed fairness concerns more often than conservatives, whereas conservatives were more likely to express concerns about group loyalty, authority, and purity. Increasing political sophistication exacerbated ideological differences in authority and group loyalty. At low levels of sophistication, liberals used more harm language, but at high levels of sophistication conservatives referenced harm more often. In Study 2, we analyzed 59,000 tweets from 388 members of the U.S. Congress. Liberal legislators used more fairness- and harm-related words, whereas conservative legislators used more authority-related words. Unexpectedly, liberal legislators used more language pertaining to group loyalty and purity. Follow-up analyses suggest that liberals and conservatives in Congress use similar words to emphasize different policy priorities.

Keywords: political ideology, psycholinguistics, morality, basic values, social cognition

Men’s virtues have their seasons even as fruits have. (La Rochefoucauld)

1. Introduction

It is a unique feature of the present historical moment that lively discussions of moral issues often play out in the context of online social media platforms. On such polarizing issues as same-sex marriage, gun control, and anthropogenic

https://doi.org/10.1075/jlp.17034.ste | Published online: 26 February 2018
Journal of Language and Politics 17:2 (2018), 195–221. issn 1569–2159 / e-issn 1569–9862 © John Benjamins Publishing Company

climate change, ordinary citizens and elite politicians take to Facebook and Twitter on a regular basis to spread messages of moral significance (e.g., Brady et al. 2017). Internet-based technologies merely provide the newest methods of communicating about themes – such as fairness, loyalty, and the avoidance of harm – that human beings have been discussing for millennia, probably since the advent of spoken language. From the perspective of researchers in social science who specialize in the analysis of discourse, it is a tremendous boon that such platforms not only spread but also archive vast quantities of text messages on a range of topics that is virtually inexhaustible.

In the research program summarized here we used automated computational methods to analyze the contents of millions of Twitter messages sent from the accounts of U.S. citizens and members of Congress. In particular, we explored hypotheses derived from a theory of moral psychology that advances a number of empirically testable propositions about ideological differences in adherence to certain values. Although previous attempts to use computer-based methods have failed to identify consistent and robust ideological differences in moral discourse (Clifford and Jerit 2013; Graham, Haidt, and Nosek 2009; Neiman et al. 2016a, 2016b; Prims, Melton, and Motyl in press), we observed a number of significant differences in the language used by liberal and conservative elites as well as ordinary citizens (see also Jones et al. 2017).

Our results are consistent with longstanding tenets in the field of critical discourse studies, namely that “social groupings and relationships influence the linguistic behavior of speakers and writers,” and that linguistic analysis is “a powerful tool for the study of ideological processes which mediate relationships of power and control” (Fowler and Kress 1979, 185–186; see also Fairclough and Wodak 1997).
That is, we use methods of linguistic analysis to explore the verbal behavior of social groups that diverge in terms of political ideology and sophistication (or power/expertise). Our work is guided, in part, by the assumption that “ideology is inherently linked to social and thus discursive action by means of which different representations of social and political order are enacted and disseminated in different spheres” (Krzyżanowski 2010, 73). We focus on left-right ideological asymmetries in discursive representations that are drawn from theory and research in political psychology, to which we now turn.

2. Psychological differences between liberals and conservatives

Psychologically speaking, there are myriad ways in which liberals and leftists are known to diverge from conservatives and rightists (Jost 2006, 2017). Most obviously, those on the political left and right diverge in terms of their attitudes and

beliefs. As a rule, conservatives are more satisfied with mainstream cultural institutions (including religious and economic institutions), less interested in progress and social change, and more zealous about the preservation of law and order, in comparison with liberals (e.g., Altemeyer 1998; Conover and Feldman 1981; Evans, Heath, and Lalljee 1996; Jost, Nosek, and Gosling 2008; Sidanius and Pratto 1999; Wilson 1973). Some of these attitudinal differences, in turn, are believed to emerge from underlying variations in personality and temperament (Adorno et al. 1950; Block and Block 2006; Fraley et al. 2012; Kandler, Bleidorn, and Riemann 2012). Conservatives, for instance, are more conventional, orderly, conscientious, and polite, whereas liberals are more curious, creative, compassionate, and open to new experiences (Carney et al. 2008; Gerber et al. 2010; Hirsh, DeYoung, Xu, and Peterson 2010; Mondak 2010; Tomkins 1965).

One scheme for the classification of ideological differences in the endorsement of moral values has been proposed by Haidt and Graham (2007). These authors adopt the fairly grandiose terminology of “moral foundations” to emphasize what they take to be the evolutionary origins of personal value preferences (see Suhler and Churchland 2011, for a critique of their evolutionary assumptions). Consistent with this formulation, Graham, Haidt, and Nosek (2009) found that liberals tend to place greater subjective value than conservatives on fairness and the avoidance of harm, whereas conservatives place greater value than liberals on group loyalty, obedience to authority, and the enforcement of purity standards.
Haidt and Graham (2007) insist that all five of these values (fairness, harm avoidance, group loyalty, authority, and purity) are equally “moral” in nature, and they frequently admonish liberals for failing to weight all of them positively. However, there is growing empirical evidence that the so-called “binding” moral foundations – group loyalty, authority, and purity – are associated with authoritarianism, social dominance, prejudice, and social exclusion (Federico et al. 2013; Kugler, Jost, and Noorbaloochi 2014; Milojev et al. 2014; Sinn and Hayes 2016). Thus, it is more appropriate to view ideological differences in the endorsement of these values as “moralistic” (and judgmental) rather than “moral” in any objective (or normatively defensible) sense (Jacobson 2008; Jost 2012; Nagel 2012).

For the most part, studies of liberal-conservative divergence in moral values have relied upon direct, reactive, self-report measures (such as survey questionnaires) that raise methodological concerns about social desirability and other potential biases in responding (e.g., Webb et al. 1966). Although one could argue that values are inherently subjective and self-consciously endorsed – and therefore it is entirely appropriate to study them using self-report indicators – it is at least conceivable that one would reach different conclusions about the value priorities of liberals and conservatives based upon unobtrusive, objective analyses of patterns of speech and communication, as opposed to data from questionnaires.


Furthermore, the question of whether there are significant ideological differences in the transmission of value-laden language (among ordinary citizens as well as political elites) is itself a question of theoretical and practical interest, independent of the question of whether liberals and conservatives self-consciously endorse different values. For example, citizens identify more strongly with political parties to the extent that they perceive an alignment between their own values and those expressed by party elites (Wan, Tam, and Chiu 2010). More generally, left-right divergences in the use of language are important to the extent that one regards political ideologies as, at least in part, conceptual networks, rhetorical devices, discursive performances, social constructions, or collective representations (e.g., Billig 1991; Condor, Tileagă, and Billig 2013; Durrheim and Dixon 2005; Freeden 1998; Homer-Dixon et al. 2013; Jost, Federico, and Napier 2013; Moscovici 1988; van Dijk 2006). In contemporary society, politicized discussions of morality often take place on social media platforms, such as Twitter and Facebook, as noted above.

There have been a few notable studies of ideological differences in the contents of moral – or moralistic – language on social media (Clifford and Jerit 2013; Graham et al. 2009; Jones et al. 2017; McAdams et al. 2008; Neiman et al. 2016a, 2016b; Prims et al. in press). For the most part, attempts to characterize morality-laden aspects of the language of liberals and conservatives have focused exclusively on political elites or other atypical samples. Taken as a whole, evidence from these studies has been mixed. For example, Graham et al.
(2009) and McAdams et al. (2008) explored the language of religious sermons and life-narratives from a highly religious sample, and found that religious conservatives used more language pertaining to authority and purity, whereas religious liberals used more language pertaining to fairness and harm. Neiman and colleagues (2016a, 2016b) observed that Democratic politicians used more fairness language than Republicans, but Republican politicians used more harm language than Democrats. In contrast, a study of media discourse revealed that New York Times editorials expressing liberal support for stem cell research included more harm-related language, whereas conservative editorials included more language about purity (Clifford and Jerit 2013).

None of these studies speak clearly to the question of whether there are meaningful relationships between political ideology and moral discourse in the general public. It remains unclear whether the inconsistent effects obtained in prior research are attributable to sampling differences or a more general lack of consistency or reliability in the relationships between ideology and the expression of moral values. To overcome this deficit, we provide a detailed analysis focusing on the connection between liberalism-conservatism and moralistic language in samples of ordinary citizens and political elites using the same communication platform, namely Twitter.


3. The present research program

Previous studies turned up surprisingly weak, inconsistent, and, in several cases, nonsignificant effects of political partisanship with respect to the linguistic expression of moral values in elite samples. Neiman et al. (2016a, 19) concluded that the lack of clear ideological or partisan differences in value-laden discourse may be due to the fact that politicians are “supposed to speak a certain way and, regardless of party . . . they tend not to deviate much from that script”. They recommended that researchers explore informal rather than formal modes of discourse, in order to obtain evidence of ideological differences in moral discourse (see also Graham et al. 2009). For these reasons, we conducted parallel investigations of value-laden language used by ordinary citizens and members of the U.S. Congress in a relatively informal context of communication, namely Twitter. Because previous research demonstrates that the correspondence between political ideology and various psychological characteristics and tendencies is substantially stronger when political interest, involvement, and sophistication are high rather than low (e.g., Federico and Goren 2009; Jost, Federico, and Napier 2009; Leone, Livi, and Chirumbolo 2015), we expected that if there were clear differences between liberal and conservative citizens with respect to linguistic emphasis on different moral themes, these differences would be stronger with increasing levels of political sophistication.

3.1 Measuring moral intuitions through language

We employed dictionaries developed by Graham and colleagues (2009) to determine whether tweets referred to one or more of the five moral “foundations,” which have been conceptualized as largely non-conscious emotional intuitions (Haidt 2001). Theoretically, these intuitions should be well-suited to measurement using dictionary-based methods, insofar as linguistic choices help to reveal a person’s underlying psychological tendencies. To develop their dictionaries, Graham et al. (2009) generated a list of words associated with each foundation and edited the list to remove words that did not directly relate to the foundations in most instances of word use (e.g., “just” usually connotes only rather than fair). The dictionary used to measure language concerning harm avoidance included words such as “guard,” “exploit,” and “shield” (forty words in total). The fairness dictionary contained words such as “egalitarian,” “prejudice,” and “balance” (thirty-nine words). The group loyalty dictionary contained words such as “traitor,” “collective,” and “foreign” (forty-three words). The deference to authority dictionary contained words such as “abide,” “obstruct,” and “illegal” (sixty-one words). The purity dictionary contained words like “promiscuous,” “tarnish,” and “wholesome” (eighty words).
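As a rough illustration of how dictionary-based coding of this kind works, the following sketch counts dictionary hits in a tokenized tweet. The word lists are tiny illustrative subsets drawn from the examples above, not the full Graham et al. (2009) dictionaries, and the function name is our own.

```python
# Illustrative subsets of the moral foundations dictionaries (the full lists
# contain 39-80 words per category; see Graham et al. 2009).
MORAL_DICTIONARIES = {
    "harm": {"guard", "exploit", "shield"},
    "fairness": {"egalitarian", "prejudice", "balance"},
    "group_loyalty": {"traitor", "collective", "foreign"},
    "authority": {"abide", "obstruct", "illegal"},
    "purity": {"promiscuous", "tarnish", "wholesome"},
}

def count_category_words(tokens, category):
    """Count how many tokens fall in the given moral-foundation word list."""
    vocab = MORAL_DICTIONARIES[category]
    return sum(1 for t in tokens if t.lower() in vocab)

tokens = "They exploit workers and obstruct reform".split()
count_category_words(tokens, "harm")       # 1 ("exploit")
count_category_words(tokens, "authority")  # 1 ("obstruct")
```

The actual coding was performed with LIWC 2015 rather than custom scripts, as described below; the sketch only conveys the logic of dictionary lookup.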


Before coding the tweets, we pre-processed the data to remove URLs, hashtags, and punctuation. All tweets were coded using the LIWC 2015 program. Dictionaries were manually converted to a LIWC format prior to conducting analyses. LIWC outputs the percentage of words that make up each category of interest out of the total number of words used in the document (in our case, a tweet). Twitter’s character constraints (140 characters) make raw percentages less than ideal. Therefore, we recoded our dependent measures so that they were dichotomous indicators of whether a given tweet did (1) or did not (0) contain at least one word from the relevant category.
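A minimal sketch of this pre-processing and dichotomization pipeline (our own simplified stand-in for the LIWC 2015 workflow, with a hypothetical three-word fairness list) might look like this:

```python
import re
import string

def preprocess(tweet):
    """Remove URLs, hashtags, and punctuation; return lowercase tokens."""
    tweet = re.sub(r"https?://\S+", " ", tweet)  # strip URLs
    tweet = re.sub(r"#\w+", " ", tweet)          # strip hashtags
    tweet = tweet.translate(str.maketrans("", "", string.punctuation))
    return tweet.lower().split()

def contains_category(tokens, vocab):
    """Dichotomous indicator: 1 if the tweet has any dictionary word, else 0."""
    return int(any(t in vocab for t in tokens))

fairness_vocab = {"egalitarian", "prejudice", "balance"}  # illustrative subset
tokens = preprocess("Fight prejudice! #equality https://example.com")
contains_category(tokens, fairness_vocab)  # 1
```

Dichotomizing in this way sidesteps the instability of word-percentage scores in very short documents, which is the concern raised above about 140-character tweets.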

3.2 Political ideology

We imputed measures of ideology based on behavior rather than self-report indicators. In Study 1 we estimated ideological position based on follower networks on social media (Barberá 2015), and in Study 2 we used ideological estimates based on voting behavior in Congress (i.e., Poole and Rosenthal’s [1985] DW-Nominate scores). One benefit of using these estimates of ideology is that they correspond closely to one another. For example, Barberá (2015) found that ideal-point estimates derived from the social networks of members of Congress correlated with DW-Nominate scores at r = .94 or above.

3.3 Topic-related differences in moral language

The value-laden nature of political discussion makes it important to understand the degree to which differences in moralistic language use may be driven by ideological differences in the prioritization of various political issues. We accounted for ideological differences in discussion rates of specific topics by adjusting for the type of issue mentioned in a tweet (Study 1) and the type of legislative bill under discussion on the floor of Congress (Study 2).

4. Study 1: Ordinary citizens

4.1 Method

4.1.1 Political ideology
To obtain estimates of each Twitter user’s political ideology, we leveraged the online social networks in which people are embedded (Barberá 2015; Barberá et al. 2015). Rather than assigning ideological weights to elites through traditional means, such as expert coding, a statistical model estimates the ideological

placements of elite and ordinary users based on patterns of “following” behavior. This method of imputing ideology relies upon an assumption of political homophily: Twitter users generally prefer to follow political elites who hold ideological positions that are similar to their own (as in spatial voting models; see Enelow and Hinich 1984). Ideological estimates obtained in this way have been validated at both elite and mass levels of analysis through expert ratings, state-level ideology estimates, individual users’ party registrations, and campaign contributions (where available), as well as self-identification on Twitter (Barberá 2015). The average ideological position in our sample was extremely close to the scale midpoint (M = −.01, SD = 1.14).

4.1.2 Political sophistication
To obtain relatively accurate ideological estimates, Twitter users must follow at least three of the political elites included in Barberá’s (2015) sample of politicians, think tanks, news outlets, and interest groups (see https://github.com/pablobarbera/twitter_ideology/blob/master/2016-election/01-get-twitter-data.R). At the same time, there was no upward limit with respect to how many sources one could follow. We therefore used the number of elites our Twitter users followed as a proxy for political sophistication, based on the assumption that more politically sophisticated individuals tend to follow a greater number of political elites (M = 8.26, SD = 11.27).

4.1.3 Coding of political topics
We adjusted for the statistical effect of the political topic under discussion. Using existing dictionaries designed to capture policy agendas (Albaugh et al. 2013), we identified seven different political topics, namely: the economy (eighty-four dictionary words; e.g., “budget,” “tax,” “inflation”), civil rights (eighty-six words; e.g., “racism,” “inequality,” “privacy”), healthcare (128 words; e.g., “doctor,” “HIV,” “therapy”), immigration (thirty-three words; e.g., “border,” “foreigner,” “citizenship”), the environment (thirty-seven words; e.g., “extinct,” “pollution,” “ozone”), crime (ninety-two words; e.g., “felon,” “justice,” “shooting”), and welfare (twenty-three words; e.g., “pension,” “shelter,” “poverty”). For each topic a dummy variable was created (1 if a tweet contained at least one word in the topic dictionary; 0 if a tweet contained no words from that dictionary). Tweets that included no words from any of the seven topic dictionaries were classified as “Other.” The “Other” category served as our reference group for all analyses.
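The topic dummy coding can be sketched as follows. The three-word lists are illustrative subsets of the Albaugh et al. (2013) dictionaries, and the function and variable names are ours:

```python
# Tiny illustrative subsets of the seven topic dictionaries.
TOPIC_DICTIONARIES = {
    "economy": {"budget", "tax", "inflation"},
    "civil_rights": {"racism", "inequality", "privacy"},
    "healthcare": {"doctor", "hiv", "therapy"},
    "immigration": {"border", "foreigner", "citizenship"},
    "environment": {"extinct", "pollution", "ozone"},
    "crime": {"felon", "justice", "shooting"},
    "welfare": {"pension", "shelter", "poverty"},
}

def topic_dummies(tokens):
    """One 0/1 dummy per topic; tweets matching no topic count as 'Other'."""
    dummies = {topic: int(any(t in vocab for t in tokens))
               for topic, vocab in TOPIC_DICTIONARIES.items()}
    dummies["other"] = int(not any(dummies.values()))  # reference category
    return dummies

d = topic_dummies("raise the tax on pollution".split())
# d["economy"] == 1, d["environment"] == 1, d["other"] == 0
```

Note that the dummies are not mutually exclusive: a tweet mentioning both “tax” and “pollution” is flagged for the economy and the environment simultaneously.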


4.1.4 Sample generation
We selected a random sample of Twitter users from the 2016 Election dataset, which contained over 20 million Twitter accounts in total (https://github.com/pablobarbera/twitter_ideology/tree/master/2016-election). After excluding non-English language accounts, we were left with a sample of 24,988 Twitter users, each of whom contributed between 1 and 3,244 tweets, resulting in a total sample of 11,703,650 tweets. Then we sampled without replacement 500 of each user’s tweets (or, if they sent fewer than 500 tweets, we retained their total number) to accommodate the matrix limitations of most statistical software.
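The per-user cap amounts to a simple sample-without-replacement step. A sketch of our own (with a fixed seed for reproducibility; the function name is hypothetical):

```python
import random

def sample_user_tweets(user_tweets, cap=500, seed=42):
    """Sample up to `cap` tweets per user without replacement.

    Users with fewer than `cap` tweets keep their full set, mirroring the
    procedure described above.
    """
    if len(user_tweets) <= cap:
        return list(user_tweets)
    return random.Random(seed).sample(user_tweets, cap)

len(sample_user_tweets(list(range(3244))))  # 500
sample_user_tweets(["t1", "t2", "t3"])      # ["t1", "t2", "t3"]
```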

4.1.5 Statistical modelling
We conducted a series of multilevel mixed-effects logistic regression analyses with random intercepts for Twitter users (using Stata, Version 13). We regressed the probability that a tweet contained at least one word from each of the moral dictionaries on the Twitter user’s mean-centered ideological position. We adjusted for the log-transformed word count of each tweet, because longer tweets are more likely to contain all types of language. We also adjusted for the political topic mentioned in the tweet, on the assumption that some topics are more likely to contain moralistic language than others. The data set was organized at the level of the individual tweet. Tweets were nested within Twitter users, with a minimum of 1 and a maximum of 500 tweets composed by a single Twitter user (M = 416.64, SD = 144.88). Conventional standardized effect sizes are not standard for logistic regressions, so we report odds ratios instead, which confer intuitive measures of effect size. The odds ratio can be interpreted as the increase in the odds of a tweet containing a particular type of morally laden language for each unit increase in ideology. Because ideology was coded so that higher numbers indicated greater conservatism, numbers larger than one reveal that conservatives used more words in a given category (compared to liberals), whereas numbers less than one reveal that liberals used more words than conservatives in that category.
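Concretely, an odds ratio is just the exponentiated logit coefficient. A small worked example (the coefficient value is illustrative, chosen so the result resembles the authority row of Table 1):

```python
import math

def odds_ratio(beta):
    """Convert a logistic-regression coefficient (log-odds) to an odds ratio."""
    return math.exp(beta)

beta_ideology = 0.086      # illustrative log-odds slope for ideology
odds_ratio(beta_ideology)  # ~1.09: each unit of conservatism raises the odds
                           # of using the language category by about 9%
odds_ratio(0.0)            # 1.0: no ideological difference
```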

4.2 Results and discussion

4.2.1 Using political ideology to predict moral themes
Analyzing a sample of tweets from nearly 25,000 U.S. citizens, we investigated the relationship between political ideology and the use of language pertaining to five different moral “foundations.” As hypothesized, liberals were more likely than conservatives to use language concerning fairness, whereas conservatives were more likely than liberals to use language related to group loyalty, authority, and purity (the so-called “binding” moral foundations; see Table 1). Unexpectedly,

conservatives were more likely than liberals to send tweets containing harm-related language.

Table 1. Political ideology and sophistication predicting moralistic language use for U.S. citizens on Twitter

Moralistic language   Predictor variables          Model 1               Model 2
Fairness              Ideology                     0.939 [0.924, 0.955]  0.930 [0.915, 0.946]
                      Political sophistication                           1.006 [1.004, 1.007]
                      Ideology × sophistication                          1.001 [0.999, 1.003]
Harm                  Ideology                     1.015 [1.003, 1.027]  1.006 [0.994, 1.018]
                      Political sophistication                           1.007 [1.005, 1.008]
                      Ideology × sophistication                          1.002 [1.001, 1.003]
Group loyalty         Ideology                     1.077 [1.065, 1.090]  1.063 [1.050, 1.075]
                      Political sophistication                           1.009 [1.008, 1.010]
                      Ideology × sophistication                          1.003 [1.001, 1.004]
Authority             Ideology                     1.090 [1.078, 1.102]  1.079 [1.067, 1.091]
                      Political sophistication                           1.006 [1.005, 1.007]
                      Ideology × sophistication                          1.003 [1.002, 1.004]
Purity                Ideology                     1.059 [1.045, 1.074]  1.058 [1.044, 1.073]
                      Political sophistication                           0.999 [0.998, 1.001]
                      Ideology × sophistication                          1.001 [0.999, 1.003]

Note. Odds ratio estimates of the effect of political ideology and sophistication on the moral foundations language categories from a sample of approximately 25,000 ordinary citizen Twitter users. The 95% confidence intervals are reported in brackets. Effects were computed using multilevel mixed-effects logistic regression analyses with random intercepts for Twitter users. All effects are reported after adjusting for the log-transformed word count of tweets and for whether or not tweets contained words related to each of the seven political topics.

4.2.2 Moderating role of political sophistication
In light of prior evidence in political psychology, we also considered the possibility that ideological differences would be more pronounced when political sophistication was high (vs. low). To capture the effects of political sophistication on the use of moral language, we included the mean-centered count of political elites (in


our sample) that each user followed. We then interacted that count with mean-centered ideological position to investigate the moderating role of sophistication on ideology.

Fairness. Over and above the effect of ideology, political sophistication was positively associated with the likelihood of using fairness language. The interaction between ideology and sophistication failed to approach significance. Liberals were more likely than conservatives to use fairness language regardless of whether they were high or low in sophistication.

Harm. After adjusting for the effect of political sophistication, the relationship between ideology and the likelihood of using harm-related language became nonsignificant (see Table 1). The analysis did, however, yield a main effect of sophistication; Twitter users who were high (vs. low) in political sophistication were more likely to send tweets containing harm-related words. This main effect was qualified by an interaction between ideology and sophistication.

[Figure 1: line plot of the log odds of using harm-related language (y-axis) against political sophistication, low vs. high (x-axis), with separate lines for liberals and conservatives.]

Figure 1. Interaction of Political Ideology and Political Sophistication Predicting Harm-Related Language. “Liberals” are one standard deviation below the mean of political ideology; “Conservatives” are one standard deviation above the mean of political ideology. “Low” political sophistication represents one standard deviation below the mean of political sophistication; “High” political sophistication represents one standard deviation above the mean of political sophistication.
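The crossover pattern for harm can be approximated directly from the Model 2 coefficients in Table 1. Using the log-odds implied by the published harm odds ratios (ideology ≈ 1.006, interaction ≈ 1.002) and the sophistication SD reported above (11.27), a sketch of the simple-slope computation at ±1 SD is:

```python
import math

# Log-odds back-computed from the Table 1 harm odds ratios (approximate,
# limited by rounding in the published values).
b_ideology = math.log(1.006)  # main effect of ideology (Model 2)
b_interact = math.log(1.002)  # ideology x sophistication interaction
sd_soph = 11.27               # SD of political sophistication

def ideology_or_at(soph_centered):
    """Odds ratio for ideology at a given mean-centered sophistication level."""
    return math.exp(b_ideology + b_interact * soph_centered)

low = ideology_or_at(-sd_soph)   # < 1: liberals mention harm more
high = ideology_or_at(+sd_soph)  # > 1: conservatives mention harm more
```

These values come out close to the simple-slope odds ratios reported in the text (0.982 and 1.030); the small discrepancies reflect rounding of the published coefficients.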


As shown in Figure 1, the effect of ideology on the use of harm-related language depended upon the level of political sophistication. When sophistication was low, there was a marginal tendency for liberals to mention harm more often than conservatives, OR = 0.982, Z = −1.81, p = .07, 95% CI [0.963, 1.002], but when sophistication was high, conservatives mentioned harm more often than liberals, OR = 1.030, Z = 3.18, p = .001, 95% CI [1.011, 1.048]. Liberals and conservatives were more likely to use harm-related language when they were higher in political sophistication, OR = 1.004, Z = 4.02, p < .001, 95% CI [1.002, 1.006], and OR = 1.009, Z = 10.28, p < .001, 95% CI [1.007, 1.011], respectively, but this effect was stronger for conservatives than liberals, Z = −3.43, p < .001.

Group loyalty. In addition to the main effect of ideology, political sophistication was positively associated with the likelihood of using language pertaining to group loyalty (see Table 1). These two main effects were qualified by an interaction between ideology and sophistication.

Group loyalty −. Liberal Conservative −. e g u a

n g −. a l

y l t a o y

l −. oup g r −. usin g − odds o f

o g

L −.

−. Low High Political sophistication Figure 2. Interaction of political ideology and political sophistication predicting lan- guage concerning group loyalty. “Liberals” are one standard deviation below the mean of political ideology; “Conservatives” are one standard deviation above the mean of political ideology. “Low” political sophistication represents one standard deviation below the mean of political sophistication; “High” political sophistication represents one standard deviation above the mean of political sophistication.


Political sophistication exacerbated the ideological difference in the use of language pertaining to group loyalty (Figure 2). When sophistication was low, conservatives cited concerns about group loyalty more than liberals did (OR = 1.031, Z = 3.15, p = .002, 95% CI [1.012, 1.050]), and this difference was even greater when sophistication was high (OR = 1.096, Z = 10.40, p < .001, 95% CI [1.077, 1.115]), Z = −4.68, p < .001. Liberals (OR = 1.006, Z = 5.97, p < .001, 95% CI [1.004, 1.008]) and conservatives (OR = 1.012, Z = 14.58, p < .001, 95% CI [1.011, 1.014]) were more likely to use group loyalty language when they were higher in political sophistication, but this effect was stronger for conservatives than liberals, Z = −4.63, p < .001.

Authority. After adjusting for political sophistication, conservatives were still more likely than liberals to compose tweets pertaining to authority (see Table 1). Sophistication was also positively associated with the likelihood of using authority-related language. These two main effects were qualified by an interaction between ideology and sophistication.

[Figure 3: line plot of the log odds of using authority-related language (y-axis) against political sophistication, low vs. high (x-axis), with separate lines for liberals and conservatives.]

Figure 3. Interaction of Political Ideology and Political Sophistication Predicting Authority-Related Language. “Liberals” are one standard deviation below the mean of political ideology; “Conservatives” are one standard deviation above the mean of political ideology. “Low” political sophistication represents one standard deviation below the mean of political sophistication; “High” political sophistication represents one standard deviation above the mean of political sophistication.


As shown in Figure 3, sophistication exacerbated ideological differences in language pertaining to authority. When sophistication was low, conservatives mentioned authority more than liberals did (OR = 1.043, Z = 4.64, p < .001, 95% CI [1.025, 1.062]), and this difference was even greater when sophistication was high (OR = 1.116, Z = 13.31, p < .001, 95% CI [1.098, 1.135]), Z = −5.56, p < .001. Both liberals (OR = 1.003, Z = 2.55, p = .011, 95% CI [1.001, 1.004]) and conservatives (OR = 1.009, Z = 11.83, p < .001, 95% CI [1.008, 1.011]) were more likely to use authority language when they were higher in political sophistication, but this effect was again stronger for conservatives than liberals, Z = −5.45, p < .001.

Purity. After adjusting for political sophistication, conservatives were still more likely than liberals to compose tweets pertaining to purity (see Table 1). Unlike all other language categories, however, sophistication was unrelated to the likelihood of sending a tweet containing purity language.

4.3 Summary

As hypothesized, liberals were more likely than conservatives to use language concerning fairness, whereas conservatives were more likely than liberals to use language related to group loyalty, authority, and purity. These results are consistent with past research based on self-reported differences between liberals and conservatives in terms of value prioritization. After adjusting for political sophistication, there was no ideological difference with respect to the use of language related to harm avoidance. Political sophistication was associated with increased use of moralistic language in four out of five cases.

The effect of ideology on moral language was moderated by political sophistication in three out of five cases. With respect to the use of language pertaining to authority and group loyalty, ideological differences were more pronounced when sophistication was high (vs. low). Thus, sophisticated conservatives were especially likely to express concerns about authority and group loyalty. With respect to harm-related language, we observed a crossover pattern of interaction. At low levels of sophistication, liberals were slightly more likely than conservatives to express concerns about harm, but at high levels of sophistication, conservatives were more likely than liberals to use the language of harm.

In summary, ideology and political sophistication were significant predictors – both independently and in statistical interaction – of the personal expression of specific moral values in a large sample of U.S. citizens on social media. These results are clearer and stronger than those obtained in previous studies of left-right ideological differences in the use of moral language (Clifford and Jerit 2013; Neiman et al. 2016a, 2016b). They suggest that it is indeed possible to use Twitter data to investigate the moral discourse of liberals and conservatives.

© 2018. John Benjamins Publishing Company All rights reserved 208 Joanna Sterling and John T. Jost

5. Study 2: U.S. Congress members

In a second study, we investigated the moral discourse of an extremely politically sophisticated sample of Twitter users, namely members of the U.S. Congress. We expected to observe fairly similar correspondences between ideology and language use pertaining to specific moral values as we observed in the highly sophisticated sample of ordinary citizens in the first study. That is, we hypothesized that liberal members of Congress would use fairness-related language more frequently than conservative members of Congress, whereas conservative members of Congress would use language pertaining to group loyalty, authority, and purity more frequently than liberal members of Congress. Given the results of our first study, we were unsure what to expect with respect to harm-related language.

5.1 Methods

5.1.1 Language sample
To construct our dataset, we sampled all of the tweets sent by the official Twitter accounts of members of the U.S. House of Representatives and the Senate over a four-month period (February 9–May 28, 2014). The dataset contained 388 legislators who contributed between 2 and 940 tweets each (M = 152.7).

5.1.2 Political ideology
To obtain continuous ratings of ideological position for every member of Congress, we used Poole and Rosenthal’s (1985) DW-Nominate scores, which are based on roll-call voting. This method reverse-engineers “ideology” on the basis of who votes with whom, without taking into account the contents of specific bills. Positive scores indicate greater conservatism, whereas negative scores indicate greater liberalism (M = 0.08, SD = 0.47).

5.1.3 Structural variables
Our primary research question was whether ideological differences in moral intuitions would manifest themselves in legislators’ language on Twitter. However, there are some structural features of Congress that might contribute to linguistic differentiation (in general) among Democrats and Republicans. To help insure that the language differences we observed were due to ideological divergence rather than other asymmetries, we adjusted for several structural elements of the Congressional context. Specifically, we included variables for legislative chamber (Senate vs. House) and the types of bills introduced in each chamber at the time of data collection.

We effect-coded chamber (so that Senate was coded as −1 and House as 1). Bills under discussion were grouped into six topics (budget, surveillance, welfare, immigration, gun control, and defense); there was also an additional “Other” category that included bills that were not subsumed by the first six categories. Information about topics and bill introduction dates was distilled from GovTrack, a non-governmental website dedicated to tracking the activity of the U.S. Congress, using the “Browse Bills by Subject” search tool (website: https://www.govtrack.us/congress/bills/#subjects). Bill topic was dummy-coded (1 for days on which a bill on that topic was introduced; 0 for days on which no bill on that topic was introduced). Bill topic was coded separately for Senators and Representatives (so that only bills introduced in the appropriate chamber were coded as 1). The “Other” category served as our reference group for all analyses.
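As a concrete illustration, the effect coding and dummy coding described above could be implemented as follows (a sketch with variable names of our own choosing, not the authors’ Stata code):

```python
# Chamber is effect-coded (-1 = Senate, 1 = House); bill topics are
# dummy-coded per chamber-day, with "Other" as the implicit reference
# category (a day with only "Other" bills gets all six dummies = 0).
TOPICS = ["budget", "surveillance", "welfare", "immigration",
          "gun_control", "defense"]

def code_predictors(chamber: str, topics_introduced_today: set) -> dict:
    row = {"chamber": 1 if chamber == "House" else -1}
    for topic in TOPICS:
        row[topic] = 1 if topic in topics_introduced_today else 0
    return row

row = code_predictors("Senate", {"budget", "defense"})
# row["chamber"] == -1; row["budget"] == 1; row["welfare"] == 0
```

Leaving “Other” without its own dummy is what makes it the reference group: the six topic coefficients are interpreted relative to days on which only uncategorized bills were introduced.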

5.1.4 Demographic variables
Although age and sex of the legislator were not of primary research interest, previous work suggests that these two variables significantly influence the language people use (Newman, Groom, Handelman, and Pennebaker 2008; Pennebaker and Stone 2003; Yu 2014). Furthermore, age and sex were correlated with political ideology in our sample of legislators. Democrats were slightly older and more likely to be female, in comparison with Republicans. Although we did not have information about sex and age in the sample analyzed in Study 1, these characteristics were easily accessible for political elites. Therefore, we included age and sex in the model to adjust for linguistic variability that would be primarily associated with these demographic characteristics. The average age in our sample was 51.90 years old (SD = 11.21); there were 71 females and 317 males.

5.1.5 Statistical analysis
We conducted a multilevel mixed-effects logistic regression analysis with a random intercept for legislators (using Stata, Version 13). Data were organized at the level of the individual tweet, and tweets were nested within legislators. All effects are reported as odds ratios. All person-level and document-level predictor variables described above were adjusted for in reported effects. For person-level parameters, we included our main predictors of interest: ideological positions of members of Congress, their chambers, and the interaction between ideology and chamber. We also included sex and age as adjustment variables. For document-level parameters, we included variables in our model to account for the six different topics of legislation so that we could adjust for linguistic differences that were purely topic-driven. We also adjusted for the total number of words per document.
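In equation form, the model just described can be written (in our notation, which the article itself does not use) as a two-level logistic regression: for tweet i by legislator j,

```latex
\mathrm{logit}\,\Pr(y_{ij}=1) = \beta_0 + u_j
  + \beta_1\,\mathrm{ideology}_j
  + \beta_2\,\mathrm{chamber}_j
  + \beta_3\,(\mathrm{ideology}_j \times \mathrm{chamber}_j)
  + \beta_4\,\mathrm{sex}_j + \beta_5\,\mathrm{age}_j
  + \sum_{t=1}^{6}\gamma_t\,\mathrm{topic}_{tij}
  + \beta_6\,\log(\mathrm{wordcount}_{ij}),
\qquad u_j \sim N(0,\sigma_u^2)
```

where y_ij indicates whether the tweet contains a given moral language category and u_j is the random intercept for legislator j. Exponentiating a coefficient yields the reported odds ratio.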


5.2 Results

Consistent with previous research on moral foundations theory, tweets sent by liberal (vs. conservative) legislators were more likely to contain fairness- and harm-related language (see Table 2). In addition, tweets sent by conservative (vs. liberal) legislators were more likely to contain authority-related language. Contrary to expectations, however, the tweets of liberal (vs. conservative) legislators were more likely to include language pertaining to group loyalty and purity concerns.

Table 2. Political ideology predicting moralistic language use for members of U.S. Congress on Twitter

Moralistic language    Relationship with political ideology
Fairness               0.25 [0.19, 0.31]
Harm                   0.84 [0.73, 0.98]
Group loyalty          0.83 [0.73, 0.94]
Authority              1.17 [1.04, 1.31]
Purity                 0.56 [0.42, 0.76]

Note. Odds ratio estimates of the effect of political ideology on the moral foundations language categories from a sample of 388 members of Congress. The 95% confidence intervals are reported in brackets. Effects were computed using multilevel mixed-effects logistic regression analyses with random intercepts for members of Congress. All effects are reported after adjusting for the log-transformed word count for tweets, and whether or not tweets were sent on days in which each type of bill was discussed on the floor of Congress.

5.2.1 Exploratory analyses
To interpret the somewhat surprising findings that liberal members of Congress used more group loyalty and purity language than their conservative counterparts, we conducted a series of exploratory analyses. These were run in R (version 3.3.2) using the package Quanteda (https://github.com/kbenoit/quanteda). After splitting the dataset according to party, we converted all letters into lower case and removed stop words (commonly used words such as “a,” “the,” “and”). We imported the relevant dictionaries and conducted separate exploratory analyses on tweets sent by Democratic and Republican members of Congress.

First, we extracted the top 10 most frequently used words from the purity and group loyalty dictionaries for Democrats and Republicans separately (see Table 3). The most commonly used words in the group loyalty category pertained to national groups (e.g., “nation,” “foreign”) and bringing people together in general (e.g., “community,” “together,” “unite”). The most commonly used words in the purity category had to do with health concerns (e.g., “disease,” “sick,” “clean”) and religious morality (e.g., “innocent,” “holy,” “saint”). Democratic and Republican members

of Congress used nearly identical lists of frequent terms, indicating that – at least on the surface – they were referring to similar moral themes.
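In outline, this frequency step amounts to the following (a pure-Python stand-in for the Quanteda pipeline; the stop-word set and the miniature dictionary are illustrative, not the actual moral foundations resources):

```python
from collections import Counter

STOP_WORDS = {"a", "the", "and", "of", "to", "in"}  # illustrative subset

def top_dictionary_words(tweets, dictionary, k=10):
    """Lowercase the tweets, drop stop words, keep only dictionary words,
    and return the k most frequent words with their proportional use."""
    tokens = [w for t in tweets for w in t.lower().split()
              if w not in STOP_WORDS and w in dictionary]
    counts = Counter(tokens)
    total = sum(counts.values())
    return [(w, n / total) for w, n in counts.most_common(k)]

loyalty = {"nation", "community", "together", "unite", "family"}
tweets = ["Our nation must come together",
          "A community that can unite the nation"]
# top_dictionary_words(tweets, loyalty) ranks "nation" first (2 of 5 hits = 0.4)
```

The proportions returned correspond to the parenthesized values in Table 3: each word’s count relative to all dictionary hits in that category.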

Table 3. Words from group loyalty and purity dictionaries that were most frequently used by Democratic and Republican legislators

Group loyalty
  Democrats: nation (.30), community (.14), member (.14), family (.12), group (.07), together (.05), individual (.03), foreign (.03), unite (.03), fellow (.03)
  Republicans: nation (.29), member (.15), community (.12), family (.11), group (.08), individual (.04), foreign (.03), together (.03), fellow (.03), unite (.03)

Purity
  Democrats: clean (.18), disease (.08), sick (.07), humble (.03), integrity (.02), exploit (.02), sacred (.02), holy (.02), saint (.02), innocent (.01)
  Republicans: clean (.13), disease (.09), humble (.05), integrity (.03), exploit (.03), sick (.03), saint (.03), holy (.02), tramp (.02), innocent (.01)

Note. Numbers in parentheses indicate the proportional use of that word (relative to the total number of words used from that category). For instance, the word “nation” constituted 30% of Democrats’ and 29% of Republicans’ mentions of words pertaining to group loyalty. Words that were used frequently by members of one party but not the other have been italicized.

To dig deeper, we conducted collocation analyses (e.g., Baker et al. 2008) based on the five most frequently used words in each dictionary. More specifically, we sampled words that occurred up to three words before or after the target word. (Because moral words were relatively infrequent in general, a dataset based on associated words becomes too sparse to convey much information after the five most frequently used words.) We ranked associated words according to their frequency and retained the top ten associates for each term, again separately for Democrats and Republicans (see Table 4). This step allowed us to consider the possibility that liberals and conservatives were using the same moral terms but in qualitatively different ways. When discussing group loyalty, we observed that Democrats most often referred to specific groups in society (e.g., “women,” “students,” “teachers,” “small businesses”) and the needs of these groups (e.g., “paid leave,” “#whatmothersneed,” “support”). By contrast, Republicans tended to highlight security concerns (e.g., “defense,” “guard,” “fort hood”) and abstract sentiments, including religious terms (“faith,” “freedom,” “prayer,” “easter”).

Purity words were used much less frequently overall, so comparisons based on associated words were not especially revealing. However, it does seem that Democrats were more likely to mention environmental (“energy,” “environment,” “air”) and health concerns (“paid leave,” “sick days,” “#whatmothersneed”).


Republicans mentioned environmental issues as well, but typically from the standpoint of regulations (e.g., “epa,” “federal,” “clean water act”). Republicans were also more likely than liberals to mention patriotic groups (e.g., “americans,” “vets,” “heroes”).
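The collocation step described above – counting words that occur within three tokens of a target word – can be sketched as follows (our own simplified implementation, not the R code actually used):

```python
from collections import Counter

def collocates(tweets, target, window=3, k=10):
    """Count words occurring within `window` tokens before or after each
    occurrence of `target`, ranked by raw co-occurrence frequency."""
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo)
                              if j != i)
    return counts.most_common(k)

tweets = ["clean energy jobs for our future", "we need clean energy now"]
# collocates(tweets, "clean") puts "energy" on top (co-occurs twice)
```

The counts returned correspond to the parenthesized frequencies in Table 4; ranking them recovers the top associates for each target word.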

Table 4. Words associated with the top five words most frequently used by Democratic and Republican legislators from group loyalty and purity dictionaries

Group loyalty

nation
  Democrats: week (113), day (45), today (41), health (37), women (31), small (30), business (26), security (20), service (20), teacher (20)
  Republicans: day (87), security (60), today (60), week (43), defense (28), great (28), act (25), prayer (25), debt (24), guard (23)

community
  Democrats: great (32), leaders (27), today (32), college (19), center (16), service (16), students (15), work (15), join (14), support (14)
  Republicans: leaders (29), great (23), service (22), center (20), hood (17), business (14), fort (14), today (14), work (13), college (11)

member
  Democrats: congress (39), service (24), today (19), great (17), community (14), house (14), meeting (14), meet (13), family (12), fellow (12)
  Republicans: staff (39), today (35), congress (34), meeting (30), great (26), will (25), rep (23), enjoyed (22), service (22), mckinley (21)

family
  Democrats: thoughts (34), friends (32), leave (31), paid (30), prayers (29), #whatmothersneed (28), time (17), raise (12), care (11), go (11)
  Republicans: prayers (40), friends (33), great (17), happy (17), today (15), thoughts (14), faith (12), freedom (12), easter (11), wishing (11)

group
  Democrats: great (25), bipartisan (14), today (12), thanks (9), met (8), students (8), women (8), joined (7), donated (6), incredible (6)
  Republicans: conservative (41), targeting (40), great (25), irs (20), #irs (16), meeting (13), students (13), working (13), met (12), political (12)

Purity

clean
  Democrats: energy (28), water (15), air (12), debt (8), jobs (8), ceiling (7), environment (7), infrastructure (6), will (6), bill (5)
  Republicans: water (14), act (10), come (5), bill (4), epa (3), federal (3), make (3), will (3), #coal (2), #vascandal (2)


Table 4. (continued)

disease
  Democrats: heart (16), lyme (7), rare (6), research (6), women (5), cost (4), effective (4), fight (4), learn (4), need (4)
  Republicans: rare (8), pediatric (6), fight (5), research (5), kidney (4), advance (3), alzheimer’s (3), americans (3), awareness (3), conventions (3)

sick
  Democrats: paid (33), days (19), #whatmothersneed (16), leave (12), day (10), get (8), care (7), congress (6), working (6), child (5)
  Republicans: americans (3), people (3), care (2), vets (2)

humble
  Democrats: honored (4), receive (2)
  Republicans: honor (5), meet (4), heroes (3), today (3), honored (2), incredibly (2), participate (2), resolutions (2), senate (2), words (2)

integrity
  Democrats: sovereignty (2), strength (2), territorial (2)
  Republicans: territorial (6), sovereignty (3), ukraine (3), #ukraine (2), attack (2), honor (2), independence (2), integrity (2), man (2), people (2)

Note. Numbers in parentheses indicate the total number of times each associated word occurred in the linguistic corpus for each party. Words that were used only one time are omitted. Words that were used frequently by members of one party but not the other have been italicized.

5.3 Discussion

In Study 2, we investigated whether liberal and conservative members of the U.S. Congress would differ in terms of their use of moral(istic) language. Because previous analyses of formal speeches and interviews failed to document meaningful ideological differences in the use of moral language, we focused on language used in social media posts. Specifically, we used the same methods developed in Study 1 to analyze over 59,000 tweets sent from the official accounts of 388 members of Congress. Consistent with the assumptions of moral foundations theory, we observed that liberal legislators used more language pertaining to fairness and harm, whereas conservative legislators used more language pertaining to authority. Contrary to expectation, we found that liberal legislators were also more likely than conservative legislators to use group loyalty and purity language.


Upon further inspection, we concluded that Democrats tended to use the language of group loyalty to highlight the needs and interests of groups in society (such as “women,” “students,” “teachers,” and “paid leave”), whereas Republicans used similar language to highlight national security concerns and religious commitments (e.g., “defense,” “guard,” “faith,” and “prayer”). With respect to purity language, legislators from both parties discussed environmental issues and related topics. At the same time, Democrats were more likely to focus on questions of pollution and sickness (“energy,” “air,” and “sick days”), whereas Republicans were more likely to mention federal regulations and patriotic groups (e.g., “epa,” “clean water act,” “vets,” and “heroes”). These results demonstrate not only that liberal and conservative legislators prioritize different moral values but also that such differences in value priorities can be studied unobtrusively through the analysis of language.

6. Concluding remarks

Previous efforts to use automated methods of textual analysis to document ideological differences in “moral foundations” have met with very limited success (Neiman et al. 2016a, 2016b; Prims et al. 2017). In some cases, the lack of clearly interpretable patterns was attributed to the fact that studies had focused on relatively formal contexts of communication, in which politicians simply followed “the script.” In the present set of studies, we confirmed that it is indeed possible to capture ideological differences in moral discourse using unobtrusive methods to analyze relatively informal language on a social media platform.

We conducted parallel investigations of the language used by ordinary citizens and members of the U.S. Congress on Twitter. According to moral foundations theory, liberals are expected to value fairness and harm avoidance more than conservatives, whereas conservatives are expected to value group loyalty, authority, and purity more than liberals. In an analysis of over 11 million tweets from nearly 25,000 ordinary citizens, we confirmed that these ideological differences in value priorities were expressed in linguistic behavior (see also Jones et al. 2017). The one exception was for harm-related language. Somewhat unexpectedly, we observed that conservative citizens mentioned harm more often than liberal citizens, although this difference became nonsignificant when we adjusted for the effect of political sophistication.

Because previous research demonstrates that political interest, involvement, and sophistication can strengthen relationships between psychological characteristics and ideological preferences (e.g., Jost et al. 2009), we explored this variable as a potential moderator. With respect to language concerning authority and group loyalty, we found that political sophistication did indeed exacerbate ideological

differences. Thus, sophisticated conservatives were especially likely to use words related to authority and group loyalty. With respect to harm-related language, we observed a cross-over pattern of interaction. At low levels of sophistication, liberals expressed more harm-related concerns, but at high levels of sophistication, conservatives expressed more harm-related concerns. The fact that we observed qualitatively different patterns for the so-called “binding foundations” (group loyalty, authority, and purity) in comparison with concerns about fairness and harm avoidance is broadly consistent with the notion that these two general types of moral intuitions are different from one another and should not necessarily be treated as psychologically equivalent (Federico et al. 2013; Kugler et al. 2014; Milojev et al. 2014; Sinn and Hayes 2016).

In a second study, we investigated the possibility that political elites – in this case, members of the U.S. Congress (and their staff members) – would also use language differently as a function of political ideology. Thus, we analyzed more than 59,000 tweets from the official accounts of 388 members of Congress and observed that, as hypothesized, liberal legislators used more fairness- and harm-related words, whereas conservative legislators used more authority-related words. Unexpectedly, we also found that liberals used more language pertaining to group loyalty and purity. Follow-up analyses suggested that liberal and conservative members of Congress use similar moral vocabularies, but they use the same words to emphasize different types of concerns.
Whereas liberals used words related to purity and group loyalty to highlight environmental and health-related issues, as well as the needs of specific social groups in society (such as women, teachers, and students), conservatives used similar words to address environmental regulations, national security, and patriotism. These results highlight the need to move beyond simple word counts when it comes to understanding psychological and other differences in the moral discourse of the left and right.

In a few cases, we observed that ideological differences played out identically with respect to ordinary citizens and Congressional elites. In both studies, liberals expressed more fairness concerns, whereas conservatives expressed more authority concerns. There were also some potentially interesting differences. Whereas conservative citizens (especially those who were high in political sophistication) tended to use more language pertaining to group loyalty and purity, liberal legislators were more likely than conservative legislators to use these types of language (as well as harm-related language). It is conceivable, at least, that (on some level) liberal members of Congress were seeking to broaden their appeal by incorporating values that are typically associated with more conservative constituencies. This would be consistent with the notion that ideological speech is not merely an “expressive act” but also a “performative act,” insofar as it reflects rhetorical strategies and both conscious and nonconscious goals to persuade others (e.g., Billig 1987,


1991; Condor et al. 2013; van Dijk 2006). Another possibility, which we explored empirically, is that liberals and conservatives use similar language to highlight quite different moral and political concerns. These and other questions pertaining to the effects of ideology and sophistication on moral discourse are ripe for further investigation using the kinds of automated techniques we have described.

Critical discourse analysts have long recognized that, in a certain sense, all linguistics is sociolinguistics (Fowler and Kress 1979, 186–187). That is, the way in which people use language is invariably linked up to their group memberships, social identities, and ideological commitments (see also Baker et al. 2008; Fairclough and Wodak 1997; Krzyżanowski 2010). We would go even further: these sociological characteristics are themselves connected to underlying psychological factors (Jost 2006, 2017; see also Cichocka et al. 2016), including cognitive and motivational processes that have the potential to turn abstract moral values into a basis for communication (Brady et al. 2017) and, ultimately, political action (Barberá et al. 2015). If this is correct, a tweet is not just a tweet, whether it comes from a member of Congress or an ordinary citizen. It is very often an invitation to participate in an act of moralization – on the grounds of, say, fairness or authority – and, therefore, to put forth one image of the good society – as against others.

Funding

This research was supported by the INSPIRE program of the National Science Foundation (Awards # SES-1248077 and # SES-1248077-001) as well as the New York University Global Institute for Advanced Study (GIAS).

Acknowledgments

Joanna Sterling and John T. Jost are members of the Social Media and Political Participation (SMaPP) Lab at New York University (NYU). This research was supported by the INSPIRE program of the National Science Foundation (Awards # SES-1248077 and # SES-1248077-001) as well as the New York University Global Institute for Advanced Study (GIAS). We gratefully acknowledge the support of computer programmers Duncan Penfold-Brown, Jonathan Ronen, and Yvan Scher and the advice of Michał Krzyżanowski and Joshua Tucker.

References

Adorno, Theodor W., Else Frenkel-Brunswik, Daniel J. Levinson, and R. Nevitt Sanford. 1950. The Authoritarian Personality. New York: Harper.


Albaugh, Quinn, Julie Sevenans, Stuart Soroka, and Peter John Loewen. 2013. “The Automated Coding of Policy Agendas: A Dictionary-Based Approach.” In 6th Annual Comparative Agendas Conference, Antwerp, Belgium.
Altemeyer, Bob. 1998. “The Other ‘Authoritarian Personality’.” Advances in Experimental Social Psychology 30: 47–92. doi: 10.1016/S0065-2601(08)60382-2
Baker, Paul, Costas Gabrielatos, Majid KhosraviNik, Michał Krzyżanowski, Tony McEnery, and Ruth Wodak. 2008. “A Useful Methodological Synergy? Combining Critical Discourse Analysis and Corpus Linguistics to Examine Discourses of Refugees and Asylum Seekers in the UK Press.” Discourse and Society 19: 273–306. doi: 10.1177/0957926508088962
Barberá, Pablo. 2015. “Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data.” Political Analysis 23: 76–91. doi: 10.1093/pan/mpu011
Barberá, Pablo, John T. Jost, Jonathan Nagler, Joshua Tucker, and Richard Bonneau. 2015. “Tweeting from Left to Right: Is Online Political Communication More Than an Echo Chamber?” Psychological Science 26: 1531–1542. doi: 10.1177/0956797615594620
Barberá, Pablo, Ning Wang, Richard Bonneau, John T. Jost, Jonathan Nagler, Joshua Tucker, and Sandra González-Bailón. 2015. “The Critical Periphery in the Growth of Social Protests.” PLoS ONE 10(11): e0143611. doi: 10.1371/journal.pone.0143611
Billig, Michael. 1987. Arguing and Thinking: A Rhetorical Approach to Social Psychology. Cambridge, UK: Cambridge University Press.
Billig, Michael. 1991. Ideology and Opinions: Studies in Rhetorical Psychology. London: Sage.
Block, Jack, and Jeanne H. Block. 2006. “Nursery School Personality and Political Orientation Two Decades Later.” Journal of Research in Personality 40: 734–749. doi: 10.1016/j.jrp.2005.09.005
Brady, William, Julian Wills, John T. Jost, Joshua Tucker, and Jay Van Bavel. 2017. “Emotion Shapes the Diffusion of Moralized Content in Social Networks.” Proceedings of the National Academy of Sciences 114: 7313–7318.
Carney, Dana R., John T. Jost, Samuel D. Gosling, and Jeff Potter. 2008. “The Secret Lives of Liberals and Conservatives: Personality Profiles, Interaction Styles, and the Things They Leave Behind.” Political Psychology 29: 807–840. doi: 10.1111/j.1467-9221.2008.00668.x
Cichocka, Aleksandra, Michał Bilewicz, John T. Jost, Natasza Marrouch, and Marta Witkowska. 2016. “On the Grammar of Politics – or Why Conservatives Prefer Nouns.” Political Psychology 37: 799–815.
Clifford, Scott, and Jennifer Jerit. 2013. “How Words Do the Work of Politics: Moral Foundations Theory and the Debate over Stem Cell Research.” The Journal of Politics 75: 659–671. doi: 10.1017/S0022381613000492
Condor, Susan, Cristian Tileaga, and Michael Billig. 2013. “Political Rhetoric.” In The Oxford Handbook of Political Psychology, ed. by Leonie Huddy, David O. Sears, and Jack S. Levy, 262–300. Oxford: Oxford University Press.
Conover, Pamela Johnston, and Stanley Feldman. 1981. “The Origins and Meaning of Liberal/Conservative Self-Identifications.” American Journal of Political Science: 617–645. doi: 10.2307/2110756
Durrheim, Kevin, and John Dixon. 2005. “Studying Talk and Embodied Practices: Toward a Psychology of Materiality of ‘Race Relations’.” Journal of Community and Applied Social Psychology 15: 446–460. doi: 10.1002/casp.839
Enelow, James M., and Melvin J. Hinich. 1984. The Spatial Theory of Voting: An Introduction. Cambridge: Cambridge University Press.


Evans, Geoffrey, Anthony Heath, and Mansur Lalljee. 1996. “Measuring Left-Right and Libertarian-Authoritarian Values in the British Electorate.” British Journal of Sociology: 93–112. doi: 10.2307/591118
Fairclough, Norman, and Ruth Wodak. 1997. “Critical Discourse Analysis.” In Discourse as Social Interaction, ed. by Teun A. van Dijk, 258–284. London: Sage.
Federico, Christopher M., and Paul Goren. 2009. “Motivated Social Cognition and Ideology: Is Attention to Elite Discourse a Prerequisite for Epistemically Motivated Political Affinities?” In Social and Psychological Bases of Ideology and System Justification, ed. by John T. Jost, Aaron C. Kay, and Hulda Thorisdottir, 267–291. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780195320916.003.011
Federico, Christopher M., Christopher R. Weber, Damla Ergun, and Corrie Hunt. 2013. “Mapping the Connections Between Politics and Morality: The Multiple Sociopolitical Orientations Involved in Moral Intuition.” Political Psychology 34: 589–610. doi: 10.1111/pops.12006
Fowler, Roger, and Gunther Kress. 1979. “Critical Linguistics.” In Language and Control, ed. by Roger Fowler, Bob Hodge, Gunther Kress, and Tony Trew, 185–213. London: Routledge and Kegan Paul.
Fraley, R. Chris, Brian N. Griffin, Jay Belsky, and Glenn I. Roisman. 2012. “Developmental Antecedents of Political Ideology: A Longitudinal Investigation from Birth to Age 18 Years.” Psychological Science 23: 1425–1431. doi: 10.1177/0956797612440102
Freeden, Michael. 1998. “Is Nationalism a Distinct Ideology?” Political Studies 46: 748–765. doi: 10.1111/1467-9248.00165
Gerber, Alan S., Gregory A. Huber, David Doherty, Conor M. Dowling, and Shang E. Ha. 2010. “Personality and Political Attitudes: Relationships Across Issue Domains and Political Contexts.” American Political Science Review 104: 111–133. doi: 10.1017/S0003055410000031
Graham, Jesse, Jonathan Haidt, and Brian A. Nosek. 2009.
“Liberals and Conservatives Rely on Different Sets of Moral Foundations.” Journal of Personality and Social Psychology 96: 1029–1046. doi: 10.1037/a0015141
Haidt, Jonathan. 2001. “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Psychological Review 108: 814–834. doi: 10.1037/0033-295X.108.4.814
Haidt, Jonathan, and Jesse Graham. 2007. “When Morality Opposes Justice: Conservatives Have Moral Intuitions that Liberals May Not Recognize.” Social Justice Research 20: 98–116. doi: 10.1007/s11211-007-0034-z
Hirsh, Jacob B., Colin G. DeYoung, Xiaowen Xu, and Jordan B. Peterson. 2010. “Compassionate Liberals and Polite Conservatives: Associations of Agreeableness with Political Ideology and Moral Values.” Personality and Social Psychology Bulletin 36: 655–664. doi: 10.1177/0146167210366854
Homer-Dixon, Thomas, Jonathan Leader Maynard, Matto Mildenberger, Manjana Milkoreit, Steven J. Mock, Stephen Quilley, Tobias Schröder, and Paul Thagard. 2013. “A Complex Systems Approach to the Study of Ideology: Cognitive-Affective Structures and the Dynamics of Belief Systems.” Journal of Social and Political Psychology 1: 337–363. doi: 10.5964/jspp.v1i1.36
Jacobson, Daniel. 2008. “Does Social Intuitionism Flatter Morality or Challenge It?” In Moral Psychology: The Cognitive Science of Morality, ed. by Walter Sinnott-Armstrong, 219–232. Cambridge, MA: MIT Press.


Jones, Kevin L., Sharareh Noorbaloochi, John T. Jost, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker. 2017. “Liberal and Conservative Values: What We Can Learn from Congressional Tweets.” Political Psychology. doi: 10.1111/pops.12415
Jost, John T. 2006. “The End of the End of Ideology.” American Psychologist 61: 651–670. doi: 10.1037/0003-066X.61.7.651
Jost, John T. 2012. “Left and Right, Right and Wrong.” Science 337: 525–526. doi: 10.1126/science.1222565
Jost, John T. 2017. “Ideological Asymmetries and the Essence of Political Psychology.” Political Psychology 38: 167–208.
Jost, John T., Christopher M. Federico, and Jaime L. Napier. 2009. “Political Ideology: Its Structure, Functions, and Elective Affinities.” Annual Review of Psychology 60: 307–337. doi: 10.1146/annurev.psych.60.110707.163600
Jost, John T., Christopher M. Federico, and Jaime L. Napier. 2013. “Political Ideologies and Their Social Psychological Functions.” In The Oxford Handbook of Political Ideologies, ed. by Michael Freeden, Lyman Tower Sargent, and Marc Stears, 232–250. Oxford: Oxford University Press.
Jost, John T., Brian A. Nosek, and Samuel D. Gosling. 2008. “Ideology: Its Resurgence in Social, Personality, and Political Psychology.” Perspectives on Psychological Science 3: 126–136. doi: 10.1111/j.1745-6916.2008.00070.x
Kandler, Christian, Wiebke Bleidorn, and Rainer Riemann. 2012. “Left or Right? Sources of Political Orientation: The Roles of Genetic Factors, Cultural Transmission, Assortative Mating, and Personality.” Journal of Personality and Social Psychology 102: 633–645. doi: 10.1037/a0025560
Krzyżanowski, Michał. 2010. The Discursive Construction of European Identities. Frankfurt am Main: Peter Lang.
Kugler, Matthew, John T. Jost, and Sharareh Noorbaloochi. 2014. “Another Look at Moral Foundations Theory: Do Authoritarianism and Social Dominance Orientation Explain Liberal-Conservative Differences in ‘Moral’ Intuitions?” Social Justice Research 27: 413–431.
doi: 10.1007/s11211-014-0223-5 Leone, Luigi, Stefano Livi, and Antonio Chirumbolo. 2015. “Political Involvement Moderates the Impact of Worldviews and Values on SDO and RWA.” European Journal of Social Psychology 4: 418–427. McAdams, Dan P. 2008. “Life Story.” In The Encyclopedia of Adulthood and Aging. McAdams, Dan P., Michelle Albaugh, Emily Farber, Jennifer Daniels, Regina L. Logan, and Brad Olson. 2008. “Family Metaphors and Moral Intuitions: How Conservatives and Liberals Narrate their Lives.” Journal of Personality and Social Psychology 95: 978–990. ​ doi: 10.1037/a0012650 Milojev, Petar, Danny Osborne, Lara M. Greaves, , Marc S. Wilson, Caitlin L. Davies, James H. Liu, and Chris G. Sibley. 2014. “Right-Wing Authoritarianism and Social Dominance Orientation Predict Different Moral Signatures.” Social Justice Research 27: 149–174. doi: 10.1007/s11211-014-0213-7 Mondak, Jeffery J. 2010. Personality and the Foundations of Political Behavior. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511761515 Moscovici, Serge. 1988. “Notes Towards a Description of Social Representations.” European Journal of Social Psychology 18: 211–250. doi: 10.1002/ejsp.2420180303 Nagel, Thomas. 2012. “The Taste for Being Moral.”New York Review of Books, December 6 issue, 40–42.


Neiman, Jayme L., Frank J. Gonzalez, Kevin Wilkinson, Kevin B. Smith, and John R. Hibbing. 2016a. "Speaking Different Languages or Reading from the Same Script? Word Usage of Democratic and Republican Politicians." Political Communication 33: 212–240.

Neiman, Jayme L., Frank J. Gonzalez, Kevin Wilkinson, Kevin B. Smith, and John R. Hibbing. 2016b. "Corrigendum: Speaking Different Languages or Reading from the Same Script? Word Usage of Democratic and Republican Politicians." Political Communication 33: 346–349.

Newman, Matthew L., Carla J. Groom, Lori D. Handelman, and James W. Pennebaker. 2008. "Gender Differences in Language Use: An Analysis of 14,000 Text Samples." Discourse Processes 45: 211–236. doi: 10.1080/01638530802073712

Pennebaker, James W., and Lori D. Stone. 2003. "Words of Wisdom: Language Use Over the Life Span." Journal of Personality and Social Psychology 85: 291–301. doi: 10.1037/0022-3514.85.2.291

Poole, Keith T., and Howard Rosenthal. 1985. "A Spatial Model for Legislative Roll Call Analysis." American Journal of Political Science 29: 357–384. doi: 10.2307/2111172

Prims, J. P., Zachary Melton, and Matt Motyl. 2017. "Using Twitter to Understand Moral Differences Underlying Political Preferences in the 2016 US Presidential Primary." In Why Irrational Politics Appeals: Understanding the Allure of Trump, ed. by Mari Fitzduff.

Sidanius, Jim, and Felicia Pratto. 1999. Social Dominance: An Intergroup Theory of Social Hierarchy and Oppression. Cambridge: Cambridge University Press. doi: 10.1017/CBO9781139175043

Sinn, Jeffrey S., and Matthew W. Hayes. 2016. "Replacing the Moral Foundations: An Evolutionary-Coalitional Theory of Liberal-Conservative Differences." Political Psychology. doi: 10.1111/pops.12361

Suhler, Christopher L., and Patricia Churchland. 2011. "Can Innate, Modular 'Foundations' Explain Morality? Challenges for Haidt's Moral Foundations Theory." Journal of Cognitive Neuroscience 23: 2103–2116. doi: 10.1162/jocn.2011.21637

Tomkins, Silvan S. 1965. "Affect and the Psychology of Knowledge." In Affect, Cognition, and Personality: Empirical Studies, ed. by Silvan S. Tomkins and Carroll E. Izard, 72–97. New York: Springer.

Van Dijk, Teun A. 2006. "Ideology and Discourse Analysis." Journal of Political Ideologies 11: 115–140. doi: 10.1080/13569310600687908

Wan, Ching, Kim-Pong Tam, and Chi-Yue Chiu. 2010. "Intersubjective Cultural Representations Predicting Behaviour: The Case of Political Culture and Voting." Asian Journal of Social Psychology 13: 260–273. doi: 10.1111/j.1467-839X.2010.01318.x

Webb, Eugene J., Donald Thomas Campbell, Richard D. Schwartz, and Lee Sechrest. 1966. Unobtrusive Measures: Nonreactive Research in the Social Sciences. Chicago: Rand McNally.

Wilson, Glenn D. 1973. The Psychology of Conservatism. London: Academic Press.

Yu, Bei. 2014. "Language and Gender in Congressional Speech." Literary and Linguistic Computing 29: 118–13. doi: 10.1093/llc/fqs073


Address for correspondence

Joanna Sterling
Department of Psychology and Woodrow Wilson School
Princeton University
Peretsman-Scully Hall
Princeton, NJ 08540
USA
[email protected]

Co-author details

John T. Jost
Department of Psychology and Politics
New York University
6 Washington Place
New York, NY 10003
USA
[email protected]

Biographical notes

Joanna Sterling is a postdoctoral research fellow at Princeton University in the Department of Psychology and the Woodrow Wilson School of Public and International Affairs. Her research combines experimental methods, social media data, and quantitative text analysis to measure the cognitive and motivational underpinnings of political ideology and communication.

John T. Jost is Professor of Psychology and Politics and Co-Director of the Center for Social and Political Behavior at New York University. His work focuses on the theoretical and empirical implications of system justification theory and on the underlying cognitive and motivational differences between liberals and conservatives. His research has been funded by the National Science Foundation, has appeared in leading scientific journals, and has received national and international media attention.

Publication history

Date received: 22 February 2017
Date accepted: 30 September 2017
Published online: 24 November 2017
