
POLITICAL BULLSHIT RECEPTIVITY: A CROSS-CULTURAL STUDY

Political bullshit receptivity and its correlates: a cross-cultural validation of the concept

Vukašin Gligorić1, Allard R. Feddes, & Bertjan Doosje

Department of Psychology, University of Amsterdam, The Netherlands

Draft version: 26/10/2020

This is an unpublished manuscript that has been submitted for publication.

Contents may change prior to publication.

Conflict of Interest: Authors have no conflict of interest.

1 Corresponding author: Vukašin Gligorić, Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129B 1018 WS Amsterdam, The Netherlands. ORCID: 0000-0001-7528-6806 Email: [email protected]


Abstract

Frankfurt defined persuasive communication that has no regard for truth, knowledge, or evidence as bullshit. Although there has been a lot of psychological research on pseudo-profound bullshit, no study has examined this type of communication in politics. In the present research, we operationalize political bullshit receptivity as endorsing vague political statements, slogans, and political bullshit programs. We investigated the relationship of these three measures with pseudo-profound bullshit, ideology (political ideology, support for neoliberalism), populism, and voting behavior. Three pre-registered studies in different cultural settings (the United States, Serbia, the Netherlands; total N = 534) yielded medium to high intercorrelations between the political bullshit measures and pseudo-profound bullshit, and good construct validity (the hypothesized one-factor solutions). A Bayesian meta-analysis showed that all political bullshit measures positively correlated with support for the free market, while only some positively correlated with social conservatism (political statements and programs), economic conservatism (programs), and populism (programs). In the U.S., higher receptivity to political bullshit was associated with a higher probability of having voted for Trump (vs. Clinton) in the past and with higher intentions to vote for Trump (vs. Biden and Sanders). In the Netherlands, higher receptivity to political bullshit predicted the intention to vote for the conservative-liberal People's Party for Freedom and Democracy. Exploratory analyses on the merged datasets showed that higher receptivity to political bullshit was associated with a higher probability of voting for right-wing candidates/parties and a lower probability of voting for left-wing ones. Overall, political bullshit endorsement showed good validity, opening avenues for research in political communication, especially when this communication is broad and meaningless.

Keywords: pseudo-profound bullshit, bullshit, political bullshit, ideology, voting


Introduction

“That’s the whole point of good propaganda. You want to create a slogan that nobody’s going to be against, and everybody’s going to be for. Nobody knows what it means, because it doesn’t mean anything.”

Noam Chomsky (1991, p. 21-22)

As Chomsky noted, political communication has to a large extent become sloganized and essentially meaningless. Consider the ongoing 2020 U.S. presidential election: Joe Biden believes that "America is an Idea", while Donald Trump asserts that he would "Keep America great". Although these statements seem to convey some meaning, they are too broad and vague to be tested for trustworthiness or evidence. Yet, they aim to persuade voters. Although prevalent, this type of political communication has not been investigated empirically. In the present paper, we present theoretical and empirical evidence for receptivity to this type of communication, which we label "political bullshit" following previous work on pseudo-profound bullshit (Pennycook et al., 2015).

Frankfurt (1986/2005) was the first to outline persuasive communication that is constructed without regard for truth, knowledge, or evidence. He noted the prevalence of this phenomenon and named it bullshit, launching an upsurge in academic work on the topic (Law, 2011; Hardcastle & Reisch, 2006). In his seminal essay "On Bullshit", he made a clear distinction between bullshit and lies: while liars care about the truth in the sense of hiding it, bullshitters have no regard for it (Frankfurt, 1986/2005). Therefore, whenever one speaks on a topic with no knowledge about it, that person is engaging in bullshitting (Petrocelli, 2018).


Crucially, it is the speaker's intention (i.e., how it is produced), and not the recipient's interpretation, that qualifies something as bullshit (Pennycook et al., 2015; Pennycook et al., 2016).

Pseudo-profound bullshit

Although bullshit has been extensively discussed in philosophy (Carson, 2016), it has only recently been investigated by psychologists. Specifically, Pennycook and colleagues (2015) focused on one type of bullshit, pseudo-profound bullshit, which represents a collection of pompous words that have syntactic structure but no actual meaning (e.g., "Wholeness quiets infinite phenomena"). Comparing pseudo-profound bullshit to the vague messages that New Age figures try to convey, Pennycook and colleagues (2015) showed that endorsement of this type of bullshit is related to a lower analytical and higher intuitive thinking style, higher religiosity, and paranormal beliefs. These findings were later replicated (Čavojová et al., 2019) and extended, showing that people susceptible to pseudo-profound bullshit are also more prone to conspiratorial ideation and schizotypy (i.e., a set of unusual experiences and magical thinking; Hart & Graether, 2018; Bainbridge et al., 2019), illusory pattern perception (Walker et al., 2019), and the use of essential oils (Ackerman & Chopik, 2020), and show less prosocial behavior (Erlandsson et al., 2018). Focusing on contextual factors, research showed that attributing pseudo-profound bullshit to a famous person increases its perceived profundity (Gligorić & Vilotijević, 2019), while bullshit titles increase the perceived profundity of abstract art (Turpin et al., 2019).

Given that pseudo-profound bullshit might be only one type of bullshit (Pennycook et al., 2015), there have been attempts to measure bullshit other than the pseudo-profound kind; for example, Evans and colleagues (2020) measured scientific bullshit (assembled from random words from a physics glossary), while Čavojová and colleagues (2020) tried to measure general bullshit (i.e., with no abstract buzzwords). Endorsement of both types of bullshit proved to be highly correlated with receptivity to pseudo-profound bullshit (rs = .60 and .83, respectively). Another empirical line of research on bullshit focused on bullshitting as an act (i.e., the producer), rather than on bullshit as a product (i.e., receptivity). For example, Petrocelli (2018) showed that bullshitting occurs more when one has to provide an opinion and when one can get away with bullshitting more easily. Similarly, Littrell and colleagues (2020) focused on the frequency with which individuals engage in bullshitting, measuring two aspects of it: persuasive and evasive. While the former's function is to persuade, impress, and achieve the acceptance of others, the latter aims to avoid giving direct answers and to evade any thorough inquiry. They also showed that people who engage in bullshitting tend to engage more in overclaiming, are less honest and sincere, and have lower cognitive ability (Littrell et al., 2020). However, the dominant approach remains the study of the reception of pseudo-profound bullshit. More importantly for the present research, some studies have investigated its relationship with political affinity.

Pseudo-profound bullshit and political affinities. Several studies have demonstrated a relationship between ideological stances and receptivity to pseudo-profound bullshit. These studies found that, in the U.S., neoliberals (i.e., supporters of free-market politics; Sterling et al., 2016), Republican supporters, and conservatives are more susceptible to pseudo-profound bullshit (Pfattheicher & Schindler, 2016). The findings for free-market supporters and for economic and social conservatism were later replicated (Evans et al., 2020), though again in a U.S. sample.

This relationship fits well within the ideological asymmetries between conservatives and liberals in epistemic motivation, such that conservatives rely more on an intuitive thinking style and on heuristic, simple information processing (e.g., Jost, 2017; Jost et al., 2016). However, the relationship between ideology and pseudo-profound bullshit has not been found consistently: for example, Hart and Graether (2018) found a positive relationship with conservatism only in their first study.

Additionally, no consistent evidence was found for a relation with neoliberalism, as the findings were not successfully replicated in a Serbian sample (Gligorić & Vilotijević, 2020). Some authors have also criticized the analyses Sterling and colleagues performed (i.e., the use of quadratic regression, which increases the rate of false positives; Simonsohn, 2018). Indeed, it seems that the relationship between ideology and pseudo-profound bullshit is more complex: Nilsson and colleagues (2019) found a positive relationship with social conservatism in a Swedish sample but failed to replicate the one with economic ideological preference. They also found that economic centrists (and even leftists) were the most inclined to fall for pseudo-profound bullshit. This was possibly the case because the voters of the Green party (considered to be left-wing) were the most susceptible to pseudo-profound bullshit. Additionally, political ideology is conceptualized differently in the U.S. than in Sweden, with the U.S. ideological scale shifted to the right, which might be one of the reasons for the inconsistent results (Nilsson et al., 2019).

Overall, although this is a promising line of research, pseudo-profound bullshit and political ideology are relatively distant psychological constructs. Therefore, it is reasonable to investigate this relationship using more political content, which is why we explore another type of bullshit: political bullshit.

Political bullshit

We define political bullshit as statements of political content that intend to persuade voters but are so vague and broad that they are essentially meaningless. We agree with Pennycook and colleagues' (2016) notion that the politicians' goal is often to "say something without saying anything; to appear competent and respectful without concerning themselves with the truth" (p. 125). Therefore, political bullshit represents political communication which (1) has no regard for truth (Frankfurt, 1986/2005), (2) promotes one's goals and agenda (i.e., persuades people to vote; Reisch, 2006), and (3) is essentially unclarifiable (Cohen, 2002). To this end, we would view the political communication that Chomsky described as a prototypical example of bullshit communication.

The idea of bullshit in politics has been present in the literature for some time, but it has not yet been clearly conceptualized, nor has its measurement been attempted. First, as Pennycook and colleagues (2015) indicated, pseudo-profound bullshit is just one type of bullshit, "a rather extreme point on what could be considered a spectrum of bullshit" (p. 550). To this end, bullshit in politics would not rely on abstract buzzwords as pseudo-profound bullshit does, because "a politician might deliberately simplify language to broaden his or her appeal, stating something that is vague and platitudinous, like 'I believe in America!'" (Sterling et al., 2016, p. 358). Second, there has been some theoretical work on bullshit that focused on the Brexit campaign (e.g., Hopkin & Rosamond, 2018; O'Dwyer, 2018), Donald Trump's presidency (Kristiansen & Kaussler, 2018), or international politics (Neumann, 2006). Similarly, the idea of political bullshit has also been used to discredit political opponents (e.g., Trump; Heer, 2015) or political movements (e.g., the Brexit campaign; Ball, 2017). Nevertheless, we adhere to a rigorous analysis of the concept; empirical research is needed to examine whether individuals at different points of the political spectrum use more political bullshit or are more susceptible to it. Finally, as the reviewed research suggests, there are indications of a relationship between bullshit (so far, pseudo-profound) and political ideology. However, as these are relatively distant constructs, we suggest that measuring the endorsement of political bullshit would prove more relevant and insightful.

Our definition of political bullshit also resonates well with the finding that bullshitting (or bullshitters) has two facets: persuasive and evasive (Littrell et al., 2020). It is political communication, in particular, that aims to persuade (to attract voters, to gain support for a particular agenda, etc.), while politicians often keep their communication abstract in order to evade directly answering questions that might lead to a decrease in support (Rogers & Norton, 2011). Conjoining these two features, politicians might try to persuade using statements that are too vague to repel voters (i.e., that evade meaningful proposals). In this way, our notion of political bullshit departs from demagoguery, which includes lying (bullshit is indifferent to truth; Frankfurt, 1986/2005) or making impossible promises (bullshit is too vague to promise anything meaningful) (for a review of demagoguery, see Gustainis, 1990). Political bullshit is also different from populism, as it does not imply any division between "us" and "them", although populists might sometimes engage in bullshitting (Hopkin & Rosamond, 2018; O'Dwyer, 2018).

In the present study, we constructed political statements that we deemed political bullshit and whose endorsement should serve as a measure of political bullshit receptivity.2 The threefold aim of the present study is to: (1) theoretically outline the concept of political bullshit, (2) operationalize receptivity to political bullshit, and (3) explore the relation of political bullshit receptivity to other traits, ideology, and voting behavior. More specifically, for the second and third aims, our goal was to develop a scale made of political bullshit statements which participants rate on how persuasive they find them for voting.

We expected positive correlations between the endorsement of political bullshit statements, slogans, and bullshit political programs (Sterling et al., 2016), given that all three measures fulfill our criteria of political bullshit as embodied in sloganized communication (convergent validity) (H1). We expected each of these measures to show a one-factor structure. We also included factual political statements that have the same format as political bullshit statements but should not be regarded as persuasive (see the difference between mundane and pseudo-profound statements; Pennycook et al., 2015). Next, we hypothesized that endorsement of political bullshit would be positively correlated with receptivity to pseudo-profound bullshit (H2) because both represent types of bullshit (for a similar approach with a general bullshit scale, see Čavojová et al., 2020). Based on the literature on pseudo-profound bullshit, we hypothesized that receptivity to political bullshit positively correlates with conservatism (H3; Pfattheicher & Schindler, 2016) and neoliberalism (H4; Sterling et al., 2016). Finally, we expected a low correlation, or none at all, with populism because there is a clear distinction between these constructs (discriminant validity) (H5). Prediction of voting behavior (specific parties) is exploratory, as there are no conclusive results in the literature (Nilsson et al., 2019).

2 As argued previously, "it is not the understanding of the recipient of bullshit that makes something bullshit, it is the lack of concern (and perhaps even understanding) of the truth or meaning of statements by the one who utters it" (Pennycook et al., 2016, p. 125). Therefore, in our case, we made political statements that lack actual meaning (the sentences cannot be tested for trustworthiness or evidence) and are indifferent to truth.

All these hypotheses were pre-registered at the Open Science Framework (OSF; pre-registration link, project link).

To address these research questions, we conducted three pre-registered studies in three different cultural settings: the U.S., the Netherlands, and Serbia. We also performed a meta-analysis to examine the average effects and test for homogeneity, which would indicate whether the pattern of results holds across countries. Socio-politically, these countries are to some extent similar (e.g., all are liberal democracies), but they also differ (e.g., the U.S. is a presidential democracy, while Serbia and the Netherlands are parliamentary democracies; Serbia is a non-WEIRD3 country). This allowed us to test the validity and reliability of the concept across different political contexts and to obtain more precise estimates with more participants.

Overview of the studies

3 Western, educated, industrialized, rich, democratic; Henrich et al., 2010


All three studies had a correlational design and were conducted using survey methodology. Each study was conducted in the native language (English, Serbian, and Dutch) after a process of translation and back-translation by native speakers. Thus, participants answered the same scales, except for voting behavior, as parties/candidates differ across countries. Before the main studies, the political bullshit statements, factual political statements, and slogans were piloted (N = 51). The method section was pre-registered at the Open Science Framework (OSF), where the materials for the pilot and main studies (in all languages), datasets, analysis scripts, and outputs are available (link).

Bayesian inference

Instead of the frequentist approach to significance, we used the Bayesian framework to determine the evidence in favor of correlations and differences in means (i.e., t-tests). Although the Bayesian and frequentist approaches represent different types of statistical inference, they are based on the same statistics (e.g., correlation coefficients, means) calculated from the sample. Thus, the size of a relationship can be interpreted as in the frequentist approach: correlation coefficients have the same values, while the question of significance is treated differently (i.e., there are no p values) (Wagenmakers et al., 2018). The Bayesian framework also allows finding evidence in favor of the null by quantifying how many times the data are more likely under H0 (no correlation) than under Ha (there is a correlation). Therefore, for a given correlation (e.g., r = .1), one can obtain evidence either in favor of the correlation or in favor of no correlation.

Sample size across studies

Given that Bayesian inference is used, the evidence can be monitored as the data are collected. This means that the stopping rule is not determined by a power analysis, but by the evidence achieved or the resources at hand (Rouder, 2014; Wagenmakers et al., 2018). Given that our research question is answered by a pattern of results rather than by a single test, our sample size was determined by the available resources, amounting to at least 150 participants in each study. This sample size was also indicated by a Bayes factor > 3 in favor of a one-sided alternative for r = .2 (using a stretched beta prior width of 1).
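This threshold can be sanity-checked with Jeffreys's closed-form approximation to the correlation Bayes factor, which tracks default Bayesian correlation tests with a wide prior reasonably well. The sketch below is only an approximation, not the exact stretched-beta computation JASP performs, and the doubling for the one-sided BF+0 assumes the observed correlation lies in the predicted direction.

```python
import math

def jeffreys_bf10(r: float, n: int) -> float:
    """Jeffreys's approximation to the two-sided Bayes factor BF10
    for a Pearson correlation r observed in a sample of size n."""
    return math.sqrt(math.pi / (2 * n - 3)) * (1 - r ** 2) ** ((4 - n) / 2)

# Smallest planned sample (n = 150) and the design correlation r = .2
bf10 = jeffreys_bf10(0.20, 150)

# A one-sided test roughly doubles BF10 when the sample correlation
# falls in the predicted direction.
bf_plus0 = 2 * bf10

print(round(bf10, 2), round(bf_plus0, 2))  # two-sided ≈ 2, one-sided ≈ 4 > 3
```

Under this approximation, n = 150 indeed yields a one-sided Bayes factor above 3 for a correlation of .2, consistent with the pre-registered rationale.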

Pilot study

Fifty-one participants completed the pilot study (65% females, Mage = 23.1, SDage = 2.4) in which we pre-tested political bullshit statements, factual political statements, and slogans.

Although we developed our political bullshit measures so as to resemble the communication exemplified in the introduction, we cannot assert that the statements in Biden's and Trump's campaigns are political bullshit. This is because we do not know the intention of their producers (Frankfurt, 1986/2005). On the other hand, our statements are political bullshit, as we produced them without regard for truth and made them unclarifiable, with the aim of persuading voters.4

Political bullshit statements were constructed as vague and motivational political statements that lack meaning but aim to persuade voters (e.g., "To politically lead the people means to always fight for them"). These were developed to mimic political communication in which politicians try to persuade without using meaningful, verifiable statements. We pretested 15 items. Participants were presented with the following instructions and answered on a five-point Likert scale (1 = not at all persuaded to 5 = very persuaded):

4 One way to test whether Biden’s and Trump’s statements can be classified into political bullshit might be to take the same approach as Pennycook and colleagues (2015) did with Deepak Chopra’s tweets and pseudo-profound bullshit. If Biden’s and Trump’s statements, and our political bullshit have the same psychological reality (i.e., high reliability, one-factor structure), they can be merged in the same category as was the case with Chopra’s tweets and pseudo-profound bullshit.


We are interested in how people perceive politicians and politics in general. Below are a series of statements that have been said by some politicians. Please read each statement and take a moment to think about what it might mean. Then please rate how persuasive you think it is. That is, rate how much you would be persuaded to vote for the candidate who said it.

Factual political statements were presented together with the political bullshit statements to obtain a scale that could be used to control for response bias (i.e., a general acquiescent response style; see Pennycook et al., 2015). We pre-tested 10 items that were not supposed to be regarded as persuasive (e.g., "The president and prime minister have important political functions").

Exploratory factor analysis of these 25 items clearly indicated the distinction between political bullshit and factual statements, yielding a two-factor solution. For the main studies, we selected 10 political bullshit statements and five factual statements based on the highest factor loadings. As expected, political bullshit statements (M = 3.00, SD = .81) were rated as more persuasive than factual statements (M = 2.43, SD = .98), BF+0 = 153.6, t(50) = 3.856, p < .001, Cohen's d = .54. This was also true when the scales were formed from the selected items with the highest factor loadings (M = 3.13, SD = .90 vs. M = 2.37, SD = 1.10), BF+0 = 207.4, t(50) = 3.961, p < .001, Cohen's d = .56. The pre-tested and selected items and the results of the factor analysis are given in Appendix A.

Slogan endorsement. We also pre-tested 10 slogans, which participants rated on how convincing they are using a five-point scale (1 = not at all convincing to 5 = very convincing). These focused on the fictitious country of Gonfel to avoid any possible bias from known names, parties, or countries (e.g., "I believe in Gonfel!"). Exploratory factor analysis indicated a one-factor solution, from which the five items with the highest loadings were selected. The pre-tested and selected slogans and the results of the factor analysis are given in Appendix B.

Study 1 – the U.S. sample

Participants

For our first study, we recruited participants from Prolific, a U.K.-based online participant pool. Prolific has a substantial number of U.S. participants and provides good-quality data (Palan & Schitter, 2018). We used four filters to prescreen participants for our study, allowing only: (1) participants of U.S. nationality, (2) participants currently residing in the U.S., (3) participants with a minimum approval rate of 92 (out of 100), and (4) participants who indicated "yes" or "no" on voting in the previous U.S. elections, thus filtering out those who answered "rather not say". This minimized the chance of participants avoiding the questions on voting in our study. To ensure high-quality data, we also included two attention checks (see Materials); if one of them was not answered correctly, the survey terminated and the participant was not paid. The survey took around nine minutes to complete, and participants were paid £1.09 (approximately €1.25) for their time. In total, 183 participants completed the survey. As pre-registered, we excluded four participants as multivariate outliers on the continuous dependent variables (Mahalanobis distance), while no participants were excluded based on our pre-registered minimum duration of 180 seconds. Therefore, we performed our analyses on the data of 179 participants (58.1% females, 36.9% males, and 5.0% indicated "other"; Mage = 29.6, SDage = 10.7). One participant indicated less than high school education, 44 had completed high school, 72 were students, 24 had an undergraduate degree, and 38 had a graduate degree.

Materials


After reporting their demographic characteristics, participants completed two single-item indicators of ideological position on social issues (e.g., minority rights and gender equality) and economic issues (planned vs. free-market economy) using a seven-point scale (1 = left-wing to 7 = right-wing). Participants next filled out the following scales in the order in which they are presented here (items were randomized within the scales). Means, standard deviations, and reliabilities of all scales are given in Table 1. The whole questionnaire is available on OSF.

Pseudo-profound bullshit receptivity was measured using the 10-item scale with items such as “Wholeness quiets infinite phenomena” (Pennycook et al., 2015). Participants indicated how profound the statement was on a five-point scale (1 = not at all profound to 5 = very profound). Five truly meaningful statements were included as fillers to conceal the emptiness of pseudo-profound items (e.g. “No man ever steps in the same river twice, for it's not the same river and he's not the same man”).

Bullshit political program endorsement. Another political bullshit measure, next to political bullshit statements and slogans, is the endorsement of three political programs for the presidential elections in the fictitious country of Gonfel. We constructed these programs to be meaningless and empty. Participants rated (1) how much they would support the program and (2) how likely they would be to vote for the candidate (1 = not at all to 5 = very). This means that six items in total measured bullshit political program endorsement. An example of a bullshit political program is:

Our political program is based on the unity of our people in Gonfel. We promise that the government that we form will work for its people, and not against its people as it has been the case for the last several decades. Our greatest effort will be put in returning dignity to our country so that we do not put shame on our ancestors. Pride and dignity are our values, and I pledge myself to fight for them.

Together with the bullshit political programs, three meaningful political programs were included as fillers. These were made to reflect concrete policy ideas with which one could agree or disagree, but without using abstract words. They were used to conceal the emptiness of the bullshit political programs. One of these programs was as follows:

We base our program on the reduction of costs for the citizens of Gonfel. We plan to reduce the university tuition fees by 20% and provide affordable medical service to the citizens with income lower than the average. Also, more effort will be put on the development of railroads so that we double the production of trains compared to the last year. This will enable our citizens to move through the country easier and improve our economy.

Slogan endorsement. Participants rated how convincing (1 = not at all convincing to 5 = very convincing) five slogans are (e.g., "I believe in Gonfel!"). After this, participants rated the political bullshit statements (10 items from the pilot) together with the factual political statements (five items from the pilot) on how persuasive they are, using a five-point scale (1 = not at all persuasive to 5 = very persuasive).

Support for populism was measured by indicating agreement with four items (e.g., "Politicians should listen more closely to the problems the people have") on a five-point scale (1 = completely disagree to 5 = completely agree) (Elchardus & Spruyt, 2016).

Support for neoliberalism was measured using two items from the Fair Market Ideology scale (Jost et al., 2003), on which participants indicated their endorsement of free-market policies using a five-point scale (1 = disagree to 5 = agree). The items were "The free market economic system is a fair system" and "The free market economic system is an efficient system" (for the same approach, see Caruso et al., 2013).

Voting behavior. Participants answered whether they (1) voted in the last U.S. presidential election (2016), with the options "yes", "no", and "I was not allowed to vote (e.g., minors)". If they indicated "yes", they were asked to (2) indicate which of the following candidates they voted for:

1) Donald Trump, 2) Hillary Clinton, 3) Gary Johnson, 4) Jill Stein, 5) Other candidate, 6) I would rather not say

Next, participants indicated whether they (3) planned to vote in the next U.S. presidential election (2020), with the options "yes" and "no". If they indicated "yes", they were asked to (4) indicate which candidate they would vote for:

1) Donald Trump, 2) Bernie Sanders, 3) Joe Biden, 4) Other candidate, 5) I would rather not say

Attention checks. We also included two attention check questions in the survey. One of them was presented together with the political bullshit and factual statements (“This is an attention check. Please answer Very persuasive”), while the second one was presented with the scale of populism (“This is an attention check. Please answer disagree on this question”). As previously mentioned, failing one of these led to the automatic termination of the survey.

Results


Data analyses were performed in the R programming language (R Core Team, 2013) and in JASP for the Bayesian analyses (JASP Team, 2020).

Factor structure of political bullshit measures

To check whether mean scores can be calculated for each of our political bullshit measures, we tested the factorial structure of the political bullshit statements, slogan endorsement, and bullshit political programs. Confirmatory factor analysis (CFA) of the ten political bullshit statements showed that the proposed one-factor solution had a good fit, χ2(35) = 73.45, p < .001, RMSEA = .078, SRMR = .049, CFI = .934, TLI = .915. Applying CFA to the five slogans showed that a one-factor solution with no covariances did not fit well, χ2(5) = 31.12, p < .001, RMSEA = .171, SRMR = .071, CFI = .873, TLI = .747. For this reason, we applied exploratory factor analysis (EFA) to the five slogans. The EFA showed that one factor should be retained according to eigenvalues higher than 1 and the scree plot. All slogans had factor loadings higher than .40 (see Appendix C for the factor loadings of all political bullshit measures in all three countries).

We next applied CFA to the three bullshit political programs, each of which had two responses: the likelihood of (1) voting for the candidate and (2) supporting the program. This means that six items measured bullshit political program endorsement. Applying CFA to these six items showed that a one-factor solution (without covariances) did not fit the data, χ2(9) = 398.36, p < .01, RMSEA = .492, SRMR = .164, CFI = .545, TLI = .242. This is reasonable because this factor structure does not assume covariances between the two questions related to the same program, or between the same question format across programs (i.e., voting for the candidate vs. supporting the program). However, we could not add covariances in this manner because the model would become saturated. Therefore, we performed an EFA on the six items concerning the three bullshit political programs, which indicated a one-factor solution according to eigenvalues higher than 1 and the scree plot. All items had factor loadings higher than .60 (Appendix C).

Overall, the factor structure indicates that all three measures are best conceptualized as one-factor measures, which is why we calculated mean scores. We then proceeded with calculating the correlations between our political bullshit measures and the other measures in the study.

Bayesian correlations

All correlations between measures, together with descriptives and reliabilities of the scales, are given in Table 1. As the table shows, there are medium to high correlations between all bullshit measures, both political (political bullshit statements, slogans, and bullshit political programs) and pseudo-profound. Endorsing political factual statements was also correlated with all bullshit measures, suggesting a response bias; that is, some participants were susceptible to any political statement. To partial out this variance, we calculated partial correlations between the bullshit measures controlling for political factual statements (Table 2). The partial correlations indicate that the relationships between bullshit measures remain even after controlling for this response bias. This, together with the factor analyses, indicates that there is a susceptibility to this kind of bullshit beyond simple response bias.
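The logic of these partial correlations (remove the variance shared with factual-statement endorsement, then correlate the residuals) can be sketched as follows. This is illustrative Python on simulated data with hypothetical variable names, not the R/JASP code used for the reported analyses:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing the control variable z out of both."""
    z1 = np.column_stack([np.ones_like(z), z])             # intercept + control
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]    # residuals of x on z
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]    # residuals of y on z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
acquiescence = rng.normal(size=300)               # shared response bias
slogans = acquiescence + rng.normal(size=300)     # two measures that correlate
programs = acquiescence + rng.normal(size=300)    # only through the bias
raw = np.corrcoef(slogans, programs)[0, 1]        # sizeable raw correlation
partial = partial_corr(slogans, programs, acquiescence)   # collapses toward zero
```

In this simulation the raw correlation is driven entirely by the shared bias, so the partial correlation vanishes; in the reported data the partial correlations remained substantial, which is the evidence against a pure response-bias account.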


Table 1
Means, standard deviations, reliabilities of the scales, and correlations between measures.5 One-item measures do not have Cronbach’s α.

                                  M (SD)       α    1.       2.       3.       4.       5.       6.       7.       8.
1. Political bullshit statements  3.14 (.74)   .86
2. Political factual statements   1.83 (.90)   .83  .406***
3. Slogans                        2.47 (.78)   .75  .533***  .275***
4. Bullshit programs              2.74 (.90)   .86  .327***  .241**   .303***
5. Pseudo-profound bullshit       2.36 (.83)   .89  .432***  .361***  .449***  .353***
6. Ideology social                2.59 (1.58)  –    .068     .120     .004††   .263***  -.020††
7. Ideology economy               3.30 (1.72)  –    .033†    .090     .063†    .223**   -.011††  .745***
8. Neoliberalism support          3.02 (1.07)  .77  .144     .118     .198*    .333***  .094     .469***  .574***
9. Populism                       3.55 (.61)   .51  .121     .012−    .186+    .202+    .211+    .201+    .101     .026−

Note. *** Bayes factor in favor of the alternative positive against the null (BF+0) > 100; ** BF+0 > 15; * BF+0 > 4; †† Bayes factor in favor of the null against the alternative positive (BF0+) > 10; † BF0+ > 4; + Bayes factor in favor of the alternative against the null (BF10) > 3; − Bayes factor in favor of the null against the alternative (BF01) > 3. Unmarked correlations have a Bayes factor between 1 and 3 in favor of either the null or the alternative positive, meaning the evidence is inconclusive (more data are needed).

5 As mentioned earlier, correlation sizes in the Bayesian framework are interpreted in a similar way as in the frequentist approach. However, note that the table contains crosses and minuses (†, −) in addition to asterisks and pluses (*, +). Crosses and minuses indicate that there is evidence in favor of H0, i.e., evidence of absence (as opposed to the absence of evidence in frequentist statistics). Bayes factors between 3 and 10 are interpreted as moderate evidence, between 10 and 30 as strong, between 30 and 100 as very strong, and higher than 100 as extreme evidence in favor of a hypothesis (Lee & Wagenmakers, 2013). In this case, for example, there is extreme evidence (BF > 100) in favor of the correlation of .432 between political bullshit and pseudo-profound bullshit. That is, the observed data are more than 100 times more likely under the hypothesis that these two scales are correlated than under the hypothesis that they are not.
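For readers less familiar with these categories, the Lee and Wagenmakers (2013) labels cited in the footnote can be written as a small lookup. This is an illustrative sketch, not part of the analysis pipeline:

```python
def bf_evidence(bf10):
    """Verbal evidence category for a Bayes factor (Lee & Wagenmakers, 2013)."""
    if bf10 > 100:
        return "extreme"
    if bf10 > 30:
        return "very strong"
    if bf10 > 10:
        return "strong"
    if bf10 > 3:
        return "moderate"
    if bf10 > 1:
        return "anecdotal"
    if bf10 == 1:
        return "no evidence either way"
    # A Bayes factor below 1 is evidence for the null; invert and label that
    return "evidence for H0: " + bf_evidence(1.0 / bf10)
```

For example, the correlation of .432 between political bullshit statements and pseudo-profound bullshit carries BF+0 > 100, i.e., "extreme" evidence.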


Table 2
Partial correlations between bullshit measures when the relationship is controlled for political factual statements (i.e., acquiescence with political statements).

                                  1.        2.        3.
1. Political bullshit statements
2. Slogans                        .480***
3. Bullshit programs              .259**    .254**
4. Pseudo-profound bullshit       .335***   .390***   .294***

Note. *** Bayes factor in favor of the alternative (BF10) > 100; ** BF10 > 30. The Bayes factor was computed in a Bayesian linear regression as the change in R2 when one of the bullshit variables was added over the null model (including only political factual statements).

Interestingly, only some of the political bullshit measures correlated with the ideological measures (stance on social and economic issues, neoliberalism) and populism. Endorsement of bullshit political programs correlated with all ideological measures and populism. While slogans correlated with support for neoliberalism and populism (but not with the two ideology measures), political bullshit statements did not correlate with any of them. We next tested whether participants distinguished between political bullshit statements and factual ones. Indeed, political bullshit statements were rated as more persuasive than the factual statements (M = 3.14 vs. M = 1.83; BF+0 > 100).

Voting predictions

Finally, we tested whether political bullshit measures could predict voting behavior.6 Table 3 shows how participants responded to the four voting questions. As evident, there is not much variance on the first three questions: most respondents voted in the 2016 elections (mostly for Hillary Clinton) and plan to vote in the 2020 elections. There is some variance in candidate choice for the 2020 elections, with Sanders, Biden, and Trump accounting for most of the planned votes.

6 In all logistic regression analyses of voting behavior throughout the manuscript, we use the terms “predict” and “prediction” in the correlational sense and do not imply any causality (Starkweather & Moske, 2011).

Table 3
Responses to the four voting questions.

Voted in the 2016 elections
  Yes                         119 (66.5%)
  No                           27 (15.1%)
  Was not allowed to vote      33 (18.4%)
Candidates 2016
  Donald Trump                 20 (16.8%)
  Hillary Clinton              75 (63.0%)
  Gary Johnson                  7 (5.9%)
  Jill Stein                    4 (3.4%)
  Other candidate               8 (6.7%)
  I would rather not say        5 (4.2%)
Plan to vote in the 2020 elections
  Yes                         163 (91.1%)
  No                           16 (8.9%)
Candidates 2020
  Donald Trump                 26 (16.0%)
  Bernie Sanders               58 (35.6%)
  Joe Biden                    59 (36.2%)
  Other candidate              11 (6.8%)
  I would rather not say        9 (5.5%)

Note. Percentages might not total 100 due to rounding. Questions about the candidates were answered only by those who indicated “yes” to the previous question.

Given that the normality assumption was met only for political bullshit statements (W = .99, p = .18), but not for slogans and bullshit political programs (Ws = .96, ps < .05), we conducted logistic regressions instead of linear discriminant analysis. We tested whether political bullshit statements, slogans, and bullshit political programs predicted voting in the 2016 elections (binary logistic regression with outcomes “yes” and “no”), candidate choice in the 2016 elections (binary logistic regression with outcomes “Trump” and “Clinton”), and candidate choice in the 2020 elections (multinomial logistic regression with outcomes “Trump”, “Sanders”, and “Biden”). As pre-registered, we only selected cells that contained more than 10% of the data (only two candidates in the 2016 elections and three in 2020), which is why we did not analyze the 2020 voting plan (“no” had less than 10% of responses). The results are given in Table 4. While political bullshit did not predict whether one voted in the 2016 elections, it could distinguish between the candidates for whom one voted: the probability that one voted for Trump (vs Clinton) increased with higher scores on bullshit program endorsement. Similarly, for the 2020 elections, the likelihood of being a Trump voter, as opposed to a Biden or Sanders voter, increased with bullshit program endorsement (Figure 1). Importantly, the same pattern of results remains when acquiescence with factual political statements is controlled for.
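The reported models are standard (multinomial) logistic regressions; the mechanics can be sketched with a plain Newton-Raphson fit of a binary logit. This is illustrative Python on simulated data (variable names and effect sizes are hypothetical; the actual analyses were run in R):

```python
import numpy as np

def fit_logit(x, y, steps=25):
    """Newton-Raphson fit of a binary logistic regression with one predictor."""
    X = np.column_stack([np.ones(len(y)), x])        # intercept + predictor
    beta = np.zeros(2)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # P(y = 1 | x)
        grad = X.T @ (y - p)                         # log-likelihood gradient
        hess = (X * (p * (1 - p))[:, None]).T @ X    # observed Fisher information
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulated voters: higher "program endorsement" raises P(vote = 1)
rng = np.random.default_rng(3)
endorsement = rng.normal(2.7, 0.9, size=400)
p_true = 1.0 / (1.0 + np.exp(-(-2.5 + 0.9 * endorsement)))
vote = rng.binomial(1, p_true)
beta = fit_logit(endorsement, vote)   # beta[1] recovers a positive slope
```

A positive slope, as in the simulation, corresponds to the reported pattern: higher bullshit program endorsement, higher probability of the Trump (vs Clinton) outcome.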

Interestingly, endorsing political bullshit statements was a significant predictor, such that higher scores increased the probability of voting for Biden (vs Trump). However, investigating this relationship in more detail showed that political bullshit statements had a suppression effect, as they did not predict for which candidate one would vote when no other variables were included in the model, χ2(2) = 0.480, p = .787, R2McF = .002. On the other hand, bullshit program endorsement alone predicted the outcome successfully, χ2(2) = 6.97, p = .031, R2McF = .023, with higher scores increasing the probability of voting for Trump, compared to Biden (Z = 2.555, p = .011) or Sanders (Z = 1.85, p = .065).

Table 4
Results of two binary logistic regressions (top two panels) and one multinomial regression (bottom panel).

Voted in the 2016 elections (1 = Yes, 2 = No)
Predictor                        B        SE     z       p
(Intercept)                      -1.505   1.05   -1.428  .153
Political Bullshit Statements    -.298    .36    -.818   .413
Slogans                          .565     .34    1.684   .092
Bullshit Program                 -.176    .26    -.684   .494

Candidates in the 2016 elections (1 = Trump, 2 = Clinton)
Predictor                        B        SE     z       p
(Intercept)                      3.411    1.35   2.525   .012
Political Bullshit Statements    -.111    .48    -.023   .982
Slogans                          .022     .42    .052    .958
Bullshit Program                 -.737    .32    -2.275  .023

Candidates in the 2020 elections (overall model test: χ2(6) = 11.9, p = .064, R2McF = .04)

Biden - Trump
Predictor                        B        SE     z       p
(Intercept)                      1.714    1.14   1.500   .133
Political Bullshit Statements    .849     .40    2.114   .034
Slogans                          -.478    .38    -1.246  .213
Bullshit Program                 -.823    .31    -2.650  .008

Sanders - Trump
Predictor                        B        SE     z       p
(Intercept)                      1.683    1.12   1.503   .133
Political Bullshit Statements    .596     .39    1.525   .127
Slogans                          -.413    .38    -1.096  .273
Bullshit Program                 -.575    .31    -1.878  .060

Biden - Sanders
Predictor                        B        SE     z       p
(Intercept)                      .031     .86    .036    .971
Political Bullshit Statements    .252     .31    .825    .410
Slogans                          -.065    .29    -.225   .822
Bullshit Program                 -.249    .22    -1.134  .257


Figure 1. Bullshit program endorsement and voting in the 2020 elections: an increase in bullshit program endorsement increased the probability of voting for Trump and decreased the probability of voting for Biden, while the probability of voting for Sanders remained relatively unchanged.

Discussion

In the first study, we found support for the convergent, discriminant, and concurrent validity of the political bullshit concept. The three measures (political bullshit statements, slogans, and bullshit programs) were moderately to highly correlated with each other and with pseudo-profound bullshit, another type of bullshit (convergent validity). This relationship held even after controlling for answers on factual political statements. As predicted, political bullshit measures did not correlate, or had only small correlations, with populism, demonstrating discriminant validity. Two political bullshit measures (slogans and bullshit programs) correlated with neoliberalism, while program bullshit also correlated with both ideological measures (concurrent validity). We also found some support that program bullshit predicts voting behavior: Trump’s voters were more susceptible to it in both 2016 and 2020.


Study 2 – Serbian sample

Participants

For our second study, participants were recruited by posting a link and short description under the Facebook posts of different news pages in Serbia (e.g., N1, Kurir). In total, 254 participants completed the survey voluntarily. However, a large number of participants (68) were excluded for failing the attention check questions. One participant was removed as a multivariate outlier based on the Mahalanobis distance, while no participants were removed for speeding (completion faster than 180 seconds). This left a final sample of 185 participants (55.7% female, 44.3% male; Mage = 37.3, SDage = 11.3). Three participants indicated an education lower than high school, 47 had completed high school, 26 were students, and 109 had a university degree.

Materials

Participants completed Serbian translations of the materials used with the U.S. sample. The translation was performed by two native Serbian speakers who were fluent in English. Measures were presented in the same order as in the first study, with the items randomized within the scales. As mentioned, the only difference was in the questions about voting behavior. Participants in Serbia first indicated (1) whether they voted in the last parliamentary elections (2016), with the options “yes”, “no”, and “I was not allowed to vote (e.g., minors)”. If they indicated “yes”, they were asked to (2) indicate for which of the following parties they voted:

1) Serbian Progressive Party coalition (SNS)
2) Socialist Party of Serbia coalition (SPS-JS-ZS)
3) Serbian Radical Party (SRS)
4) Enough is Enough (DJB)
5) Democratic Party coalition (DS)
6) Dveri-DSS
7) Alliance for Better Serbia (LDP-LSV-SDS)
8) Other party
9) I would rather not say


Finally, participants were presented with the following question: “If you were offered the following lists for the 2020 parliamentary elections, which one would you choose regardless of the boycott?”7 Participants could choose between the following options:

1) Serbian Progressive Party coalition (SNS)
2) Socialist Party of Serbia coalition (SPS-JS-ZS)
3) Serbian Radical Party (SRS)
4) Civic Front (NDMBGD and other local parties)
5) Alliance for Serbia (SZS: DS-Dveri-SSP-NS)
6) Movement of Free Citizens (PSG)
7) Other party
8) I would not vote
9) I would rather not say

After completing the survey, participants were thanked and wished good luck in dealing with the Coronavirus crisis.

Results

Factor structure of political bullshit measures

As in Study 1, we first tested the factorial structure of political bullshit statements, slogan endorsement, and bullshit programs. CFA of the ten political bullshit statements showed that the proposed one-factor solution did not fit well, χ2(35) = 100.64, p < .001, RMSEA = .101, SRMR = .069, CFI = .848, TLI = .805. Modification indices showed that a covariance between the two items “It is of the utmost importance that our political leadership believes in their country” and “It is crucial that politicians believe in their voters” should be included. This covariance might have occurred because of the shared phrasing (“It is…”); that is, the relation between these items could be explained by both an underlying factor and similar phrasing. Indeed, after including the covariance between these two items, the CFA showed a good fit, χ2(34) = 76.57, p < .001, RMSEA = .082, SRMR = .060, CFI = .902, TLI = .870. A one-factor solution was also yielded by exploratory factor analysis (for factor loadings, see Appendix C).

7 At the time of the study, there was a political crisis in Serbia in which the main opposition parties had declared a boycott of the 2020 elections. Therefore, the question about the next elections could not be phrased in the same way as in the U.S. and the Netherlands. That is, if participants had first been asked whether they would vote in the elections, choosing “no” would not have distinguished between a lack of intention to vote and an intention to follow the declared boycott.

Applying CFA to the five slogans showed that a one-factor solution did not fit well, χ2(5) = 15.57, p < .01, RMSEA = .107, SRMR = .057, CFI = .942, TLI = .883. For this reason, we applied exploratory factor analysis (EFA) to the five slogans. EFA showed that one factor should be retained, with all items having factor loadings between .33 and .74 (Appendix C).

We next applied CFA to the three bullshit political programs, each of which had two responses; results showed that a one-factor solution (without covariances) did not fit the data, χ2(9) = 385.60, p < .001, RMSEA = .476, SRMR = .144, CFI = .605, TLI = .341. As in Study 1, this is reasonable because this factor structure does not assume covariances between the two questions related to the same program, or the same format of question across programs. Again, we could not add covariances in this manner because the model would become saturated. Therefore, we performed EFA on the six items concerning the three bullshit political programs, which indicated a one-factor solution according to eigenvalues higher than 1 and the scree plot. All items had factor loadings higher than .60 (see Appendix C).

Overall, testing the factorial structure of the political bullshit instruments indicated that all three measures are best conceptualized as one-factor measures. However, we note that some items could be improved to obtain better scales (as indicated by the modification indices). As in Study 1, we calculated mean scores and proceeded with calculating the correlations between our political bullshit measures and the other measures in the study.

Bayesian correlations

All correlations between measures, together with descriptives and reliabilities of the scales, are given in Table 5. Similarly to the first study, there are moderate to high correlations between all bullshit measures, both political (political bullshit statements, slogans, and bullshit programs) and pseudo-profound. Endorsement of political factual statements was correlated with political bullshit statements and slogans, but not with bullshit programs and pseudo-profound bullshit. This again suggested some response bias. To partial out this variance, we calculated partial correlations between the bullshit measures controlling for political factual statements (Table 6). The partial correlations indicate that the relationships between bullshit measures remain even after controlling for this response bias. This, together with the factor analyses, indicates that there is a susceptibility to this kind of bullshit beyond simple response bias.


Table 5
Means, standard deviations, reliabilities of the scales, and correlations between measures. One-item measures do not have Cronbach’s α.

                                  M (SD)       α    1.       2.       3.       4.       5.       6.       7.       8.
1. Political bullshit statements  3.05 (.78)   .80
2. Political factual statements   2.69 (.90)   .64  .389***
3. Slogans                        2.33 (.84)   .72  .538***  .316***
4. Bullshit programs              2.75 (1.15)  .89  .210*    .107     .416***
5. Pseudo-profound bullshit       2.57 (.81)   .84  .310***  .128     .284***  .291***
6. Ideology social                3.88 (1.80)  –    .141     .001††   .279***  .409***  .126
7. Ideology economy               3.59 (1.86)  –    .041†    .033†    .015†    .045†    .056†    .250**
8. Neoliberalism support          3.34 (1.16)  .81  .146     .079†    .219*    .040†    .211*    -.058††  .228**
9. Populism                       3.47 (.74)   .58  .040−    .020−−   .205+    .333+++  .169     .202+    -.018−−  .115−

Note. *** Bayes factor in favor of the alternative positive against the null (BF+0) > 100; ** BF+0 > 20; * BF+0 > 10; †† Bayes factor in favor of the null against the alternative positive (BF0+) > 10; † BF0+ > 3; +++ Bayes factor in favor of the alternative against the null (BF10) > 100; + BF10 > 3; −− Bayes factor in favor of the null against the alternative (BF01) > 10; − BF01 > 3. Unmarked correlations have a Bayes factor between 1 and 3 in favor of either the null or the alternative positive, meaning the evidence is inconclusive (more data are needed).


Table 6
Partial correlations between bullshit measures when the relationship is controlled for political factual statements (i.e., acquiescence with political statements).

                                  1.        2.        3.
1. Political bullshit statements
2. Slogans                        .474***
3. Bullshit programs              .183*     .405***
4. Pseudo-profound bullshit       .285***   .259**    .281***

Note. *** Bayes factor in favor of the alternative (BF10) > 100; ** BF10 > 30; * BF10 > 3. The Bayes factor was computed in a Bayesian linear regression as the change in R2 when one of the bullshit variables was added over the null model (including only political factual statements).

In line with the first study, only some of the political bullshit measures correlated with the ideological measures (stance on social and economic issues, neoliberalism) and populism. Endorsement of both bullshit programs and slogans correlated with ideology on social issues and with populism, while slogans also correlated with support for neoliberalism. As in Study 1, political bullshit statements did not correlate with any of these measures. Overall, the same pattern of correlations (moderate to high intercorrelations between bullshit measures, some positive correlations between political bullshit measures and ideological measures, and none to small correlations with populism) was obtained in both studies.

We also compared the ratings of the political bullshit statements and the factual ones. As in the first study, political bullshit statements were rated as more persuasive than the factual statements (M = 3.05 vs. M = 2.69; BF+0 > 100), indicating that our participants detected the factual, non-persuasive statements.

Voting predictions

Finally, we tested whether political bullshit measures could predict voting behavior. Table 7 shows how participants responded to the three voting questions. As evident, there was not much variance on the first question: most respondents voted in the 2016 elections. However, there is high variance in the parties for which participants voted in the 2016 elections and for which they planned to vote in the 2020 elections.

Table 7
Responses to the three voting questions.

Voted in the 2016 elections
  Yes                                                135 (73.0%)
  No                                                  37 (20.0%)
  Was not allowed to vote                             13 (7.0%)
Parties 2016
  Serbian Progressive Party coalition (SNS)           16 (11.8%)
  Socialist Party of Serbia coalition (SPS-JS-ZS)      3 (2.2%)
  Serbian Radical Party (SRS)                          2 (1.5%)
  Enough is Enough (DJB)                              21 (15.6%)
  Democratic Party coalition (DS)                     15 (11.1%)
  Dveri-DSS                                            9 (6.7%)
  Alliance for Better Serbia (LDP-LSV-SDS)             4 (3.0%)
  Other party                                         26 (19.3%)
  I would rather not say                              39 (28.9%)
Parties 2020
  Serbian Progressive Party coalition (SNS)           16 (8.6%)
  Socialist Party of Serbia coalition (SPS-JS-ZS)      5 (2.7%)
  Serbian Radical Party (SRS)                          0 (0%)
  Civic Front (NDMBGD and other local parties)         8 (4.3%)
  Alliance for Serbia (SZS: DS-Dveri-SSP-NS)          17 (9.2%)
  Movement of Free Citizens (PSG)                     11 (5.9%)
  Other party                                         55 (29.7%)
  I would not vote                                    42 (22.7%)
  I would rather not say                              31 (16.8%)

Note. Percentages might not total 100 due to rounding. Questions about the 2016 parties were answered only by those who indicated “yes” to the previous question.

Given that the normality assumption was met only for political bullshit statements (W = .99, p = .15), but not for slogans (W = .97, p < .01) and bullshit programs (W = .96, p < .01), we conducted logistic regressions instead of linear discriminant analysis. We tested whether political bullshit statements, slogans, and bullshit programs predicted voting in 2016 (binary logistic regression with outcomes “yes” and “no”) and party choice in 2016 (multinomial logistic regression with outcomes “SNS”, “DJB”, and “DS”; only these outcomes were above the pre-registered criterion of 10%). For the voting choice at the following 2020 elections, we could not perform analyses because none of the offered parties passed the 10% threshold, making the cell sizes too small to analyze.

Binary logistic regression showed that political bullshit predicted whether a person voted in the 2016 elections, χ2(3) = 15.584, p = .001, R2McFadden = .09. Specifically, only bullshit program endorsement was a significant predictor (B = .684, SE = .20, p < .001), such that people who did not vote scored higher. Political bullshit statements (B = -.409, SE = .30, p = .166) and slogans (B = -.375, SE = .29, p = .195) did not contribute to the prediction. Political bullshit could not predict for which party (SNS, DJB, or DS) a participant voted, χ2(6) = 8.152, p = .227, R2McFadden = .07.
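The R2McFadden values reported here compare the model's log-likelihood with that of an intercept-only model; the computation is simple enough to show inline (illustrative Python with made-up numbers, not the original analysis code):

```python
import math

def mcfadden_r2(ll_model, ll_null):
    """McFadden's pseudo-R2: 1 - LL(model) / LL(intercept-only model)."""
    return 1.0 - ll_model / ll_null

def null_loglik(n_yes, n_no):
    """Log-likelihood of the intercept-only model: every case gets the base rate."""
    p = n_yes / (n_yes + n_no)
    return n_yes * math.log(p) + n_no * math.log(1.0 - p)

# Hypothetical example: a fitted model with LL = -80 against the null model
# for a 135-yes / 37-no outcome split (an illustration, not the study's numbers)
r2 = mcfadden_r2(ll_model=-80.0, ll_null=null_loglik(135, 37))
```

The likelihood-ratio χ2 statistics reported above equal 2(LL(model) − LL(null)), with degrees of freedom given by the number of added predictors.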

Discussion

This study yielded results similar to Study 1: we found support for the convergent, discriminant, and concurrent validity of political bullshit. Again, the three measures of political bullshit were moderately to highly correlated with each other and with pseudo-profound bullshit, another type of bullshit (convergent validity). This relationship held even after controlling for answers on factual political statements. However, in contrast to Study 1, one measure of political bullshit (program bullshit) correlated moderately with populism; the other two measures had low correlations with populism (discriminant validity).

As opposed to the U.S. study, in which two political bullshit measures (slogans and bullshit programs) correlated with neoliberalism, in this study only slogans did. Similarly, in this study, endorsing bullshit programs was not correlated with economic ideology. On the other hand, social ideology correlated with slogan endorsement, which was not the case in the U.S. study. Overall, the analysis of intercorrelations showed similar patterns in this study and Study 1, illustrating the validity of the concept, while some differences emerged on the ideological variables, possibly due to cultural differences.

Finally, we could not predict party choice, as the responses were spread across many categories with small cell sizes. However, we found that political bullshit could distinguish between voters and non-voters in the 2016 elections, as voters endorsed bullshit political programs less, possibly because their greater interest in politics makes them more attentive to bullshit in politics.

Given that the U.S. and Serbia are socio-politically different (e.g., Serbia is a European country with a socialist history and a parliamentary democracy), and that the studies employed different sample collection strategies, small differences in the relationships between the measures are not surprising. We now report the findings from the Netherlands: a country similar to the U.S. in some respects (e.g., a WEIRD country) and to Serbia in others (e.g., a European parliamentary democracy).

Study 3 – the Netherlands

Participants

We recruited participants in two ways: students who filled out the survey in return for research credits (n = 51), and respondents recruited via Prolific (n = 143). In total, 172 completed the survey without failing the attention checks (the survey terminated when either of the two attention check questions was failed). One participant was removed as a multivariate outlier based on the Mahalanobis distance, and one participant was removed for speeding (completion faster than 180 seconds). This left a final sample of 170 participants (45.3% female, 54.7% male; Mage = 27.1, SDage = 10.1). No participants indicated an education lower than high school; 23 had completed high school, 83 were students, 37 had an undergraduate degree, and 27 had a postgraduate university degree.


Materials

Participants completed Dutch translations of the materials used in the previous studies. The translation was performed by two native Dutch speakers fluent in English. Measures were presented in the same order as in the previous studies, with the items randomized within the scales. As mentioned, the only difference was in the questions about voting behavior. Participants in the Netherlands first indicated (1) whether they voted in the last parliamentary elections (2017), with the options “yes”, “no”, and “I was not allowed to vote (e.g., minors)”. If they indicated “yes”, they were asked to (2) indicate for which of the following parties they voted:

1. People's Party for Freedom and Democracy (VVD)
2. Labour Party (PvdA)
3. Party for Freedom (PVV)
4. Socialist Party (SP)
5. Christian Democratic Appeal (CDA)
6. Democrats 66 (D66)
7. GroenLinks (GL)
8. Other party
9. I would rather not say

Next, participants were asked whether they intended to vote in the next (2021) parliamentary elections (“yes” or “no”). If they answered “yes”, they were asked to indicate the party for which they would vote (note that for the 2021 elections one additional party, Forum for Democracy, was included):

1. People's Party for Freedom and Democracy (VVD)
2. Labour Party (PvdA)
3. Party for Freedom (PVV)
4. Socialist Party (SP)
5. Christian Democratic Appeal (CDA)
6. Democrats 66 (D66)
7. GroenLinks (GL)
8. Forum for Democracy (FvD)
9. Other party
10. I would rather not say

Results


Factor structure of political bullshit measures

As in the previous studies, we first tested the factorial structure of political bullshit statements, slogan endorsement, and bullshit programs. Confirmatory factor analysis (CFA) of the ten political bullshit statements showed that the proposed one-factor solution had a good fit, χ2(35) = 58.71, p < .01, RMSEA = .063, SRMR = .051, CFI = .943, TLI = .927. Applying CFA to the five slogans showed that a one-factor solution did not fit well, χ2(5) = 15.94, p < .01, RMSEA = .113, SRMR = .056, CFI = .933, TLI = .847. For this reason, we applied exploratory factor analysis (EFA) to the five slogans. EFA showed that one factor should be retained based on the criterion of eigenvalues higher than 1 as well as the scree plot. All slogans had factor loadings higher than .45 (see Appendix C).

We next applied CFA to the three bullshit programs, each of which had two responses; results showed that a one-factor solution (without covariances) did not fit the data, χ2(9) = 396.69, p < .001, RMSEA = .503, SRMR = .178, CFI = .515, TLI = .191. As in the two previous studies, this is reasonable because this factor structure does not assume covariances between the two questions related to the same program, or the same format of question across programs. Again, we could not add covariances in this manner because the model would become saturated. Therefore, we performed EFA on the six items concerning the three bullshit programs, which indicated a one-factor solution according to eigenvalues higher than 1 and the scree plot. All items had factor loadings higher than .65 (see Appendix C).

Overall, testing the factorial structure of the political bullshit instruments indicated that all three measures are best conceptualized as one-factor measures. As in both previous studies, we calculated mean scores and proceeded with calculating the correlations between our political bullshit measures and the other measures in the study.


Bayesian correlations

All correlations between measures, together with descriptives and reliabilities of the scales, are given in Table 8. As in the previous two studies, there were moderate to high correlations between all bullshit measures, political (political bullshit statements, slogans, and bullshit programs) and pseudo-profound, except for the correlation between slogans and pseudo-profound bullshit, which was positive but inconclusive. Endorsing political factual statements was correlated with all measures (even the ideological ones) except bullshit programs and populism. This again suggested a response bias, which was somewhat larger than in the U.S. and Serbian samples, where there were no correlations between the ideological measures and factual statements. To partial out this variance, we calculated partial correlations between the bullshit measures controlling for political factual statements (Table 9). The partial correlations indicate that the relationships between bullshit measures remain even after controlling for this response bias. This, together with the factor analyses, indicates that there is a susceptibility to political bullshit beyond simple response bias.

As in the previous studies, only some of the political bullshit measures correlated with the ideological measures (stance on social and economic issues, neoliberalism) and populism. While political bullshit statements correlated with all ideological variables, endorsement of slogans did not correlate with any of them. Bullshit programs, on the other hand, correlated with social ideology and neoliberalism support, and they were the only political bullshit measure that correlated with populism. Overall, the pattern of correlations (moderate to high intercorrelations between bullshit measures, some positive correlations between political bullshit measures and ideological measures, and null-to-small correlations with populism) was obtained in all three studies.


Table 8
Means, standard deviations, reliabilities of the scales, and correlations between measures. One-item measures do not have a Cronbach's α.

                                  M (SD)       α     1.       2.       3.       4.       5.       6.       7.       8.
1. Political bullshit statements  3.02 (.68)   .83
2. Political factual statements   2.51 (1.06)  .87   .513***
3. Slogans                        2.26 (.65)   .71   .377***  .139
4. Bullshit programs              2.29 (.91)   .86   .397***  .270***  .310***
5. Pseudo-profound bullshit       2.58 (.79)   .87   .402***  .252**   .163     .318***
6. Ideology social                3.26 (1.80)  n/a   .239**   .303***  .077†    .293***  .137
7. Ideology economic              3.87 (1.86)  n/a   .203*    .262**   -.010††  .165     .197*    .607***
8. Neoliberalism support          3.04 (1.16)  .61   .249**   .216**   .145     .237**   .193*    .370***  .486***
9. Populism                       2.91 (.74)   .69   .141     .102−    -.104−−  .204+    .151     .117−    .049−    -.017−−

Note. *** Bayes factor in favor of the positive alternative against the null (BF+0) > 100; ** BF+0 > 10; * BF+0 > 3. †† Bayes factor in favor of the null against the positive alternative (BF0+) > 10; † BF0+ > 3. + Bayes factor in favor of the two-sided alternative against the null (BF10) > 3. −− Bayes factor in favor of the null against the two-sided alternative (BF01) > 10; − BF01 > 3. Unmarked correlations have Bayes factors between 1 and 3, meaning the evidence is inconclusive (more data are needed).

37 Running head: POLITICAL BULLSHIT ENDORSEMENT: CROSS-CULTURAL STUDY

Table 9
Partial correlations between the bullshit measures, controlling for political factual statements (i.e., acquiescence with political statements).

                                  1.       2.       3.
1. Political bullshit statements
2. Slogans                        .360***
3. Bullshit programs              .312***  .285***
4. Pseudo-profound bullshit       .329***  .134     .268**

Note. *** Bayes factor in favor of the alternative (BF10) > 100; ** BF10 > 70. Bayes factors were computed in a Bayesian linear regression by assessing how much R² changes when each bullshit variable is added to the null model containing only political factual statements.
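The partial correlations reported in Table 9 follow the standard logic of regressing the control variable out of both measures and correlating the residuals. A minimal sketch of that logic on simulated data (all variable names and numbers here are illustrative, not the study's):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both."""
    Z = np.column_stack([np.ones_like(z), z])          # design matrix with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x given z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y given z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 500
factual = rng.normal(size=n)                   # stands in for acquiescence bias
slogans = 0.8 * factual + rng.normal(size=n)   # both measures share the bias
programs = 0.8 * factual + rng.normal(size=n)

raw = np.corrcoef(slogans, programs)[0, 1]     # inflated by the shared bias
partial = partial_corr(slogans, programs, factual)
print(raw, partial)  # partial should sit near zero once the bias is removed
```

In the study the partial correlations stayed substantial, which is precisely the argument that political bullshit receptivity is not reducible to response bias.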

We also compared the ratings of the political bullshit statements and the factual ones. As in the previous studies, political bullshit statements were rated as more persuasive than the factual statements (M = 3.01 vs. M = 2.51; BF+0 > 100), indicating that our participants detected the factual, non-persuasive statements.

Voting predictions

Finally, we tested whether the political bullshit measures could predict voting behavior. Table 10 shows how participants responded to the three voting questions. Most respondents voted in the 2017 elections (if they were allowed to vote). There was high variance in the parties for which participants voted, with only three parties passing the 10% threshold for the analyses (VVD, D66, GL). Similarly, most participants indicated they would vote in the 2021 elections, with the same three parties passing the 10% threshold.

Table 10
Responses to the three voting questions.

Voted in the 2017 elections
  Yes                                              102 (60.0%)
  No                                                18 (10.6%)
  Was not allowed to vote                           50 (29.4%)
Parties 2017
  People's Party for Freedom and Democracy (VVD)    11 (10.8%)
  Labour Party (PvdA)                                6 (5.9%)
  Party for Freedom (PVV)                            5 (4.9%)
  Socialist Party (SP)                               7 (6.9%)
  Christian Democratic Appeal (CDA)                  0 (0%)
  Democrats 66 (D66)                                16 (15.7%)
  GroenLinks (GL)                                   34 (33.3%)
  Other party                                       19 (18.6%)
  I would rather not say                             4 (3.9%)
Will vote in the 2021 elections
  Yes                                              153 (90.0%)
  No                                                17 (10.0%)
Parties 2021
  People's Party for Freedom and Democracy (VVD)    16 (11.8%)
  Labour Party (PvdA)                               14 (9.2%)
  Party for Freedom (PVV)                            4 (2.6%)
  Socialist Party (SP)                               7 (4.6%)
  Christian Democratic Appeal (CDA)                  5 (3.3%)
  Democrats 66 (D66)                                17 (11.1%)
  GroenLinks (GL)                                   38 (24.8%)
  Forum for Democracy (FvD)                          9 (5.9%)
  Other party                                       16 (10.5%)
  I would rather not say                            25 (16.3%)

Note. Percentages might not total 100 due to rounding. The party questions were answered only by those who indicated "yes" to the preceding question.

Given that the normality assumption was violated for all three political bullshit measures (Ws < .99, ps < .05), we conducted logistic regressions instead of linear discriminant analysis. We tested whether political bullshit statements, slogans, and bullshit programs predicted whether one voted in 2017 (binary logistic regression with outcomes "yes" and "no") and for which party participants voted in the 2017 elections (multinomial logistic regression with outcomes VVD, D66, and GL). We also tested whether these measures could predict whether a person would vote in 2021 (binary logistic regression with outcomes "yes" and "no") and for which party one would vote in the 2021 elections (multinomial logistic regression with outcomes VVD, D66, and GL).
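Multinomial logistic regression of this kind can be sketched with scikit-learn. This is a hedged illustration on simulated data; the paper's actual software, variable names, and coefficients may differ, and the simulated effect below is made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 300
# Three illustrative predictors: statements, slogans, programs (z-scored)
X = rng.normal(size=(n, 3))
# Simulate a three-way party choice (0 = VVD, 1 = D66, 2 = GL) whose
# log-odds depend on the first predictor only, loosely mimicking the
# reported pattern (all numbers are invented for illustration)
logits = np.column_stack([X[:, 0], np.zeros(n), -X[:, 0]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

model = LogisticRegression(max_iter=1000).fit(X, y)
# Predicted party probabilities for a low vs a high scorer on predictor 1
print(model.predict_proba([[-2.0, 0.0, 0.0], [2.0, 0.0, 0.0]]))
```

Plotting such predicted probabilities against a single bullshit measure yields curves of the kind shown in Figure 2.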


Binary logistic regression showed that political bullshit did not predict whether a person voted in the 2017 elections, χ²(3) = 4.646, p = .200, R²McFadden = .05, with none of the bullshit measures attaining significance (zs < 1.6, ps > .10). On the other hand, the model predicting the party for which a participant voted in 2017 (VVD, D66, or GL) was marginally significant, χ²(6) = 11.200, p = .082, R²McFadden = .09. Specifically, as Figure 2 shows, a higher score on political bullshit statements increased the probability that one had voted for the conservative-liberal VVD (as compared to the centrist D66), B = 1.747, SE = .89, p = .051, and a higher score on bullshit programs likewise increased the probability that one had voted for the VVD (as compared to the left-wing GL), B = .890, SE = .48, p = .063.

Political bullshit measures did not predict whether one planned to vote in the 2021 elections, χ²(3) = 3.321, p = .345, R²McFadden = .03, with none of the bullshit measures attaining significance (zs < 1.5, ps > .15). The model was also not significant in predicting the parties for which participants planned to vote in the 2021 elections (VVD, D66, GL), χ²(6) = 4.230, p = .646, R²McFadden = .03. This contrasts with the findings for the 2017 elections, in which the political bullshit measures could distinguish between these parties.

Figure 2. Left: Political bullshit statement endorsement distinguishes between voters of the VVD and D66: the probability of having voted for the VVD increases with a higher score on political bullshit statements. Right: Bullshit program endorsement distinguishes between voters of the VVD and GL: the probability of having voted for the VVD increases with a higher score on bullshit programs.

Discussion

Overall, the results of this study match those of the previous two studies: the three measures of political bullshit were moderately to highly correlated with each other and with pseudo-profound bullshit, the other type of bullshit (convergent validity). This relationship held even after controlling for answers on the factual political statements. However, in contrast to the previous two studies, there was no conclusive evidence that endorsement of slogans correlated with pseudo-profound bullshit.


In contrast to the previous two studies, endorsement of political bullshit statements was related to all ideological measures (social and economic ideology, and neoliberalism), and it also predicted (marginally) whether a person had voted for the conservative-liberal VVD, as these voters scored higher than centrist D66 voters. Endorsement of slogans was not related to any of the ideological measures or to populism. Endorsement of bullshit programs, finally, was related to social ideology and neoliberalism support, and it was the only political bullshit measure related to populism; bullshit programs also predicted (marginally) whether one had voted for the VVD, as these voters scored higher than GL voters.

In summary, the three studies yielded similar results. Before discussing the implications of these findings, we performed meta-analyses of the correlational results to investigate the associations across the three countries and to explore whether the political bullshit measures could predict voting behavior grouped by ideology.

Meta-analytic effects of the three studies

To average effects across the three countries, we computed meta-analytic effects for the correlations between the bullshit measures (political bullshit statements, slogans, bullshit programs, and pseudo-profound bullshit). We also computed meta-analytic effects for the correlations between the political bullshit measures on the one hand, and the ideological measures and populism on the other. The analyses were performed using the metaBMA package in R (Heck et al., 2017).

To achieve this, we conducted a Bayesian meta-analysis, which accounts for the uncertainty between fixed and random effects (or non-effects under the null hypothesis) and yields the plausibility of all four models (fixed H0, random H0, fixed Ha, random Ha). In a frequentist meta-analysis, a researcher must decide in advance whether the three studies differ in terms of the underlying true effect (i.e., heterogeneity) and opt for either a fixed or a random effect.

Bayesian Model Averaging takes this uncertainty into account and yields the most plausible model (i.e., whether the effects, if there are any, differ across countries) (Gronau et al., 2017; Hinne et al., 2019). Our priors were based on priors suggested for psychology: for the effect size, we chose a zero-centered Cauchy distribution with width parameter r = .707 (Gronau, Ly, & Wagenmakers, 2019; Morey & Rouder, 2015). Since we hypothesized that the correlations would be positive, the prior was truncated at 0 so that only positive values were possible. For the between-study heterogeneity, we used an Inverse-Gamma(1, 0.15) prior, based on an investigation of heterogeneity estimates in 162 meta-analyses (Van Erp, Verhagen, Grasman, & Wagenmakers, 2017), which showed that the distribution of these estimates is well captured by the Inverse-Gamma(1, .15) distribution; this prior has also been suggested as an informed prior for psychology more generally (Gronau et al., 2019). Given that we performed the meta-analyses on Fisher's z statistics (converted from the r correlation coefficients), the scales of the priors were divided by two, since Fisher's z values are approximately half the size of Cohen's d. We set all four models to the same prior plausibility (.25 each) to "let the data speak for themselves", a common assumption in model averaging and in line with suggested guidelines in the literature (Gronau et al., 2019).

The meta-analytic effects for the relationships between all bullshit measures are given in Table 11. The correlation coefficients are averaged across heterogeneity (fixed and random effects). None of the correlations had evidence in favor of random effects (BF > 3), which suggests that the effect sizes are similar across countries. The correlations were moderate to high, supporting the convergent validity of the bullshit measures.

Table 11
Meta-analytic effects of correlations between all bullshit measures in the three studies.

                                  1.       2.       3.
1. Political bullshit statements
2. Slogans                        .477***
3. Bullshit programs              .302**   .304**
4. Pseudo-profound bullshit       .374***  .284**   .314***

Note. *** Bayes factor in favor of the one-sided alternative, averaged across heterogeneity (BF+0) > 100; ** BF+0 > 10.
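The model-averaging step itself is just Bayes' rule over the four candidate models: with equal .25 prior probabilities, each model's posterior probability is proportional to its marginal likelihood, and the reported estimate is the posterior-weighted average. A toy numerical sketch (the marginal likelihoods and effect values are made up for illustration, not taken from the study):

```python
import numpy as np

# Four candidate models: fixed H0, random H0, fixed Ha, random Ha
priors = np.array([0.25, 0.25, 0.25, 0.25])
# Hypothetical marginal likelihoods of the data under each model
marg_lik = np.array([0.02, 0.03, 0.40, 0.10])

posterior = priors * marg_lik
posterior /= posterior.sum()            # normalize to probabilities

# Hypothetical effect estimates under each model (0 under H0)
effects = np.array([0.0, 0.0, 0.35, 0.33])
averaged = np.sum(posterior * effects)  # model-averaged effect
print(posterior, averaged)
```

With equal priors, the posterior simply mirrors the relative marginal likelihoods, which is how a single averaged correlation can be reported without committing to a fixed- or random-effects model in advance.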

We then performed the same analyses for the correlations between the political bullshit measures and the ideological measures and populism, with the exception that the alternative hypothesis for the correlation between political bullshit measures and populism was two-sided (as in the original studies). As Table 12 shows, bullshit programs showed the highest correlations of all political bullshit measures with the ideological measures and populism, while support for neoliberalism correlated with all political bullshit measures. However, the meta-analysis of the correlation between bullshit programs and neoliberalism showed support for random effects, suggesting that the correlations differed across countries, which is why looking at the correlations within each country is more informative. The same holds for the relationship between slogans and populism.

Table 12
Meta-analytic effects of correlations between the political bullshit measures and the ideological measures, and populism.

                                  Ideology  Ideology  Neoliberalism  Populism
                                  social    economic  support
1. Political bullshit statements  .146*     .094      .174**         .095
2. Slogans                        .127      .049†     .184**         .097R
3. Bullshit programs              .320**    .141*     .197*R         .241+

Note. ** Bayes factor in favor of the one-sided alternative, averaged across heterogeneity (BF+0) > 10; * BF+0 > 3. † Bayes factor in favor of the null against the one-sided positive alternative, averaged across heterogeneity (BF0+) > 3. + Bayes factor in favor of the two-sided alternative, averaged across heterogeneity (BF10) > 3. R Bayes factor in favor of random H1 against fixed H1 > 3.


Predicting voting for left, center, and right-wing parties on the merged dataset

Finally, we performed exploratory analyses on the combined dataset. To increase the power for predicting voting behavior, we merged the three datasets, grouping the parties/candidates of all three countries into left, center, and right categories. Table 13 shows how the categories were formed for the last elections (2016 in the U.S. and Serbia, 2017 in the Netherlands). A multinomial regression with three predictors (political bullshit statements, slogans, bullshit programs) showed that the political bullshit measures could predict for which ideological candidate/party a participant had voted, χ²(6) = 28.000, p < .001, R²McFadden = .055. Specifically, as shown in Figure 3, scoring higher on political bullshit increased the probability of voting for right-wing political options: left-wing voters had lower slopes than center voters (B = .508, Z = 2.550, p = .011), who in turn had lower slopes than right-wing voters (B = 2.629, Z = 3.316, p < .001).

Table 13
Frequencies of the three categories (left-center-right) across countries for the last elections (2016 and 2017).

Country          Party/Candidate    Ideological category
The U.S.         Donald Trump       Right
                 Hillary Clinton    Center
                 Gary Johnson       Center
                 Jill Stein         Left
Serbia           SNS                Right
                 SPS-JS-ZS          Left
                 SRS                Right
                 DJB                Center
                 DS                 Center
                 Dveri-DSS          Right
                 LDP-LSV-SDS        Center
The Netherlands  VVD                Right
                 PvdA               Left
                 PVV                Right
                 SP                 Left
                 CDA                Right
                 D66                Center
                 GL                 Left

Total (N = 255): Left = 54, Center = 138, Right = 63.

Figure 3. The probability that participants voted for the left, center, or right option at the last elections as a function of bullshit program endorsement. The probability of voting for the left option decreases as bullshit program endorsement increases, remains stable for centrist voters, and increases for right-wing options.

We then tested whether the political bullshit measures predicted the ideological position (left, center, right) of the candidate/party for which one would vote at the next elections (2020 in the U.S. and Serbia, 2021 in the Netherlands). Table 14 shows how the ideological categories were formed for the next elections. In contrast to the findings for the last elections, a multinomial regression with the categories left, center, and right showed that the political bullshit measures (political bullshit statements, slogans, and bullshit programs) could not jointly predict voting at the next elections, χ²(6) = 7.660, p = .264, R²McFadden = .012. However, when political bullshit statements were the only predictor in the model, they distinguished between left and right voting options, B = .431, Z = 2.11, p = .035; the same held for bullshit program endorsement, B = .327, Z = 2.108, p = .035. As Figure 4 shows, as scores on political bullshit statements and bullshit programs increased, the probability of voting for right options increased, while the probability of voting for left options decreased. This mirrors the pattern found for voting at the last elections.

Table 14
Frequencies of the three categories (left-center-right) across countries for the next elections (2020 and 2021).

Country          Party/Candidate    Ideological category
The U.S.         Donald Trump       Right
                 Bernie Sanders     Left
                 Joe Biden          Center
Serbia           SNS                Right
                 SPS-JS-ZS          Left
                 SRS                Right
                 Civic Front       Left
                 SZS                Center
                 PSG                Center
The Netherlands  VVD                Right
                 PvdA               Left
                 PVV                Right
                 SP                 Left
                 CDA                Right
                 D66                Center
                 GL                 Left
                 FvD                Right

Total (N = 312): Left = 130, Center = 104, Right = 78.


Figure 4. The probability that participants would vote for the left, center, or right option at the next elections as a function of political bullshit statements (left panel) and bullshit program endorsement (right panel). With higher scores on the two political bullshit measures, the probability of voting for left options decreases, remains the same for centrist options, and increases for right options.

General discussion

In the present research, we investigated receptivity to a type of political communication that we termed "political bullshit". Admittedly, this term might be controversial, as "bullshit" has a different colloquial connotation, and something more neutral such as "political emptiness" could be more suitable. However, that would disregard the similarity of this concept to the well-established concept of pseudo-profound bullshit (Pennycook et al., 2015), from which the idea of bullshit in politics emerged (Sterling et al., 2016). In any case, we use this term to label political communication in which politicians use vague but simplified statements to mobilize voters. Since this rhetoric is too broad to claim anything specific, it avoids in-depth discussion of real issues (Edelman, 1988), often exploiting self-presentation (e.g., Schutz, 1997).


To operationalize the concept, we developed three measures of political bullshit: political bullshit statements, slogans, and bullshit programs.

In three pre-registered studies in three different countries (the U.S., Serbia, the Netherlands), we found evidence for receptivity to this type of communication. In each country, as well as in the meta-analysis, we obtained moderate to high intercorrelations between the political bullshit measures and pseudo-profound bullshit. We also found a relationship between political bullshit endorsement and ideological orientation, such that right-wing and neoliberal (free-market) supporters were more receptive to political bullshit. Finally, political bullshit could predict voting behavior to some extent: the probability of voting for right-wing candidates/parties increased as endorsement of political bullshit, especially bullshit programs, increased.

Political bullshit measures

First, we found evidence for the convergent validity of the political bullshit measures with pseudo-profound bullshit. Similarly to attempts to operationalize other types of bullshit, namely scientific (Evans et al., 2020) and general bullshit (Čavojová et al., 2020), political bullshit was consistently positively correlated with pseudo-profound bullshit. This builds on the idea that pseudo-profound bullshit is only one (rather extreme) kind of bullshit among many to be operationalized (Pennycook et al., 2015; Sterling et al., 2016). Importantly, receptivity to bullshit might generalize across domains, meaning that openness to bullshit exists regardless of its type: one might be receptive to bullshit no matter whether it is political, pseudo-profound, or scientific (Čavojová et al., 2020; Evans et al., 2020).

The psychometric properties of the three political bullshit measures were good: all had good reliability, and for each, the hypothesized one-factor solution showed the best fit.


Political bullshit statements and slogans had higher intercorrelations with each other than either had with bullshit programs. We assume this is due to format: statements and slogans were one-sentence items, while bullshit programs were more elaborate (one paragraph). This also raises theoretical and empirical questions about the structure of political bullshit: whether it has a one- or two-factor structure (or even a higher-order factor). However, this is a fine-grained question beyond the scope of the present paper, which aimed to establish the concept of political bullshit itself.

Our participants also recognized the factual statements as such, rating them lower on persuasiveness in all three studies. Importantly, controlling for these ratings (i.e., acquiescence) did not substantially alter the correlations, as all of them remained positive. This bias remains an important issue to account for (Podsakoff, 2003), especially in bullshit receptivity research (Evans et al., 2020; Pennycook et al., 2015).

Political bullshit, ideology, and voting

The political bullshit measures did not only show convergent validity; we also found evidence that people who endorse political bullshit support the free market to a greater extent. This is in line with the finding that free-market supporters (neoliberals) are more susceptible to pseudo-profound bullshit (Sterling et al., 2016). Interestingly, the relationship with ideology on social and economic issues is less consistent: only bullshit program endorsement was correlated with both the social and the economic ideological stance, while political bullshit statements were correlated only with social ideology. It is surprising that the political bullshit measures were consistently correlated with support for the free market but inconsistently with economic ideology, given that the free market is an important feature of right-wing economics (Harvey, 2007). There are at least two possible reasons for this inconsistency. First, support for the free market was measured with two items, while economic ideology was a one-item measure and thus probably had higher measurement error (i.e., it was less precise). Second, although related, support for the free market and economic ideology are not necessarily the same construct. While the former focuses exclusively on the free market, right-wing economics covers more concepts (a smaller welfare state, private ownership). One might therefore have a free market combined with leftist concepts (public/state ownership), as in market socialism (e.g., the case of Yugoslavia; Bockman, 2011). In this view, it is the free market, and not right-wing economics per se, that is related to the endorsement of bullshit (political or pseudo-profound).

This aligns with the idea that if one endorses vague concepts such as pseudo-profound bullshit (or political bullshit, in our case), one would also endorse the free market, which rests on obscure concepts such as the invisible hand (Sterling et al., 2016). It is also in line with Nilsson and colleagues' (2019) idea that "bullshit receptivity might emerge only in specific kinds of left and right ideological thought" (p. 13), which in this case would be support for the free market (itself a vague concept).

As for voting behavior, we found evidence that the political bullshit measures were associated with voting choice, especially in the U.S. (Trump vs. Clinton; Trump vs. Biden or Sanders). In the Netherlands, political bullshit was associated only with the party one had voted for, not with one's future choice (next election): a higher score on political bullshit receptivity increased the probability of having voted for the right-wing party (VVD) rather than a centrist or left-wing one (D66, GL). Interestingly, political bullshit was not associated with voting behavior in Serbia. This might be due to a small sample size, since some participants did not vote, while many of those who did vote did not want to indicate for whom they voted or would vote. Finally, there was large variance across voting options, and each party therefore had a small number of observations.


Analyses of the three datasets merged into one supported the notion that a higher score on political bullshit increases the probability of voting for right-wing parties (while the probability remains unchanged for centrist parties and decreases for left-wing ones). These results are in line with the finding that a favorable view of Republican candidates was correlated with pseudo-profound bullshit receptivity (Pfattheicher & Schindler, 2016).

Although in line with previous research, the question remains why there would be ideological asymmetries in susceptibility to political bullshit. One possible reason lies in the above-mentioned differences between people with left-wing and right-wing ideological stances, which showed that individuals on the right engage in simpler information processing (Jost, 2017; Jost et al., 2016). Given that right-wing individuals have a more intuitive thinking style (Deppe et al., 2015; Pacini & Epstein, 1999) and rely more on heuristic processing (Jost & Krochik, 2014), it might be that they fail to detect the vagueness of political bullshit. Indeed, as Sterling and colleagues (2016) found, the relationship between neoliberalism and pseudo-profound bullshit disappears when heuristic and biased processing and faith in intuition are included in the model. However, there are also reasons why conservatives and right-wing voters would be less receptive to vague political rhetoric: they have a larger need for certainty and closure (Chirumbolo et al., 2004; Chirumbolo & Leone, 2008; Jost et al., 2003), which is the opposite of what political bullshit offers.

Another possibility is that conservatives might construe more meaning in otherwise vague political sentences and programs. This would be in line with findings that individuals low on liberalism (as an intellect measure) have a higher tendency toward apophenia (ontological confusion, paranormal belief; Bainbridge et al., 2019) and that conservatives see patterns where there are none (as in illusory correlations; Carraro et al., 2014; Castelli & Carraro, 2011). In this sense, right-wing individuals would have a general tendency to construe meaning where none exists, so that endorsement of political bullshit would simply be a manifestation of this tendency. However, this explanation might not hold, given that conservatives are lower on openness as a personality trait (Van Hiel et al., 2000; Van Hiel & Mervielde, 2004), which would be a major part of the tendency to construe meaning (e.g., openness to ideas). In any case, the exact mechanism remains to be investigated by future research.

Limitations and future directions

Our study is not without limitations. First, the items of the political bullshit measures could be improved (as indicated by the CFAs); moreover, all political bullshit measures were relatively short (between five and ten items), which might have increased measurement error. Since our instruments showed good reliability and factorial structure, we suggest increasing the number of items, especially for the political bullshit programs, which proved to be the measure that best predicted ideological measures and voting behavior. In any case, our scales have satisfactory psychometric characteristics and can be used in future research, although we suggest lengthening them.

Second, the number of studies available for the meta-analysis was small. We assume that, because of this small number, only two correlations (between slogans and populism, and between bullshit programs and neoliberalism) showed evidence for heterogeneity (i.e., different correlations across countries). It is reasonable to assume that the correlation between political bullshit and ideological measures varies across countries due to different conceptualizations of ideology (Nilsson et al., 2019). However, it is possible to collect more data and update the evidence to get a more accurate picture, as multiple testing does not present an issue in the Bayesian framework (Hinne et al., 2019). In this way, this limitation can be circumvented.


However, a strength of the meta-analytic approach is that it showed no evidence for heterogeneity for any of the correlations between the bullshit measures. This suggests that the intercorrelations may be homogeneous across countries and speaks to the robustness of political bullshit as a concept, since convergent validity was found in three different countries.

Another limitation is the small number of observations in the voting categories. This might have decreased the power to detect a relationship between political bullshit and voting behavior (e.g., the 2016 elections in Serbia) and made some analyses impossible (e.g., the 2020 elections in the Serbian sample). However, the analyses in the U.S. and the Netherlands, and on the parties combined into left/center/right categories across all countries, support the idea that right-wing voters might be more susceptible to political bullshit. Another possible reason for this pattern is the different linguistic repertoires that left-wing and right-wing individuals employ (Sterling et al., 2020); it is thus possible that our political bullshit items contained words more commonly used by, or familiar to, right-leaning individuals. This is one of the questions that future research should investigate.

Our use of the term "political bullshit" can also be questioned. Indeed, as our studies show, empty political messages correlate with actual voting behavior in some samples. This indicates that empty political statements are not perceived as bullshit by voters at all; rather, the in-themselves empty messages might resonate well with their existing beliefs and attitudes. This implies we might want to find more neutral terms to describe the content of empty political messages (political bullshit), the people expressing them (political bullshitters), and the process of empty political messaging (political bullshitting).


Lastly, a natural question arises: who uses political bullshit? Our research does not provide an answer; it only addresses who might be more receptive to it. Indeed, it is tempting simply to label political opponents as producers of political bullshit, but this still has to be empirically investigated. One way to do so would be an approach similar to the one Pennycook and colleagues (2015) took with Deepak Chopra and pseudo-profound bullshit: if items actually uttered by politicians (Deepak Chopra) have the same psychological reality as political (pseudo-profound) bullshit, this would be a strong argument that it is indeed political bullshit. In any case, we believe that all politicians use political bullshit to some extent, as exemplified in the introduction with Joe Biden and Donald Trump.

Conclusion

In conclusion, we found preliminary evidence for the existence of political bullshit receptivity, its correlation with ideology, and its potential to predict voting behavior. We believe that the present research brings new insights into political decision making and may eventually help in recognizing and guarding against vague and meaningless political communication. This could contribute to the democratization of the political process that has recently been jeopardized (e.g., Gilens & Page, 2014; Ayers & Saad-Filho, 2015). More research investigating the usage and mechanisms underlying this type of communication would be a good first step towards this democratization.

References

Ackerman, L. S., & Chopik, W. J. (2020). Individual differences in personality predict the use and perceived effectiveness of essential oils. PLoS ONE, 15(3), e0229779. https://doi.org/10.1371/journal.pone.0229779

Ayers, A. J., & Saad-Filho, A. (2015). Democracy against neoliberalism: Paradoxes, limitations, transcendence. Critical Sociology, 41(4-5), 597-618. https://doi.org/10.1177/0896920513507789

Bainbridge, T. F., Quinlan, J. A., Mar, R. A., & Smillie, L. D. (2019). Openness/Intellect and susceptibility to pseudo-profound bullshit: A replication and extension. European Journal of Personality, 33(1), 72–88. https://doi.org/10.1002/per.2176

Ball, J. (2017). Post-truth: How bullshit conquered the world. Biteback Publishing.

Bockman, J. (2011). Markets in the name of socialism: The left-wing origins of neoliberalism. Stanford University Press.

Carraro, L., Negri, P., Castelli, L., & Pastore, M. (2014). Implicit and explicit as a function of political ideology. PLoS ONE, 9(5), e96312. https://doi.org/10.1371/journal.pone.0096312

Carson, T. L. (2016). Frankfurt and Cohen on bullshit, bullshitting, deception, lying, and concern with the truth of what one says. Pragmatics & Cognition, 23(1), 53-67. https://doi.org/10.1075/pc.23.1.03car

Caruso, E. M., Vohs, K. D., Baxter, B., & Waytz, A. (2013). Mere exposure to money increases endorsement of free-market systems and social inequality. Journal of Experimental Psychology: General, 142(2), 301-306. https://doi.org/10.1037/a0029288

Castelli, L., & Carraro, L. (2011). Ideology is related to basic cognitive processes involved in attitude formation. Journal of Experimental Social Psychology, 47(5), 1013-1016. https://doi.org/10.1016/j.jesp.2011.03.016

Čavojová, V., Brezina, I., & Jurkovič, M. (2020). Expanding the bullshit research out of pseudo-transcendental domain. Current Psychology, 1–10. https://doi.org/10.1007/s12144-020-00617-3

Čavojová, V., Secară, E. C., Jurkovič, M., & Šrol, J. (2019). Reception and willingness to share pseudo-profound bullshit and their relation to other epistemically suspect beliefs and cognitive ability in Slovakia and Romania. Applied Cognitive Psychology, 33(2), 299–311. https://doi.org/10.1002/acp.3486

Chirumbolo, A., & Leone, L. (2008). Individual differences in need for closure and voting behaviour. Personality and Individual Differences, 44(5), 1279-1288. https://doi.org/10.1016/j.paid.2007.11.012

Chirumbolo, A., Areni, A., & Sensales, G. (2004). Need for cognitive closure and politics: Voting, political attitudes and attributional style. International Journal of Psychology, 39(4), 245-253. https://doi.org/10.1080/00207590444000005

Chomsky, N. (1991). Media control: The spectacular achievements of propaganda. Seven Stories Press.

Deppe, K. D., Gonzalez, F. J., Neiman, J. L., Jacobs, C., Pahlke, J., Smith, K. B., & Hibbing, J. R. (2015). Reflective liberals and intuitive conservatives: A look at the cognitive reflection test and ideology. Judgment and Decision Making, 10, 314–331.

Edelman, M. (1988). Constructing the political spectacle. University of Chicago Press.

Elchardus, M., & Spruyt, B. (2016). Populism, persistent republicanism and declinism: An empirical analysis of populism as a thin ideology. Government and Opposition, 51(1), 111-133. https://doi.org/10.1017/gov.2014.27

Erlandsson, A., Nilsson, A., Tinghög, G., & Västfjäll, D. (2018). Bullshit-sensitivity predicts prosocial behavior. PLoS ONE, 13(7), e0201474. https://doi.org/10.1371/journal.pone.0201474

Evans, A., Sleegers, W., & Mlakar, Ž. (2020). Individual differences in receptivity to scientific bullshit. Judgment and Decision Making, 15(3), 401-412.

Frankfurt, H. G. (1986/2005). On bullshit (pp. 1–69). https://doi.org/10.2307/j.ctt7t4wr.2

Gilens, M., & Page, B. I. (2014). Testing theories of American politics: Elites, interest groups, and average citizens. Perspectives on Politics, 12(3), 564-581. https://doi.org/10.1017/S1537592714001595

Gligorić, V., & Vilotijević, A. (2019). “Who said it?” How contextual information influences perceived profundity of meaningful quotes and pseudo-profound bullshit. Applied Cognitive Psychology, 34(2), 535–542. https://doi.org/10.1002/acp.3626

Gligorić, V., & Vilotijević, A. (2020, April 4). Disintegration, neoliberalism and pseudo-profound bullshit. https://doi.org/10.31234/osf.io/f75hz

Gronau, Q. F., Van Erp, S., Heck, D. W., Cesario, J., Jonas, K. J., & Wagenmakers, E. J. (2017). A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: The case of felt power. Comprehensive Results in Social Psychology, 2(1), 123-138. https://doi.org/10.1080/23743603.2017.1326760

Gustainis, J. J. (1990). Demagoguery and political rhetoric: A review of the literature. Rhetoric Society Quarterly, 20(2), 155-161. https://doi.org/10.1080/02773949009390878

Hardcastle, G. L., & Reisch, G. A. (Eds.). (2006). Bullshit and philosophy: Guaranteed to get perfect results every time. Open Court.

Hart, J., & Graether, M. (2018). Something’s going on here: Psychological predictors of belief in conspiracy theories. Journal of Individual Differences, 39(4), 229–237. https://doi.org/10.1027/1614-0001/a000268

Harvey, D. (2007). A brief history of neoliberalism. Oxford University Press.

Heck, D. W., Gronau, Q. F., & Wagenmakers, E.-J. (2017). metaBMA: Bayesian model averaging for random and fixed effects meta-analysis. R package version 0.3.9. Retrieved from https://cran.r-project.org/package=metaBMA

Heer, J. (2015, December 1). Donald Trump is not a liar. He’s something worse: A bullshit artist. New Republic. Retrieved from newrepublic.com

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83. https://doi.org/10.1017/S0140525X0999152X

Hinne, M., Gronau, Q. F., van den Bergh, D., & Wagenmakers, E. (2019, March 25). A conceptual introduction to Bayesian model averaging. https://doi.org/10.31234/osf.io/wgb64

Hopkin, J., & Rosamond, B. (2018). Post-truth politics, bullshit and bad ideas: ‘Deficit fetishism’ in the UK. New Political Economy, 23(6), 641-655. https://doi.org/10.1080/13563467.2017.1373757

JASP Team. (2020). JASP (Version 0.12.2) [Computer software]. Retrieved from https://jasp-stats.org/

Jost, J. T. (2017). Ideological asymmetries and the essence of political psychology. Political Psychology, 38(2), 167-208. https://doi.org/10.1111/pops.12407

Jost, J. T., & Krochik, M. (2014). Ideological differences in epistemic motivation: Implications for attitude structure, depth of information processing, susceptibility to persuasion, and stereotyping. Advances in Motivation Science, 1, 181-231.

Jost, J. T., Sterling, J., & Stern, C. (2017). Getting closure on conservatism, or the politics of epistemic and existential motivation. In C. Kopetz & A. Fishbach (Eds.), The motivation-cognition interface: From the lab to the real world: A Festschrift in honor of Arie W. Kruglanski (pp. 56-87). Taylor and Francis. https://doi.org/10.4324/9781315171388_4

Kristiansen, L. J., & Kaussler, B. (2018). The bullshit doctrine: Fabrications, lies, and nonsense in the age of Trump. Informal Logic, 38(1), 13-52. https://doi.org/10.22329/il.v38i1.5067

Law, S. (2011). Believing bullshit: How not to get sucked into an intellectual black hole. Prometheus.

Littrell, S., Risko, E. F., & Fugelsang, J. A. (2020). The Bullshitting Frequency Scale: Development and psychometric properties. British Journal of Social Psychology. https://doi.org/10.1111/bjso.12379

Morey, R. D., & Rouder, J. N. (2015). BayesFactor: Computation of Bayes factors for common designs. http://CRAN.R-project.org/package=BayesFactor

Neumann, V. (2006). Political bullshit and the stoic story of self. In G. L. Hardcastle & G. A. Reisch (Eds.), Bullshit and philosophy: Guaranteed to get perfect results every time (pp. 185–202). Open Court.

Nilsson, A., Erlandsson, A., & Västfjäll, D. (2019). The complex relation between receptivity to pseudo-profound bullshit and political ideology. Personality and Social Psychology Bulletin, 45(10), 1440-1454. https://doi.org/10.1177/0146167219830415

O'Dwyer, M. (2018). The intersectional politics of bullshit. European Journal of Politics and Gender, 1(3), 405-420. https://doi.org/10.1332/251510818X15395100533121

Pacini, R., & Epstein, S. (1999). The relation of rational and experiential information processing styles to personality, basic beliefs, and the ratio-bias phenomenon. Journal of Personality and Social Psychology, 76(6), 972–987. https://doi.org/10.1037/0022-3514.76.6.972

Palan, S., & Schitter, C. (2018). Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22-27. https://doi.org/10.1016/j.jbef.2017.12.004

Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549–563. https://doi.org/10.5281/zenodo.1067051

Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2016). It's still bullshit: Reply to Dalton (2016). Judgment and Decision Making, 11(1), 123-125.

Petrocelli, J. V. (2018). Antecedents of bullshitting. Journal of Experimental Social Psychology, 76, 249–258. https://doi.org/10.1016/j.jesp.2018.03.004

Pfattheicher, S., & Schindler, S. (2016). Misperceiving bullshit as profound is associated with favorable views of Cruz, Rubio, Trump and conservatism. PLoS ONE, 11(4), 1–7. https://doi.org/10.1371/journal.pone.0153419

R Core Team. (2013). R: A language and environment for statistical computing. https://www.R-project.org/

Reisch, G. A. (2006). The pragmatics of bullshit, intelligently designed. In G. L. Hardcastle & G. A. Reisch (Eds.), Bullshit and philosophy: Guaranteed to get perfect results every time (pp. 33–47). Open Court.

Rogers, T., & Norton, M. I. (2011). The artful dodger: Answering the wrong question the right way. Journal of Experimental Psychology: Applied, 17(2), 139-147. https://doi.org/10.1037/a0023439

Rouder, J. N. (2014). Optional stopping: No problem for Bayesians. Psychonomic Bulletin & Review, 21(2), 301-308. https://doi.org/10.3758/s13423-014-0595-4

Schutz, A. (1997). Self-presentational tactics of talk-show guests: A comparison of politicians, experts, and entertainers. Journal of Applied Social Psychology, 27(21), 1941–1952.

Simonsohn, U. (2018). Two lines: A valid alternative to the invalid testing of U-shaped relationships with quadratic regressions. Advances in Methods and Practices in Psychological Science, 1(4), 538–555. https://doi.org/10.1177/2515245918805755

Starkweather, J., & Moske, A. K. (2011). Multinomial logistic regression. Retrieved from http://bayes.acs.unt.edu:8083/BayesContent/class/Jon/Benchmarks/MLR_JDS_Aug2011.pdf

Sterling, J., Jost, J. T., & Bonneau, R. (2020). Political psycholinguistics: A comprehensive analysis of the language habits of liberal and conservative social media users. Journal of Personality and Social Psychology, 118(4), 805-834. https://doi.org/10.1037/pspp0000275

Sterling, J., Jost, J. T., & Pennycook, G. (2016). Are neoliberals more susceptible to bullshit? Judgment and Decision Making, 11(4), 352–360.

Turpin, M. H., Walker, A., Kara-Yakoubian, M., Gabert, N. N., Fugelsang, J., & Stolz, J. A. (2019). Bullshit makes the art grow profounder. Judgment and Decision Making, 14(6), 658-670. https://doi.org/10.2139/ssrn.3410674

Van Erp, S., Verhagen, J., Grasman, R. P., & Wagenmakers, E. J. (2017). Estimates of between-study heterogeneity for 705 meta-analyses reported in Psychological Bulletin from 1990–2013. Journal of Open Psychology Data, 5(1). http://doi.org/10.5334/jopd.33

Van Hiel, A., & Mervielde, I. (2004). Openness to experience and boundaries in the mind: Relationships with cultural and economic conservative beliefs. Journal of Personality, 72(4), 659-686. https://doi.org/10.1111/j.0022-3506.2004.00276.x

Van Hiel, A., Kossowska, M., & Mervielde, I. (2000). The relationship between openness to experience and political ideology. Personality and Individual Differences, 28(4), 741-751. https://doi.org/10.1016/S0191-8869(99)00135-X

Wagenmakers, E. J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Love, J., ... & Matzke, D. (2018). Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications. Psychonomic Bulletin & Review, 25(1), 35-57. https://doi.org/10.3758/s13423-017-1343-3

Walker, A. C., Turpin, M. H., Stolz, J. A., Fugelsang, J. A., & Koehler, D. J. (2019). Finding meaning in the clouds: Illusory pattern perception predicts receptivity to pseudo-profound bullshit. Judgment and Decision Making, 14(2), 109–119.


Appendices

Appendix A. Factor loadings (estimation method: minimum residual; oblique rotation: promax), means, and standard deviations of the pre-tested political bullshit and factual statements. The items selected for the main studies are in italics.

Political bullshit statements | M (SD) | Factor 1 | Factor 2
We would have to bring back dignity to the people. | 3.08 (1.13) | 0.512 |
It is of the utmost importance that our political leadership believes in their country. | 3.26 (1.34) | 0.857 |
Politicians should always think about the future of the nation. | 3.49 (1.19) | 0.748 |
It is crucial that politicians believe in their voters. | 3.14 (1.34) | 0.772 |
We should bring back meaning to politics. | 3.14 (1.22) | 0.677 |
In order to succeed, we must protect a stable society. | 3.24 (0.99) | 0.598 |
An exceptional politician makes the country work. | 2.69 (1.16) | 0.566 |
Admirable politicians raise a country as a whole. | 2.92 (1.09) | 0.777 |
Our driving force is faith in progress. | 3.04 (1.30) | 0.815 |
A country with expert politicians is bound to succeed. | 2.43 (1.08) | 0.451 |
Good politicians can make citizens be on the same page. | 2.78 (1.03) | 0.619 |
Exceptional leaders can keep society together. | 3.08 (1.11) | 0.737 |
An effective politician can make the whole of the country larger than the sum of its parts. | 3.00 (1.22) | 0.729 |
Political leaders of our country should make society work as a clock. | 2.28 (1.10) | 0.586 |
To politically lead the people means to always fight for them. | 3.45 (1.06) | 0.649 |
Factual statements | | |
Politicians are usually elected by voting. | 2.39 (1.33) | | 0.700
Most politicians get paid for the work they do. | 2.10 (1.22) | | 0.682
In elections, we try to get as many votes as possible. | 2.49 (1.54) | | 0.740
The president and prime minister have important political functions. | 3.06 (1.22) | 0.410 | 0.536
Voters vote during the elections. | 2.24 (1.46) | | 0.807
Minors usually are not allowed to vote. | 2.31 (1.44) | | 0.766
Parliamentary and local elections are different kinds of elections. | 2.35 (1.28) | | 0.749
In elections, parties offer some kind of political program. | 2.51 (1.26) | | 0.754
To get elected, we have to pass the threshold. | 2.39 (1.20) | 0.442 | 0.291
During the elections, politicians have a lot of debates. | 2.45 (1.30) | | 0.827
Note. Only loadings higher than .25 are shown. M = Mean, SD = Standard deviation.

Appendix B. Slogans endorsement descriptives and factor loadings on one factor. Selected items are in italics.


Slogans | M (SD) | Factor loading
I believe in Gonfel! | 2.84 (1.22) | 0.686
For better and stronger Gonfel! | 2.94 (1.21) | 0.746
Together for Gonfel! | 3.22 (1.08) | 0.621
Don’t Hold Gonfel Back. | 2.00 (1.15) | 0.539
Gonfel comes first. | 1.86 (1.08) | 0.486
Only the finest Gonfel. | 1.63 (0.77) | 0.489
Our Gonfel belongs to us! | 2.08 (1.18) | 0.225
Gonfel has strength. | 2.45 (1.12) | 0.537
Stand up, Gonfel! | 2.74 (1.29) | 0.605
Gonfel’s new chance. | 2.69 (1.09) | 0.440
Note. M = Mean, SD = Standard deviation.


Appendix C. Factor loadings of the items of the three political bullshit measures across the three countries. For each of the nine exploratory factor analyses (three measures in three countries; estimation method: minimum residual), the one-factor solution was the most appropriate.

Items of political bullshit measures | The U.S. | Serbia | The Netherlands
Political bullshit statements | | |
It is of the utmost importance that our political leadership believes in their country. | .613 | .624 | .707
Politicians should always think about the future of the nation. | .738 | .713 | .546
It is crucial that politicians believe in their voters. | .706 | .517 | .589
We should bring back meaning to politics. | .592 | .366 | .500
Admirable politicians raise a country as a whole. | .746 | .540 | .648
Our driving force is faith in progress. | .282 | .397 | .492
Good politicians can make citizens be on the same page. | .599 | .428 | .649
Exceptional leaders can keep society together. | .713 | .626 | .630
An effective politician can make the whole of the country larger than the sum of its parts. | .594 | .511 | .570
To politically lead the people means to always fight for them. | .527 | .680 | .430
Slogans | | |
I believe in Gonfel! | .685 | .731 | .645
For better and stronger Gonfel! | .691 | .605 | .654
Together for Gonfel! | .685 | .736 | .603
Don’t Hold Gonfel Back. | .405 | .328 | .452
Stand up, Gonfel! | .579 | .517 | .527
Bullshit programs | | |
Support the first program | .642 | .810 | .715
Vote for the first program | .659 | .791 | .787
Support the second program | .819 | .807 | .681
Vote for the second program | .801 | .822 | .691
Support the third program | .691 | .678 | .701
Vote for the third program | .696 | .623 | .733
