Weaving it in: How partisan media react to events

Anonymized version

September 1, 2021

Abstract: What strategies do partisan media use to fit real-world events into ideological narratives? I look at whether they connect events to related political issues (e.g. hurricanes and climate change), and whether each side is able to fit events into its existing set of issue positions. I focus on talk radio—a medium with a larger reach than Twitter. I analyze almost 2 million hours of radio from hundreds of talk shows, and find that partisan content makers do not ignore ideologically inconvenient events or their political implications, in the way they have been found to ignore scandals and other political “bad news”. Instead, after an event, both ideological sides give far more attention to related political issues. At the same time, liberal and conservative shows feature starkly different mixes of positions on those issues, and events do little to change those mixes. In other words, partisan media re-interpret events to fit their existing stance on the topic, weaving them in.

Keywords: partisan media, agenda-setting, ideology, issues, talk radio

“These things have become very politicized as you know, folks. Hurricanes and hurricane forecasting is much like much else that the left has gotten its hands on [...]. The forecast and the destruction potential doom and gloom is all to heighten the belief in climate change.”

—Rush Limbaugh, September 11, 2018

A significant share of Americans get their political information and opinions from non-mainstream media, such as opinion blogs, podcasts and talk radio shows (Schaffner et al., 2019; Pew Research Center, 2019). Producers of these types of media often appear to be less bound by journalistic norms of objectivity and balance. Instead, many have an ideological bias that informs the content they create (Baum and Groeling, 2008; Sobieraj and Berry, 2011). Moreover, the “big three” cable news channels—Fox News, CNN and MSNBC—have also developed a consistent political leaning (Ad Fontes Media, 2019; Martin and

Yurukoglu, 2017). Together, these outlets make up the landscape of so-called partisan media in the United States. And while we know more and more about the potentially polarizing effects of such media, we have less systematic knowledge about the content they create. How exactly do they manage to report on current affairs in a way that is ideologically consistent?

In this paper, I investigate two strategies that partisan media might use in response to newsworthy events. The first strategy (“partisan agenda-setting”) is about whether content producers make the connection between the event and related political topics. For example, conservative outlets may be less likely to discuss climate change in their reporting on a storm. The second strategy

(“partisan position-taking”) is not to let real-world events affect the ideological narrative about those topics. In this case, the outlet reports on the event and its political implications, but re-frames what happened in a way that is consistent with its ideological slant. For example, after a hurricane hits the US, conservative hosts may justify a skeptical position on climate change by framing the event as irrelevant to the existence of climate change. While the first strategy

would affect whether a related political issue is discussed, the second affects how it is talked about.

I look for these strategies in a massive data set on US talk radio, covering almost 2 million hours of content. I focus on partisan discussion of three issues

(climate change, gun policy and immigration) in response to relevant events. I

find that newsworthy events cause a spike in discussion of related political issues, regardless of the show’s leaning. The effects are large, with 40%–400% increases in topic mentions. In other words, downplaying the connection between an event and a political issue (e.g. hurricanes and climate change) is a surprisingly rare strategy among partisan outlets. However, I also find that political talk shows are strongly ideologically sorted in the way they discuss these issues, and events do very little to change this. On both sides, the balance of stances about climate change and immigration does not shift after an event. In the case of gun policy, only conservative shows move somewhat: an extra 8% of mentions of the topic take an anti-gun position.

In sum, based on a study of three political topics, I find that partisan media producers do not selectively downplay the connection between newsworthy events and political issues. Instead, they typically re-interpret the event to

fit into their existing ideological narrative about an issue. This adds nuance to earlier evidence showing that partisan outlets do strategically avoid certain events and topics—e.g., scandals affecting “their” side, campaign issues owned by the “other” side, or controversial foreign policy (Baum and Groeling, 2008;

Groeling, 2008; Puglisi, 2011; Stroud, 2011). Perhaps the connection between certain events and political topics is too significant to be ignored; or perhaps some events leave more room for re-interpretation—for instance, when they are not directly about politicians’ qualities or performance.

These results suggest that in partisan media, newsworthy events tend to amplify an already ideologically consistent discussion of related issues. When events are reported on in this way, rather than as a shared experience, they may become a polarizing factor. This is illustrated by a curious finding about hurricane impacts. Usry et al. (2019) uncover that in North Carolina, after hurricane

Florence made landfall, highly educated and partisan Republicans became less likely to see climate change as a threat. Educated partisans are also more likely to listen to like-minded partisan media (Stroud, 2011). The current study shows that Republican partisans in North Carolina likely would have heard far more climate-skeptical messaging just after the hurricane than before.

Tragedies such as hurricanes or school shootings have a special potential for becoming shared national experiences; the sort of experiences that cut across groups and layers in society. These events can be part of the “social glue” that holds a democracy together (Sunstein, 2018). For partisan media audiences, ideological interpretations of the event could well interfere with its binding power—making it divisive instead of unifying.

Political talk radio in the US

Though it does not currently spark the same research interest as social media or hyperpartisan news websites, talk radio is a tremendously impactful medium in the US. The number of Americans who tune in to terrestrial (offline) radio in a given week has remained stable around 90% for over a decade (dropping off slightly in 2020 due to a decline in time spent driving; Pew Research Center

2021a). Terrestrial News/Talk stations together reach about 18% of the adult

American public, or 45.3 million people, each week (Radio Advertising Bureau,

2021). This is a slightly larger weekly audience reach than Twitter (Pew Research Center, 2021a). Many radio shows further expand their listenership by making content available online (Calmes, 2015); the weekly reach of online audio is 62% of American adults (Pew Research Center, 2021b).

Though there are no numbers on the overall listenership of political talk radio, it is telling that out of the ten most-listened-to talk radio hosts in the US

(Talkers magazine, 2019), eight are political.1 Finally, radio listeners are very

1According to the classifier developed in this paper. The other two, Dave Ramsey and George Noory, host a financial and a paranormal talk show.

politically engaged—in part because they are older than the average American. In

2018, for instance, 75% of radio listeners said they definitely intended to vote in the midterm election, compared to 63% of non-listeners (US Census Bureau,

2019; Schaffner et al., 2019).2

In spite of these audience sizes, large-n studies of political talk radio are rare, because audio is so challenging to store and process at scale. Even studies that analyze many hours of content still cover only one or a handful of shows (Barker,

2002; Berry and Sobieraj, 2013). This is a likely reason why research on the politics of talk radio has tapered off since the early 2000s, while listenership has remained strong.

Strategies in partisan media content

In this paper, I analyze talk radio content to investigate how partisan content-makers respond to real-world events. How do they make sure their reporting is in line with their ideological stances? I examine two strategies: partisan agenda-setting, and partisan position-taking. The first revolves around the choice of topics to talk about in the aftermath of an event. The second has to do with the way these topics are discussed—in other words, spin.

Partisan agenda-setting

Partisan agenda-setting comes in two variants. One involves strategically giving less attention to events; the other involves strategically choosing which political topics to connect them to.

In the first variant, a media outlet spends less time on events that are less compatible with its ideological slant—for instance, because the event reflects negatively on Democrats or Republicans, or because event details are (at least at first glance) difficult to match with the outlet’s ideological story. Groeling

2Survey-based media consumption measures raise concerns about people overreporting their media use (Guess, 2015; Prior, 2009). Of the sources cited here, the Pew Research Center, the Radio Advertising Bureau and Schaffner et al. rely on survey data. Nielsen uses a combination of portable audio recording devices and weekly diaries. Talkers magazine ratings come from industry experts combining a variety of data sources.

(2013) calls this “selection bias” (as opposed to “presentation bias”). For example, compared to the presumably neutral baselines of the Associated Press and

Reuters, both Fox News and the left-leaning blog DailyKos clearly give more attention to stories that fit their partisan narrative (so does the right-leaning blog FreeRepublic, but only under certain specifications; Baum and Groeling

2008). CBS, NBC and (to a lesser extent) ABC all appear to have had a preference for reporting polls showing approval gains for Bill Clinton and approval losses for G.W. Bush. Fox News had the opposite tendency (Groeling, 2008).

And slanted newspapers tend to give more coverage to scandals involving the

“other” side, and less to those involving their own (Puglisi and Snyder, 2011).

This first type of agenda-setting strategy is less feasible for extremely newsworthy events. In these cases, content producers may choose a second agenda-setting strategy: they may report on events, but be selective about connecting them to certain political debates. Partisan media makers may avoid issues that their ideological side does not own (Petrocik, 1996)—this is the case for liberals and immigration, for instance (Seeberg, 2017). They may also choose not to bring in issues that their ideology frames as less important, or that listeners on their ideological side typically care less about. For example, this is likely to be true for conservatives and climate change (Pew Research Center, 2020). Finally, this approach may be helpful when an event, once connected to a political topic, would be difficult to spin. For instance, while the connection between climate change and Atlantic hurricanes may be intuitive, there is actually no strong evidence that global warming is currently worsening hurricanes (Geophysical Fluid

Dynamics Laboratory, 2019). In other words, reporting on a hurricane may be difficult to integrate with liberal messaging about climate change.

Existing research on partisan coverage of political topics shows that newspapers give more coverage to some economic topics when the current trends support the partisan side that the paper is on (Larcinese et al., 2011). Similarly, during presidential campaigns with Republican incumbents, the New York

Times pays particular attention to campaign topics that are “owned” by the

Democratic party (Puglisi, 2011). Finally, Stroud (2011) finds that Fox News and Republican-endorsing newspapers spent somewhat less time covering the

Iraq war, and more time covering terrorism, than CNN and Democrat-endorsing newspapers. These findings confirm that outlets sometimes ignore inconvenient topics—either because that topic is not owned by their favored party, or because current affairs are not as compatible with the partisan story about that topic.

Partisan position-taking

The second approach that partisan media could use is to filter news events through an ideological lens. For example, Stroud (2011) finds that when covering the Iraq war, Fox News and Republican-leaning newspapers used noticeably more positive frames than CNN and Democrat-leaning papers did. Studies of mass and elite political behavior show that ideology often inspires different interpretations of the same events (Gaines et al., 2007). For example, Bisgaard

(2015) finds that in the UK after the Great Recession, both Conservative and

Labour partisans agreed on the fact that the economy had gotten worse under a

Labour government. However, Conservatives tended to say that the government was responsible for the economic situation, whereas Labour adherents said it was not. Ideological lenses could explain why mass shootings are followed by laws that loosen gun control in Republican-controlled state legislatures, but cause no change in Democrat-controlled states (Luca et al., 2019). While most

Republicans believe that gun violence can be solved with broader gun ownership, most Democrats believe the opposite. This is true for both party elites and the public (Pew Research Center, 2019; Spitzer, 2011).

An open question, however, is whether partisan media are able to maintain their ideological stances in the face of unusual real-world events. Perhaps newsworthy events with strong connections to political topics have the power to (at least temporarily) change the balance of positions on those topics. Indeed, each of the event types that I study here—hurricanes, mass shootings and family separation—has moved perceptions of a political issue in one way or another.

For example, Visconti and Young (2019) find that disasters such as hurricanes and floods influence Americans’ climate beliefs. Newman and Hartman (2019) show that people who live near the site of a mass shooting become more likely to support stricter gun control, regardless of their partisanship. Finally, ordinary Republicans were split in their opinions on President Trump’s policy of separating families (Quinnipiac University, 2018). The fact that this event was the result of party politics did not necessarily cause partisans to fall in line.

These instances show that perhaps some events challenge partisan interpretations, as they do not fit easily into both ideological narratives at once.

Methods

Data and design

In this paper, I investigate partisan agenda-setting and partisan position-taking, asking how rare or common these strategies are. I do this by looking at 1.5 years’ worth of US political talk radio. I define political talk radio as any radio show where one or more hosts (and possibly guests or callers) talk about current affairs. In the main analyses, I exclude news broadcasts and shows produced by public radio networks. In Supplemental Information section 9, however, I illustrate that news and public radio programs do not behave differently from other political shows—and that the paper’s results hold whether or not I include them. Similarly, I do not place any requirement on how ideological the show must be to be included in the data set—but I also demonstrate that non-ideological political shows are relatively rare, and that excluding them leads to the same results.

The raw data consist of continuous recordings, and transcriptions, of the live Internet streams of 175 radio stations. They were collected by [data source redacted for anonymization]. Supplemental Information section 1 provides more details about this data set, including station locations, how station characteristics compare to the population of US talk radio stations, and transcription

quality. Together, these stations broadcast over 1,000 unique radio shows.

I analyze discussions of three political topics (climate change, gun policy and immigration) on each show, in the weeks before and after a relevant event happened. Shows are produced relatively independently from one another, making them a sensible observational unit of radio content.3 And because almost all shows are broadcast according to a weekly schedule (e.g. one episode per weekday), it is sensible to bundle content into weeks rather than, say, days.

The unit of analysis, then, is the radio show-week. The total number of show-weeks in the analyses depends on the topic and which events it is connected to.

The number of radio stations in the data set was gradually increased, so if an event happened later in the period, more political shows were being captured at the time. I also cover a different number of events for each topic: two for climate change, three for gun policy, and one for immigration. In total, I collected 1,412 show-weeks for climate, 1,896 for gun policy, and 524 for immigration.4

Partisan agenda-setting is measured by the number of times a political topic was mentioned on a show. Partisan position-taking is measured by the stances that those mentions support. I expect these to vary depending on whether a show-week happened just before or just after a major event; and whether the political talk show leans liberal or conservative (measured using an automated text classifier). Figure 1 illustrates the workflow towards measurement and modeling of these variables.
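To make the design concrete, the sketch below shows what a few rows of such a show-week table might look like in Python/pandas. The column names are hypothetical illustrations, not the paper's actual variable names.

import pandas as pd

# Hypothetical show-week rows; column names are illustrative only.
show_weeks = pd.DataFrame({
    "show":         ["Show A", "Show A", "Show B", "Show B"],
    "week":         ["pre", "post", "pre", "post"],
    "ideology":     ["conservative", "conservative", "liberal", "liberal"],
    "airtime_hrs":  [20, 20, 10, 10],          # weekly airtime (control)
    "mentions":     [3, 14, 6, 11],            # topic mention count (DV 1)
    "stance_share": [0.40, 0.38, 0.90, 0.93],  # share of stance-taking
                                               # mentions with a given stance (DV 2)
})
print(show_weeks)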

Selecting issues and events

The issues I examine in this study are climate, gun policy and immigration.

Each one is clearly connected to one or more newsworthy events: hurricanes for climate change, mass shootings for gun policy; and for immigration, the outburst of attention to families being separated at the US–Mexican border.

3The exception is that shows licensed to the same network may be subject to a common set of pressures about content, including a set of legal and moral rules called Standards and Practices.

4To estimate potential audience effects, it would be helpful to weight shows by their audience sizes. Unfortunately, such data does not exist. Neither do the datasets that might be used to create estimates (e.g. hour-by-hour audience sizes and programming schedules of a representative sample of US talk radio stations).

Figure 1: The project workflow, from data to dependent variables (topic mentions and stances) and independent variables (pre- or post-event week and show ideology), and on to regression models.

For the topic of climate change, I chose hurricanes over (say) major wildfires and floods because hurricanes get national attention, their onset is sudden (so it is meaningful to consider “pre-event” measurements), and more than one such event is covered by the radio transcripts. As noted above, each of the studied event types is known to have influenced partisan opinions on the relevant topic in one way or another. Discussion of these topics could also be easily identified through a limited number of topic-related terms.

The specific events I study here were the most newsworthy of their kind in the observed period (as measured by mainstream media attention). For the topic of climate change, I use the two hurricanes that made landfall in the continental United States: Florence and Michael. For gun policy, I incorporate three mass shootings that received broad attention: Santa Fe High School, Jacksonville Landing, and the Pittsburgh Synagogue shooting.5 For immigration, I use the outbreak of public attention to President Trump’s policy of separating immigrant children from their parents at the US–Mexican border in June 2018.

Based on numbers of Google searches for “immigration”, this was by far the most noteworthy immigration event in the period covered by the data.

Topic mention counts

A topic mention is simply an occasion where the algorithm recognized a topic-relevant term in the speech produced by a radio show. The terms for each topic are: (1) climate change and global warming; (2) gun control, gun right(s), second amendment, gun owner(ship), anti-gun, pro-gun, and gun violence; (3) immigration, immigrant, migration, and migrant. These lists were composed by searching for fragments containing “seed” terms (e.g. “immigration”), and mining them for other possible keywords, until any further keywords yielded an unsatisfactory rate of false positives.
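As an illustration, the fragment below sketches this kind of keyword matching in Python. The regular expressions are my own approximation of the term lists above; the paper's exact matching rules (e.g. its handling of transcription errors) may differ.

import re

# Term lists approximated from the text; optional endings cover
# variants like "gun right(s)" and "gun owner(ship)".
TOPIC_TERMS = {
    "climate":     [r"climate change", r"global warming"],
    "gun_policy":  [r"gun control", r"gun rights?", r"second amendment",
                    r"gun owner(?:ship)?s?", r"anti-?gun", r"pro-?gun",
                    r"gun violence"],
    "immigration": [r"immigration", r"immigrants?", r"migration", r"migrants?"],
}
PATTERNS = {t: re.compile(r"\b(?:" + "|".join(terms) + r")\b", re.IGNORECASE)
            for t, terms in TOPIC_TERMS.items()}

def count_mentions(transcript: str) -> dict:
    """Count topic-relevant term occurrences in one transcript."""
    return {t: len(p.findall(transcript)) for t, p in PATTERNS.items()}

print(count_mentions("They discussed gun control and the Second Amendment."))
# -> {'climate': 0, 'gun_policy': 2, 'immigration': 0}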

5Another mass shooting happened in Thousand Oaks, California, just eleven days after the Pittsburgh event. I did not include the Thousand Oaks shooting, as its pre-event week would overlap with the Pittsburgh post-week.

Shows that are broadcast on several of the recorded stations only have their mentions counted once. I make use of the transcripts from all broadcasts of the show, however, in order to help deal with any errors in the transcript of any given broadcast. Supplemental Information section 2 goes into more detail on this process.

Topic mention stances

Workers on Amazon’s Mechanical Turk coded the stance of the mentions by listening to a 30-second audio fragment surrounding the topic keyword. Pre-testing showed that longer fragments very rarely provided information that would change one’s initial judgment. Workers coded up to 25 mentions per show-week (if there were more topic mentions, they coded a random sample).

This amounted to about 4,500 out of 4,900 mentions for climate change, 5,050 out of 5,650 mentions for gun policy, and 7,350 out of 15,500 mentions for immigration.

For each topic, I asked coders to label the mentions with one of two issue stances: “skeptical” or “convinced” about climate change, “pro-gun” or “anti-gun”, and “supporting immigration” or “tough on immigration”. Coders could also label mentions as taking neither stance. I limited the coding scheme to three options because pre-testing showed that more nuanced labels were highly subjective in meaning, and therefore difficult for coders to agree upon.

I asked two raters to code each mention. If they disagreed, I added a third.

The Spearman rank correlation of the codes given by any two raters was .74 for climate, .61 for gun policy, and .53 for immigration. The lower agreement on immigration mentions is largely due to their being slanted only in subtle or debatable ways. As a result, coders often disagreed on whether mentions supported a particular stance, or should be labeled as “neither” instead. Supplemental Information section 3 contains details about the coding task, including raw agreement rates and full code books.
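For reference, the agreement statistic reported here can be computed as in the following minimal Python sketch. The numeric coding of stances (-1, 0, 1) is an assumption for illustration, not the paper's actual scheme.

from scipy.stats import spearmanr

# Stance codes from two raters for the same mentions. The numeric coding
# (-1 = one stance, 0 = neither, 1 = the other stance) is illustrative.
rater1 = [1, 0, -1, 1, 0, 1, -1, -1]
rater2 = [1, 0, -1, 0, 0, 1, -1, 1]

rho, _ = spearmanr(rater1, rater2)
print(f"Spearman rank correlation: {rho:.2f}")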

An advantage of using non-expert human coders, and audio rather than text transcripts, is that coders’ interpretations of fragments are in some ways likely

to be quite close to radio listeners’ interpretations. For instance, they are about equally able to catch sarcasm. However, coders differ from real-world listeners in two notable ways.

Hearing the audio fragment, coders may or may not be able to detect the general ideological leaning of the show (for a very small number of popular shows, they may even guess the identity of the host). Listeners have more access to this information, and might use it to infer speakers’ implicit stances.

Further, listeners are more likely than coders to know whether the speaker is a guest or caller, or the host of the show. Listeners may care less about guests’ stances. This could be important if a shift in guest perspectives is the main source of change in the balance of viewpoints on a show.

Both of these biases, however, are in the direction of coders detecting more change in stances over time than listeners—by having weaker prior beliefs about the ideology of the show, and by giving equal weight to the (variable) viewpoints of guests. It is therefore reassuring that I largely find over-time stability in the interpreted stances of shows, despite a bias towards variability.

Pre-event and post-event weeks

In this study, I consider radio show-weeks that happened just before, or just after an event. Mass shootings are unpredictable and short, meaning their media impact starts simply on the day of the event. With hurricanes and immigrant family separation, on the other hand, the start of the impact is less clear-cut.

For example, the strength and path of hurricanes can be predicted with more and more certainty as they approach, until they eventually make landfall.

I expect events to have media impact once they cross some threshold of social significance—for example, once a hurricane is predicted to hit a populated area.

I use search indices to detect when this social threshold is reached. Supplemental

Information section 4 goes into further detail on how I employ Google Trends data to define pre-event and post-event weeks.
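Without reproducing the SI's exact rule, a simple thresholding approach of this kind could look as follows. The seven-day windows and the threshold value are illustrative assumptions, not the paper's actual parameters.

import pandas as pd

def event_windows(search_index: pd.Series, threshold: float):
    """Return (pre-week, post-week) date ranges around the first day a
    daily search index reaches `threshold`. Illustrative rule only."""
    onset = search_index[search_index >= threshold].index[0]
    pre = (onset - pd.Timedelta(days=7), onset - pd.Timedelta(days=1))
    post = (onset, onset + pd.Timedelta(days=6))
    return pre, post

# Toy daily Google Trends-style series around a hypothetical event.
days = pd.date_range("2018-09-01", periods=21, freq="D")
index = pd.Series([5] * 10 + [60, 100, 90, 70, 40, 30, 20, 10, 8, 6, 5],
                  index=days)
print(event_windows(index, threshold=50))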

Classifying shows: politics and ideology

In order for a show to be included in the analyses, it needs to be classified as political in content. Non-political talk shows (e.g., cooking shows) are excluded.

We also need to know whether it has a liberal or a conservative slant. For both decisions, I created bag-of-words classifiers based on all transcribed episodes of the show—typically 1.5 years’ worth of data.

As a training set, I used the transcripts of 50 shows with known labels. For the training set of the political/non-political classifier, I hand-labeled 33 shows as non-political based on their titles, verified by their transcript and the show’s website. This training set includes shows on nine broad topics—for instance, gardening, music, cooking and sports. For the ideology classifier, I required at least two sources to confirm that a show has either a conservative or a liberal slant. This method provided an ideological label for 15 shows, which also served as the “political” shows for the political/non-political classifier. The classifiers were trained on the episodes of these hand-labeled shows. Supplemental Information section 5 contains information on the training shows (including sources for the ideological labels) as well as further details on how the classifiers were trained, tuned and tested.
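The paper does not specify the model beyond "bag-of-words"; the sketch below shows one plausible implementation (TF-IDF features plus logistic regression in scikit-learn), with toy transcripts standing in for real episodes.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy transcripts with show-level labels; the real classifiers are trained
# on all transcribed episodes of the 50 hand-labeled shows.
episodes = [
    "discussion of the second amendment and border security tonight",
    "we cover climate policy and health care reform this hour",
    "tips for pruning roses and preparing your soil for spring",
]
labels = ["political", "political", "non-political"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(episodes, labels)

# Episode-level probabilities can then be averaged per show and
# thresholded (e.g. at 50%), as in the robustness checks described later.
print(clf.predict_proba(["the governor's new immigration bill"]))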

When the trained models were tested on previously unseen (held-out) episodes, the political/non-political model correctly classified all 50 known shows. The conservative/liberal model successfully classified 14 out of 15 political shows.

Finally, I applied the trained classifiers to all shows, including unlabeled ones.

Of the recorded shows, 429 were labeled as political, with 269 labeled conservative and 160 liberal. Of the shows labeled as political, the ideology classifier was able to label the vast majority (94%) as either liberal or conservative with reasonable certainty (> 70%).

Results

Agenda-setting: connecting events and issues

The first content production strategy we are interested in is whether shows downplay events’ connections to political topics. To investigate this, I analyze the number of topic mentions in the show-weeks before and after a relevant event. A negative binomial model matches well with the data, because the distribution of mention counts is skewed and fat-tailed (see Supplemental Information section 6 for more details). I include an interaction between time

(pre-event or post-event) and ideology (liberal or conservative), and control for a show’s amount of weekly airtime. Below, I report predicted numbers of topic mentions before and after events, along with p-values on the corresponding model coefficients. Supplemental Information section 7 includes the full regression tables.
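A minimal sketch of this specification, using statsmodels on toy show-week data (variable names are illustrative, not the paper's), might look like the following.

import pandas as pd
import statsmodels.formula.api as smf

# Toy show-week data; variable names are illustrative.
df = pd.DataFrame({
    "mentions":     [1, 2, 0, 3, 5, 2, 9, 4, 1, 7, 3, 12],
    "post":         [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
    "conservative": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "airtime":      [4, 3, 5, 4, 2, 6, 4, 3, 5, 4, 2, 6],
})

# Negative binomial count model with a time x ideology interaction,
# controlling for weekly airtime.
model = smf.negativebinomial("mentions ~ post * conservative + airtime",
                             data=df)
result = model.fit(disp=False, maxiter=500)

# Predicted mentions for a conservative show with four hours of airtime,
# before and after an event.
new = pd.DataFrame({"post": [0, 1], "conservative": [1, 1], "airtime": [4, 4]})
print(result.predict(new))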

Figure 2 shows the predicted number of mentions on conservative and liberal shows, for each topic, in the weeks before and after an event. I find that on a conservative show with four hours of airtime, the estimated number of climate mentions increases from 0.8 to 2.1 (p < .001). On a liberal show, it increases from 2.2 to 4.1 (p = 0.002). In the case of gun policy, on a conservative show, the predicted number of mentions changes from 1.9 to 3.3 (p < .001); on a liberal show, it changes from 0.7 to 1.0 (p = .053). The predicted number of immigration mentions on a conservative show goes from 4.3 to 25.3 (p < .001); on a liberal show it goes from 7.3 to 14.5 (p = .080). The difference between the proportional change on the liberal and conservative sides is significant only in the case of immigration (p = .006). Supplemental Information section 8 discusses the longevity of attention after it has peaked.

Figure 2: Predicted number of mentions of climate, gun policy and immigration on a conservative or liberal radio show with four hours of airtime, in the weeks before and after an event. Bars are 95% bootstrap percentile CIs, blocked at the show level.

One obvious concern about the immigration findings might be that it would be difficult to report on the event (family separation) without mentioning the topic terms. For that reason, I re-do the analyses leaving out any mentions that the coders labeled “neither”. These are mentions that take no stance on immigration, largely because they are simply pieces of news on the topic. On both ideological sides, the proportional increase in the “non-neither” mentions is roughly the same size as the overall increases above (conservative: from 1.9 to 9.4; liberal: from 4.0 to 6.4). In other words, the spikes in attention to immigration are not only due to outlets reporting on the events themselves—they are as much due to an increase in opinionated commentary on the topic of immigration.
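The confidence intervals in Figures 2 and 3 are described as bootstrap percentile CIs, blocked at the show level. A minimal sketch of such a cluster bootstrap follows: resample whole shows with replacement, recompute the statistic each time, and take percentile bounds. In the paper the statistic would be a model-predicted value; here a simple mean stands in.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def blocked_bootstrap_ci(df, stat_fn, cluster="show", n_boot=2000, alpha=0.05):
    """Percentile CI for stat_fn(df), resampling whole shows (clusters)
    with replacement rather than individual show-weeks."""
    clusters = df[cluster].unique()
    stats = []
    for _ in range(n_boot):
        draw = rng.choice(clusters, size=len(clusters), replace=True)
        stats.append(stat_fn(pd.concat([df[df[cluster] == c] for c in draw])))
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# Toy example: CI for the mean mention count across show-weeks.
df = pd.DataFrame({"show": list("AABBCC"), "mentions": [2, 9, 1, 4, 6, 12]})
print(blocked_bootstrap_ci(df, lambda d: d["mentions"].mean()))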

Position-taking: defending issue stances

To investigate partisan position-taking as a strategy, I analyze the mix of issue stances present on partisan outlets. I use a fractional logit model, where outcomes can take any value between 0 and 1 (see Supplemental Information section

6). The dependent variable is the proportion of mentions on each show-week that appear convinced about climate change, anti-gun, or tough on immigration

(among all mentions with a stance; leaving out the “neither” mentions). The model includes an interaction between time (pre/post) and ideology.
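A fractional logit can be estimated as a binomial GLM with a logit link applied to the [0, 1] outcome, typically with robust standard errors in the Papke-Wooldridge tradition. A minimal sketch on toy data, with illustrative variable names:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy show-week data: proportion of stance-taking mentions supporting a
# given position. Variable names are illustrative.
df = pd.DataFrame({
    "stance_prop":  [0.20, 0.90, 0.30, 0.80, 0.25, 0.95, 0.40, 0.85],
    "post":         [0, 0, 0, 0, 1, 1, 1, 1],
    "conservative": [1, 0, 1, 0, 1, 0, 1, 0],
})

# Fractional logit: binomial GLM with logit link on a fractional outcome,
# with heteroskedasticity-robust standard errors.
model = smf.glm("stance_prop ~ post * conservative", data=df,
                family=sm.families.Binomial())
result = model.fit(cov_type="HC1")
print(result.params)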

Figure 3 shows the predicted proportions of topic mentions supporting each stance. In the week before an event, discussions of all three political topics are already deeply sorted. The proportions of convinced climate mentions are vastly different between conservative and liberal shows (cons.: 38%, lib.: 92%, p < .001). The same is true for anti-gun mentions (conservative: 21%, liberal:

71%, p < .001) and for “tough” immigration mentions (conservative: 70%, liberal: 31%, p < .001).

Any changes in positions around events are typically minor and not statistically significant. The proportion of convinced climate mentions increases from

38% to 42% among conservative shows (p = .330), and from 92% to 95% on liberal shows (p = .149). The proportion of anti-gun mentions increases from

21% to 29% among conservative shows (p = .011), reaching statistical significance, and from 71% to 77% on liberal shows (p = .373). Finally, the proportion of tough-on-immigration mentions decreases slightly from 70% to 67% among

conservative shows (p = .420), and from 31% to 27% on liberal shows (p = .407).

Figure 3: Predicted proportion of mentions that are convinced about climate change, anti-gun, or tough on immigration—among mentions that take a stance on those topics—for a conservative and liberal radio show, in the weeks before and after an event. Bars are 95% bootstrap percentile CIs, blocked at the show level.

Supplemental Information section 9 shows that all findings are robust to a number of different specifications. First, I demonstrate that the findings are not affected by the decision to exclude news and public radio shows. Second, in the analyses above, shows are considered political if their episodes have an average estimated probability of being political that is greater than 50%. They are conservative if their average episode’s estimated probability of being so is

50% or more; liberal otherwise. Robustness checks show that varying these thresholds does not change the conclusions.

Discussion

In this paper, I investigate the effects of newsworthy events on the discussion of three political topics (climate, gun policy and immigration) on US talk radio.

In all cases studied, the events (hurricanes, mass shootings and family separation) result in a sharp increase in the total volume of discussion on relevant political topics, on both ideological sides. For instance, after a hurricane, both conservative and liberal shows double their coverage of climate change. Events rarely cause significant shifts, however, in the stances that speakers take. The exception is shootings, which caused a very modest increase in the proportion of anti-gun mentions on the conservative side. In other words, radio hosts typically manage to weave these events into their existing ideological narrative.

The findings contrast with what we know about selection bias in partisan media (Groeling, 2013). Partisan media do selectively underreport events that reflect negatively on parties and politicians on their end of the ideological spectrum—for instance, scandals or dips in approval (Groeling, 2008; Puglisi and Snyder, 2011). We even see that in the face of such events, partisan audiences on the “losing” side tend to tune out of political news altogether (Kim and Kim, 2021; Tyler et al., 2019). This paper shows that there exists another category of events: ones that have clear connections to politics, but that do not

directly reflect negatively or positively on specific parties or politicians. A fall in approval or a political scandal may be difficult to re-interpret, but the same is not true for a hurricane. We do not see selection bias in reporting on this second type of event; instead, partisan media spin such events to fit into a conservative or liberal story.

Implications: audience effects

The events in this study amplified the amount of messaging about related political topics, without changing much about the stances taken on those topics. In other words, after an event that is related to a political issue, talk radio listeners are much more likely to hear ideologically colored discussion of that issue. A possible consequence is that after major events, partisan segments of the public become even more sorted or polarized in their opinions.

A number of studies have found effects of partisan media on audience attitudes. Barker and Knight (2000) show that if Rush Limbaugh gives a lot of negative attention to certain topics or people, listeners develop more negative attitudes about them. In an experimental setting, Levendusky (2013) finds lasting, polarizing effects of like-minded news programs, whereas programs from the

“other side” have no effect. Partisan TV programs have persuasive effects on viewers who prefer Fox, but also on those who would rather have watched entertainment (de Benedictis-Kessner et al., 2019). In the real world, selective media exposure predicts future polarization of opinions about presidential candidates

(Stroud, 2010). And even incidental exposure to Fox News encourages viewers to vote Republican (Martin and Yurukoglu, 2017).

Not all evidence on partisan media effects points to persuasion: other studies find no effect (Arceneaux and Johnson, 2010), including from talk radio

(Yanovitzky and Cappella, 2001)—especially if participants are allowed to opt out of partisan content and into entertainment channels (Arceneaux and Johnson, 2013). Nonetheless, the literature as a whole suggests that there is a distinct possibility that partisan media can influence the issue opinions of their audiences (which are likely to be partisan themselves, e.g. Guess 2020). My findings show that events likely cause these audiences to hear even more ideologically motivated messaging on topics related to the event.

In the end, partisan media may have an effect on politics whether or not they really change audience attitudes. Republican politicians feel pressured by conservative outlets due to their alleged effect on Republican voters (Calmes,

2015; Hemmer, 2016, pp. 272–274). Talk radio is considered especially influential because it reaches people in rural states, including early primary states (Calmes,

2015). For instance, when House majority leader Eric Cantor decided to soften his position on immigration late in his 2014 campaign, Laura Ingraham and other talk show hosts took aim at him, likely playing a role in his primary defeat that year (Caldwell and Diamond, 2014). If politicians notice an increase in ideological messaging about a topic on partisan media, they may feel compelled to follow. Moreover, interviews with conservative journalists show that they sometimes try to target activists or policy-makers directly with their messaging

(Nadler et al., 2020).

Future work

The main finding of this study—that partisan media do not downplay the connections between events and politics, but rather re-interpret ideologically inconvenient events—hints at a number of open questions. First, the events investigated in this paper are chosen to be very newsworthy, meaning that not reporting on these events at all would have been a challenge for a political outlet.

With more local events, such as gay pride events or factory closures, selective reporting could be a viable strategy. As a next step, we could investigate whether this approach exists—and whether outlets that are local to the smaller event would still be forced to report on it, and connect it to political discussions.

Second, while partisan media react differently to events, it is likely that they also pay attention to different political topics as a baseline. My findings from pre-event weeks contain hints that a show’s average agenda is ideologically

informed—for example, liberal shows mention the climate more, and conservative shows mention gun policy more. Further analyses, involving a broader range of topics, could show whether liberal and conservative shows tend to pay attention to different baskets of issues.

Third, the results in this study point to a dilemma for partisan content makers who want to decrease the salience of an issue—i.e., lower its priority for citizens. In mainstream media, attention to political issues causes those issues to become more salient (e.g. Cohen 1963; King et al. 2017; McCombs and Shaw

1972). The results above suggest that when faced with a significant event, partisan journalists cannot avoid talking about relevant political issues. A brief investigation suggests that instead, they downplay the importance of the issue.

For example, when liberal talk show hosts discuss immigration, a common frame is that the US should use fewer resources to tackle this non-problem. The same is true for conservatives and climate change. Public opinion research confirms that issue priorities are strikingly different between Democrats and Republicans

(Gallup, 2020), despite the agenda-forcing power of events. Future research could investigate whether “importance-denying” content can indeed override the salience effect of mentioning an issue.

To conclude, this paper extends an emerging literature of large-n studies on the content of partisan media. I find that when events can be ideologically re-interpreted, these media choose to do so. This finding complements the knowledge that in the case of political scandals and misfortunes, partisan content producers instead try to ignore the inconvenient event. In both cases, events do not result in a shared narrative—instead, they give rise to two separate stories, with the potential to push partisans’ understanding of political topics further apart.

Declarations

Funding. Not applicable.

Conflicts of interest. Not applicable.

Availability of data and material. Results of the Mechanical Turk coding process, along with all show-week covariates (time, show name, estimated show ideology, etc.) will be made available in the study’s online repository. Raw audio and transcripts cannot be made available due to copyright.

Code availability. Code to replicate the paper’s findings from the published data will be made available in the study’s repository.

References

Ad Fontes Media. Interactive media bias chart 5.0, 2019. URL https://www.adfontesmedia.com/interactive-media-bias-chart/.

K. Arceneaux and M. Johnson. Does media fragmentation produce mass polarization? Selective exposure and a new era of minimal effects. APSA 2010 Annual Meeting Paper, 2010.

K. Arceneaux and M. Johnson. Changing minds or changing channels? Partisan news in an age of choice. University of Chicago Press, 2013.

D. Barker. Rushed to judgment: Talk radio, persuasion, and American political behavior. Columbia University Press, 2002.

D. Barker and K. Knight. Political talk radio and public opinion. Public Opinion Quarterly, 64(2):149–170, 2000.

M. A. Baum and T. Groeling. New media and the polarization of American political discourse. Political Communication, 25(4):345–365, 2008.

J. M. Berry and S. Sobieraj. The outrage industry: Political opinion media and the new incivility. Oxford University Press, 2013.

M. Bisgaard. Bias will find a way: Economic perceptions, attributions of blame, and partisan-motivated reasoning during crisis. The Journal of Politics, 77(3):849–860, 2015.

L. A. Caldwell and J. Diamond. 7 reasons Eric Cantor lost, 2014. URL https://www.cnn.com/2014/06/11/politics/why-eric-cantor-lost/.

J. Calmes. ’They don’t give a damn about governing’: Conservative media’s influence on the Republican Party. Technical report, Shorenstein Center on Media, Politics and Public Policy, Harvard University, 2015.

B. C. Cohen. Press and foreign policy. Princeton University Press, 1963.

J. de Benedictis-Kessner, M. A. Baum, A. Berinsky, T. Yamamoto, et al. Persuading the enemy: Estimating the persuasive effects of partisan media with the preference-incorporating choice and assignment design. American Political Science Review, 113(4):902–916, 2019.

B. J. Gaines, J. H. Kuklinski, P. J. Quirk, B. Peyton, and J. Verkuilen. Same facts, different interpretations: Partisan motivation and opinion on Iraq. The Journal of Politics, 69(4):957–974, 2007.

Gallup. Several issues tie as most important in 2020 election, 2020. URL https://news.gallup.com/poll/276932/several-issues-tie-important-2020-election.aspx.

Geophysical Fluid Dynamics Laboratory. Global warming and hurricanes: An overview of current research results, 2019. URL https://www.gfdl.noaa.gov/global-warming-and-hurricanes/. Accessed August 2021.

T. Groeling. Who’s the fairest of them all? An empirical test for partisan bias on ABC, CBS, NBC, and Fox News. Presidential Studies Quarterly, 38(4):631–657, 2008.

T. Groeling. Media bias by the numbers: Challenges and opportunities in the empirical study of partisan news. Annual Review of Political Science, 16, 2013.

A. M. Guess. Measure for measure: An experimental test of online political media exposure. Political Analysis, 23(1):59–75, 2015.

A. M. Guess. (Almost) everything in moderation: New evidence on Americans’ online media diets. 2020. Online preprint.

N. Hemmer. Messengers of the Right: Conservative media and the transformation of American politics. University of Pennsylvania Press, 2016.

J. W. Kim and E. Kim. Temporal selective exposure: How partisans choose when to follow politics. Working paper, 2021.

G. King, B. Schneer, and A. White. How the news media activate public expression and influence national agendas. Science, 358(6364):776–780, 2017.

V. Larcinese, R. Puglisi, and J. M. Snyder Jr. Partisan bias in economic news: Evidence on the agenda-setting behavior of US newspapers. Journal of Public Economics, 95(9-10):1178–1189, 2011.

M. S. Levendusky. Why do partisan media polarize viewers? American Journal of Political Science, 57(3):611–623, 2013.

M. Luca, D. Malhotra, and C. Poliquin. The impact of mass shootings on gun policy. Technical report, National Bureau of Economic Research, 2019.

G. J. Martin and A. Yurukoglu. Bias in cable news: Persuasion and polarization. American Economic Review, 107(9):2565–2599, 2017.

M. E. McCombs and D. L. Shaw. The agenda-setting function of mass media. Public Opinion Quarterly, 36(2):176–187, 1972.

A. Nadler, A. Bauer, and M. Konieczna. Conservative newswork: A report on the values and practices of online journalists on the right. Technical report, Columbia Journalism Review, 2020.

B. J. Newman and T. K. Hartman. Mass shootings and public support for gun control. British Journal of Political Science, 49(4):1527–1553, 2019.

Nielsen. Tops of 2018: Radio, 2018. URL https://www.nielsen.com/us/en/insights/news/2018/tops-of-2018-radio.html.

J. R. Petrocik. Issue ownership in presidential elections, with a 1980 case study. American Journal of Political Science, pages 825–850, 1996.

Pew Research Center. Where news audiences fit on the political spectrum, 2014. URL https://www.journalism.org/interactives/media-polarization/.

Pew Research Center. Audio and podcasting fact sheet, 2019. URL https://www.journalism.org/fact-sheet/audio-and-podcasting/.

Pew Research Center. Important issues in the 2020 election, 2020. URL https://www.pewresearch.org/politics/2020/08/13/important-issues-in-the-2020-election/. Accessed August 2021.

Pew Research Center. Social media use in 2021, 2021a. URL https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2021/04/PI_2021.04.07_Social-Media-Use_FINAL.pdf.

Pew Research Center. Audio and podcasting fact sheet, 2021b. URL https://www.journalism.org/fact-sheet/audio-and-podcasting/. Accessed August 2021.

M. Prior. The immensely inflated news audience: Assessing bias in self-reported news exposure. Public Opinion Quarterly, 73(1):130–143, 2009.

R. Puglisi. Being The New York Times: The political behaviour of a newspaper. The BE Journal of Economic Analysis & Policy, 11(1), 2011.

R. Puglisi and J. M. Snyder. Newspaper coverage of political scandals. The Journal of Politics, 73(3):931–950, 2011.

Quinnipiac University. Stop taking the kids, 66 percent of U.S. voters say, 2018. URL https://poll.qu.edu/images/polling/us/us06182018_uwsf18.pdf/.

Radio Advertising Bureau. Radio facts, 2018. URL http://www.rab.com/whyradio.cfm.

Radio Advertising Bureau. Radio listening by format, 2021. URL http://www.rab.com/whyradio.cfm. Accessed August 2021.

B. Schaffner, S. Ansolabehere, and S. Luks. CCES Common Content, 2018, 2019. URL https://doi.org/10.7910/DVN/ZSBZ7K.

H. B. Seeberg. How stable is political parties’ issue ownership? A cross-time, cross-national analysis. Political Studies, 65(2):475–492, 2017.

S. Sobieraj and J. M. Berry. From incivility to outrage: Political discourse in blogs, talk radio, and cable news. Political Communication, 28(1):19–41, 2011.

R. J. Spitzer. The politics of gun control. New York: Routledge, 2011.

N. J. Stroud. Polarization and partisan selective exposure. Journal of Communication, 60(3):556–576, 2010.

N. J. Stroud. Niche news: The politics of news choice. Oxford University Press on Demand, 2011.

C. R. Sunstein. #Republic: Divided democracy in the age of social media. Princeton University Press, 2018.

Talkers magazine. Top talk audiences, 2019. URL http://www.talkers.com/top-talk-audiences/. Accessed April 2019.

M. Tyler, J. Grimmer, and S. Iyengar. Partisan enclaves and information bazaars: Mapping selective exposure to online news, 2019. Working paper.

US Census Bureau. Voting and registration in the election of November 2018, 2019. URL https://www.census.gov/data/tables/time-series/demo/voting-and-registration/p20-583.html.

K. Usry, J. Husser, and A. Sparks. “I’m not a scientist, I just know what I see”: Hurricane experiences and skepticism about climate change in a red state. Working paper, 2019.

G. Visconti and K. Young. Do natural disasters change risk perceptions and policy preferences about climate change? Working paper, 2019.

I. Yanovitzky and J. N. Cappella. Effect of call-in political talk radio shows on their audiences: Evidence from a multi-wave panel analysis. International Journal of Public Opinion Research, 13(4):377–397, 2001.