
A HYPERLINK AND SENTIMENT ANALYSIS OF THE 2016 PRESIDENTIAL ELECTION: INTERMEDIA ISSUE AGENDA AND ATTRIBUTE AGENDA SETTING IN ONLINE CONTEXTS

Youngnyo Joa

A Dissertation

Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

August 2017

Committee:

Gi Woong Yun, Committee Co-Chair

Kate Magsamen-Conrad, Committee Co-Chair

Bill Albertini Graduate Faculty Representative

Sung-Yeon Park

© 2017

Youngnyo Joa

All Rights Reserved

ABSTRACT

Gi Woong Yun, Committee Co-Chair

Kate Magsamen-Conrad, Committee Co-Chair

This study investigated the intermedia agenda-setting dynamics among various media

Twitter accounts during the last seven weeks before the 2016 U.S. presidential election. Media

Twitter accounts included in analysis were those of print media, television networks, news

magazines, online partisan media, online non-partisan media, and political commentators. This

study applied the intermedia agenda-setting theory as the theoretical framework, and network

analysis and computer-assisted content analysis enabling hyperlink and sentiment analysis as the

methods. A total of 5,595,373 relationships built via Tweets among media Twitter accounts were

collected. After removal of irrelevant data, a total of 16,794 relationships were used for analysis.

The results showed that traditional media Twitter accounts, such as print media and

television networks, play roles in the Tweeting network by bridging isolated media Twitter

accounts, and are located in the center of networks, so that information reaches them quickly;

further, they are connected to other important accounts. Together with the changes in the

volume of Tweeting that signaled media interest, the set of popular URLs and keywords/word

pairs in Tweets also served as sensors that detected media Twitter accounts’ interest about that

time. The results also supported previous research findings that, as political events, the debates affect the production and dissemination patterns of news. Not only did the volume of Tweeting spike immediately after each debate, but the various types of hyperlinks and sentiment words used in Tweets increased as well. The number of negative sentiment words observed in the Tweeting network surpassed the number of positive sentiment words across different time points, and the gap between them decreased as the election approached. The use of positive and negative sentiment words differed across media Twitter account categories: online non-partisan media showed the highest use of positive sentiment words, while political commentators showed the highest use of negative sentiment words. With respect to sentiment contagion, this study found evidence of the influence of online media and partisanship on intermedia agenda-setting dynamics within Twitter. Lastly, distinct individual agenda setters affected negative sentiment contagion across multiple media categories, while no single media Twitter account stood out in positive sentiment contagion. The results advocated a multimethod approach to exploring the dynamics of intermedia agenda-setting and sentiment contagion within Twitter. Limitations and future research were addressed as well.

ACKNOWLEDGMENTS

I would like to express my special appreciation to my advisor Dr. Gi Woong Yun, for

encouraging me to grow as a better researcher and person. I have always felt grateful to have him as my advisor, as he is someone who thrives on riding the scholarly wave and facilitating

others to do so as well. I would also like to thank my co-advisor Dr. Kate Magsamen-Conrad.

Being on her research team has been an amazing experience; her guidance and support helped me

stay focused, especially during the tough times. I would also like to express my gratitude to my

committee members Dr. Sung-Yeon Park and Dr. Bill Albertini. I am grateful that Dr. Park was

willing to listen to me and offer help during my dissertation writing period as well as various

phases of my graduate studies when I frequently stumbled. She has been an inspiring figure to

me as a researcher, by being consistent and a caring colleague to others. I thank Dr. Albertini for

being supportive and closely engaged with my dissertation project. His thoughtful comments

and constructive suggestions helped me greatly improve my dissertation.

I also want to extend my deepest thanks to my academic mentors in Seoul, Dr. Sooyoung

Lee and Dr. Daiwon Hyun. The guidance and encouragement they have provided allowed me to

begin, continue and finish this journey. I was fortunate to have such tremendous mentors who

are passionate about what we do and believe in the changes that we can make. I will always be

grateful and appreciative of the marks they left on both my academic journey and on the path of

life. Thank you for always reminding me to stay positive and humble.

I would like to acknowledge the support offered by my colleagues in the graduate

program and my friends in Seoul who always made time for me and were there through the good

times and the bad. To Kisun, thank you for our walks together, even when it was inconvenient.

Lastly, this dissertation is dedicated to my parents who never stopped me from dreaming bigger

and my brother who always encouraged me to be me.

TABLE OF CONTENTS

Page

CHAPTER I. INTRODUCTION…………………………………………………………… 1

Agenda Setting in Online Contexts………………………………………………… 4

Intermedia Agenda Setting within Social Media…………………………… 6

Attribute Agenda on Twitter………………………………………………… 9

Network Analysis…………………………………………………………………… 11

Computer-Assisted Content Analysis……………………………………………… 12

Hyperlink Analysis…………………………………………………………. 12

Sentiment Analysis…………………………………………………………. 13

Time-Series Analysis………………………………………………………………. 14

Purposes of This Study……………………………………………………………… 15

Research Method…………………………………………………………………… 20

Organization of the Dissertation…………………………………………………… 21

CHAPTER II. LITERATURE REVIEW…………………………………………………... 22

Agenda-Setting in Online Contexts………………………………………………… 22

Intermedia Agenda-Setting on Twitter……………………………………………… 26

Social Media Effects…………………………………………………………. 30

Sentiment: the Agenda of Attributes………………………………………………… 33

Political Candidate Attributes ……………………………………………… 33

Attribute Dimensions……………………………………………………… 35

Sentiment in Online Contexts……………………………………………… 37

Network Analysis……………………………………………………………………. 39

The Concept of Network……………………………………………………. 40

Network Centrality…………………………………………………………... 41

Computer-Assisted Content Analysis………………………………………………. 43

Sentiment Analysis…………………………………………………………. 43

Agenda-Setting Examined by Sentiment Analysis…………………………… 46

Hyperlink Analysis in Agenda-Setting Studies……………………………… 50

The Structure of Hyperlinks in Twitter Feeds……………………………… 52

CHAPTER III. RESEARCH QUESTIONS AND HYPOTHESES………………………. 56

Twitter Network Change……………………………………………………………. 57

Cross-Linking Across Different Media Twitter Accounts…………………………... 60

Media Twitter Accounts’ Use of Sentiment………………………………………… 62

Key Media Twitter Accounts in Network…………………………………………… 64

The Temporal Dynamics of Sentiment……………………………………………... 65

Mapping Intermedia Agenda Setting Influence……………………………………… 69

CHAPTER IV. METHOD…………………………………………………………………… 71

Procedure……………………………………………………………….…………… 71

Sample………………………………………………………...……...……… 72

Data Acquisition……………………………………………………………. 74

Unit of Analysis……………………………………………….……………. 76

Data Analysis………………………………………………………………... 76

Message Level Content Analysis …………………………………... 77

Network Analysis …………………………………………………... 77

Hyperlink Analysis …………………………………………………... 77

Sentiment Analysis …………………………………………………... 78

Time-Series Analysis ………………………………………………... 78

Mapping Granger Causality Relationships ………………………… 80

Measurement………………………………………………………………………… 80

Types of Media Twitter Accounts…………………………………………… 80

Information in the Text Streams………………………...…….……………. 82

Hyperlink Salience…………………………………………………… 82

Sentiment Orientation and Salience ………………………………… 82

Network Centrality …………………………………………………… 82

Cues in the Tweeting Trends………………………………………………. 83

Political Events ……………………………………………………… 83

Media Interest ……………………………………………………… 83

Causal Relationship Between Two Time-Series Data Sets………………… 84

CHAPTER V. RESULTS…………………………………………………………………… 85

Descriptive Statistics………………………………………………………………… 85

Results at Time 1……………………………………………………………. 85

Results at Time 2……………………………………………………………. 92

Results at Time 3……………………………………………………………. 98

Results at Time 4……………………………………………………………. 104

Results at Time 5……………………………………………………………. 110

Results at Time 6……………………………………………………………. 116

Results at Time 7……………………………………………………………. 122

Results of Research Questions and Hypotheses……………………………………... 128

Analysis of Tweeting Network Change……………………………………… 128

The Debate Effects on News Tweets ……………………………… 128

Hyperlink Frequency on the Network ……………………………… 129

Sentiment Word Frequency on the Network ……………………… 131

Prevalent Sentiment on the Network ……………………………… 132

Time-Series of Three Indicators …………………………………… 134

Cross-Linking Across Different Media Twitter Accounts…………………… 136

Debate Effects on Cross-Linking Practices ………………………… 136

Crosslinking & Sentiment …………………………………………… 136

Media Twitter Accounts’ Use of Sentiment Words………………………… 139

Network Analysis of Media Twitter Accounts……………………………… 142

The Temporal Dynamics of Intermedia Agenda-Setting…………………… 150

Intermedia Agenda-Setting among Different Media Types ………… 150

Intermedia Agenda-Setting among Individual Media

Twitter Accounts …………………………………………………… 156

Print Media Twitter Accounts ……………………………………… 156

Television Network Twitter Accounts ……………………………… 159

Online Media Twitter Accounts …………………………………… 162

Political Commentator Twitter Accounts …………………………… 164

Agenda-Setters of Public Sentiment on the Twitter News Network………… 167

CHAPTER VI. DISCUSSION……………………………………………………………… 173

Media Twitter Accounts……………………………………………………………... 174

Media Interest………………………………………………………………………... 176

Debate Effects………………………………………………………………………... 177

Sentiment Words……………………………………………………………………... 180

Different Types of Agenda Setters…………………………………………………... 182

Sentiment Intermedia Agenda-Setting……………………………………………... 183

News Marketing via Twitter Accounts……………………………………………... 187

Limitations and Future Research…………………………………………………... 189

REFERENCES……………………………………………………………………………… 194

LIST OF FIGURES

Figure Page

1 Aggregated Time Series of Total Edges on Twitter by Day ...... 129

2 Aggregated Time Series of Total Hyperlinks on Twitter by Week ...... 130

3 Aggregated Time Series of Top 10 Hashtags and Top 10 Mentioned Accounts

by Week ...... 131

4 Aggregated Time Series of Sentiment Words Frequency on Twitter by Week ...... 132

5 Aggregated Time Series of Negative and Positive Sentiment Words by Day ...... 133

6 Aggregated Time Series of Positive and Negative Sentiment Word Percentages ..... 134

7 Aggregated Time Series of Total Edges, Hyperlinks, and Sentiment Words ...... 135

8 Aggregated Time Series of Tweets and Cross-links ...... 137

9a Aggregated Time Series of Percentage of Positive and Negative

Sentiment Words in Tweets by Day ...... 138

9b Aggregated Time Series of Percentage of Positive and Negative

Sentiment Words in Mentions (Cross-links) by Day ...... 138

10 Aggregated Time Series of Negative Sentiment Words by Media Type ...... 154

11 Aggregated Time Series of Positive Sentiment Words by Media Type ...... 155

12 Aggregated Time Series of Negative Sentiment Words of @usatoday2016,

@nytpolitics, @postpolitics, and @wsjpolitics ...... 158

13 Aggregated Time Series of Negative Sentiment Words of @cnnpolitics,

@foxnewspolitics, and @abcnewspolitics ...... 161

14 Aggregated Time Series of Negative Sentiment Words of @drudgereport,

@huffpostpol, and @thehill ...... 163

15 Aggregated Time Series of Negative Sentiment Words of @michellemalkin,

@ezraklein, and @natesilver538 ...... 166

16 Directed Granger Causality Graph: Negative Sentiment ...... 169

17 Directed Granger Causality Graph: Positive Sentiment ...... 172


LIST OF TABLES

Table Page

1 Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 1) ...... 87

2 Top URLs in Tweet (Time 1) ...... 88

3 Top 10 Word and Word-pair (Time 1) ...... 89

4 Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness,

and Eigenvector Centrality (Time 1) ...... 91

5 Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 2) ...... 93

6 Top URLs in Tweet (Time 2) ...... 94

7 Top 10 Word and Word-pair (Time 2) ...... 95

8 Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness,

and Eigenvector Centrality (Time 2) ...... 97

9 Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 3) ...... 99

10 Top URLs in Tweet (Time 3) ...... 100

11 Top 10 Word and Word-pair (Time 3) ...... 101

12 Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness,

and Eigenvector Centrality (Time 3) ...... 103

13 Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 4) ...... 105

14 Top URLs in Tweet (Time 4) ...... 106

15 Top 10 Word and Word-pair (Time 4) ...... 107

16 Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness,

and Eigenvector Centrality (Time 4) ...... 109

17 Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 5) ...... 111

18 Top URLs in Tweet (Time 5) ...... 112

19 Top 10 Word and Word-pair (Time 5) ...... 113

20 Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness,

and Eigenvector Centrality (Time 5) ...... 115

21 Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 6) ...... 117

22 Top URLs in Tweet (Time 6) ...... 118

23 Top 10 Word and Word-pair (Time 6) ...... 119

24 Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness,

and Eigenvector Centrality (Time 6) ...... 121

25 Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 7) ...... 123

26 Top URLs in Tweet (Time 7) ...... 124

27 Top 10 Word and Word-pair (Time 7) ...... 125

28 Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness,

and Eigenvector Centrality (Time 7) ...... 127

29 Media Category Differences in Sentiment Words Frequency ...... 141

30 Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness,

and Eigenvector Centrality: Time 1-7 ...... 143

31 Distribution of Media Twitter Account Category ...... 144

32 Top 10 Vertices, Ranked by Betweenness Centrality in Tweet: Time 1-7 ...... 147

33 Top 10 Vertices, Ranked by Closeness Centrality in Tweet: Time 1-7...... 148

34 Top 10 Vertices, Ranked by Eigenvector Centrality in Tweet: Time 1-7 ...... 149

35 Pairwise Granger Causality Test Results: Negative Sentiment

on Tweeting Networks ...... 151

36 Pairwise Granger Causality Test Results: Positive Sentiment

on Tweeting Networks ...... 152

37 Pairwise Granger Causality Test Results: Negative and Positive Sentiment

on Print Media Tweeting Networks ...... 157

38 Pairwise Granger Causality Test Results: Negative and Positive Sentiment

on Television Network Tweeting Networks ...... 160

39 Pairwise Granger Causality Test Results: Negative and Positive Sentiment

on Online Media Tweeting Networks ...... 162

40 Pairwise Granger Causality Test Results: Negative and Positive sentiment

on Political Commentator Tweeting Networks...... 165

41 Granger Analysis between Individual Media Twitter Accounts

and a Media Category: Negative Sentiment ...... 168

42 Granger Analysis between Individual Media Twitter Accounts

and a Media Category: Positive Sentiment ...... 171


CHAPTER I. INTRODUCTION

Barack Obama’s 2008 presidential campaign is considered the first campaign that

succeeded in employing social media as part of its campaign strategy, using podcasting, Twitter,

MySpace, Facebook, and YouTube (Pew Research Center, 2016a). The Obama campaign

utilized social media as a platform to promote the candidate and to aggregate resources,

including supporters. In the 2008 campaign, social media were used to maximize candidate

Obama's relatively limited resources using media campaign strategies optimized for social media; then, in his 2012 campaign, social media also were used to maintain the lead in the race by enhancing social media user engagement in the campaign. For example, the campaign interacted with supporters via fan pages, and voting reminders were sent out on Twitter as the election approached (Wortham, 2012). In contrast, opposing candidates' campaigns lagged behind in taking advantage of social media's potential as part of their campaign outlets.

Based on the past experiences of the 2008 and 2012 presidential elections, candidates running in the 2016 presidential election placed more effort into building their social media presence. Democratic candidate Hillary Clinton and Republican candidate Donald Trump selectively chose specific social media channels and established a presence on multiple outlets. Thus, candidates utilized social media as an information hub for their campaigns alongside their websites, appearing even to prioritize social media outreach. As of

February 2016, candidate Donald Trump sat atop both the Twitter and Facebook leaderboards, with more than 5 million followers and more than 5 million Facebook likes to his name. Candidate Hillary Clinton surpassed 5 million Twitter followers but had only 2.5 million Facebook likes, half of Trump's total (Crist, 2016).


Along with such drastic changes in political campaigns, social media has now become one of the major political information and news outlets for voters. Forty-four percent of U.S. adults reported having learned about the 2016 presidential election in the previous week from social media, outpacing both local and national print newspapers as of January 2016, and 24% said that they had turned to the social media posts of Donald Trump or Hillary Clinton for news and information about the election, more than those who turned to either of the candidates' websites or emails combined (15%) as of July 2016 (Pew Research Center, 2016b). On the other hand, news media have encountered opportunities and challenges provided by the new dynamics of news production.

Through the 2004 and 2008 election cycles, political campaigns started to embrace social media; thus, online political communication now seems to have phased into a mature status in the era of social media. The increasing diversity of outlets in the news media landscape has accelerated the competition among traditional and nontraditional media over leadership in setting media agendas in the networked online public sphere. For instance, most news media, including newspapers, television networks, news magazines, and web-only media, run online outlets such as websites and social media accounts alongside their traditional outlets (e.g., The

New York Times in print, nytimes.com, @nytimes). Social media in particular has been a nexus that interconnects such multiple outlets. For example, one common practice in social media is to add hyperlinks to external web pages, news articles, or other online material when creating a post. Such hyperlinking practices connect online content with one another and facilitate social interaction among various media and journalists (Williams, Trammell, Postelnicu, Landreville, &

Martin, 2005). Besides direct links, each social media networking service supports a unique function to connect links between postings at both levels, within the platform and across


different outlets. For instance, on Twitter, mention functionality (the “@” symbol) can be used

in text to refer to an individual or organization Twitter account. Using hashtags (the "#" symbol) or citing others' posts by retweeting is another way to build links between postings within Twitter.

Thanks to such hyperlinking practices, political information and news are transferable across different media types with diverse political ideologies within social media. It is not unusual to witness such tactics employed in news reporting or political deliberation on social media. Thirty-nine percent of social-media-using liberal Democrats and 34 percent of conservative Republicans claimed they use social media to post links to political stories or articles for others to read (Pew Research Center, 2012). In the context of a political campaign, links within social media postings encourage readers to find more information and, consequently, become more involved with the campaign (Pew Research Center, 2016b).

Along with politicians, news media and journalists have now become interested in journalism practices in social media. With the rise of social media's importance, which has brought more changes in the methods of news information production and consumption, news media have been challenged by the new dynamics of multiple co-existing news and information outlets. For example, social media have helped journalists and media reduce uncertainty about new information, others' strategic moves in news releases, and the lack of contact with the audience (Vonbun,

Königslöw, & Schoenbach, 2016). At present, traditional and nontraditional media use online outlets, including social media, as tools for disseminating news, marketing stories, establishing relationships with news consumers, and reporting. As the newspaper industry is in crisis, and less time and fewer resources are available for newsgathering, social media turns out to be a convenient and cheap platform for political journalism (Broersma & Graham, 2012). Television


news channels also have begun to harness the potential of social media as a tool for reaching out

to audiences.

Today, news is commonly distributed through social media postings such as Twitter feeds, and it provides the various news media, from traditional to nontraditional, with the capacity to stream content. Social media have taken a role as an alternative means of news

distribution to traditional news media. Moreover, a considerable number of news organizations

use software such as linkbots that automatically publish headlines on their social media accounts to generate traffic to their websites (Broersma & Graham, 2012). Building connections using social media postings between multiple outlets for their own news content has become one of the essential strategies employed by news organizations to keep their influence over media agendas online. Through such connections, agendas developed and set by media transfer across various media types from offline to online or vice versa. Consequently, such trends appearing in journalism practices in social media have advanced the intermedia agenda-setting theory

investigating media influence on other media. Social media, especially Twitter, has become a

place where intermedia agenda-setting dynamics among various media types can be observable.

Agenda Setting in Online Contexts

Since the 1950s, scholars have assessed the influence news media have on other media's

agenda-setting process (McCombs, 2014). This effect is known as intermedia agenda-setting

(McCombs, 2005). When it comes to social media effects on intermedia dynamics in news

production and consumption, the direction and nature of intermedia influence have been revisited

with newly discovered evidence. In previous intermedia agenda-setting studies, the direction of influence was presumed to be one-way, from elite media to less-elite media or


the audience. Under this understanding of intermedia agenda-setting, elite media, in which editors and journalists develop and determine agendas to report, hold influence not only over public opinion but also over less-elite media's agendas regarding what to think and how to view social/political issues. For example, past studies found that newspapers such as The New York Times and the

Washington Post set agendas for online bulletin boards (Lee, Lancendorfer, & Lee, 2005;

Roberts, Wanta, & Dzwo, 2002) and blogs (Meraz, 2009). However, drastic changes around news creation and distribution with the advent of online outlets, especially social media, have challenged the presumption of the elite news media's agenda-setting influence. Some studies found that media agendas in social media and online outlets such as blogs, functioning as a public sphere, not only follow and repeat traditional media agenda items but also reciprocally enter the mainstream media agenda and lead to certain ways of framing issues (e.g., Meraz, 2011b;

Sayre et al., 2010). Even though breaking news or investigative reporting from traditional media can create a huge impact on a presidential race and poll results, the interaction that occurs within social media can provide a wide range of perspectives.

Agendas on online and social media are often picked up by traditional news media, and it possibly results in the expanded news agenda pool. For example, presidential candidates produce quotable tweets, news updates, statements on their policy positions, and even personal stories on social media, and such content, which is typically differentiated from a campaign’s controlled content on social media, melds into a traditional news media’s agenda, such as through print media and television networks. In practice, journalists report using social media to find story leads, follow politicians, and compare social media content with other information subsidies such as campaign press releases (Parmelee, 2013). Moreover, regarding the nature of media influence, the scope of agendas set by news media is beyond political information. Public

sentiment toward political candidates, parties, and events often dominates the social media sphere, and news media work either as a source leading public sentiment or as a distributor reporting it.

Particularly, in terms of social/political events such as elections, social media has become the center of attention as a new public sphere with its subversive power. Studies indicated that social opinion streams on social media tend to rapidly increase after political events (Conway,

Kenski, & Wang, 2013) and convey distinct agendas compared with those of traditional media

(Neuman, Guggenheim, Jang, & Bae, 2014).

The complicated relationship between multiple media actors requires further examination, and the questions of who sets the media agenda in the era of social media remain.

In exploring this influence, research might need to consider the fact that social media follows a different logic in generating and distributing political coverage (Jungherr, 2014). Moreover, social media's role in news production may be affected by different factors, such as whether it is election time or whether outlets are run by owners with various media orientations and political ideologies. With a focused examination of large-scale social media data, more research is necessary to reveal the dynamics of intermedia agenda-setting between various media actors on social media (Conway, Kenski, & Wang, 2015; Neuman et al., 2014).

Intermedia Agenda Setting within Social Media

Agenda-setting theory suggests that media develop social agendas and determine their importance. The advancement of online news outlets, however, and of social media in particular, has shifted this process into a different environment. In this new environment, the abundance of information sources and channels for news distribution possibly reverses the direction of influence, not only from the public to the media but also among media themselves. In consideration of this


change, intermedia agenda-setting theory has been revisited, with the focus on the transfer of

issues across various media outlets (McCombs, 2005). Recent intermedia agenda-setting studies

have focused on revealing the direction of intermedia agenda-setting effects across different

media types from traditional media, such as newspapers and television network news programs,

to online media, such as blogs and online discussion boards (e.g., Lee et al., 2005; Lim, 2006;

Lopez-Escobar et al., 1998; Meraz, 2009; Roberts & McCombs, 1994; Roberts et al., 2002).

Questions regarding intermedia agenda-setting ask how media outlets influence each other. In past studies, traditional media's intermedia agenda-setting influence was evidenced by media agendas transferred from traditional media to less traditional media such as candidate websites and blogs: from newspaper coverage to television news programs

(Lopez-Escobar et al., 1998; Roberts & McCombs, 1994); from newspaper coverage to online news sites (Lim, 2006), online bulletin boards (Lee et al., 2005; Roberts et al., 2002), and blogs

(Meraz, 2009). In some cases, however, studies have confirmed that intermedia effects are

multidirectional, not unidirectional, in the current media environment, including social media as

intermedia agenda-setting agents. For example, newspapers may influence Twitter, but

television news influences blogs rather than Twitter (Cushion, Kilby, Thomas, Morani, &

Sambrook, 2016). Traditional news media coverage is most influential in political tweet generation, but tweets also pick up public opinion about agendas that traditional news media have overlooked (Parmelee, 2013), and social media convey agendas distinct from those of traditional media (Neuman et al., 2014). However, most empirical research thus far has failed to confirm such complicated intermedia agenda-setting dynamics between various media, as most empirical investigations use rather small media samples to focus on the relationship between

specific media types or specific media outlets, instead of attempting to investigate the news media landscape at a larger scale (Vonbun et al., 2016).

Historically, intermedia agenda setting has regularly been studied across various platforms, media systems, and geographical regions (Groshek & Groshek, 2013). Agenda-setting researchers have noticed that social media, as one of the newer online outlets on the scene, could influence intermedia agenda-setting processes. The current trend shows that news media actively maintain presences on social media services such as Twitter by running their official accounts (e.g., @cnnpolitics, @wsjpolitics). But very little research, to the best of our knowledge, has been conducted to examine the intermedia dynamics within social media, as compared with intermedia agenda-setting research across social media and other media outlets, such as the spillover from online news sites to social media accounts (Boczkowski & de Santos,

2007; Groshek, 2008).

However, thanks to the development of relatively new network and computer-assisted content analysis tools (Yun et al., 2016), social media has become a place in which to examine interaction among various media types using Twitter accounts to measure the direct effect that each media Twitter account has on another, in particular, in dominating the social media sphere with certain issues or perspectives. Examining intermedia agenda-setting dynamics within social media enables researchers to use different measures1 to determine media influence rather than depending on content analysis and issue rank-order comparisons across media, traditionally used in intermedia agenda-setting research. Thus, this study is primarily concerned with the

1 For example, there are network centrality measures (network analysis) and page rank/hub scores (hyperlinking analysis).

intermedia agenda-setting dynamics among various types of media Twitter accounts using network and computer-assisted content analysis.

Attribute Agenda on Twitter

Two levels of agenda setting have been identified as agenda-setting studies have accumulated. First-level agenda setting refers to the relationship between the perceived agenda in the public's mind and the media agenda set by news media, in terms of how the two sets of agendas correspond to each other (McCombs, 2005). The agendas in this case concern what to think about certain issues or events. On the other hand, the second level of agenda-setting refers not only to agendas but also to salient attributes framed with a particular issue or event

(McCombs, 2014). For example, political candidates are easily associated with certain attributes representing them in public perceptions, such as cognitive attributes (e.g., trustworthiness or electability) or affective attributes (e.g., positive or negative evaluations of the candidates). In this case, image-defining traits of candidates may be discussed on Twitter and then become attribute agendas; consequently, media can guide us in how to think of them. For example, a candidate's image can be defined with either negative attributes such as "weak" or "disqualified" or positive attributes such as "strong" or "qualified."

Traditionally, intermedia agenda-setting research has focused more on issues than on attributes (Heim, 2013). However, recent studies have begun to pay attention to attribute agendas rather than to topics or issues, as the news-related agendas on Twitter are likely to be

“affective news” built on subjective experiences, opinions, and emotions (Papacharissi &

Oliveira, 2012). News media accounts in social media disseminate not only information but also the sentiment in it, and political news on Twitter tends to be followed by reactions (e.g., feelings,

thoughts) to the news media coverage. Considering the nature of propagated information on social media, attribute agenda-setting needs further examination.

In the context of intermedia agenda-setting research, recent second-level agenda-setting studies have focused on possible factors involved in the process. Media partisanship, typically expressed in how news information is provided and in ideological bias, is especially likely to be associated with attribute agendas. Hyun and Moon (2016) revealed in their recent study that a partisan imbalance existed in the portrayal of candidates' attributes on Fox, CNN, and NBC news programs. In particular, Fox's "Special Report" provided nearly one-sided coverage favorable to the Republican candidate Romney rather than to the Democratic candidate

Obama in terms of affective attributes, while CNN's Anderson Cooper showed an imbalance in the opposite direction, giving more favorable coverage to candidate Obama. NBC's "Nightly News" remained relatively balanced. Such media partisanship affects not only public perceptions of each candidate's attributes but also other media's attribute agenda-setting. For example, there are studies of how attribute agenda-setting occurs across traditional media and online media with varied political ideologies, such as political blogs. Meraz (2011a) found that the attribute agendas of liberal and moderate blogs were strongly correlated with the elite news media's attribute agendas, while those of conservative blogs were not correlated as strongly.

Additionally, in recent years, intermedia attribute agenda-setting research has moved to another level in consideration of social media's capability as a big-data repository that enables direct measurement of media effects. Thanks to developments in computer-assisted content analysis tools, Twitter can provide an ideal environment for assessing the outcomes of corresponding political events or news distributed on social media, in particular, public sentiment


toward the events or news (Jang & Pasek, 2015). In fact, some studies found that Twitter users’

emotions and sentiments toward candidates are a better predictor of voters’ minds than a public

opinion poll (Groshek & Al-Rawi, 2013; Tumasjan, Sprenger, Sandner, & Welpe, 2010). In

Twitter accounts during the 2016 presidential campaign.

Network Analysis

According to the formal definition of "network," a network contains a set of objects (in mathematical terms, nodes) and a mapping or description of relations (edges) between the objects or nodes (Kadushin, 2012). Network analysis enables the visualization of key influencers and relationships on the network using network centrality measures (e.g., betweenness centrality, closeness centrality, and eigenvector centrality) and the calculation of structural characteristics of the network. For example, in the case of Twitter, each Twitter account communicates with others through Tweets, replies (to other Tweets), and mentions (of other Twitter accounts). Information about the relationships initiated by one node and built between two nodes can be used to measure each node's popularity or influence within the network and determine each node's role in the network.
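To make the three centrality measures concrete, the following minimal sketch computes them for a small, hypothetical network of media Twitter accounts. It uses the Python package networkx rather than the NodeXL workflow employed in this study, and the edge list is illustrative only.

```python
# A minimal sketch (not this study's NodeXL workflow): computing the three
# centrality measures for a small, hypothetical directed Tweeting network.
import networkx as nx

# Hypothetical directed edges: (account that tweeted, account it mentioned/replied to)
edges = [
    ("@cnnpolitics", "@nytimes"),
    ("@nytimes", "@thehill"),
    ("@thehill", "@cnnpolitics"),
    ("@nytimes", "@cnnpolitics"),
    ("@huffpostpol", "@cnnpolitics"),
    ("@drudgereport", "@foxnewspolitics"),
    ("@foxnewspolitics", "@nytimes"),
]
g = nx.DiGraph(edges)

betweenness = nx.betweenness_centrality(g)   # bridging otherwise separate accounts
closeness = nx.closeness_centrality(g)       # how quickly information can reach an account
eigenvector = nx.eigenvector_centrality(g, max_iter=1000)  # ties to other well-connected accounts

for account in sorted(g.nodes):
    print(f"{account:<18} betweenness={betweenness[account]:.3f} "
          f"closeness={closeness[account]:.3f} eigenvector={eigenvector[account]:.3f}")
```

In a sketch like this, accounts that are never mentioned receive near-zero eigenvector centrality, which is why the measure highlights accounts connected to other frequently referenced accounts.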

While early studies mainly adopted survey methods for gauging public agendas and content analysis for the media agenda, a network analysis can identify agenda-setters among individual media and visualize both the complex relationships between them on the network and the agendas prevalent in the network. The salience of issues and attributes found in the network provides information about agendas set by agenda setters. Moreover, researchers can determine the direction of a relationship with information about the starting point and the end point of


edges, and they can determine causality using accumulated time-series data of key elements of networks to map intermedia agenda-setting dynamics among nodes in the network. The number of intermedia agenda-setting studies exploring social media spheres has recently been increasing to test the

applicability and validity of this method (e.g., Vargo et al., 2016; Yun et al., 2016).

Computer-Assisted Content Analysis

Hyperlink Analysis

Hyperlink analysis is a method used to discern the interconnections between web pages

and blogs. With information about the hyperlink network, the social roles and connections of

actors in an online network can be analyzed (Kim, Barnett, & Park, 2010). Previous studies found differences in hyperlinking practices across different media types and media outlets with varied political ideologies. For example, in comparing candidates' websites and blogs, the websites were much more likely to link to promotional material, such as campaign merchandise sales, political advertisements, and fundraising in the form of donation requests, than were blogs; however, blogs were more likely to link to external media sites, even those with different ideologies (Williams et al., 2005). This indicates that campaign organizers treat media, blogs, and websites differently in utilizing hyperlinking. Hyperlink studies also have suggested that partisan affiliation strongly influences intermedia agenda-setting by setting source agendas, as media that share a political ideology are more prone to link to one another and share similar sources (Heim, 2013; Meraz, 2009; Meraz, 2011b).

Twitter employs three types of hyperlinking practices. First, Twitter accounts can refer to another account using the mention functionality (the “@” symbol). Second, Twitter users can build links to share topics or issues within Twitter by embedding hashtags (the “#” symbol) in

text or retweeting others' posts. Last, Twitter users can add direct links to external sources, such as URLs to news coverage on online news sites. By analyzing the frequency of each type of hyperlinking practice in Twitter feeds, this study analyzed the structure and characteristics of a given hyperlink network, with information about influencers and the strategies used to enhance agenda-setting influence. Thus, in this study, referred accounts, hashtags, and external links employed in

Twitter feeds were extracted and analyzed.
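As a simple illustration of how these three link types can be pulled from raw Tweet text, the sketch below uses Python regular expressions. The study itself relied on NodeXL's built-in extraction, so the patterns and the sample Tweet here are assumptions for illustration only.

```python
# Illustrative only: extracting the three hyperlink types from a hypothetical Tweet.
import re

tweet = ("Debate fact check via @nytpolitics #debatenight "
         "https://www.example.com/2016/debate-fact-check")

mentions = re.findall(r"@\w+", tweet)       # referred Twitter accounts
hashtags = re.findall(r"#\w+", tweet)       # topic links within Twitter
urls = re.findall(r"https?://\S+", tweet)   # direct links to external sources

print(mentions)  # ['@nytpolitics']
print(hashtags)  # ['#debatenight']
print(urls)      # ['https://www.example.com/2016/debate-fact-check']
```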

Sentiment Analysis

Sentiment analysis is the process of computationally identifying and categorizing opinions expressed in text-based source materials (Pang & Lee, 2008). Sentiment analysis can help agenda-setting researchers discover the affective states and emotions expressed in messages.

This analysis method has recently been commonly used for mining opinions on social media, including Twitter (Pak & Paroubek, 2010). By exploring the salience of positive and negative attributes associated with the agendas, researchers can investigate second-level agenda-setting influence, i.e., how to think about certain objects or issues, such as candidates or the political issues owned by candidates.
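As an illustration of the basic logic, the sketch below counts positive and negative sentiment words in a piece of text against small word lists. The actual analysis used NodeXL's sentiment word counts, so the word lists, tokenizer, and example sentence here are stand-ins rather than the study's lexicon.

```python
# Illustrative lexicon-based sentiment counting; the word lists are hypothetical
# stand-ins for the sentiment lexicon used by NodeXL in this study.
import re

POSITIVE = {"strong", "qualified", "win", "great", "trustworthy"}
NEGATIVE = {"weak", "disqualified", "lose", "scandal", "corrupt"}

def sentiment_counts(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "positive": sum(token in POSITIVE for token in tokens),
        "negative": sum(token in NEGATIVE for token in tokens),
    }

print(sentiment_counts("A strong, qualified candidate with no scandal in sight"))
# {'positive': 2, 'negative': 1}
```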

Attributes associated with agendas can be categorized into substantive and affective attributes. For instance, substantive aspects of the attribute agendas describing political candidates include the candidates' issue positions and political ideology on public issues, their perceived qualifications and experience, personality, biographical information, and integrity; affective attributes involve positive, negative, or neutral opinions or impressions about the candidates (Golan & Wanta, 2001; Kiousis, Mitrook, Wu, & Seltzer, 2006). Thanks to automated computational treatment of massive social media streaming data, sentiment analysis has been


used for tracking the flow of an attribute agenda such as public sentiment (for example, positive

and negative emotions associated with a politician) across multiple media outlets, including

social media (Tumasjan et al., 2010). Furthermore, some studies used sentiment analysis to measure Twitter users' positive and negative attitudes toward a particular political candidate (Ceron et al., 2014) or social/political events (Thelwall, Buckley, & Paltoglou, 2011). They then were able to examine how those perceptions transfer to actual practices, such as voting. This study likewise examined how sentiment analysis can be used to

investigate the dynamics of intermedia attribute agenda-setting across various media Twitter

accounts owned by different media types with different political ideologies.

Time-Series Analysis

Additionally, time-series analysis has been commonly used for agenda-setting research.

The ability of sentiment analysis to detect events of interest in the real world and to scan large quantities of online data has especially encouraged new types of media and communication research pursuing topics such as time-series analysis of public sentiment in social media (e.g., Ceron, Curini, Iacus, & Porro, 2014; Groshek & Al-Rawi, 2013; Guo, Vargo, Pan,

Ding, & Ishwar, 2016; Tumasjan et al., 2010). In such studies, public sentiment in online contexts was tracked across different time points. Because emerging important events are typically signaled by sharp increases in the frequency of relevant terms (Thelwall et al., 2011), time-series data in social media are especially useful for analyzing phenomena that change over time, along with particular points of interest during the event that triggered emotions or public discourse (Diakopoulos & Shamma, 2010).


Further, time-series analysis has long been recognized and utilized as a robust method for

determining causation in agenda-setting studies, including intermedia agenda-setting effect

research (Meraz, 2011b). Two types of approaches have been used to test the temporal order of

intermedia agenda-setting among media Twitter accounts: statistical and graphical time-series.

First, the statistical approach uses the Granger causality test. The test results can determine

whether one time-series data set predicts another. A measure x is said to "Granger

cause” a measure y, if y can be better predicted from past values of x and y together, than from past values of y alone (Freeman, 1983). Recent studies choosing time-series analysis adopted this method to examine the causal relationship between sets of time-series data (e.g., Meraz,

2011b; Neuman et al., 2014; Vargo et al., 2016), as it is argued that Granger causality can provide a more accurate result than other time-series methods (Meraz, 2011b). Second, the graphical time-series approach constructs a graph of the volume of topic-relevant tweeting over time (Thelwall, 2014). In this study, both types of time-series approaches were employed to identify trends in media interest and individual events of interest, namely the vice-presidential and presidential debates, during the time monitored.
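As a minimal sketch of the statistical approach, the example below runs a pairwise Granger causality test on two synthetic daily sentiment series using the statsmodels package. The numbers are simulated and are not the collected data, and the lag length is an arbitrary illustration rather than the setting used in the study.

```python
# Illustrative pairwise Granger causality test on synthetic daily sentiment counts;
# not this study's data or its exact test settings.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.normal(size=49)                                   # 49 days, roughly seven weeks, for series x
y = 0.8 * np.roll(x, 1) + rng.normal(scale=0.3, size=49)  # y partly follows x with a one-day lag

# grangercausalitytests checks whether the second column helps predict the first:
# here, whether past values of x "Granger cause" y.
data = pd.DataFrame({"y": y, "x": x})
results = grangercausalitytests(data[["y", "x"]], maxlag=2)
```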

Purposes of This Study

Increasing diversity in news media outlets has accelerated competition among various media over leadership in setting media agendas in social media. The intermedia agenda setting theory subsequently has been examined to comprehend the new nature of interconnected media accounts in the network and interactive practices such as hyperlinking. Moreover, characteristics of social media may or may not affect the type and volume of agendas being distributed through the network. For instance, affective news (Papacharissi & Oliveira, 2012) or Tweets regarding

affective attributes and sentiment may be the postings that are most frequently propagated, because social media postings tend to comprise commentaries on traditional media news coverage when it comes to reporting on political events. Adding to this, conditions concerning the dynamics of intermedia agenda-setting subsequently require further examination.

This study therefore includes three objectives. First, it explores the conditions under which the volume of the Tweeting network increased during the last seven weeks before the 2016 U.S. presidential election, with consideration of the occurrence of political events and the characteristics of propagated messages or issues on the network. Indicators such as the volume of the network, hyperlink frequency, and sentiment word frequency were used for analysis. Second, this study investigates the intermedia agenda-setting dynamics among various media accounts with varied political ideologies within Twitter. The media Twitter accounts included in analysis are Twitter accounts of print media, television networks, news magazines, online partisan media, online non-partisan media, and political commentators. Last, this study examines the temporal dynamics of sentiment contagion across different media Twitter accounts during the seven weeks. In order to answer these questions, this study applies intermedia agenda-setting theory as the theoretical framework and a series of network analyses and computer-assisted content analyses enabling hyperlink analyses and sentiment analyses as the methods. Based on these purposes, the following research questions and hypotheses are posited. Chapter 3 provides a detailed rationale for the research questions and hypotheses.

RQ 1. To what extent did the Tweeting network change during the last seven weeks before

the 2016 U.S. presidential election?


H1: The volume of the Tweeting network increased during the last seven weeks before the

2016 U.S. presidential election.

RQ 2. To what extent did the presidential debates affect the Tweeting network?

H2a: There was a greater number of edges in the network in the days following the U.S. presidential candidates’ debates than prior to the debates.

H2b: There was a greater number of hyperlinks in the network in the days following the U.S. presidential candidates’ debates than prior to the debates.

H2c: The frequencies of different types of hyperlinks in the network (domains, hashtags, and mentioned accounts) changed together in the days following the U.S. presidential candidates' debates compared with the days prior to the debates.

H2d: There was a greater number of sentiment words in the network in the days following the U.S. presidential candidates’ debates than prior to the debates.

H2e: Negative sentiment words in the network surpassed positive sentiment words in the network as the 2016 U.S. election approached.

H2f: The three indicators measuring the Twitter network increased during the last seven weeks before the 2016 U.S. presidential election.

RQ 3. To what extent did the presidential debates affect the cross-linking practices of media Twitter accounts?

H3a: The number of Twitter accounts’ cross-linking practices increased during

the last seven weeks before the 2016 U.S. presidential election.


H3b: There was a greater number of cross-links in the network in the days following the U.S. presidential candidates' debates than prior to the debates.

H3c: A greater proportion of cross-linking edges than of non-cross-linking edges in the network included positive and negative sentiment words.

RQ 4. Do different types of media Twitter accounts use sentiment words distinctively?

H4a: Media Twitter accounts that belong to traditional media (print media, television networks, and news magazines) used a greater amount of positive sentiment words than did nontraditional media (online partisan media, online nonpartisan media, and political commentators).

H4b: Media Twitter accounts that belong to nontraditional media (online partisan media, online nonpartisan media, and political commentators) used a greater amount of negative sentiment words than did traditional media (print media, television networks, and news magazines).

RQ 5. To what extent were traditional media Twitter accounts successful at occupying agenda-setter positions within the network?

H5a: The proportion of traditional media Twitter accounts ranked in the top 10 centrality measures was greater than the proportion of nontraditional media

Twitter accounts.


RQ 6. To what extent did each type of media Twitter account exert an intermedia

attribute agenda-setting impact on other types of media Twitter accounts?

H6a: Nontraditional media Twitter accounts were more likely to Granger cause2

traditional media Twitter accounts’ use of negative sentiment words than the reverse

relationship.

H6b: Traditional media Twitter accounts were more likely to Granger cause

nontraditional media Twitter accounts’ use of positive sentiment words than the

reverse relationship.

RQ 7. To what extent did each media Twitter account of print media, television networks,

news magazines, online media, and political commentators with different political

ideologies exert an intermedia attribute agenda-setting impact on other media Twitter

accounts included in the same media category?

RQ 8. Which media group or individual accounts with political ideologies were most

likely to set the attribute agenda, via positive and negative sentiment, for all media

Twitter accounts at large?

2 The results of the Granger causality tests can determine whether one time-series data set predicts another. A measure x is said to "Granger cause" a measure y, if y can be better predicted from past values of x and y together, than from past values of y alone (Freeman, 1983).


Research Method

This study employs a multimethod approach to explore the dynamics of intermedia

agenda-setting among the media Twitter accounts during the seven weeks preceding the 2016

U.S. presidential election. First, a network analysis method is used to identify key agenda-setters among various media Twitter accounts using three types of network centrality measures: betweenness centrality, closeness centrality, and eigenvector centrality. Second, computer-assisted content analysis software, NodeXL, is employed to analyze hyperlinks and sentiment words in Tweets. NodeXL performed content analysis to capture three types of hyperlinks (links to media coverage, hashtags, and mentioned Twitter accounts) and sentiment words.

When this process was completed, the databases for hyperlink analysis and sentiment analysis were ready for further analysis. NodeXL, a Microsoft Excel application add-in, retrieved the most recent 18,000 Twitter feeds from the point of data collection, which included search keywords (each media Twitter account name) and provided a list of the most propagated words, word-pairs, hyperlinks, and information on resources to which they link.

Political news Tweets from media Twitter accounts were collected through a sample of five daily newspapers (@nytimes, @postpolitics, @usatoday2016, @latimespolitics, and

@wsjpolitics); four television news networks (@cnnpolitics, @abcpolitics, @nbcpolitics, and

@foxnewspolitics); two news magazines (@newyorker and @); three online partisan media (@huffpostpol, @drudgereport, and @salon_politics); two online non-partisan media

(@thehill and @buzzfeedpol); three political commentators (@natesilver538, @ezraklein, and

@michellemalkin); and two presidential candidate campaigns (@hillaryclinton and

@realdonaldtrump). The set of media Twitter accounts were official Twitter accounts owned by


media having diverse political ideologies (left-leaning or right-leaning) and particularly being

run for political news and information. Each media Twitter account name has been selected for

NodeXL search keywords, and NodeXL search results using search keywords returned 21

databases archived on a daily basis. After the seven-week data collection, the incomplete data

sets (e.g., @cbspolitics, @rollcall, @politico, @glennbeck) were excluded. A total of 5,595,373

relationships built via Tweets among media Twitter accounts were collected. After removal of irrelevant data (e.g., individual non-media Twitter accounts' Tweets referring to media Twitter accounts), a total of 16,794 relationships detected in Tweets, mentions, and replies to other posts generated by media Twitter accounts were used for analysis.
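A hypothetical sketch of this filtering step is shown below: it keeps only the relationships whose source is one of the sampled media Twitter accounts. The file name, column names, and CSV layout are assumptions for illustration and do not reflect the NodeXL export format actually used in this study.

```python
# Hypothetical filtering sketch: keep only edges generated by the sampled media accounts.
# The file name and column names ("source", "target") are assumptions, not the NodeXL export.
import pandas as pd

MEDIA_ACCOUNTS = {
    "@nytimes", "@postpolitics", "@usatoday2016", "@latimespolitics", "@wsjpolitics",
    "@cnnpolitics", "@abcpolitics", "@nbcpolitics", "@foxnewspolitics",
    "@newyorker", "@huffpostpol", "@drudgereport", "@salon_politics",
    "@thehill", "@buzzfeedpol", "@natesilver538", "@ezraklein", "@michellemalkin",
    "@hillaryclinton", "@realdonaldtrump",
}

edges = pd.read_csv("edges.csv")                       # one row per Tweet/mention/reply relationship
is_media_source = edges["source"].str.lower().isin(MEDIA_ACCOUNTS)
media_edges = edges[is_media_source]                   # relationships generated by media Twitter accounts
media_edges.to_csv("media_edges.csv", index=False)
print(len(edges), "->", len(media_edges))
```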

Organization of the Dissertation

This dissertation comprises six chapters. Chapter I introduces general information of intermedia agenda-setting, network analysis, and types of computer-assisted content analysis.

Chapter II includes the literature review and theoretical background. This chapter concentrates on previous literature about intermedia issue agendas and attribute agenda-setting on social media and three types of analysis. Chapter III presents research questions and hypotheses, along with definitions of major terms. Chapter IV presents the research method. This chapter explains network analysis, computer-assisted content analysis, and two types of time-series analysis methods used in this study as well as information regarding how the sample and data have been collected. Chapter V contains the results from data analysis and answers the research questions and hypotheses. Chapter VI discusses the results and interprets the findings. This chapter also suggests directions for future research and limitations of this study. The end of this dissertation includes references.


CHAPTER II. LITERATURE REVIEW

The primary goal of this study is to investigate how different types of media Twitter accounts with varied political ideologies attempt to dominate the Twitter-sphere and influence other media Twitter accounts’ agendas. Media Twitter accounts owned by print media, television networks, online non-partisan and partisan media, news magazines, and political commentators are situated in the environment where they can selectively choose the content and strategically post it in consideration of other media’s strategies. Social media has facilitated such interactions among media actors—media Twitter accounts—and has become a place allowing easy observations of intermedia agenda-setting influence among various media Twitter accounts through time-series analysis of salient agendas, including issues, attributes, and hyperlinks within the network.

The following section offers an overview of the intermedia attribute agenda-setting theory and related concepts. In addition to discussing the drastic changes confronting news media, this section covers social media’s effects on news production and distribution. Finally, this chapter reviews the different types of network analyses and computer-assisted content analyses employed in this study to analyze social media streams.

Agenda-Setting in Online Contexts

Mass media engages in social and political reality construction processes by selecting prominent elements, and those selected elements influence our perceptions of reality. This is agenda-setting's key proposition. Media are one of the most significant windows through which audiences perceive reality. Public judgments of an issue's importance follow prominence in the media agenda (McLeod, Kosicki, & McLeod, 2009). Thus, how media represent certain


issues or objects can cause an audience to have biased views of society. The media’s gate-

keeping role in determining what, among a variety of issues in society, is worth reporting allows

elite media to work as the key agenda-setters of society.

Media can determine how often (for example, number of news coverages) and how long

(the length or portion of particular coverage within a program slot or print page) an issue will be

covered in the news. Those salience-related cues can signal to the audience what is important

and what is not. Issues with high news value may be conveyed to the public sphere, while issues

with low news value typically get dropped from the list of issues considered for public

deliberation. In this context, finding evidence of such media effects, namely that the salience of the

observed media agenda can transfer to the salience of that particular agenda in the public’s mind

has been a central concern of agenda-setting research.

When agenda-setting influences the public, salient cues from the media become tools to organize their own agendas and decide which issues are most important (McCombs, 2014). With this proposition, researchers have made efforts to reveal the conditions of the most prominent agenda’s origin. Traditionally, agenda-setting influence has been examined by comparing the full array of competing issues across the media agenda and the public agenda. This involved asking individuals to rank a series of issues to find evidence of any correspondence between those individual rankings and the emphasis on those issues in the news media; checking the degree of correspondence between the media agenda and the public agenda in the shifting salience of a single issue over time; or conducting experiments to measure the salience of a single issue for individuals before and after exposure to news programs (McCombs, 2014).

As mentioned above, agenda-setting research typically examines how the media agenda influences the public agenda by tracing the transfer of issue salience between them. In online


contexts, however, agenda-setting research has employed new approaches enabling an analysis

of large-scale data. The big set of aggregate data online has now become a repository of interactions in the agenda-building process and between the media and the public agendas.

Electronically generated cues, such as time stamps or each user’s online profile with personal information, signal an agenda-setting influence. For example, time-lag research using Twitter data looks into agenda-setting influence by tracing the volume of generated deliberation around a particular event, such as a terrorist attack or political candidates’ debates (e.g., Conway et al.,

2013; Meraz, 2011b). The goal of such studies is to investigate causality and collect the evidence of the agenda-setting influence of news media on the public agenda. When the changes in the public agenda occur after the air-date or publication of the news, it is presumed that agenda-setting effects exist (McCombs, 2014).

Another type of new agenda-setting research aims at finding agenda-setters as influencers

within the network of news and information sources (e.g., Meraz, 2009; Yun et al., 2016). With

the introduction of network analysis, finding the actors who set the agendas has been of central

importance in revealing how agenda-setting occurs online. Along with computer-assisted

content analysis, massive data from social interaction among different actors on online networks

provide researchers with cues about network structures and evidence of agenda-setting taking place among various actors. Adding to this, hyperlinks are one of the characteristics that make the online sphere a unique virtual space for public deliberation. This technology enables agenda-setting researchers to trace the distribution process of media messages and its influence on other actors, including the public and competing media. For example, previous studies have looked at how traditional media maintain internal and external linking practices within their news content online (e.g., Dimitrova, Connolly-Ahern, Williams, Kaid, & Reid, 2003; Meraz, 2009).


These new types of agenda-setting research in online contexts have revealed that agenda-

setting faces drastic changes when online and social media enter into the news cycle. Messages from various sources transfer across different types of media outlets and are referenced in various ways by other media involved in content-creation and -distribution online. The new media environment creates multiple gates through which information passes to the public regarding the number of sources of information (for example, the Internet, cable television, and

social media) and the speed with which information is transmitted (shortened time-lag) (cite).

The list of news information providers and distributors has expanded beyond the traditional

media, and exclusive news is hard to find due to the accelerated speed of news circulation

between various types of media. The expansion of media outlets has created new opportunities

especially for nontraditional news media rooted online with fewer resources, and the interactive nature of social media makes it hard for elite journalists to control the news (Meraz, 2009).

Not only the number of outlets but also the types of news genres that contain political information have increased (e.g., news shows formatted with citizen journalism and talk shows)

(Williams & Carpini, 2004). Agenda-setting attempts no longer have to be conveyed only by hard, informative news, and subjective feelings or emotions using a peripheral approach can make the media agenda more appealing to the public.

Such agenda-setting research focusing on online contexts shares the idea that any media outlet can be involved in news production and distribution, and that a sole agenda-setter or a static set of agenda-setters no longer exists. The interaction among various media, especially on

Twitter, is better explained through interactive processes for media agenda-building led by multiple actors than by a limited number of agenda-setters. In this context, intermedia agenda-


setting refers to the occurrences of one media’s agenda being influenced by another media.

Intermedia agenda-setting in online contexts has brought more attention to this area of research.

Intermedia Agenda-Setting on Twitter

The scope of agenda-setting research has expanded to include many other channels of communication and political advertising and conversations, including social media (McCombs,

Shaw, & Weaver, 2014). Typically, newspapers and major television news channels have been recognized as established, traditional mainstream media or elite media likely to be powerful agenda-setters, while online outlets, such as websites and blogs, have been categorized as nontraditional news media or online media (Meraz, 2011b; Olteanu, Castillo, Diakopoulos, &

Aberer, 2015). The traditional media in their gatekeeping roles have operated through the interactions between the political elites and journalists. This point of interaction constitutes the gate through which information passes to the public. Prior intermedia agenda-setting studies have found the influence of such traditional media on less traditional media, including candidate websites and blogs. Agendas set by newspapers transfer to television news broadcasts (Lopez-

Escobar et al., 1998; Roberts & McCombs, 1994), online news sites (Lim, 2006), online bulletin board conversations (Lee, Lancendorfer, & Lee, 2005; Roberts et al., 2002), and blogs (Meraz,

2009). The agenda-setting influence of the major television networks has also been found, with media agendas transferring to candidates' campaign blogs (Sweetser et al., 2008). The findings indicated that traditional media's agenda-setting influence extends to online outlets as well.

However, neither the diverse sources of news nor the Internet spell the end of the agenda-setting influence (Coleman & McCombs, 2007).


The new media environment has challenged this system in various ways. This changed

media environment has created new opportunities and pitfalls for the nontraditional media and

public to enter and interpret traditional journalism and the political world (Williams & Carpini,

2004). For example, previous studies found evidence of blogs shaping the media agenda

(Sweetser et al., 2008) and social media conveying agendas distinct from traditional media

(Neuman et al., 2014). Such an altered relationship between traditional media agendas and other

media agendas can be explained with changes in journalism practices. Journalists nowadays use

social media to trace story leads about politicians or political events (Parmelee,

2013). Social media provides places for underdog media to reverse the current dynamics as they

can easily observe networked online spheres, and then they try to emulate a competitor's

behavior as soon as it is proven successful (Vliegenthart & Walgrave, 2008). It is also a place to

gauge audience reactions to the news content and encourage news audiences’ engagement.

In many cases, studies found that intermedia agenda-setting effects are rather

multidirectional or bi-directional in online contexts and in general (Cushion et al., 2016). For

example, previous studies found that newspapers and television networks can have intermedia

agenda-setting influences on one another (Sweetser et al., 2008); political advertising and

campaign websites influence television and newspaper agendas and vice versa (Ku, Kaid, &

Pfau, 2003; Lopez-Escobar, McCombs, & Lennon, 1998). More importantly, these findings

suggest a need for further investigations on the variations in intermedia agenda-setting influence

generated in different contexts to reveal its complexity.

Scholars have made substantial contributions to the intermedia agenda-setting theory as it applies to more advanced technologies, such as websites, blogs, and online discussion boards

(Meraz, 2011; Sweetser, Golan, & Wanta, 2008). However, these studies usually look at a


handful of media organizations such as the New York Times (NYT) and CNN (Golan, 2006;

Vonbun et al., 2016). The results consistently revealed the roles of key agenda-setter media in

the U.S. media landscape, such as NYT and Post, two nationally circulated daily newspapers, but they did not address the full range of media and issue types that exist in today’s political and social environment (cite). However, with the explosion of media choices that audiences have today, it stands to reason that different media types will have different intermedia agenda-setting effects under varied circumstances (Vargo et al., 2016).

So far, theoretical considerations and empirical findings could not provide detailed explanations of intermedia agenda-setting between media types as most empirical investigations with small media samples mainly focused on the relationship between specific media types or specific media outlets (Vonbun et al., 2016). Such a trend resulted in a lack of understanding of the current news media landscape. In this aspect, social media, especially Twitter where accounts of all media types are connected to each other, provides researchers with resources and quantifiable and quality data to investigate the intermedia agenda-setting dynamics among different types of media in more depth.

In terms of media types, the contrast between traditional and nontraditional media has long been used to distinguish media. In general, some media are traditional and have roots in offline media, while others are nontraditional and have always been hosted online (Banning &

Sweetser, 2007). But nontraditional media do not imply a lack of leadership in setting media agendas. Some nontraditional media are nationally circulated and presumed to have a much

greater effect on society even though they originated on online platforms (McCombs,

2005; Meraz, 2011). For example, BuzzFeed generates hundreds of millions of page views a day

and releases original reporting. It is now being recognized as a reputable media source (Tandoc


& Jenkins, 2015). Such online news media are inherently different from traditional media, but

also distinct from other types of online media, such as online partisan media.

Beyond the contrast between traditional and nontraditional media, the nontraditional

news media category rooted online was challenged by emerging online news media as it cannot

fully embrace the different types of online media. In this situation, the role of partisanship in the

media landscape has received careful scholarly attention. Partisan news production has

increased during the past two decades (Stroud, 2011), and news audiences have shifted away

from more traditional news sources to more partisan ones (Hollander, 2008). Moreover, partisan

media among online news media outlets have played a particularly important role in U.S. politics

(Hollander, 2008; Stroud, 2011). In the early 2000s, the number of political news blogs or

websites exploded (Vargo et al., 2016), and these online media tended to be partisan in nature

and often expressed partisan political viewpoints (Meraz, 2011b). Thus, online partisan media were differentiated from online non-partisan media that have begun to draw large audiences, such as BuzzFeed, based on the latter's apparent lack of partisanship (Beckett, 2015).

To date, there is competing evidence suggesting that traditional, non-partisan media are still the significant agenda-setters (Lee, 2007; Sweetser et al., 2008), or traditional media tend to

follow the agendas of partisan media due to the audience shift (Meraz, 2011b). However, at

present, the limited research completed does not offer adequate empirical evidence to support

either direction of intermedia agenda-setting. With the advent of such new news

media with varied platform orientations, media types have yet to be further investigated in an

intermedia agenda-setting analysis. Thus, the present study examined a representative sample of

not only traditional and nontraditional media but also partisan and non-partisan online media on

Twitter.

Social Media Effects

The agenda-setting processes involving social media extend beyond the relationship between the media and the public, which has been at the center of agenda-setting research over the years (McCombs, 2014). For example, social media have turned out to be an alternative outlet for distributing traditional news (Broersma & Graham, 2012), and social media accounts have become important media actors interacting with, and influencing, other traditional media. News organizations use social media not only to disseminate news but also to strengthen brand awareness and promote stories through announcements that stimulate users to watch or listen, or with links that direct them to their regular platforms (for example, using URLs in social media posts) (cite). Reporters who in the past did their jobs with relative anonymity are now encouraged to be active and visible in the social network to establish a reputation and to talk back to audiences.

Furthermore, journalists use social media to find story leads (Parmelee, 2013), to listen (Spinner,

2015), and to establish a more collaborative relationship with their audiences and sources (Fahy

& Nisbet, 2011). Increasingly, news organizations monitor the conversations on social media and sometimes prompt continuing news coverage (McCombs, 2014; Wallsten, 2007).

At this point, Twitter is one of the increasingly significant social media platforms for news media and the public. With more than 300 million active users and over 500 million tweets sent per day (Statista, 2016), Twitter is unique among social media platforms in that it has become an intermediary for linking anonymous users to one another as well as a space to break and contextualize news (Hermida, 2010). Thanks to Twitter's capabilities such as hashtags,


hyperlinks, retweeting, mentioning, and direct links to external sources, news production and

consumption activities became visible and traceable. For example, hyperlinks are connective

tools that allow media Twitter accounts to direct other media and readers to their online websites

where their own media agendas are demonstrated (Freelon, 2014). Those functions on Twitter

are understood as a form of information sharing, as a system for navigating shared conversations,

and as a way to engage with other media and the public.

Nonetheless, the two competing camps have continued to debate social media’s effects in intermedia agenda-setting dynamics in the current news environment. When it comes to the initial media effects on setting elite media agendas, the direction of influence was presumably considered as one-way from traditional mainstream media to non-traditional media. Despite the fact that social media has become an additional channel for political communication, the role of the traditional media is still important in setting initial agendas. The information available on online news websites is still mediated and influenced by editors and political elites of traditional media (Ceron, 2015). In particular, political news still flows from the top (the political elites) with little room for alternative viewpoints (Ananny, 2014). When it comes to significant social or political events, such as disease outbreaks, the main influencers of the Twitter feeds are mainstream news outlets (Newman, 2016).

The unique nature of the Internet may also contribute to the reason that traditional news media continue to hold an intermedia agenda-setting influence online (Meraz, 2011b). For instance, the characteristics of social media networks allowing news readers access through links possibly favor the traditional media Twitter accounts with their large audiences. When the traditional media Twitter accounts reference a source similar to themselves, other traditional media Twitter accounts with a similar political ideology or orientation easily maintain their


status in the network. Furthermore, as the “birds of a feather” argument suggests, traditional and

other types of news media existing in a network of connected accounts are now more motivated

to behave similarly (McPherson, Smith-Lovin, & Cook, 2001).

However, the drastic changes around news creation and distribution that came with the

advent of online outlets, especially social media, have challenged the presumption of news

media's agenda-setting influence. Empowered social media contribute to democratizing effects on journalism practices (Broersma & Graham, 2012) and expand the media agenda pool by picking up agendas that traditional media overlooked. For example, Vargo, Basilaia, and Shaw (2015) noted in their case study of issue agendas on Twitter that blogs offered breaking news coverage on events, and consequently traditional media appeared to have less of an influence on the Twitter agenda.

On the other hand, considering that intermedia agenda-setting depends on news production and resources (Shoemaker & Reese, 1996), the affordances of Twitter enable the exposure of nontraditional political voices (Castells, 2013), channeling them to reach a wider audience.

For example, a media Twitter account having a small number of followers can reach a major media Twitter account’s audience by tagging or mentioning the major account in the Tweets.

Different publishing cycles of different media types can also be considered as a factor exerting a certain intermedia agenda-setting influence (Vliegenthart & Walgrave, 2008). For instance, most newspapers are published in the morning and main broadcast television news air in the evening

(Vonbun et al., 2016), while online media can publish news without considering deadlines and publishing schedules. Based on such media characteristics it can be presumed that online news media will precede newspapers and broadcasts. However, the possibility that newspapers and broadcasts use their Twitter accounts to promote their stories before their regular news release


cannot be overlooked as media Twitter accounts may take on a role to compensate for their

limitations to becoming agenda-setters. As shown above, such intermedia agenda-setting

attempts or influence on Twitter have appeared to be complicated, warranting further investigation.

Sentiment: The Agenda of Attributes

Beyond the agenda of objects (e.g., issues or topics), there is another level of agenda- setting to consider. During the past two decades, researchers began to pay attention to how

individuals approached objects through the media. When the objects are certain individuals

(e.g., political candidates) or events (e.g., climate change and nuclear power), the traits or

attributes associated with them work in indirect ways to influence how media audiences perceive

them. They are peripheral paths for audiences to recognize objects and often generate greater

influence on shaping the audiences’ perceptions. For example, voters can describe or associate

certain candidates using words such as, “trust,” “reformer,” “leadership,” “patriotism,”

“compassion,” “winner/electability,” “on the attack,” “has a plan/vision,” and “vague,” rather

than perceiving the actual data on performance (Golan & Wanta, 2001). Based on this

understanding, an attribute is "a generic term encompassing the full range of properties and

traits that characterize an object” (McCombs, 2014, p.41); researchers have considered

traditional media’s influence on setting those attributes as media agendas. This is second-level agenda-setting or an attribute agenda-setting influence.

Political Candidate Attributes

Similar to issue agendas, attributes may be framed and formed into agendas by traditional media or campaigns, and then promoted and transferred to other media outlets and/or the public. Attribute agenda-setting has explained how attributes associated with the objects framed by traditional media or campaigns influence the media audience's perception of those objects. In other words, attributes attached to objects can be another kind of agenda and will possibly transfer from one media outlet to another and to the public. Thus, framed attributes are characteristics and properties that fill out the picture of each object (McCombs, 2014); this second level of agenda-setting claims that mass media transmit not only issue salience but also the salience of specific attributes of objects (Ghanem, 1997; McCombs et al., 2014).

In the theoretical distinction between agendas of objects and agendas of attributes, the first and second levels of agenda-setting become more evident in an election setting. Candidates competing are a set of objects whose salience among the public can be influenced by news coverage and political advertising (McCombs, 2014). In particular, the political media coverage of traditional media, such as television news and newspapers, still plays a critical role as agenda- setters. They not only influence what the audience should think about (first-level agenda- setting), but also how they should think of that object (second-level agenda-setting). Attribute agendas provide audiences with powerful tools to process information or perceive certain objects. The tone of candidate descriptions shared among individuals can be a significant factor in political persons’ perceptions (Kinder, 1978).

The way of describing candidates now requires even more attention because, depending on the descriptions, media coverage of items allows the public to personalize politics and dramatize the political contest (Jungherr, 2014). When a candidate is represented by his or her background information associated with positive or negative traits in media coverage, those attributes regarding that candidate easily become the paths that influence the ways the public thinks of him or her. For example, second-level agenda-setting researchers have investigated

numerous attributes that can be associated with each object, a candidate. Those attribute agendas observed in media include candidates' issue positions, political ideology on public issues, perceived candidate qualifications and experience, personality, biographical information, and integrity; the attributes also include positive, negative, or neutral sentiment about them

(Golan & Wanta, 2001; Kiousis et al., 2006; McCombs et al., 2000). In those studies, the researchers focused on the high degree of correspondence between the attribute agendas of mass media and the voters’ attribute agenda for each of the candidates by looking at the tone of the voters’ descriptions following the tone of mass media describing the political candidates.

Recent intermedia attribute agenda-setting research also looked into how framed candidate images and attributes of candidates are influenced by other media when online political information outlets join the news cycle (e.g., Heim, 2013; Lim, 2011). The findings indicated that traditional news media and campaign sites still have the attribute agenda-setting influence over blogs in terms of political campaigns (Heim, 2013) and even traditional mainstream channels share similar political ideologies and influence each other’s attribute agendas in their online sites (Lim, 2011). As compared to traditional and intermedia agenda- setting influence research, second-level, intermedia agenda-setting in online contexts still needs more research.

Attribute Dimensions

The public image of political candidates is one of the research fields where the idea of attribute agenda-setting has steadily gained ground (Golan & Wanta, 2001). With the candidates as the objects, the attributes are the various traits that define the images of the candidates in the media and among the voters (McCombs et al., 2000). The public tends to have a picture of each


candidate that is composed of descriptions or images. When respondents were given a list of

nine descriptions, for example, most Clinton supporters associated her with the phrases “well-

informed” and “willing to work with people she disagrees with” (64% each). On the other hand,

Trump supporters (42%) were far more likely than Clinton supporters to associate the term

"extreme" with their candidate (Pew Research Center, 2016a). While increasing candidates' name recognition focuses on the salience of each candidate in news coverage (agendas of objects), the candidates' image-building is more about the agendas of attributes.

In general, two types of attribute dimensions in media coverage were found effective in shaping public perceptions of candidates: substantive and affective attributes (Ghanem, 1997;

Golan & Wanta, 2001; Kiousis, Bantimaroudis, & Ban, 1999; McCombs, 2014). First, the substantive attribute dimension can include both information about candidate-related issues (e.g., taxes, campaign reform, campaign analysis, foreign policy, moral issues, education, the candidate's past and race) and information about candidates' personal characteristics (Golan &

Wanta, 2001). Substantive aspects of the attribute agenda described political candidates

(candidate images) and have been categorized as (1) candidate issue positions and political ideology on public issues, (2) perceived candidate qualifications and experience, (3) personality,

(4) biographical information, and (5) integrity (Kiousis et al., 1999; Kiousis et al., 2006;

McCombs et al., 2000).

The ideology and issue positions category includes statements in which candidates were portrayed as "left-wing," "right-wing," or "center," and all statements that express the position of candidates on specific issues (McCombs, Llamas, Lopez-Escobar, & Rey, 1997). The qualifications and experience category includes all statements about the competency of the candidates for office, their previous experience in government posts, their biographical details


(McCombs et al., 1997), and their educational background such as “informed,”

“knowledgeable,” and “intelligent” (Kiousis et al., 1999). The personality category includes all

personal traits and features of the character of the candidates, including their moral standing,

charisma, natural intelligence, courage, ambition, independence, and persona, such as “funny” or

"kind" (Kiousis et al., 1999; McCombs et al., 1997). The integrity category includes portrayals of candidates as corrupt or

not corrupt (McCombs et al., 1997); how honest a candidate has been and whether he has been

consistent in word and deed (Kiousis et al., 2006). Indeed, the findings suggest that personality

traits are essential candidate attributes (McCombs et al., 1997).

On the other hand, affective attributes involve subjective opinions about the candidates

(Golan & Wanta, 2001). Such affective aspects of the attribute agenda or the tone of candidate

descriptions have been categorized as positive, negative, or neutral (Kiousis et al., 1999; Kiousis

et al., 2006; McCombs et al., 2000). Studies focused on whether media coverage with positive

sentiment would influence how positively individuals viewed candidates or whether affective

attributes would influence how appealing individuals perceived a candidate to be. In this

process, the role of sentiment is critical because it is one of the factors influencing the public's perceptions of political figures (Kinder, 1978). Indeed, the findings suggest a positive

correlation between news coverage and voters’ affective descriptions of the candidates (Kiousis

et al., 2006; McCombs et al., 1997) and personal traits had a strong impact on affective salience

in individuals’ perceptions (Kiousis et al., 1999).

Sentiment in Online Contexts

On Twitter and other types of social media, not only does information about presidential candidates circulate, but so do Twitter users' attitudes, emotions, and feelings as part of their opinions toward the candidates. Opinions matter a great deal in politics (Pang, Lee, &

Vaithyanathan, 2002); communication research has long emphasized how the reception of political and societal events can differ, depending on the conversations about news in people’s immediate social contexts (Maireder & Ausserhofer, 2013). Based on this understanding, one of the focuses of this study is whether negative and positive sentiment toward presidential candidates in online contexts can be attribute agendas, which possibly transfer across different media.

Opinions and sentiment expressed online can be valuable data to understand humans’ social and political behavior. For instance, users respond to an external event (Thelwall et al.,

2011), reveal their positive or negative emotions, which illustrate their preferences or perceptions toward a particular object, such as a party preference (Tumasjan et al., 2010), and co-generate public opinion online (Ceron et al., 2014) by posting Tweets. The stored attributes (e.g., intentions, judgments, attitudes) within postings made online can be used to investigate posters' behavior (e.g., user information related to behavior) and to investigate opinions on a mentioned target by analyzing judgments (e.g., product reviews).

Despite the fact that social media can be a place where large data sets on public opinion are obtainable at relatively low cost and effort, the challenge is to select which methods are most appropriate to analyze such data (Ceron et al., 2014). At this point, sentiment analysis has been adopted in communication research only in recent years. Sentiment, the focus of sentiment analysis, is similar to the concept of attitude in social psychology (Liu, 2015).

In general, sentiment detected and archived online includes people’s positive or negative opinions, appraisals, attitudes, and emotions toward entities and their attributes expressed in written text using everyday language (Liu, 2015). For example, during the 2012 U.S.


presidential campaign, on Barack Obama's official Facebook page, "Obama" was regularly

mentioned with positive descriptors such as ‘‘vote,’’ ‘‘good,’’ and ‘‘love,’’ while ‘‘Romney’’

was strongly linked to negative descriptors such as ‘‘lies,’’ ‘‘liar,’’ and ‘‘rich.’’ When

considering the framing of ‘‘Romney’’ on Mitt Romney’s official Facebook page, ‘‘Romney’’

was strongly linked to ‘‘jobs’’ as well as ‘‘plan,’’ but these associations were stronger than those

to Barack Obama (Groshek & Al-Rawi, 2013).

Sentiment, however, can be extracted from even the non-topic-related expressions of

individuals’ postings, such as humorous appeals or analytical skills and citing outside sources

(Thelwall et al., 2010). Moreover, apart from topics and opinions about specific issues,

sentiment on social media also allows us to study the participants. For instance, participants'

sentiment profiles, based on a set of positive and negative opinions expressed in the users’ posts,

reflect another level of sentiment (Tumasjan et al., 2010). Additionally, social media

participants may post messages and interact with one another through Like buttons or retweeting,

and adding comments or hashtags that involve agreeing and disagreeing with others’ sentiments

expressed in their messages (Liu, 2015).

Network Analysis

Network analysis basically enables the visualization of key influencers and relationships on a given network using network centrality measures (e.g., betweenness centrality, closeness centrality, eigenvector centrality) and calculating structural characteristics of that network.

While early studies mainly adopted survey methods for gauging the public agenda and content analysis to understand the media agenda, a network analysis can identify agenda-setters among media actors and visualize the complex relationships between actors on the network and agendas

prevalent in the network. The salience of issues and attributes found in the network provides information about what agenda setters put in place. Additionally, because interactions between actors on social media can be tracked down easily, this method has become another way to examine intermedia agenda-setting effects, adding to such traditional intermedia agenda-setting research methods as content analysis and analysis of the rank order of agendas transferring across different media. Since diffusion is a matter of tracing the flow of a new idea, product, or practice, network-based methods are clearly the best way to assess the role of personal influence in the chain of events (Kadushin, 2012). Indeed, social network analysis can not only describe a communication network in detail, but it may also explain and predict how the network structure affects the attitudes and behavioral intentions of individuals and/or groups residing in the network (Yun et al., 2016).

The Concept of Network

According to the formal definition, a “network” contains a set of objects (in mathematical terms, nodes) and a mapping, or description, of relations between the objects or nodes

(Kadushin, 2012). The simplest network contains two objects, 1 and 2, and one relationship that links them. Edges are relationships built between those two objects on the network. The relationship can be either directional or non-directional. For example, a vote is a directional edge initiated by a voter toward a candidate. In the case of Twitter, each account is located within a shared topic or issue network as an individual node, and they communicate with one another through various relationships such as Tweets, replies, and mentions, indicating individual

“edges.” Information about the relationships, such as who initiated the relationship or how many


times the relationship is repeated in the network, can be used to measure each node’s popularity,

or influence, and determine each node’s social role within the network (Kadushin, 2012).
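To make this concrete, the directed relationships described above can be represented as a graph in software. The following is a minimal, hypothetical sketch in Python using the networkx library; the account names and edges are invented for illustration and are not data from this study.

import networkx as nx

# Hypothetical media Twitter accounts (nodes) and mention/reply relationships (directed edges).
# Each edge points from the account that initiated the relationship to the account it referenced.
relationships = [
    ("@nytimes", "@CNN"),
    ("@HuffPostPol", "@nytimes"),
    ("@HuffPostPol", "@CNN"),
    ("@TheBlaze_Pol", "@HuffPostPol"),
]

g = nx.DiGraph()
g.add_edges_from(relationships)

# In-degree counts how many times an account was referenced by others (a simple popularity cue).
for account, indegree in g.in_degree():
    print(account, indegree)

Counting in-bound edges in this way is only a starting point; the centrality measures discussed in the next subsection weight connections more carefully.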

Network researchers have long been quantifying the importance of nodes in information

distribution networks. For example, in this study, nodes included in network analysis are various

media Twitter accounts owned by print media, television networks, online non-partisan media,

online partisan media, news magazines, and political commentators. Network analysis enables

researchers to identify different types of relationships by revealing the characteristics of the

communication taking place in the network. For instance, intermedia agenda-setting influence

among these media Twitter accounts can be determined using information about the direction of

relationships, such as information about the starting point and end point of edges, and causality,

using accumulated time-series data of key elements of networks. The number of intermedia

agenda-setting studies exploring social media spheres has increased recently to test the

applicability and validity of this method (e.g., Vargo et al., 2016; Yun et al., 2016).

Network Centrality

Popularity can be broken down into several different ideas, all under the general

umbrella of “centrality” (Freeman, 1979). Centrality refers to the number of links that a specific

node has within the network and captures how “important” (central) a node is within the network

(Hansen, Shneiderman, & Smith, 2011). Several centrality measures can also be calculated

through social network analysis, including betweenness centrality, closeness centrality, and

eigenvector centrality. Each centrality measure can be thought of, respectively, as a bridge score, a distance score, and an influence score (Hansen, Shneiderman, & Smith, 2011).


Betweenness centrality measures how often a given node lies on the shortest path between two other nodes. Nodes with high betweenness may have considerable influence over information flow between others, because they are bridges between different groups within the network. The elimination of such nodes with high betweenness centrality can cause disruption of communications within the network (Freeman, 1979; Kadushin, 2012).

On the other hand, closeness centrality measures the average distance from a node to other nodes in the network. In general, researchers using network analysis use normalized closeness centrality, so that a higher closeness value indicates a shorter average distance to other nodes.

Thus, nodes with high closeness centrality might have better access to information at other nodes or a more direct influence on other nodes due to their central position within the network (Yun et al., 2016). In a social network, for instance, a media Twitter account with a lower mean distance to other accounts might find that their news or information reaches others in the community more quickly than the news of some accounts with a higher mean distance.

Lastly, eigenvector centrality implies that not all nodes in the network are equivalent and some are more valuable, even if they have the same number of links to other sources. A person with few connections could have a very high eigenvector centrality if those few connections were much more valuable than other connections. Eigenvector centrality allows for connections to have a variable value, so that connecting to some nodes has more benefit than connecting to others (Hansen, Shneiderman, & Smith, 2011). For instance, a media Twitter account with high eigenvector centrality means that the account is important within the network because it is linked to other important accounts, such as accounts with more resources or audiences. The links to important accounts mean endorsements from them, and this can be another type of influence a node can have within a social network. Thus, a node receiving many links does not necessarily

have a high eigenvector centrality, and a node with high eigenvector centrality is not necessarily highly linked. The node might have few links, but those linkers are important.
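As an illustration only, the three centrality scores described above can be computed on a toy network with the following Python sketch using networkx; the accounts and links are invented, and this sketch is not the procedure used in this study.

import networkx as nx

# A small, invented directed mention network among hypothetical media Twitter accounts.
edges = [
    ("@nytimes", "@CNN"), ("@CNN", "@politico"), ("@politico", "@nytimes"),
    ("@nytimes", "@politico"), ("@HuffPostPol", "@nytimes"), ("@TheBlaze_Pol", "@CNN"),
]
g = nx.DiGraph(edges)

betweenness = nx.betweenness_centrality(g)                   # "bridge" score
closeness = nx.closeness_centrality(g)                       # "distance" score (normalized)
eigenvector = nx.eigenvector_centrality(g, max_iter=1000)    # "influence" score

for node in g.nodes():
    print(f"{node}: betweenness={betweenness[node]:.2f}, "
          f"closeness={closeness[node]:.2f}, eigenvector={eigenvector[node]:.2f}")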

Computer-Assisted Content Analysis

In communication research, content analysis has been used primarily for the collection of data from text sources. Advances in computer-assisted content analysis have greatly increased the power of collecting and analyzing data. Typically, a computer-assisted content analysis is conducted through prior manipulation and processing, such as parsing and assessing the content, to provide it with a basic structure for additional and more complex searches (Vargo et al.,

2014). In this study, computer-assisted content analysis was used to extract meaningful information, such as various types of hyperlinks and sentiment words, from the given databases, that is, the results of the Tweet searches for media Twitter accounts. When the prior manipulation process was completed, the data sets for hyperlink analysis and sentiment analysis were ready for further analysis.
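As an illustration of the kind of prior manipulation described here, the sketch below parses a single invented Tweet into the fields used later for hyperlink and sentiment analysis. It is a simplified, hypothetical example in Python, not the actual processing pipeline of this study.

import re

# A hypothetical Tweet from a media Twitter account, used only for illustration.
tweet = "Debate recap: @HuffPostPol calls the night a disaster https://t.co/abc123 #debate"

# Extract hyperlinks, mentions, and hashtags; keep the remaining words for later sentiment coding.
urls = re.findall(r"https?://\S+", tweet)
mentions = re.findall(r"@\w+", tweet)
hashtags = re.findall(r"#\w+", tweet)
words = re.findall(r"[a-z']+", re.sub(r"https?://\S+|[@#]\w+", " ", tweet.lower()))

record = {"urls": urls, "mentions": mentions, "hashtags": hashtags, "words": words}
print(record)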

Sentiment Analysis

Sentiment analysis is the process of computationally identifying and categorizing opinions expressed in different types of text-based source materials. With the rise of machine learning methods in natural language processing and text analysis, and technological advancement in retrieving large sets of information, sentiment analysis began to spread around early 2000 (Pang & Lee, 2008). Basic tasks of sentiment analysis are computational treatment of opinion and sentiment in text. Typically, subjective elements expressed through text by a writer include the underlying feelings, attitudes, evaluations, or emotions associated with an opinion, each of which can fall within three general categories: positive, negative, or neutral (Liu, 2015).


The two most common methods of sentiment analysis are machine learning and lexicon-

based methods. First, machine learning enables an automated classification of the overall

sentiment through the use of algorithms. Thus, the volume of data that can be treated is virtually limitless. There are two required steps to conduct standard machine learning. First, human coders manually code/label a set of texts as positive, negative, or neutral. The judgments are then used to train a machine-learning model or algorithm to detect and classify features that are associated with positive and negative categories (Vargo et al., 2014). In the second stage, the automated statistical analysis provided by an algorithm or a model extrapolates the final result to the entire population of posts or documents (Ceron et al., 2014). Topic-sentiment modeling or topic-based text categorization is commonly employed in this approach and usually follows a two-step process: message retrieval to

identify messages relating to the topic, and opinion estimation to determine what sentiment orientation these messages express about the topic (Mei, Ling, Wondra, Su, & Zhai, 2007; Pang et

al., 2002). The units from texts used for analysis are typically sets of words, word pairs, and word

triples. An extraction tool such as LightSIDE can be used to find the units in text (Groshek & Al-

Rawi, 2013).
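For illustration, the two-stage machine-learning workflow described above might look like the following sketch, which uses the scikit-learn library in Python. The handful of labeled texts stands in for human coders' judgments and is invented; this approach is sketched here only for contrast, since the present study used a lexicon-based method.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: a tiny, hand-labeled training set (invented examples standing in for coder judgments).
texts = [
    "great debate performance, a clear winner",
    "voters love the new plan",
    "a disastrous night, total failure",
    "lies and more lies from the candidate",
]
labels = ["positive", "positive", "negative", "negative"]

# Words and word pairs become features for a simple classifier trained on the coded examples.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Stage 2: the trained model extrapolates labels to the remaining, uncoded posts.
print(model.predict(["what a winner tonight", "another failure for the campaign"]))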

The second approach is lexicon-based sentiment analysis, which was used in this study.

The uniqueness of this method lies in the use of lists of words that are pre-coded for sentiment orientation, and sometimes also for strength of sentiment (Thelwall et al., 2011). A list of words and phrases that are subject to sentiment analysis is called a sentiment lexicon (Liu, 2015).

Numerous algorithms have been designed to utilize such lexicons and have been tested in analyses.

As a result, a variety of lexicon-based (or dictionary-based) sentiment analysis tools exist today

(e.g., SentiStrength, LIWC). For example, the Linguistic Inquiry and Word Count (LIWC) software program can assess emotional, cognitive, and structural components of text samples using

a validated dictionary. This software estimates the degree to which a text contains words that belong to empirically pre-defined psychological and structural categories, such as sentiment orientation and positive or negative emotions. The validation of this approach is conducted through a series of human coders' reviews and tests on established wordlists over a long period of time (Vargo et al., 2014). With this method, each positive or negative expression (a word or phrase) is assigned a positive sentiment orientation (SO) value or a negative SO value. The common method to classify a document's sentiment orientation is counting and combining the SO values of all sentiment expressions in the document (Liu, 2015). The advantage of this approach is the possibility of implementing a fully automated analysis (Ceron et al., 2014).

Typically, sentiment words include information about the orientation of the sentiment: positive, negative, or neutral. Sentiment orientation is also referred to as polarity, so a neutral orientation usually means the absence of much sentiment or no sentiment at all (Liu, 2015). After detecting and analyzing those sentiment words, sentiment analysis calculates the overall polarity or sentiment orientation to identify the overall sentiment orientation of a posting/document (Pang

& Lee, 2004). During the calculation, if particular texts or comments include cues for sentiment identification, they are considered to express more explicit sentiments. These cues include such words as "great" or "worse," or particular intensifiers (e.g., very, so, extremely, dreadfully, really, awfully, terribly) and diminishers (e.g., slightly, pretty, a little bit, a bit, somewhat, barely) (Liu,

2015; Park, Ko, Kim, Liu, & Song, 2011).
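The counting-and-combining logic described above can be sketched as follows in Python; the miniature lexicon, intensity weights, and example sentences are invented for illustration and are far smaller and cruder than validated dictionaries such as LIWC or SentiStrength.

# Invented miniature lexicon: word -> sentiment orientation (SO) value.
LEXICON = {"great": 1, "love": 1, "winner": 1, "worse": -1, "liar": -1, "disaster": -1}
INTENSIFIERS = {"very", "so", "extremely", "really"}   # strengthen the following sentiment word
DIMINISHERS = {"slightly", "somewhat", "barely"}       # weaken the following sentiment word

def sentiment_orientation(text):
    """Sum the SO values of all sentiment words, adjusting for intensity cues."""
    words = text.lower().split()
    score = 0.0
    for i, word in enumerate(words):
        if word in LEXICON:
            weight = 1.0
            if i > 0 and words[i - 1] in INTENSIFIERS:
                weight = 2.0
            elif i > 0 and words[i - 1] in DIMINISHERS:
                weight = 0.5
            score += weight * LEXICON[word]
    return score

print(sentiment_orientation("the debate was a disaster and the candidate is a liar"))  # negative
print(sentiment_orientation("voters really love the plan"))                            # positive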

Each method has its strengths and weaknesses. Lexicon-based sentiment analysis allows for a fully automated analysis of large-scale databases, while machine learning needs human coders and involves developing the algorithm and training the machine for analysis. Even though the lexicon or dictionary continues to be developed through multiple tests and revising


processes, critical information could be missed when the words used are not listed in the dictionary.

For example, language used online or hidden emotions in messages are easily missed when

using lexicon-based sentiment analysis. In that case, sentiment analysis using machine learning

may be a better option. Programming machine learning requires knowledge about coding

language and computational skills; however, it allows researchers to use more advanced search

queries or algorithms to extract more information of interest from the datasets.

Agenda-Setting Examined by Sentiment Analysis

The ability of sentiment analysis to automatically detect events of interest in the real

world and to scan large quantities of online data has encouraged new types of media and

communication research to pursue topics such as sentiment-based time-series analyses of online

agendas (e.g., Groshek & Al-Rawi, 2013; Guo et al., 2016); forecasting analysis by analyzing sentiment in social media (e.g., Ceron et al., 2013; Tumasjan et al., 2010); sentiment analysis in comments as outcomes of interactive communication (e.g., Park et al., 2011); and creating sentiment profiles with sentiment patterns (e.g., Conway et al., 2013). Each of the types has its own merits in applying sentiment analysis in communication research. However, the shared purpose across all of the types of study is to monitor public opinion in social media.

Firstly, some communication researchers pay attention to the fact that sentiment associated with particular social or political agendas can be traced across different points in time.

Recently, researchers have used sentiment analysis methods to help analyze time-series data, as sentiment can serve as a marker that singles out the data of interest. Since emerging important events are typically signaled by sharp increases in the frequency of relevant terms

(Thelwall et al., 2010), time-series data in social media are useful to analyze phenomena that


change over time. Several previous studies have analyzed online communication from a time-

series perspective, revealing the evolution of topics over time (e.g., Diakopoulos & Shamma,

2010; Thelwall et al., 2010; Thelwall & Prabowo, 2007). Examples include a time-series

analysis of blogs identifying emerging public fears about science (Thelwall & Prabowo, 2007)

and an analysis to discover customer concerns with particular brands or companies (Thelwall et

al., 2010). When applied in these contexts, time-series analysis can detect particular points of

interest during the event that triggered emotions or public discourse (Diakopoulos & Shamma,

2010). Furthermore, some sentiment-based time-series analyses of online topics revealed that sentiment changes can be used to predict or associate with offline phenomena (Diakopoulos &

Shamma, 2010). For instance, studies found that there was a strong correlation between the sentiment-based Twitter volumes and the opinion poll results over time (O'Connor,

Balasubramanyan, Routledge, & Smith, 2010) or citizens’ political preferences (Ceron et al.,

2014). Such results suggest that detecting sentiment on social media can be used not only to

monitor public opinions about the topics of interest but also to be proactive in dealing with social

issues, such as managing collective fears towards a certain disease.
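A minimal sketch of such a sentiment-based time-series analysis is shown below, using the pandas library in Python. The timestamps and scores are invented; the sketch only illustrates the resampling step that reveals spikes in Tweet volume and shifts in sentiment around events such as debates.

import pandas as pd

# Hypothetical records: one row per Tweet, with a timestamp and a lexicon-based sentiment score.
tweets = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2016-09-26 20:00", "2016-09-26 22:30", "2016-09-27 08:00",
        "2016-10-09 21:15", "2016-10-09 23:45", "2016-10-10 09:30",
    ]),
    "sentiment": [-1, -2, 1, -3, -1, 2],
})

# Resample to daily volume and mean sentiment; spikes in volume flag debate nights.
grouped = tweets.set_index("created_at").resample("D")["sentiment"]
daily = pd.DataFrame({"volume": grouped.size(), "mean_sentiment": grouped.mean()})
print(daily[daily["volume"] > 0])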

Secondly, a few researchers have used the sentiment information on social media to

predict election results (e.g., Ceron et al., 2014; Effing, van Hillegersberg, & Huibers, 2011;

O'Connor et al., 2010; Tumasjan et al., 2010). These studies showed that social media data has

considerable predictive power about offline events, but merely counting mentions or Tweets is

not sufficient to provide accurate foresight (Ceron et al., 2014). Accordingly, they demonstrated

forecasting analysis using sentiment in social media as an alternative tool for public opinion

measurement (Thelwall et al., 2010) and suggested the predictability of the method. For

example, Ceron et al. (2014) found citizens’ political preferences in social media could predict

the election results in Italy and France. Similarly, Effing, van Hillegersberg, and Huibers (2011) found that Dutch politicians got more votes when they were more active on social media in the

2010 national elections. O'Connor, Balasubramanyan, Routledge, and Smith (2010) examined correlations between text sentiment and polls data and found that text sentiment could serve as a leading indicator of polls. Tumasjan, Sprenger, Sandner, and Welpe (2010) also found a strong correlation between the number of references on Twitter to political parties or figures and the election results. Additionally, they found that the Tweets' sentiment orientations (positive and negative emotions associated with a politician) corresponded closely with voters' political preferences. These forecasting studies look into the question of analyzing and mining public opinion from publicly and freely available online data, which could be a faster and less expensive alternative to traditional polls (O'Connor et al., 2010). However, the sample size and representativeness remain problematic in such an area of research. Some results from prior research suggest that content and sentiment on social media can reflect or even predict public opinion (Groshek & Al-Rawi, 2013), which has led to the type of social media research that uses sentiment analysis for measuring and monitoring public opinion in social media.

Another type of communication research with sentiment analysis has centered on online subject comments as outcomes of interactive communication. Past online comment studies have been conducted to examine various aspects of comments: the volume of comments, their relationship with popularity, and their effect on users’ behavior. Many studies have discussed the value of comments as an indicator of popularity (Park et al., 2011). Earlier studies focused mainly on the volume of data (related, for instance, to each party or candidate). However, more recently, researchers have focused on capturing users’ attitudes in greater detail, beyond merely tabulating numbers of mentions (Ceron et al., 2014). Robertson (2011) examined Facebook

posts made by the friends of candidates during the 2008 U.S. election campaign to capture and identify a "reflection-to-selection" process, which was driven by users. With the Linguistic

Inquiry and Word Count (LIWC) software program, a change in the sentiments of the posts was used to identify the process. Groshek and Al-Rawi (2013) used sentiment analysis on Facebook pages to identify dominant topics and emergent frames as presented and discussed in social media during the 2012 U.S. presidential election. The assumption was that the sentiment observed within these spaces is likely to reflect or lead public opinion and traditional media agendas reciprocally.

The last type of study is the agenda-setting study, focusing on the behavior of individual commenters. It uncovers commenters’ sentiment patterns toward such targets as political news articles and predicts the individuals’ characteristics, such as political orientation, from the sentiments expressed in the comments (Park et al., 2011). The results showed that active commenters are those who leave comments on a large proportion of articles for a long period, while predictive commenters showed a high degree of regularity in their sentiment patterns.

Researchers also measured whether commenters are posting comments based on their political preferences or not and found that a liberal predictive commenter tends to leave a negative comment to conservative articles and a positive comment to liberal articles. Another study on sentiment profiles is about “political junkies” (Jansen & Koop, 2006). While the authors found evidence of a lively political debate on Twitter, it is unclear whether this deliberation was led by a few “political junkies” or the general public. Jansen and Koop (2006) found that less than 3% of users on a political message board that dealt with a particular political event handled almost a third of all posted messages. Generating multidimensional profiles of the politicians and users by conducting sentiment analysis can also be part of this type of research (e.g., Groshek & Al-


Rawi, 2013; Tumasjan et al., 2010). In particular, party sentiment profiles turned out to reflect the similarity of political positions between parties (Tumasjan et al., 2010).
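To illustrate what such a sentiment profile can look like in practice, the Python sketch below tabulates the share of positive and negative posts per account; the accounts and labels are invented and carry no empirical meaning.

from collections import Counter

# Hypothetical per-Tweet sentiment labels, grouped by the account that posted them.
labeled_tweets = [
    ("@HuffPostPol", "negative"), ("@HuffPostPol", "negative"), ("@HuffPostPol", "positive"),
    ("@BuzzFeedNews", "positive"), ("@BuzzFeedNews", "positive"), ("@BuzzFeedNews", "negative"),
]

# A simple sentiment profile: the share of positive vs. negative posts per account.
profiles = {}
for account, label in labeled_tweets:
    profiles.setdefault(account, Counter())[label] += 1

for account, counts in profiles.items():
    total = sum(counts.values())
    print(account, {label: round(n / total, 2) for label, n in counts.items()})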

Hyperlink Analysis in Agenda-Setting Studies

Hyperlink analysis has been one method used to study the interconnections between web pages and blogs (Kim et al., 2010; Williams et al., 2005). In the current networked media environment, in which every node and piece of content is connected to others through links, the outcomes of strategic interaction among media can become visible as well. By scraping and analyzing text-based social media streams online, using computer-assisted content analysis software, hyperlink analysis has now become even more useful for communication researchers than ever before. Some software (e.g., NodeXL) provides researchers with information about the sources from which links originate and the sites where they terminate. Due to the advancement in computer-assisted text analysis and the fact that hyperlink analysis is a form of inquiry mostly used to investigate social relationships among various nodes within a network, research on agenda-setting, especially on intermedia agenda-setting, has been the primary place where communication researchers have tested the applicability of this method.
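As a simple illustration of hyperlink analysis applied to Tweet data, the Python sketch below tallies the domains that each account links to. The account-URL pairs are invented, and in practice shortened t.co links would first have to be resolved to their final destinations.

from urllib.parse import urlparse
from collections import Counter, defaultdict

# Hypothetical (account, hyperlink) pairs extracted from Tweets.
links = [
    ("@HuffPostPol", "https://www.huffpost.com/entry/debate-recap"),
    ("@HuffPostPol", "https://www.nytimes.com/2016/10/10/us/politics/debate.html"),
    ("@nytimes", "https://www.nytimes.com/2016/10/10/us/politics/debate.html"),
]

# Tally which domains each media Twitter account points its readers to.
domains_by_account = defaultdict(Counter)
for account, url in links:
    domains_by_account[account][urlparse(url).netloc] += 1

for account, counts in domains_by_account.items():
    print(account, dict(counts))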

Past agenda-setting studies have found differences in hyperlinking practices across different media platforms belonging to the same type of media. For instance, Williams et al.

(2005) tested how the campaigns of political candidates practiced gate-keeping of their sources and supporters by limiting hyperlinking practices online. They found that, in comparison to their blogs, candidates’ campaign websites were much more likely to link to such promotional material as campaign merchandise sales, political advertisements, and fundraising in the form of donation requests. Campaign blogs were more likely to link to external media sites, but they


were twelve times less likely than campaign websites to link to supporting group sites, four times

less likely to link to special-interest group sites and about half as likely to link to national

political party sites. Interestingly, there were no differences between candidates’ websites and blogs in the number of links to external blogs. Based on these findings, the researchers concluded that campaigns were clearly making gate-keeping attempts via their hyperlinking

practices.

When it comes to the ideologies and practices of individual candidates and campaigns,

the differences among hyperlinking strategies can reveal interesting information on their political

advertising strategies, such as which issues they believe they own or how and how much they are

engaged in gate-keeping practices. For instance, during the 2004 U.S. presidential race, George

W. Bush’s website frequently linked to a hurricane relief website in an attempt to demonstrate

that the Bush team was sympathetic to the plight of Floridians battered by several major storms

that occurred during the campaign (Williams et al., 2005). This indicates that campaign

organizers treat various media, including blogs and websites, differently in utilizing

hyperlinking. Secondly, it shows that one of the strategies used by political websites is to use

hyperlinks to external sources that serve the overall goal of promoting the candidate, and

therefore are most likely to send the user to content that is favorable toward their candidate.

Moreover, these findings shine a spotlight on the role of political websites in either relational

(supporting or opposing external sources) or topical (sharing issues with other media actors) aspects of hyperlinking practices. When Kim et al. (2010) analyzed the hyperlinks between the United States Senate website and websites in Yahoo, they found that hyperlinks more often linked to Democratic senators than to Republicans. This also showed how the websites and their hyperlinks were being used as a means of communication between senators and ordinary citizens online (Kim et al., 2010).

The effects of hyperlinking practices among politically affiliated individuals and media online have been another topic of research in this area. Meraz (2011b) studied informational influence and agenda sharing within the political blogosphere and revealed a tendency for blogs to connect to other media that shared similar political ideologies. Blogs that shared a political ideology were more likely to link to one another and share similar sources than blogs with differing ideologies. These hyperlink studies suggest that political affiliation strongly impacts intermedia agenda-setting through sharing the same sources. Accordingly, hyperlinks need to be interpreted as an indicator of social roles and relations among political actors online (Kim et al., 2010; Park & Jankowski, 2008).

The Structure of Hyperlinks in Twitter Feeds

Twitter appears to play a significant role in producing and spreading information,

including news and personal opinions, across the world (Parmelee & Bichard, 2011). Most

tweets are publicly accessible to all Twitter and internet users, and such functions as follower-

followee networking, hashtags, replying, mentioning, and retweeting on the Twitter platform

help to facilitate the spread of information and social interaction (Bruns & Moe, 2014). These

characteristics have even led to Twitter being considered as a reliable source of original reporting

(Hermida, 2010). Twitter users, including media Twitter accounts, can also communicate with

others through the hyperlinking practices enabled in Twitter feeds.

The structure of hyperlinks in Twitter feeds can show which types of relationships can be built on Twitter. By considering the direction and reciprocal nature of interaction on Twitter, different types of hyperlink relationships can be identified among Twitter accounts that are connected through hyperlinks (Park & Jankowski, 2008; Thelwall, Sud, & Wilkinson, 2012).

Typically, there are two types of links: inbound and outbound. First, when A receives a link from B, an in-bound link is generated. Conceptually, an in-bound link is the same as a back-link, which comes from an external site to a person’s site or account. On the other hand, when there is a link from that person’s account to another account, an out-bound relationship is created. Out-bound links start from a person’s site or account and lead to an external site. For instance, if @huffpostpol mentioned my account, the link from @huffpostpol is an in-bound link for my account. However, the link is an out-bound link from @huffpostpol’s perspective. Secondly, when one account links to two different accounts, those two accounts become co-in-bound linked or, variously, co-linked or co-cited. Conversely, if two different actors cite the same sources or link to the same account, those two accounts are then co-out-bound linked (Kim et al., 2010; Thelwall et al., 2012).
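To make these link-type definitions concrete, the following sketch (not part of the original analysis) shows how the four relationship types could be read off a directed mention graph using Python and the networkx library; the account names other than @huffpostpol are hypothetical.

import networkx as nx

# An edge (A, B) means "A links to (mentions) B."
G = nx.DiGraph()
G.add_edges_from([
    ("@huffpostpol", "@my_account"),     # out-bound for @huffpostpol, in-bound for @my_account
    ("@huffpostpol", "@other_account"),
    ("@thehill", "@other_account"),
])

print(list(G.predecessors("@my_account")))    # in-bound (back-)links received by @my_account
print(list(G.successors("@huffpostpol")))     # out-bound links sent by @huffpostpol

# @my_account and @other_account are co-in-bound linked (co-linked/co-cited):
# both receive a link from the same account, @huffpostpol.
print(set(G.successors("@huffpostpol")))

# @huffpostpol and @thehill are co-out-bound linked: both link to the same account, @other_account.
print(set(G.predecessors("@other_account")))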

Twitter is also one of the most optimized social media platforms for linking to Twitter accounts or sources outside of Twitter. Twitter connects to various media outlets in different ways, from simple user mentions of other websites (for example, shortened links to a news article on a website in a Tweet) to collectively curated lists of content that share the same theme connected through hyperlinks (a list of Tweets including the same hashtag).

Specifically, Twitter enables three types of hyperlinking practices. First, Twitter accounts can refer to or reply to another Twitter user/account using the mention functionality (the “@” symbol). This symbol is widely used for activities like giving credit to a source of content or information, or inviting the mentioned accounts to engage in a public conversation, and this type of hyperlinking is often generated by news or information sharing on Twitter. Therefore, when we consider the fact that each Twitter account works as a media actor, this hyperlinking practice can be identified as intermedia agenda-setting.

Secondly, embedded hashtags (the “#” symbol) in Tweets are intended to help account owners collaboratively build a collection of links to a shared discussion on a particular topic or issue within Twitter. This type of hyperlinking is a simple act of joining an existing discussion, but at the same time also makes the topic that the media account is interested in more easily visible to other actors on Twitter (Bruns & Moe, 2014). Thus, when a media Twitter account creates a new hashtag or includes an existing hashtag in their tweets, they can be said to be acting with agenda-setting intentions. If a media Twitter account actively participates in an existing discussion using a popular or widely propagated hashtag (such as those listed in the

“Trending” sidebar on the Twitter interface), that can also be considered an intermedia agenda-setting practice.

Lastly, Twitter users can add direct links to external sources such as URLs to news coverage on online news sites. This type of hyperlinking may be differentiated from the other two types described above, because this can expand the scope of the influence this practice may have beyond the boundaries of Twitter. With this type of hyperlinking practice, accounts in

Twitter can invite people to explore agendas outside Twitter and even strategically promote their message across multiple outlets through the networks these hyperlinks create. At this point, we can identify embedding direct links to external sources in Twitter feeds as a different type of intermedia agenda-setting.

As reviewed above, the different types of hyperlinking practices seen in Twitter feeds indicate that there are various intermedia agenda-setting strategies employed by a variety of media accounts on Twitter. By analyzing each type of hyperlinking practice used in Twitter

feeds, this study explored the structure and characteristics of certain hyperlink networks. In doing so, this study was expected to obtain new information about the spread of sentiment about candidates online, key influencers in the 2016 U.S. presidential election, and each media Twitter account’s use of particular strategies to enhance their intermedia agenda-setting influence.


CHAPTER III. RESEARCH QUESTIONS AND HYPOTHESES

This study examines how the media agenda is influenced by other media sources. As previously discussed, increasing diversity in the outlets of the news media landscape has accelerated competition among traditional and nontraditional media over leadership in setting media agendas in online public spheres, in particular social media. The dynamics of agenda-setting influence within social media demand advanced analysis with consideration of the uniqueness of social media, and intermedia agenda-setting theory has become more valuable as a framework in which to analyze interaction among various media and the attempts to dominate the social media sphere.

The questions and hypotheses posed in this chapter are designed to create a better understanding of the flow of intermedia agenda-setting influence among the interconnected media accounts within social media, notably Twitter. Twitter accounts included in the analysis were those of print media, television networks, news magazines, online partisan media, online non-partisan media, and political commentators.

First, this study explored the changes in the Tweeting network and the conditions under which the volume of the Tweeting network increased during the last seven weeks before the 2016 U.S. presidential election, with consideration of political events and the characteristics of propagated messages or network issues. Indicators such as the volume of the network, the frequency of different types of hyperlinks, and the frequency of sentiment words in the network were used for analysis.

Second, this study investigated the intermedia agenda-setting dynamics among various media Twitter accounts with varied political ideologies. In doing so, the extent to which traditional media Twitter accounts remain agenda-setters was explored. If there was a reversed influence flow from nontraditional media Twitter accounts to traditional media Twitter accounts, the conditions supporting the phenomenon were further explored.

Last, this study examined the temporal dynamics of sentiment during the last seven

weeks before the 2016 U.S. presidential election across different media Twitter accounts through

time-series sentiment analysis. Time-series analysis helps us to note the differences in the trends

of an indicator (e.g., sentiment on Twitter) to reveal the impacts of political events, such as

debates and breaking news, on media Twitter accounts’ news reporting and interaction with other

media accounts. The findings indicate the possibilities of sentiment salience transfers across

various media Twitter accounts and the feasibility of treating them as attribute agendas

associated with the 2016 presidential election.

In order to answer these questions, this study applies the intermedia agenda-setting theory

as the theoretical framework, and network analysis and computer-assisted content analysis

enabling hyperlink analysis and sentiment analysis as the methods. Based on these purposes, the

following research questions and hypotheses have been posited.

Twitter Network Change

Exploring the nature of social media as a platform for journalism practices remains an

interesting quest. Social media frees media organizations and journalists from constraints, such as time and space, that have until now limited journalistic practices (Russell, Hendricks,

Choi, & Stephens, 2015). Some researchers have argued that social media brings democratizing

effects to political communication by expanding the media agenda pool and inviting media,

organizations, and individuals who previously were not part of journalism practices.


Political events such as elections and debates attract media and journalists’ interest, and

such collective interests are likely to be reflected in the social media sphere. As a national political event that drew nationwide interest, the presidential election remained the primary topic trending on Twitter throughout 2016. The first research question and hypothesis are designed to provide an overview of the temporal changes of the Twitter network involving the key media accounts included in this study’s analysis during the seven-week data collection period.

RQ 1. To what extent did the Tweeting network change during the last seven weeks before the 2016 U.S. presidential election?

H1: The volume of the Tweeting network increased during the last seven weeks before the 2016 U.S. presidential election.

To deal with challenges in increased competition among media Twitter accounts,

individual media Twitter accounts attempt to dominate the social media sphere by increasing traffic to their news platforms and the publicity of their brands. In practice, cross-posting in online contexts (for example, the use of linkbot) is a common strategy among media actors having multiple news outlets such as websites and blogs (Park, 2003). Different tactics utilizing features built into Twitter, such as hashtagging, including URLs to external sources or visuals in messages, are also often employed as intermedia agenda-setting strategies by Twitter media accounts. For instance, hashtags in Tweets are intended to help users collectively build a collection of links to a shared discussion on a particular topic or issue within Twitter. But, as an indicator of social relationships among Twitter media accounts, this type of hyperlinking also shows when a Twitter media account creates a new hashtag and distributes it with agenda-setting intentions, and when other actors decide to join this agenda-setting process. Twitter media

accounts also can add direct links to external sources such as URLs to online news coverage. The use of hyperlinks is common and allows researchers to further examine the political leanings of the sources of information that Tweets link to, as messages are restricted in length to 140 characters or less (Himelboim, McCreery, & Smith, 2013). With this type of hyperlinking practice, media Twitter accounts can invite other accounts to explore their agendas, even outside of Twitter, strategically promoting their message across multiple outlets through the networks these hyperlinks create. When we consider the fact that each media Twitter account works as a media actor, this hyperlinking practice can be identified as an attempt to influence intermedia agenda-setting. This tendency to dominate the social media sphere may be universal, regardless of media Twitter accounts’ political ideologies (liberal or conservative) or orientation toward certain platforms (online or print and broadcast). Such practices are highly likely to be used more often during significant social or political events, such as presidential elections (Newman, 2016).

Additionally, via the Twitter network, we can observe not only the circulation of information about presidential candidates but also people’s attitudes, emotions, and feelings. For instance, past studies show that each presidential candidate was mentioned in social media, with certain words indicating negativity or positivity. Facebook posts with descriptors such as “vote,”

“good,” and “love” were associated with Barack Obama, while Mitt Romney was strongly linked with

“jobs” and “plan” during the 2012 presidential campaign (Groshek & Al-Rawi, 2013). Media

Twitter accounts might be an effective tool to reach a broad audience and build connectivity across other online outlets, but they are also a good place in which to demonstrate feelings or emotions. To test the impacts of political events, e.g., presidential debates, on the Twitter


network and media Twitter accounts’ news reporting, a research question and hypotheses were

generated:

RQ 2. To what extent did the presidential debates affect the Tweeting network?

H2a: There was a greater number of edges in the network in the days following

the U.S. presidential candidates’ debates than prior to the debates.

H2b: There was a greater number of hyperlinks in the network in the days

following the U.S. presidential candidates’ debates than prior to the debates.

H2c: The frequencies of different types of hyperlinks in the network (domains, hashtags, and accounts mentioned) changed together in the days following the U.S. presidential candidates’ debates compared to the days prior to the debates.

H2d: There was a greater number of sentiment words in the network in the days

following the U.S. presidential candidates’ debates than prior to the debates.

H2e: Negative sentiment words in the network surpassed positive sentiment

words in the network as the 2016 U.S. presidential election approached.

H2f: The three indicators measuring the Twitter network (volume of edges, hyperlink frequency, and sentiment word frequency) increased during the last seven weeks before the 2016 U.S. presidential election.

Cross-Linking Across Different Media Twitter Accounts

When intermedia agenda-setting effects occur within social media, media accounts, such as Twitter accounts, observe and react to competitors’ strategies. Social media has become another venue for this competition. To compete, users optimize their media strategies within


social media to increase their reach and to attract audiences. In doing so, from the network

perspective, the position or location within the news media network on Twitter may affect media

accounts’ journalism practices.

As previously mentioned, various hyperlinking practices may help to demonstrate the distinctions in different media Twitter accounts’ strategic moves around agenda-setting attempts

because hyperlinking practices are social interactions and exchanges—not only of information

but also of political power between actors on the news-information network. The use of

hyperlinks as a newly emerging social and communication channel has been considered a valid measure of the actors’ influence online (Park & Thelwall, 2003). The strength of this structural approach to relationships among media is that it can reveal patterns that could not be observed if each medium were analyzed individually.

Cross-linking (e.g., mentioning other accounts in Tweets) is a hyperlinking practice distinct from hashtagging and embedding URLs. Media Twitter accounts can refer to or

reply to other media accounts on Twitter with the mention function. This type of hyperlinking is

often generated by news or information-sharing on Twitter, as this is the practice of giving credit

to the source as well as being a selective sourcing practice to build one’s own credibility by

leveraging other media accounts’ credibility during the agenda-setting process. Two types of

cross-linking practices are often observed in a Twitter network: first, media Twitter accounts

may link to an affiliate media Twitter account (e.g., @cnnpolitics linking to @). This can be

interpreted as attempts to sustain their current influence within the network. On the other hand,

media Twitter accounts also can link to completely nonrelated media accounts to either refute or

agree with them. The initiators of cross-linking may prefer to cite other traditional media

accounts to take advantage of traditional media’s power in reaching more followers on Twitter, if


they need to refer to other sources (Russell et al., 2015). Such trends may be because linking to

other sources is an implied validation of those sources’ credibility; moreover, seeing the linking

identifies the sources as desirable affiliates or partners (Park & Thelwall, 2003). Thus, despite

the fact that cross-linking across different media Twitter accounts is the most aggressive hyperlinking practice, past studies revealed that journalists on social media are more likely to promote content from their own news sites than from other sources (Russell et al., 2015). To test this pattern regarding cross-linking, a research question and a set of hypotheses were generated:

RQ 3. To what extent did the presidential debates affect the cross-linking practices of

media Twitter accounts?

H3a: The number of Twitter accounts’ cross-linking practices increased during

the last seven weeks before the 2016 U.S. presidential election.

H3b: There was a greater number of cross-linking edges in the network in the days following the U.S. presidential candidates’ debates than prior to the debates.

H3c: A greater proportion of cross-linking edges included positive and negative sentiment words than did non-cross-linking edges in the network.

Media Twitter Accounts’ Use of Sentiment

The strategic moves among various media Twitter accounts also can be observed in

practical journalistic writing styles. Journalistic writing has evolved in ways that reflect each medium’s characteristics. Writing in the inverted-pyramid style has many benefits for newspaper readers, and simple, conversational writing is the typical format for broadcast news writing. However, based

on each media Twitter account’s positioning on Twitter, each account might then have a

different writing style for news Tweets to strategically influence others’ media agendas and dominate the social media sphere. For example, nontraditional media Twitter accounts possibly

include more emotional appeals than do traditional media Twitter accounts, which can leverage

influence outside of Twitter. Traditional media Twitter accounts may also be bound, when writing Tweets, by traditional rules such as journalism ethics codes or journalistic objectivity.

Moreover, traditional media Twitter accounts may use fewer sentiment-intensive words; it is not necessary for them to use more sentiment words, as traditional media Twitter accounts are highly likely to be monitored by a filtering process and traditional journalism ethics codes. Thus, regardless of sentiment orientation (positive or negative), it can be expected that they tend to use fewer sentiment words in their Tweet writing. However, even though traditional media Twitter accounts abide by journalism ethics codes, because the 2016 campaigns were extremely negative, all media Twitter accounts possibly picked up on that negativity and generated sentiment-intensive Tweets to appeal to news readers during the campaign. On the other hand, online media Twitter accounts, with a less rigorous filtering process and more flexibility in their Tweet writing, are highly likely to show uncontrolled and uninhibited behaviors, such as greater use of negative sentiment words in their messages than traditional media Twitter accounts. Taken together, the overall level of sentiment word usage may have increased for both traditional and online media Twitter accounts, but online media Twitter accounts may reflect the negativity of the campaign and show greater negative sentiment word usage than traditional media Twitter accounts. On the contrary, traditional media Twitter accounts may use more sentiment-intensive words than usual, but they tend to use positive sentiment words due to the filtering process and internalized journalistic norms. To explore this intermedia agenda-building process, a research question and hypotheses were generated:

RQ 4. Do different types of media Twitter accounts use sentiment words distinctively?

H4a: Media Twitter accounts that belong to traditional media (print media,

television networks, and news magazines) used a greater amount of positive

sentiment words than did nontraditional media (online partisan media, online

nonpartisan media, and political commentators).

H4b: Media Twitter accounts that belong to nontraditional media (online partisan

media, online nonpartisan media, and political commentators) used a greater

amount of negative sentiment words than traditional media (print media,

television networks, and news magazines).

Key Media Twitter Accounts in Network

As reviewed above, previous studies have confirmed the issue and attribute agenda- setting influences of traditional media when the topics are about national issues, government- related issues, or politically biased or partisan related issues (e.g., Olteanu et al., 2015). Studies that analyzed Twitter feeds around the time of political events such as elections found that

Twitter has become another media outlet that permeates traditional media’s agendas (Hermida,

2010; Thelwall et al., 2011), and that Twitter has shown the news reporting behavior typically expected from traditional media (Groshek & Al-Rawi, 2013). These results also suggest that traditional media Twitter accounts may retain their status as agenda-setters during this particular time period.


With the network analysis method, such traditional media Twitter accounts’ agenda- setting influence can be measured and visualized using various network centrality concepts (e.g., betweenness centrality, closeness centrality, eigenvector centrality). Popular media Twitter accounts within a network tend to be mentioned often by other media accounts, thus bridging different groups of media Twitter accounts or having connections to important accounts on the network. Each characteristic of popular media Twitter accounts may imply that they are different types of agenda-setters and attempt to dominate the social media sphere with different strategies. To examine the traditional Twitter media accounts’ intermedia agenda-setting influence on Twitter and identify key media Twitter accounts using network centrality measures, a research question and hypotheses were proposed (as below).

RQ 5. To what extent were traditional media Twitter accounts successful at occupying

agenda-setter positions within the network?

H5a: The proportion of traditional media Twitter accounts ranked in the top 10

centrality measures was greater than the proportion of nontraditional media

Twitter accounts.

The Temporal Dynamics of Sentiment

The 2016 U.S. presidential campaign, with the first female major-party candidate and a social media celebrity candidate, has been recognized as the most negative campaign in U.S. history. The frequency of negative mentions of the other candidate in speeches and campaign events exceeded the records from past campaigns (Blake, 2016), and over 50 percent of supporters of each candidate said that they were voting against the other candidate rather than for the candidate they were supporting (Pew Research Center, 2016c).


As discussed above, not only the issue agendas set by media Twitter accounts, but also attributes such as the positive and negative feelings and emotions that are part of the presidential race may be formed into agendas by media or campaigns and then promoted and transferred. According to attribute agenda-setting theory, media Twitter accounts also can create a picture of each candidate, comprising descriptions or images, along with sentiments toward each candidate. Affective traits such as positive, negative, or neutral opinions about them can be attribute agendas (Golan & Wanta, 2001; Kiousis et al., 2006; McCombs et al., 2000). Also, as election day approaches, the competition to influence candidates’ attribute agenda-setting on Twitter among various types of media accounts with different political ideologies becomes even more intense. Additionally, in setting those attribute agendas, typical media Twitter accounts are expected to be more likely to use negative sentiment than positive sentiment. The pairing of certain candidate traits with negative sentiment has long been used in media frames and in negative political campaigns. Studies also support the effectiveness of using negative sentiment; for example, negative sentiment toward political candidates drives spikes in Twitter feeds (Thelwall et al., 2011). That means that media Twitter accounts are more likely to include negative sentiments in their Twitter feeds for publicity and greater influence.

In analyzing intermedia agenda-setting over sentiment, revealing temporal dynamics fits the objective of analysis. Traditionally, researchers have used time-series analysis to analyze a set of data points occurring at regular intervals; with the application of sentiment analysis to this area of research, sentiment itself can be tracked as the data of interest. Because emerging important events are typically signaled by sharp increases in the frequency of relevant terms (Thelwall et al., 2010), time-series data in social media are useful for analyzing

phenomena that change over time. To examine the temporal dynamics of sentiment among


various types of media Twitter accounts during the seven-week data collection period, a research

question and hypotheses have been generated (as follows):

RQ 6. To what extent did each type of media Twitter account exert an intermedia

attribute agenda-setting impact on other types of media Twitter accounts?

H6a: Nontraditional media Twitter accounts were more likely to Granger cause3

traditional media Twitter accounts’ use of negative sentiment words than the

reverse relationship.

H6b: Traditional media Twitter accounts were more likely to Granger cause

nontraditional media Twitter accounts’ use of positive sentiment words than the

reverse relationship.

When it comes to media Twitter accounts that reflect differing political ideologies and partisanship, and the transfer of attributes among them, previous studies found several patterns. Conservative media have been known to make less use of hyperlinks, and blogs tend to connect to other blogs sharing similar political ideologies (Meraz, 2011b). On Twitter, however, the

tendency of prioritizing political ideology possibly differs from that in blogs, because

nontraditional media need to invite traditional media to propagate their news. In doing so, there

could be two different approaches. First, nontraditional media can depend on traditional media

with the same political ideology; however, they also can refer to traditional media with opposing

3 The results of the Granger causality tests can determine whether time-series data predicts other time-series data. A measure x is said to “Granger cause” a measure y, if y can be better predicted from past values of x and y together, than from past values of y alone (Freeman, 1983).


political ideologies to refute their messages or make sarcastic responses to them (Park & Thelwall,

2003).

When a certain candidate attribute becomes a dominating agenda not only in public but

also among media, we can say that whichever media initiated and promoted the candidate

attribute agenda is successful at setting other media’s second-level agendas of the candidates. Media can influence other media’s news creation and distribution by getting them to share their views on subjects such as presidential candidates. We have already observed that, during the 2016 campaign, some media developed candidates’ attribute agendas by publishing exclusive news about candidate qualifications (e.g., the New York Times’ report on Trump’s tax return issue); other media’s attention also starts turning to that attribute agenda once the coverage is published.

This means that one way to evaluate influence on other media, in terms of how the candidates are talked about, is to determine which candidate attribute is prominent in the public sphere, such as on Twitter. When more media, even those with an opposing political ideology, focus on discussing a certain candidate attribute mentioned by another medium, that medium’s second-level agenda-setting influence on other media can be considered powerful. For example, when Clinton was primarily mentioned on Twitter as being an unethical and unqualified presidential candidate in the context of her email scandal, the Trump campaign or conservative media were successful at setting their attribute agenda and influencing other media regarding Clinton. Even when the Clinton campaign or liberal media referred to “emails” to support or defend Clinton, those references only helped the agenda dominate the public sphere. This means that the Trump campaign or conservative media were dominating the public attribute agendas by having other media talk more about that particular candidate’s attributes and join the agenda-setting process. Such dynamics among media Twitter accounts with different political ideologies

cannot be directly examined with time-series analysis of sentiment salience within the network, but knowing which media Twitter accounts lead others by transmitting sentiment dominating the network at that time can provide knowledge about intermedia agenda-setting over sentiment within social media. To examine the temporal dynamics of sentiment among various media

Twitter accounts with different political ideologies during the seven-week data collection period,

a research question and hypotheses have been generated:

RQ 7. To what extent did each media Twitter account of print media, television networks,

news magazines, online media, and political commentators with different political

ideologies exert an intermedia attribute agenda-setting impact on other media Twitter

accounts included in the same media category?

Mapping Intermedia Agenda Setting Influence

The majority of the previous intermedia agenda-setting studies surveyed focus on a small sample of media, or treat time as a single period. However, agendas are set through

reciprocal interaction across different issues and different time periods, with no media taking a

clear lead in the current media environment (Vargo et al., 2016). One medium does not take the

role as an agenda-setter, and all media are interconnected. Thus, to assess the degrees to which

differing media set the agenda at large and visually represent the intermedia agenda-setting

effects across media categories and political ideologies, a research question has been generated:


RQ 8. Which media group or individual accounts with political ideologies were most likely to set the attribute agenda, via positive and negative sentiment, for all media

Twitter accounts at large?


CHAPTER IV. METHOD

Procedure

This study presents a comprehensive analysis that is representative of various news media accounts found on Twitter during the seven weeks before the 2016 U.S. presidential election. A

multimethod approach was employed to explore the dynamics of intermedia agenda-setting

among different types of media Twitter accounts with varied political ideologies. NodeXL, a

Microsoft Excel application add-in, was used for various types of analysis: network analysis,

hyperlink analysis, and sentiment analysis. First, network analysis was used to identify key

agenda-setters among various media Twitter accounts using three types of network centrality

measures: betweenness centrality, closeness centrality, and eigenvector centrality. Secondly, by conducting computer-assisted content analysis, through a series of text-based data treatment processes, meaningful information and indicators within the text streams were identified, such as the most propagated words and word-pairs, the media coverage spikes, and the hyperlink and sentiment word frequencies. Sentiment words and hyperlinks found in the content were extracted and archived for further analyses. The lexicon-based sentiment analysis was used to detect sentiment words4. Lastly, a series of time-series analyses of hyperlinks and sentiment words and

Granger causality tests were conducted to examine the dynamics among media Twitter accounts.

4 NodeXL allows users to import a sentiment word dictionary. For sentiment analysis, two lists of sentiment words, positive and negative, were entered, and a list of words for the program to skip (e.g., a, about, across, after) was also entered. The default sentiment word lists were modified from the Opinion Lexicon developed by the University of Illinois at Chicago for analyzing online customer reviews and opinions on the web. The lists can be downloaded from https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html#lexicon.


Sample

The time frame of the sample is the seven weeks preceding the day of the 2016 U.S. presidential election. This particular time frame was selected for this study because, first, there were no other national events that might distract the media’s attention from the 2016 presidential election (e.g., the 2016 Olympics) and, second, three presidential debates and one vice-presidential debate, which this study considers national political events that may influence the dynamics among media actors in terms of covering presidential election-related content, were held during this period.

Before data collection, the search results of Tweeting networks returned by search keywords, including #election2016, Hillary Clinton, Donald Trump, and #presidentialelection, were analyzed for four weeks to identify key media Twitter accounts observed in the Tweeting network. These accounts were cross-checked against the media rankings used to select the focus of analysis in several key studies, such as the top 10 daily newspapers by circulation, the top three commercial television networks (e.g., Golan, 2006), and the top ten digital news entities (e.g., Pew Research Center, 2015; Vargo & Guo, 2016). After this monitoring process, the media Twitter accounts were selected using several standards. First, the Tweeting network generated by the keywords related to the 2016 U.S. presidential election and candidates was composed of various Twitter accounts. For instance, media affiliate accounts (e.g., @abc, @abcpolitics, @abc7news) and individual accounts of media personnel (e.g., @mitchellreports, @DanaPerino, @ZekeJMiller) were found. However, only official and institutional Twitter accounts of various news media for political news and information distribution were included. Thus, media personnel such as reporters and journalists were excluded. Second, the account owner (the media organization) had to have multiple outlets, such as websites and an original platform, so that its hyperlinking practices could serve as objects for analysis. Third, the media bias of each account owner was considered in order to assign at least one medium per political ideology within each media type. Lastly, for political commentator Twitter accounts, those involved in other media already included for analysis were excluded. In addition, two campaign Twitter accounts were included. Consequently, the set of media Twitter accounts comprised institutional Twitter accounts owned by media with diverse political ideologies (left-leaning or right-leaning) that were particularly run for political news and information.

There were media Twitter accounts that were included in the sample list but excluded from analysis due to their incomplete datasets (e.g., @cbspolitics, @rollcall, @politico, @glennbeck).

Finally, political news Tweets from media Twitter accounts were collected through a sample of five daily newspapers (@nytpolitics, @postpolitics, @usatoday2016, @latimespolitics, and

@wsjpolitics); four television news networks (@cnnpolitics, @abcpolitics, @nbcpolitics, and

@foxnewspolitics); two news magazines (@newyorker and @newsweek); three online partisan media (@huffpostpol, @drudgereport, and @salon_politics); two online non-partisan media

(@thehill and @buzzfeedpol); three political commentators (@natesilver538, @ezraklein, and

@michellemalkin); and two presidential candidate campaigns (@hillaryclinton and

@realdonaldtrump).

Previous studies revealed that quantified media bias tends to be determined relatively, by comparison with other media across the ideological spectrum, rather than assigned objective left-leaning or right-leaning values (Meraz, 2009), and that the magnitude of the differences in media bias is smaller than generally believed (Budak, Goel, & Rao, 2016). The relative media bias rank of individual media may differ across studies; however, such research has yielded broad agreement in distinguishing left-leaning from right-leaning news media.

Thus, among the media Twitter accounts selected for data collection, accounts used for sentiment


contagion analysis across various political ideologies comprised the following: @wsjpolitics

(right-leaning), @postpolitics (left-leaning), @nytpolitics (left-leaning), @usatoday2016 (least- biased), @cnnpolitics (left-leaning), @foxnewspolitics (right-leaning), @abcnewspolitics (left- leaning), @drudgereport (right-leaning), @huffpostpol (left-leaning), @thehill (least-biased),

@ezraklein (left-leaning), @natesilver538 (left-leaning), and @michellemalkin (right-leaning).

Data Acquisition

NodeXL was used for Twitter data acquisition. To collect Twitter data, each media actor’s official Twitter account name, specifically designed for political news and information, was used as search keywords (for example, @cnnpolitics). NodeXL retrieves the most recent

18,000 Twitter feeds from the point of data collection that include the search keywords (each media actor’s Twitter account name) and provides a list of the most propagated keywords, hashtags, and hyperlinks, and sentiment word frequency. Only those Tweets containing the search keyword were included in the sample, and the current social media stream was searched and saved on a daily basis. After the completion of data collection, 21 datasets were generated from Twitter, based on the 21 topic keywords. For each search, no variations of keywords, other than the media Twitter account name, were used, for the consistency of data collection.

When the selected keywords were very popular, data related to the recent Tweets that used the search keyword did not go back more than a day or two. A total of 5,595,373 relationships in

Tweets between media Twitter accounts were collected and manipulated to create a database consisting of only media Tweets.

A data manipulation process was conducted to generate a database of Tweets exchanged only between the media Twitter accounts selected for analysis. To create and manage the media Tweet database, Microsoft Access 2016 was used. First, all Tweet data archived in

the 21 databases using NodeXL were saved and converted to an Access database. Access is a

tool for managing large scale databases and lets researchers perform an action on the database

involving retrieving a choice of information especially using query functions (MacDonald,

2010). Queries are a way to ask questions about the database, and this feature is useful to craft

searches that selectively output only related data. Every query is a text command written in a

specialized language called Structured Query Language (SQL), well-known for being supported in

most major programs managing databases (MacDonald, 2010). For example, a union query,

primarily used for data manipulation in this study, is a query that is occasionally useful in

merging results from more than one database and then presents them in a single datasheet.

Union queries are a good way to link similar tables together that have been separated in the

previous step (MacDonald, 2010). In this study, multiple union queries5 were run with several filtering criteria6 to selectively return the Tweets between media Twitter accounts, as NodeXL generated separate databases for each media Twitter account on a daily basis. The series of data manipulation processes resulted in (after the removal of irrelevant data) a total of 16,794 relationships, which is 0.3% of the relationships from the total Tweets archived, detected in Tweets, mentions, and replies to other posts generated by media Twitter accounts. After the completion of this data manipulation process, the database of media Tweets was ready for further analysis.

5 This is an example of SQL showing the query command used to extract only print media Tweets: [SELECT*FROM nytpolitics_Week1_forall; UNION SELECT*FROM postpolitics_week1; UNION SELECT*FROM latimespolitics_week1; UNION SELECT*FROM USAtoday2016_week1; UNION SELECT*FROM wsjpolitics_week1] 6 This is an example of a filter expression used for the data manipulation: ["nytpolitics" Or "postpolitics" Or "latimespolitics" Or "usatoday2016" Or "wsjpolitics" Or "cnnpolitics" Or "foxnewspolitics" Or "nbcpolitics" Or "abcpolitics" Or "newsweek" Or "newyorker" Or "Drudge_Report" Or "huffpostpol" Or "salon_politics" Or "Thehill" Or "Buzzfeedpol" Or "ezraklein" Or "michellemalkin" Or "natesilver538" Or "hillaryclinton" Or "realdonaldtrump" Or "nyt" Or "lat" Or "post" Or "wsj" Or "usa" Or "cnn" Or "fox" Or "nbc" Or "abc" Or "huffingtonpost" Or "salon" Or "buzzfeed"]

Unit of Analysis

The Tweets were set as the units of analysis. Each Tweet archived by NodeXL typically includes the account name, a time stamp, a text message consisting of words and word-pairs, hyperlinks such as hashtags, accounts mentioned, and URLs to external sources, and information about user-engagement statistics (the number of replies, re-Tweets, and likes) provided by Twitter. Such information was used to examine each media Twitter account’s network centrality and hyperlink and sentiment word salience in Tweets. Each Tweet between two media Twitter accounts was analyzed only once. For example, the same Tweet generated by @cnnpolitics and referring to @wsjpolitics can be archived in both the Tweet databases of @cnnpolitics and @wsjpolitics. In this case, while combining each database into the media Tweets database, the duplicate relationships were counted only once.
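Because the same mention can surface in the exports of both accounts involved, the merge step has to drop such duplicates. A minimal illustrative sketch of this deduplication in Python with pandas is shown below; the column names and values are hypothetical, and the study itself performed this step with Access union queries rather than pandas.

import pandas as pd

# Hypothetical per-account exports of the same Tweet relationship.
cnn = pd.DataFrame([{"source": "cnnpolitics", "target": "wsjpolitics", "tweet_id": "787000000000000001"}])
wsj = pd.DataFrame([{"source": "cnnpolitics", "target": "wsjpolitics", "tweet_id": "787000000000000001"}])

# Merge the per-account tables (the union-query step) ...
edges = pd.concat([cnn, wsj], ignore_index=True)

# ... then count each source-target relationship for a given Tweet only once.
edges = edges.drop_duplicates(subset=["source", "target", "tweet_id"])
print(len(edges))  # 1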

Data Analysis

A multimethod approach was employed to explore the dynamics of intermedia agenda-

setting among various types of media Twitter accounts with varied political ideologies. To

analyze text-based data and deal with time-series data, computer-assisted content and network analysis software (NodeXL) was primarily used. The sampling frame of Tweets used for descriptive statistics and weekly trend analysis was from 20 September 2016 to 7 November 2016, resulting in seven time periods: Time 1 (20 to 26 September 2016); Time 2 (27 September to 3 October 2016); Time 3 (4 to 10 October 2016); Time 4 (11 to 17 October 2016); Time 5 (18 to 24 October 2016); Time 6 (25 to 31 October 2016); Time 7 (1 to 7 November 2016). For time-series


analysis, a time-series data set with 49 observation times (days) before the presidential election

was generated and used. After the removal of irrelevant edges, a total of 16,794 edges out of

5,595,373 edges in Tweets generated by media Twitter accounts were analyzed.

Message-level content analysis. First, as explained above, through the series of data treatment processes conducted by computer-assisted content analysis, meaningful information and indicators within the text streams were identified, such as the three types of the most popular hyperlinks in Tweets (domains, hashtags, and mentioned Twitter accounts), sentiment words, and the most propagated words and word-pairs in each Tweet.

Network analysis. To identify influential media Twitter accounts, a series of network analyses was conducted. Network elements such as the characteristics of relationships (direction) and the number of relationships (popularity) were used to calculate the network centrality of each

media Twitter account and their influence within the network. Several centrality measures were

calculated through social network analysis, including betweenness centrality, closeness

centrality, and eigenvector centrality. To identify different types of agenda-setters within the

Tweeting network, the top 10 media Twitter accounts were rank-ordered based on their social

roles as a bridge, gatekeeper of information flow, and influencer connected to important sources

within the network.
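As a rough illustration of this step, the three centrality measures can be computed from an edge list with the Python networkx library (this is a sketch under assumed data, not the NodeXL procedure used in the study; the edges below are placeholders).

import networkx as nx

# Directed mention network: an edge (A, B) means account A mentioned account B.
edges = [("@cnnpolitics", "@wsjpolitics"), ("@huffpostpol", "@cnnpolitics"),
         ("@thehill", "@cnnpolitics"), ("@wsjpolitics", "@thehill")]
G = nx.DiGraph(edges)

betweenness = nx.betweenness_centrality(G)                  # bridging otherwise separate accounts
closeness = nx.closeness_centrality(G)                      # how quickly information reaches an account
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)   # connections to other important accounts

# Rank-order accounts by each measure (the study reported the top 10).
for name, scores in [("betweenness", betweenness), ("closeness", closeness), ("eigenvector", eigenvector)]:
    print(name, sorted(scores, key=scores.get, reverse=True)[:10])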

Hyperlink analysis. The salience of hyperlinks in Tweets was used to analyze multiple pairs of time-series data from media Twitter accounts. Additionally, cross-hyperlinking between two or more media Twitter accounts was analyzed. To determine whether cross-hyperlinking

is across different media types or political ideologies, each media Twitter account was coded

using the predetermined categories: media Twitter account types (print media, television

network, news magazine, online partisan media, online non-partisan media, and political


commentator) and media Twitter accounts’ political ideology (conservative, neutral, and liberal). Least-biased media were considered neutral in terms of support for a particular presidential candidate or political ideology.

Sentiment analysis. The sentiment analysis used in this study was the lexicon-based

sentiment analysis method. As explained above, the uniqueness of this method lies in the use of lists of words that are pre-coded for sentiment orientation and sometimes also for strength of sentiment (Thelwall et al., 2011). For sentiment analysis, NodeXL requires two lists of sentiment words, positive and negative, and another list of skip words (e.g., a, about, across, after). Typically, sentiment words include information about the orientation of the sentiment: positive, negative, or neutral. When each Tweet is exported in a datasheet by NodeXL, each positive or negative expression (a word or word-pair) in an individual Tweet was analyzed and counted based on its sentiment orientation, and then used to generate two separate variables: positive sentiment salience and negative sentiment salience. In doing so, each sentiment word was assigned a positive or negative sentiment orientation value of 1 if that particular word was pre-defined in the word lists. The sum of positive or negative sentiment words observed in Tweets was used to calculate the sentiment word salience within each media type’s or individual media Twitter account’s media agenda. When no sentiment word was detected in a Tweet, the Tweet was categorized as a neutral message, which means the absence of much sentiment or no sentiment at all (Liu, 2015).
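The core of this lexicon-based counting can be sketched in a few lines of Python; the positive, negative, and skip word lists below are small illustrative stand-ins for the Opinion Lexicon files loaded into NodeXL, not the full dictionaries used in the study.

POSITIVE = {"support", "win", "lead", "good", "love"}
NEGATIVE = {"bad", "nasty", "attack", "problem", "scandal"}
SKIP = {"a", "about", "across", "after", "the"}

def sentiment_counts(tweet_text):
    # Strip simple punctuation/symbols and lowercase before matching against the word lists.
    words = [w.strip("#@.,!?:;\"'").lower() for w in tweet_text.split()]
    words = [w for w in words if w and w not in SKIP]
    pos = sum(1 for w in words if w in POSITIVE)   # each matched word contributes a value of 1
    neg = sum(1 for w in words if w in NEGATIVE)
    return pos, neg

pos, neg = sentiment_counts("Clinton takes the lead after a nasty attack in the debate")
print(pos, neg)                                     # 1 2
print("neutral" if pos == neg == 0 else "sentiment-bearing")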

Time-series analysis. First, the changes in hyperlink and sentiment word salience across

different time points were graphed. In doing so, the debate effects as political events were

examined. Then, to test the temporal order of intermedia agenda-setting among media Twitter

accounts, statistical and graphical time-series approach were employed. First, the Granger

79

causality tests were conducted to determine whether a time series data predicts another time

series data. A measure x is said to “Granger cause” a measure y, if y can be better predicted

from past values of x and y together, than from past values of y alone (Freeman, 1983). Previous

studies conducted time-series analysis adopted this method to examine the causal relationship

between sets of time-series data (e.g., Meraz, 2011b; Neuman et al., 2014; Vargo et al. 2016).

The strength of Granger causality lies in its statistical test of causality as compared to other time-series models such as autoregressive integrated moving average (ARIMA) time-series models (Meraz, 2011b; Vargo et al., 2016). However, Granger causality

has also been critiqued because it is Granger causality, not real causality. In other words,

Granger causality can show that the change in the volume of one trend preceded the change of

values of another, but cannot show the extent that other events outside the model affected both

sets of values (Vargo et al., 2016). On the basis of these premises, a series of vector

autoregression (VAR) tests was utilized to confirm the appropriate number of lags before running the Granger causality tests. The log likelihood function (a likelihood ratio test) was used as the criterion for lag selection, and the results indicated that six days represented an optimal lag for the analysis in this study. The F tests provided significance values to determine Granger causality relationships between two sets of time-series data. This method has been used to determine causality in recent intermedia agenda-setting research (e.g., Meraz, 2011b).
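A minimal sketch of the lag-selection and Granger causality steps, using Python's statsmodels package rather than the toolchain used in the study; the two daily series below are simulated placeholders for, e.g., two accounts' negative sentiment word counts over the 49 days.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
days = 49                                              # one observation per day over the seven weeks
x = rng.poisson(5, days).astype(float)                 # daily negative-sentiment count, account A (simulated)
y = np.roll(x, 2) + rng.poisson(2, days)               # account B, loosely following A with a two-day lag
data = pd.DataFrame({"y": y, "x": x})

# Select the VAR lag order (the study settled on six days using a likelihood-ratio criterion).
print(VAR(data).select_order(maxlags=8).selected_orders)

# Test whether x "Granger causes" y; statsmodels expects the (effect, cause) column order.
grangercausalitytests(data[["y", "x"]], maxlag=6)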

Adding to the Granger causality tests, a graphical time-series approach was used to visually inspect the changes in media Twitter accounts’ interest and the events influencing the Tweeting trend. In this study, visual inspection of the graph was used to identify the overall

trend in media Twitter accounts’ interest in the race, candidates, and vice- and presidential

debates during the data collection. According to Thelwall (2014), if the graphic is for a specific


time-limited topic, there will be a) an increased volume of Tweeting, pointing to the time at which the topic started to gain interest; b) a decrease in the volume of Tweeting at some stage, pointing to the time at which interest peaked, with people starting to lose interest in the topic afterwards; and c) a point at which the level of Tweeting about the topic returns to almost the same level as before the event. Such temporal changes in the trends in the number of edges, hyperlinking practices, and sentiment word usage levels were analyzed.

Mapping Granger causality relationships. Lastly, directed Granger causality relationships regarding both positive and negative sentiment contagion were graphed using social network visualization software, Gephi. On the graph, edges represented significant relationships found in Granger causality tests for pair comparisons between each media Twitter account and other media types (e.g., @nytpolitics – television network Twitter accounts, news magazine

Twitter accounts, political commentator Twitter accounts, or online media Twitter accounts).
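As an illustrative sketch, the significant Granger-causal pairs could be assembled into a directed graph and exported in a format Gephi opens; the pairs below are placeholders, not the study's results, and the sketch uses Python's networkx library rather than Gephi itself.

import networkx as nx

# Each tuple is a significant "cause -> effect" relationship from the Granger causality tests (hypothetical).
significant_pairs = [
    ("@nytpolitics", "television network accounts"),
    ("@drudgereport", "political commentator accounts"),
]

G = nx.DiGraph()
for cause, effect in significant_pairs:
    G.add_edge(cause, effect, sentiment="negative")     # tag each edge by the sentiment series tested

nx.write_gexf(G, "granger_negative_sentiment.gexf")     # open this GEXF file in Gephi for layout and styling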

Measurement

Types of Media Twitter Accounts

In this study, media Twitter accounts were operationalized as the accounts owned and run by media (e.g., newspapers, broadcast companies, and news magazines) or media personalities

(such as political commentators). To categorize various media types, two standards were employed. First, media Twitter accounts were categorized depending on whether the media is traditional media originating from a traditional medium, such as print or broadcasting, or non-

traditional media of the type commonly hosted online. Therefore, in this study, traditional media

Twitter accounts included print media, television network, and news magazine Twitter accounts,


while non-traditional media Twitter accounts included online media and political commentators

who originally maintained blogs.

Another standard used to determine the media type was media partisanship. Considering

that the nontraditional news media category cannot fully incorporate different types of online

media, the apparent lack of partisanship was used to determine whether the online media Twitter

account is an online partisan media Twitter account or an online non-partisan media Twitter

account. Taken together, in this study, media Twitter accounts categorized for analysis were

delineated as traditional media Twitter accounts, including print media Twitter accounts

(@nytpolitics, @postpolitics, @usatoday2016, @latimespolitics, and @wsjpolitics), television

network Twitter accounts (@cnnpolitics, @abcpolitics, @nbcpolitics, and @foxnewspolitics),

and news magazine Twitter accounts (@newyorker and @newsweek), and non-traditional media

Twitter accounts, including online partisan media Twitter accounts (@huffpostpol,

@drudgereport, and @salon_politics), online non-partisan media Twitter accounts (@thehill and

@buzzfeedpol), and political commentator Twitter accounts (@natesilver538, @ezraklein, and

@michellemalkin).

Additionally, individual media Twitter accounts’ media bias toward a particular political

ideology (conservative vs. liberal vs. least-biased) was also considered when determining

whether the media Twitter account can be identified as a conservative, liberal, or least-biased

media Twitter account. In doing so, the media bias of the affiliate media platform, from which the account owner media organization is initially originated from, was taken into consideration.

For instances, within the television network Twitter account category, @cnnpolitics was the liberal media Twitter account while @foxnewspolitics was the conservative media Twitter account.


Information in the Text Streams

Hyperlink salience. The salience of hyperlinks (links to external sources like URLs, hashtags, and media Twitter accounts mentioned in Tweets) was operationalized as the frequency of hyperlinks observed in each media Twitter account’s network. Three types of hyperlinks were archived and used to calculate the salience of each hyperlink type. For time-series analysis, the sum of each type of hyperlink was used to generate each line graph.
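As a rough approximation of the fields NodeXL exports, the three hyperlink types can be pulled from a Tweet's text with simple pattern matching; this Python sketch and its sample Tweet are illustrative only.

import re
from urllib.parse import urlparse

def hyperlink_types(tweet_text):
    mentions = re.findall(r"@\w+", tweet_text)           # media Twitter accounts mentioned
    hashtags = re.findall(r"#\w+", tweet_text)           # hashtags
    urls = re.findall(r"https?://\S+", tweet_text)       # links to external sources
    domains = [urlparse(u).netloc for u in urls]         # domains, as counted for domain salience
    return mentions, hashtags, urls, domains

print(hyperlink_types("Debate recap via @cnnpolitics #debatenight https://example.com/story"))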

Sentiment orientation and salience. The sentiment orientation of each word in Tweets was determined using the information provided by the positive and negative sentiment word lists, as the dictionary-based sentiment analysis method was employed in this study. If a positive or negative sentiment word found in a Tweet matched a word pre-listed in the dictionary, a positive or negative sentiment orientation value of 1 was assigned to the sentiment word. When no sentiment word was identified in the Tweet, the Tweet was considered neutral. On the other hand, the salience of sentiment words was operationalized as the frequency of sentiment words (e.g., support, win, lead, bad, nasty, attack, problem) observed within the content (Tweets). Non-sentiment words (e.g., saying, talk, Obamacare, meet, public) were not counted toward sentiment salience. For time-series analysis, the sum of positive and negative sentiment words was used to generate each line graph.

Network centrality. In this study, network centrality, calculated based on the number of links that a specific node has within the network, was used to measure each media Twitter account’s popularity within the network. Three types of centrality measures, including betweenness centrality, closeness centrality, and eigenvector centrality, were used to identify different types of agenda-setters within the network.


Betweenness centrality indicated the extent to which a media Twitter account connects other media Twitter accounts or media types within the network. The mean distance from a media Twitter account to others within the network was measured using closeness centrality.

Lastly, eigenvector centrality was used to evaluate the influence of each media Twitter account by the value of its links to important accounts within the network.
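For illustration, the sketch below computes the three centrality measures with the networkx library on a few hypothetical mention edges. The edge list and the choice of an undirected graph are assumptions for the example, not the study's actual network.

import networkx as nx

# Hypothetical mention edges between accounts for one time period.
edges = [
    ("@wsjpolitics", "@realdonaldtrump"),
    ("@wsjpolitics", "@hillaryclinton"),
    ("@abcpolitics", "@realdonaldtrump"),
    ("@abcpolitics", "@wsjpolitics"),
]
G = nx.Graph()
G.add_edges_from(edges)

betweenness = nx.betweenness_centrality(G)   # bridging otherwise separate accounts
closeness = nx.closeness_centrality(G)       # mean distance from an account to all others
eigenvector = nx.eigenvector_centrality(G)   # weight of links to other well-connected accounts

for account in G.nodes:
    print(account, betweenness[account], closeness[account], eigenvector[account])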

Cues in the Tweeting Trends

Political events. To test whether political events may trigger a spike in the volume of

Tweeting, the three presidential debates, which took place on September 26, October 9, and October 19, and the vice-presidential debate, which took place on October 4, were identified as political events in this study.

Media interest. The changing volume of Tweeting around events can reveal when the media Twitter accounts first became interested in an event and when this interest started to fade.

Media interest in certain agendas, changing over the time periods, was measured by tracking indicators such as an increase in the volume of Tweeting, pointing to the time at which a topic started to gain interest; a decrease in the volume of Tweeting at some stage, pointing to the time at which interest peaked and the media started to lose interest in the topic; and a point at which the level of Tweeting about the topic returned to almost the same level as before the event.

Additionally, peaks in the volume of Tweeting were also used to point to which instances within the broad event, the 2016 U.S. presidential election, generated the most media interest. For example, breaking news and candidates’ scandals could affect the Tweeting trends. The temporal changes in the number of edges, hyperlinking practices, and sentiment word usage levels were also used to measure media interest at that time.
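A simple sketch of this kind of volume tracking is shown below. The timestamps, the seven-day rolling baseline, and the 1.5x spike threshold are illustrative assumptions rather than the study's exact procedure.

import pandas as pd

# Hypothetical Tweet timestamps; real data would come from the collected Tweets.
tweets = pd.DataFrame({"created_at": pd.to_datetime([
    "2016-09-25", "2016-09-26",
    "2016-09-27", "2016-09-27", "2016-09-27", "2016-09-27",  # post-debate burst
    "2016-09-28"])})

daily_volume = tweets.groupby(tweets["created_at"].dt.date).size()

# Flag days whose Tweeting volume exceeds 1.5x a rolling seven-day baseline
# as candidate spikes in media interest (e.g., the day after a debate).
baseline = daily_volume.rolling(7, min_periods=1).mean()
spikes = daily_volume[daily_volume > 1.5 * baseline]
print(daily_volume)
print(spikes)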


Causal Relationship Between Two Time-Series Data Sets

In this study, Granger causality was used to measure the causal relationship established between media Twitter accounts. When the Granger causality test results returned significant, the results indicated that A "Granger caused" B, meaning that the past values of A and B together predict the future value of B better than the past values of B alone. When a significant Granger causal relationship was found between two media Twitter accounts or different media types, it can be said that there is intermedia agenda-setting influence from one to the other. Along with the Granger causality test results, if the change in one line graph presenting sentiment word salience precedes the change in the values of another line graph, it can also be said that there is a causal relationship between the two time-series data sets. Such trends moving together with a time gap across two different time-series data sets indicate that intermedia agenda-setting influence exerted by one on the other exists.
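As an illustration of such a test, the sketch below applies the Granger causality test from statsmodels to two simulated daily series. The simulated counts, the one-day dependence built into series B, and the maximum lag of 2 are assumptions made only to show the mechanics, not the study's actual data or settings.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated daily sentiment-word counts for two accounts over 49 observation days.
rng = np.random.default_rng(0)
account_a = rng.poisson(30, 49).astype(float)
account_b = np.roll(account_a, 1) + rng.normal(0, 2, 49)  # B loosely follows A with a one-day lag

# The test asks whether the series in the second column (A) Granger-causes
# the series in the first column (B), i.e., whether A's past improves forecasts of B.
data = np.column_stack([account_b, account_a])
results = grangercausalitytests(data, maxlag=2)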


CHAPTER V. RESULTS

Descriptive Statistics

The sampling frame of Tweets used for descriptive statistics and weekly trend analysis was from 1 September 2016 to 7 November 2016, resulting in seven time periods: Time 1 (20 to 26 September 2016); Time 2 (27 September to 3 October 2016); Time 3 (4 to 10 October 2016); Time 4 (11 to 17 October 2016); Time 5 (18 to 24 October 2016); Time 6 (25 to 31 October 2016); Time 7 (1 to 7 November 2016). For time-series analysis, a time-series data set with 49 observation times (days) before the presidential election was generated and used.

Results at Time 1

During the period of Time 1(20 to 26 September 2016), in total, 1627 edges, 1491

sentiment words, and 1922 hyperlinks in Tweets were identified and analyzed. Of those detected

sentiment words (n = 1491), 43.3% (n = 645) were words indicating positive sentiment and

56.7% (n = 846) were words indicating negative sentiment. For the frequency of hyperlinks in

Tweets, the number of top 10 domain hyperlinks in Tweets was 1762, followed by the number of top 10 mentioned Twitter accounts in tweets (n = 132) and the number of top 10 hashtags in

tweets (n = 28).

Table 1 shows that the most popular domain hyperlinks in Tweets were thehill.com (n =

256), twitter.com (n = 84), nbcnews.com (n = 84), vox.com (n = 45), newsweek.com (n = 37),

latimes.com (n = 35), newyorker.com (n = 30), and cnn.com (n = 29). It appeared that some

7 vox.com is a website for opinions about various topics including politics. The website was founded in 2014 by Ezra Klein.


media Twitter accounts such as @wsjpolitics, @postpolitics, and @abcpolitics often used URL shorteners. The URLs containing URL shorteners were excluded from analysis. Table 1 also

presents the top 10 hashtags in Tweets. The hashtag of debates (n = 7) was the most popular

hashtag, followed by fns8 (n = 5), thisweek (n = 4), cascademallshooting (n = 3), Burlington (n =

2), factcheckfriday (n = 1), myvote (n = 1), electionday (n = 1), cnnkffpoll9 (n = 1), and partypeople (n = 1). Twitter account names mentioned by media Twitter accounts in Tweets

were identified as another format of hyperlinks. @realdonaldtrump (n = 43) was the most

popular account mentioned by media Twitter accounts, followed by @hillaryclinton (n = 31),

@abc (n = 21), @wsj (n = 7), @foxnewssunday10 (n = 6), @heidiprzybyla11 (n = 6),

@elizacollins112 (n = 6), @mike_pence (n = 4), @svdate13 (n = 4), and @huffpostpol (n = 4).

8 #fns is an abbreviation of Fox News Sunday.
9 #cnnkffpoll is an abbreviation of the Kaiser Family Foundation/CNN partnership poll.
10 @foxnewssunday is the media Twitter account owned by the Fox News political issues program Fox News Sunday. Journalist Chris Wallace moderates this program, featuring interviews with newsmakers of the day and focusing on the current political topics and issues facing the country.
11 @heidiprzybyla is owned by Heidi Przybyla, a politics reporter for USA Today and a political analyst for MSNBC.
12 @elizacollins1 is owned by Eliza Collins, a congressional reporter for USA Today.
13 @svdate is owned by S.V. Date, a correspondent for the Huffington Post.


Table 1

Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 1)

Rank  Domains             Hashtags              Mentioned Twitter Accounts
1     trib.al             debates               @realdonaldtrump
2     thehill.com         fns                   @hillaryclinton
3     twitter.com         thisweek              @abc
4     dlvr.it             cascademallshooting   @wsj
5     nbcnews.com         burlington            @foxnewssunday
6     vox.com             factcheckfriday       @heidiprzybyla
7     newsweek.com        myvote                @elizacollins1
8     latimes.com         electionday           @mike_pence
9     newyorker.com       cnnkffpoll            @svdate
10    cnn.com             partypeople           @huffpostpol

The first presidential debate took place on September 26, 2016, and set the record as the most-watched debate in United States history, with 84 million viewers. As the first debate date approached, individuals, groups, and news media made endorsements of a presidential candidate, and each endorsement became news. Bono, the lead singer of the band U2, reconfirmed his endorsement of Hillary Clinton in an interview with "CBS This Morning" that aired on September 20, 2016. The New York Times endorsed Hillary Clinton for president on September 24, 2016. Table 2 shows examples of such news coverage produced by various news media and disseminated by media Twitter accounts during this time period. Examples of news page headlines directed to by URLs distributed in Tweets include "Bono slams Trump during concert" (thehill.com), "Civil rights museum denies Trump visit request" (thehill.com), and "Clinton-Trump Race Narrows on the Doorstep of the Debates (POLL)" (abcnews.go.com).


Table 2

Top URLs in Tweet (Time 1)

Rank  URLs                      Headlines
1     thehill.com               State rep. under fire for slamming Kaepernick in tweet about Marlins pitcher's death
2     thehill.com               Bono slams Trump during concert
3     thehill.com               Civil rights museum denies Trump visit request
4     thehill.com               Olivia Wilde announces she's having a daughter in a tweet slamming Trump
5     thehill.com               Trump camp slams 'ultra-liberal' NYT endorsement of Clinton
6     abcnews.go.com            Clinton-Trump Race Narrows on the Doorstep of the Debates (POLL)
7     thehill.com               Trump vs. Clinton: Debate of the century gets wilder
8     www.washingtonpost.com    Poll: Clinton, Trump in virtual dead heat on eve of first debate
9     www.newsweek.com          It's Hillary Clinton's Election To Lose
10    thehill.com               Ballots cast by dead voters in Colorado: report

Table 3 provides a summary of word and word-pair frequency. The top 10 words on the

media Twitter accounts’ network were trump (n = 741), clinton (n = 442), donald (n = 284),

debate (n = 177), hillary (n = 165), new (n = 146), campaign (n = 104), obama (n = 102), poll (n

= 98), and presidential (n = 97). For the top 10 word pairs, donald-trump (n = 256) ranked first,

followed by hillary-clinton (n = 145), gary-johnson14 (n = 44), new-york (n = 42), presidential-

debate (n = 36), wsj-nbc15 (n = 29), trump-jr (n = 27), donald-trump's (n = 27), clinton-trump (n

= 25), and presidential-debates (n = 25). Words and word-pairs related to presidential candidates

14 Whether Gary Johnson, the Libertarian Party's presidential candidate, was going to join the first presidential debate came to the public's attention at that moment.
15 The word pair wsj-nbc reflected media attention toward the WSJ/NBC News polls.


(e.g., trump, clinton), debate (e.g., debate, presidential-debate), and campaign (e.g., new, campaign) appeared with high frequency.
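Word and word-pair counts of this kind could, for example, be produced with a routine like the minimal sketch below; the tokenization rule, the stopword set, and the sample Tweet are illustrative assumptions rather than the study's exact procedure.

import re
from collections import Counter

def word_and_pair_counts(tweet_texts, stopwords=frozenset()):
    """Tally word frequencies and adjacent word-pair frequencies across Tweet texts."""
    words, pairs = Counter(), Counter()
    for text in tweet_texts:
        tokens = [t for t in re.findall(r"[a-z']+", text.lower()) if t not in stopwords]
        words.update(tokens)
        pairs.update(zip(tokens, tokens[1:]))  # adjacent pairs, e.g., ("donald", "trump")
    return words, pairs

words, pairs = word_and_pair_counts(
    ["Donald Trump and Hillary Clinton spar at the first presidential debate"],
    stopwords={"and", "at", "the"})
print(words.most_common(10))
print(pairs.most_common(10))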

Table 3

Top 10 Word and Word-pair (Time 1)

Rank  Word          Count  Word Pairs             Count
1     trump         741    donald-trump           256
2     clinton       442    hillary-clinton        145
3     donald        284    gary-johnson           44
4     debate        177    new-york               42
5     hillary       165    presidential-debate    36
6     new           146    wsj-nbc                29
7     campaign      104    trump-jr               27
8     obama         102    donald-trump's         27
9     poll          98     clinton-trump          25
10    presidential  97     presidential-debates   25

Table 4 displays the key media Twitter accounts identified using three types of network centrality measures: betweenness, closeness, and eigenvector centrality. As shown in Table 4, the official campaign Twitter accounts, @realdonaldtrump and @hillaryclinton, remained two of the top 10 media Twitter accounts across all three types of measures. @realdonaldtrump, @hillaryclinton, @abcpolitics, @huffpostpol, @wsjpolitics, @usatoday2016, @foxnewspolitics, @nbcpolitics, and @wsj ranked high by betweenness centrality. All of the top 10 accounts were either television network Twitter accounts or print media Twitter accounts except @huffpostpol. When closeness centrality was measured, @wsjpolitics ranked first among media Twitter accounts, followed by @abcpolitics, @usatoday2016, @foxnewspolitics, @nbcpolitics, @huffpostpol, @postpolitics, and @wsj. @postpolitics entered the list when the closeness centrality measure was used, while it was not listed under the betweenness centrality measure. All the media Twitter accounts in the top 10 ranked by closeness centrality held the same ranks when eigenvector centrality was calculated (see Table 4).

Over the period of Time 1, @wsjpolitics, with high betweenness centrality, served as an influential bridge connecting account groups to each other besides the official campaign accounts. The results also show that @wsjpolitics was aggressive at linking to other influential accounts, holding the highest eigenvector centrality among media accounts at the same time. For example, @wsjpolitics mentioned @realdonaldtrump by tweeting "According to @WSJ tracker, has gone 13 hours without tweeting, a record for 2016 https://t.co/ZdEjl4hwXA by @jonkeegan" and also mentioned @hillaryclinton by retweeting "RT @PeterWSJ: Three ways @hillaryclinton and @realdonaldtrump can help themselves in the debate. https://t.co/hA6iN6k3Og via @WSJPolitics." All of the high-ranked media Twitter accounts across the different types of network centrality measures were either print media accounts (@usatoday2016, @postpolitics, @wsj) or television network accounts (@abcpolitics, @foxnewspolitics, @nbcpolitics). @huffpostpol was the only online news media account ranked in the top 10.


Table 4

Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness, and Eigenvector Centrality (Time 1)

Rank  Account            Betweenness Centrality  Account            Closeness Centrality  Account            Eigenvector Centrality
1     @realdonaldtrump   46.067                  @realdonaldtrump   0.048                 @realdonaldtrump   0.145
2     @hillaryclinton    28.833                  @hillaryclinton    0.040                 @hillaryclinton    0.123
3     @abcpolitics       15.600                  @wsjpolitics       0.040                 @wsjpolitics       0.108
4     @huffpostpol       12.000                  @abcpolitics       0.040                 @abcpolitics       0.101
5     @wsjpolitics       11.867                  @usatoday2016      0.037                 @usatoday2016      0.092
6     @usatoday2016      3.600                   @foxnewspolitics   0.037                 @foxnewspolitics   0.092
7     @foxnewspolitics   3.600                   @nbcpolitics       0.032                 @nbcpolitics       0.065
8     @nbcpolitics       3.333                   @huffpostpol       0.032                 @huffpostpol       0.055
9     @wsj               1.100                   @postpolitics      0.030                 @postpolitics      0.050
10    -                  -                       @wsj               0.029                 @wsj               0.044


Results at Time 2

During the period of Time 2 (27 September to 3 October 2016), in total, 3042 edges,

2717 sentiment words, and 2898 hyperlinks in Tweets were identified and analyzed. Of those

detected sentiment words (n = 2717), 41.8% (n = 1136) were words indicating positive sentiment and 58.2% (n = 1581) were words indicating negative sentiment. The top 10 domain hyperlink

frequency in Tweets was 2534, followed by the top 10 hashtag frequency in Tweets (n = 216)

and the number of top 10 Twitter accounts mentioned in Tweets (n = 148).

As shown in Table 5, popular domain hyperlinks in Tweets were thehill.com (n = 396),

twitter.com (n = 327), vox.com (n = 185), nbcnews.com (n = 130), newyorker.com (n = 112),

latimes.com (n = 100), snappytv.com16 (n = 93), and washingtonpost.com (n = 87). The top 10

hashtags were vpdebate (n = 90), debatenight (n = 42), debates (n = 36), decision2016 (n = 16), wsjmap17 (n = 9), fallcolors (n = 5), colorado (n = 5), greenmountainfalls (n = 5), tnyfest18 (n =

4), and fnpolitics19 (n = 4). @hillaryclinton (n = 45) was the most popular Twitter account

mentioned in the network, followed by @realdonaldtrump (n = 38), @abc (n = 20), @wsj (n =

13), @abcpolitics (n = 6), @timkaine (n = 6), @foxnews (n = 6), @atensnut20 (n = 5), @salon (n

= 5), and @mike_pence (n = 4).

16 snappytv.com is a live video platform enabling users to edit videos and share them on Twitter.
17 #wsjmap was used to direct Twitter users to the Wall Street Journal's map contest. The contest winner best guessed the 2016 U.S. presidential election results by coloring each state either blue or red. The webpage can be reached at http://graphics.wsj.com/elections/2016/election-map-quiz/#_.
18 #tnyfest is an abbreviation of The New Yorker Festival.
19 #fnpolitics indicates Fox News Politics.
20 The account @atensnut is owned by Juanita Broaddrick. Breitbart released a video clip interviewing Broaddrick. In the taped interview, she alleged that Bill Clinton raped her in April 1978.


Table 5

Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 2)

Rank  Domains              Hashtags             Mentioned Twitter Accounts
1     trib.al              vpdebate             hillaryclinton
2     thehill.com          debatenight          realdonaldtrump
3     twitter.com          debates              abc
4     vox.com              decision2016         wsj
5     dlvr.it              wsjmap               abcpolitics
6     nbcnews.com          fallcolors           timkaine
7     newyorker.com        colorado             foxnews
8     latimes.com          greenmountainfalls   atensnut
9     snappytv.com         tnyfest              salon
10    washingtonpost.com   fnpolitics           mike_pence

News media coverage primarily centered on discussing the first debate, which took place on September 26, 2016, and on predicting the results of the upcoming vice presidential debate, scheduled for October 4, 2016 (see Table 6). Examples of news page headlines include "The first debate featured an unprepared man repeatedly shouting over a highly prepared woman" (vox.com), "Hillary, interrupted" (conservativereview.com), and "Cover Story: Donald Trump Is Barry Blitt's 'Miss Congeniality'" (newyorker.com).


Table 6

Top URLs in Tweet (Time 2)

Rank  URLs                         Headlines
1     www.nbcnews.com              The Presidential Debates (News Story Aggregation Page)
2     www.nbcnews.com              The VP Debate (News Story Aggregation Page)
3     www.vox.com                  The First Debate Featured An Unprepared Man Repeatedly Shouting Over A Highly Prepared Woman
4     www.conservativereview.com   Malkin: Hillary, Interrupted - Conservative Review
5     www.vox.com                  The Debate Showed Just How Scary A Commander In Chief Donald Trump Would Be
6     www.vox.com                  Donald Trump's First Presidential Debate Confirmed He Has No Idea What He's Talking About
7     www.newyorker.com            Cover Story: Donald Trump Is Barry Blitt's "Miss Congeniality"
8     thehill.com                  Trump singles out non-Christians at rally
9     thehill.com                  Race breaking Clinton's way
10    thehill.com                  NY attorney general: Trump Foundation must stop fundraising now

Nearing the end of Time 2, breaking news stories came out. The New York Times revealed through investigative reporting on October 2, 2016, that Donald Trump could have avoided paying federal income tax for 18 years. The New York attorney general also announced on October 3, 2016, that he had sent a cease-and-desist letter to Donald Trump's foundation, an action prompted by The Washington Post's news coverage of the Trump Foundation published on September 10, 2016. A series of such reports began to affect media agendas from this time forward.

Table 7 shows the 10 most popular words for Time 2 were trump (n = 1195), clinton (n =

659), debate (n = 540), donald (n = 421), hillary (n = 233), presidential (n = 156), new (n = 154), pence (n = 149), more (n = 140), and trump’s (n = 138). For the top 10 word pairs, donald-trump

(n = 384) ranked first, followed by hillary-clinton (n = 192), presidential-debate (n = 113), mike-


pence (n = 76), tim-kaine (n = 57), vice-presidential (n = 48), clinton-trump (n = 45), post-debate

(n = 41), trump-clinton (n = 38), and vp-debate (n = 36). Along with words and word-pairs related to the presidential candidates (e.g., trump, clinton) and the past presidential debate (e.g., post-debate), a considerable number of keywords concerned the vice presidential candidates and debate event (e.g., mike-pence, tim-kaine, vice-presidential, vp-debate) as the vice presidential debate approached.

Table 7

Top 10 Word and Word-pair (Time 2)

Rank  Word          Count  Word Pairs            Count
1     trump         1195   donald-trump          384
2     clinton       659    hillary-clinton       192
3     debate        540    presidential-debate   113
4     donald        421    mike-pence            76
5     hillary       233    tim-kaine             57
6     presidential  156    vice-presidential     48
7     new           154    clinton-trump         45
8     pence         149    post-debate           41
9     more          140    trump-clinton         38
10    trump's       138    vp-debate             36

Table 8 displays the key media Twitter accounts ranked by three types of network centrality measures. As shown in Table 8, the official campaign Twitter accounts, @realdonaldtrump and @hillaryclinton, remained two of the top media Twitter accounts under most measures. The only exception was @wsjpolitics, which ranked above @realdonaldtrump when the closeness centrality measure was used. The media Twitter accounts ranked by betweenness centrality were @hillaryclinton, @realdonaldtrump, @wsjpolitics, @michellemalkin, @cnn, @abcpolitics, @foxnewspolitics, and @usatoday2016. It was worth noting that @michellemalkin, a political commentator account, was the only ranked account that was neither a print media nor a television network account. For the key media Twitter accounts ranked by closeness centrality, @hillaryclinton ranked first, followed by @wsjpolitics, @realdonaldtrump, @abcpolitics, @usatoday2016, @foxnewspolitics, @abc, @michellemalkin, @cnn, and @nytpolitics. The accounts with high eigenvector centrality were @hillaryclinton, ranking first, followed by @realdonaldtrump, @abcpolitics, @wsjpolitics, @usatoday2016, @foxnewspolitics, @abc, @michellemalkin, @nytpolitics, and @ezraklein. Compared to the top 10 accounts ranked by closeness centrality, it stood out that @ezraklein, a political commentator account, ranked in the top 10 with high eigenvector centrality along with @michellemalkin.

Within this time period, some print media accounts (@wsjpolitics, @usatoday2016, and @nytpolitics) and television network accounts (@abcpolitics, @foxnewspolitics, and @cnn) remained in the top 10. It was worth noting that @nytpolitics and @cnn were newly listed accounts compared to Time 1. @wsjpolitics in particular showed high betweenness centrality, meaning that it occupied a position controlling information flow. Located at the center of the network, @wsjpolitics was also capable of reaching other accounts quickly due to its high closeness centrality. Two political commentator accounts (@michellemalkin and @ezraklein) showed a stronger presence as influencers than the other non-print, non-television media accounts, such as online media or news magazine accounts.


Table 8

Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness, and Eigenvector Centrality (Time 2)

Rank  Account            Betweenness Centrality  Account            Closeness Centrality  Account            Eigenvector Centrality
1     @hillaryclinton    60.5                    @hillaryclinton    0.034                 @hillaryclinton    0.145
2     @realdonaldtrump   55.5                    @wsjpolitics       0.034                 @realdonaldtrump   0.123
3     @wsjpolitics       46.0                    @realdonaldtrump   0.033                 @abcpolitics       0.108
4     @michellemalkin    15.0                    @abcpolitics       0.029                 @wsjpolitics       0.101
5     @cnn               15.0                    @usatoday2016      0.029                 @usatoday2016      0.092
6     @abcpolitics       10.0                    @foxnewspolitics   0.029                 @foxnewspolitics   0.092
7     @foxnewspolitics   5.0                     @abc               0.026                 @abc               0.065
8     @usatoday2016      5.0                     @michellemalkin    0.024                 @michellemalkin    0.055
9     -                  -                       @cnn               0.024                 @nytpolitics       0.050
10    -                  -                       @nytpolitics       0.023                 @ezraklein         0.044


Results at Time 3

During the period of Time 3 (4 to 10 October 2016), in total, 1853 edges, 1663 sentiment

words, and 1979 hyperlinks in tweets were identified and analyzed. Of those detected sentiment

words (n = 1663), 34.9% (n=581) were words indicating positive sentiment and 65.1% (n=1082)

were words indicating negative sentiment. The number of the top 10 domain hyperlinks in

Tweets was 1633, followed by the number of top 10 twitter accounts mentioned in Tweets (n =

184) and the number of top 10 hashtags found in Tweets (n = 162). Table 9 displays that the

most popular hyperlink domains in Tweets were twitter.com (n = 306), latimes.com (n = 154),

vox.com (n = 132), snappytv.com (n = 101), washingtonpost.com (n = 86), newyorker.com (n =

72), nbcnews.com (n = 63), and newsweek.com (n = 51). The Top 10 hashtags were debate (n =

84), debates (n = 34), decision2016 (n = 13), hurricanematthew21 (n = 7), wsjmap (n = 6),

tnyarchive22 (n = 4), tnyfest (n = 4), vpdebate (n = 4), worldmentalhealthday (n = 3), and

breaking (n = 3). The most popular media Twitter account mentioned on the Tweeting network

was @realdonaldtrump (n = 83), followed by @hillaryclinton (n = 59), @abc (n = 9), @wsj (n = 8), @foxnewssunday (n = 6), @heidiprzybyla (n = 6), @elizacollins1 (n = 5), @mike_pence (n = 3), @svdate (n = 3), and @huffpostpol (n = 2).

21 Media Twitter accounts also discussed whether Hurricane Matthew, which was forecast to approach Florida at that moment, could have a significant impact on the 2016 presidential election, as it could affect voter registration and function as a leadership test for both candidates (e.g., hurricane-matthew).
22 #tnyarchive indicates the New Yorker archive.


Table 9

Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 3)

Rank  Domains              Hashtags               Mentioned Twitter Accounts
1     trib.al              debate                 @realdonaldtrump
2     twitter.com          debates                @hillaryclinton
3     latimes.com          decision2016           @abc
4     vox.com              hurricanematthew       @wsj
5     dlvr.it              wsjmap                 @foxnewssunday
6     snappytv.com         tnyarchive             @heidiprzybyla
7     washingtonpost.com   tnyfest                @elizacollins1
8     newyorker.com        vpdebate               @mike_pence
9     nbcnews.com          worldmentalhealthday   @svdate
10    newsweek.com         breaking               @huffpostpol

The vice presidential debate and the second presidential debate took place on October 4 and October 9, 2016, during the period of Time 3. Viewership went down from the more than 83 million viewers of the first debate to 66.5 million people tuned in to the second presidential debate (Thielman, 2016). The race got ugly even before the debate began. WikiLeaks leaked emails from Clinton campaign chairman John Podesta on October 7, 2016, and multiple news media published coverage about what the leaked emails revealed about Hillary Clinton's campaign (e.g., nytimes.com, time.com, bbc.com). The Washington Post also revealed a tape showing Donald Trump's views on women and released reporting about "Trump recorded having an extremely lewd conversation about women in 2005" on October 8, 2016. The information revealed by news media was used against each candidate at the debate and kept trending on social media, including Twitter.


Table 10 provides a list of the most popular URLs in Tweets during the period of Time 3.

Examples of news page headlines include "A Donald Trump presidency would bring

shame on this country” (vox.com), “Most Memorable Lines of the 2nd Presidential Debate”

(abcnews.go.com), and “Vice presidential debate updates: The winner? Our analysts say Pence came out ahead” (latimes.com).

Table 10

Top URLs in Tweet (Time 3)

Rank  URLs               Headlines
1     www.nbcnews.com    The Presidential Debates (News Story Aggregation Page)
2     www.vox.com        A Donald Trump presidency would bring shame on this country
3     abcnews.go.com     Most Memorable Lines of the 2nd Presidential Debate
4     www.vox.com        Megyn Kelly and 's feud over Donald Trump, explained
5     abcnews.go.com     5 Storylines to Watch at Tonight's Presidential Debate
6     www.vox.com        This is Donald Trump's worst nightmare
7     abcnews.go.com     2nd Presidential Debate: 11 Moments That Mattered
8     www.latimes.com    Vice presidential debate updates: The winner? Our analysts say Pence came out ahead
9     www.newyorker.com  Showtime's "The Circus" Offers Dark Lessons In Horse-Race Journalism
10    www.vox.com        Reports suggest Donald Trump is unhappy with Mike Pence for winning the VP debate

Table 11 displays the summary of popular words and word-pairs in Tweets. The top 10

popular words on the Tweeting network were trump (n = 853), debate (n = 371), clinton (n =

352), donald (n = 333), presidential (n = 108), hillary (n = 107), fact (n = 94), trump’s (n = 91),

pence (n = 86), and new (n = 83). The most popular word-pair on the Tweeting network was

donald-trump (n = 303), followed by hillary-clinton (n = 90), presidential-debate (n = 72), fact-


check (n = 68), mike-pence (n = 45), bill-clinton (n = 40), trump-clinton (n = 33), vice-

presidential (n = 30), donald-trump's (n = 28), and hurricane-matthew (n = 27). The ranked words and word-pairs show that Donald Trump's frontal attacks and communication style drew much more attention to political fact-checking and led to increased traffic at dedicated fact-checking sites, especially during and after the second presidential debate (e.g., fact, fact-check). Mike Pence, the Republican vice presidential candidate, remained in the ranking due to post-debate media coverage in Tweets (e.g., pence, mike-pence, vice-presidential).

Table 11

Top 10 Word and Word-pair (Time 3)

Rank  Word          Count  Word Pairs           Count
1     trump         853    donald-trump         303
2     debate        371    hillary-clinton      90
3     clinton       352    presidential-debate  72
4     donald        333    fact-check           68
5     presidential  108    mike-pence           45
6     hillary       107    bill-clinton         40
7     fact          94     trump-clinton        33
8     trump's       91     vice-presidential    30
9     pence         86     donald-trump's       28
10    new           83     hurricane-matthew    27

The key accounts ranked by betweenness centrality were @realdonaldtrump, @wsjpolitics, @hillaryclinton, @abcpolitics, @usatoday2016, @latimespolitics, and @nytpolitics. Most of the ranked media Twitter accounts were print media Twitter accounts (@wsjpolitics, @usatoday2016, @latimespolitics, @nytpolitics), while @abcpolitics was the only non-print media Twitter account (a television network Twitter account). The media Twitter


account with the highest closeness centrality was @realdonaldtrump, followed by @abcpolitics,

@hillaryclinton, @nytpolitics, @latimespolitics, @usatoday2016, @abc, @wsjpolitics, and

@wsj. The top 10 accounts remained the same when the eigenvector centrality measure was used: @realdonaldtrump ranked first, followed by @abcpolitics, @hillaryclinton, @nytpolitics,

@latimespolitics, @usatoday2016, @abc, @wsjpolitics, and @wsj (see Table 12).

Over the period of Time 3, all the ranked accounts were print media Twitter accounts except @abcpolitics and the two official campaign accounts. Considering that two official election debates (the vice presidential and the second presidential debate) took place during this time period, print media Twitter accounts seemed to become more dominant in the network. The difference between the two campaign accounts' betweenness centrality had grown since Time 2. @abcpolitics remained in the top 2 among media Twitter accounts, while @wsjpolitics dropped in rank when the closeness centrality and eigenvector centrality measures were used.

Such results show that @abcpolitics was bridging different account groups and important individual accounts and was also capable of distributing news at high speed due to its central location in the network, while @wsjpolitics took on a role more as a connector.


Table 12

Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness, and Eigenvector Centrality (Time 3)

Rank  Account            Betweenness Centrality  Account            Closeness Centrality  Account            Eigenvector Centrality
1     @realdonaldtrump   18.000                  @realdonaldtrump   0.100                 @realdonaldtrump   0.180
2     @wsjpolitics       7.000                   @abcpolitics       0.071                 @abcpolitics       0.150
3     @hillaryclinton    3.000                   @hillaryclinton    0.067                 @hillaryclinton    0.136
4     @abcpolitics       1.750                   @nytpolitics       0.067                 @nytpolitics       0.117
5     @usatoday2016      0.750                   @latimespolitics   0.067                 @latimespolitics   0.117
6     @latimespolitics   0.750                   @usatoday2016      0.067                 @usatoday2016      0.117
7     @nytpolitics       0.750                   @abc               0.067                 @abc               0.089
8     -                  -                       @wsjpolitics       0.067                 @wsjpolitics       0.074
9     -                  -                       @wsj               0.045                 @wsj               0.020
10    -                  -                       -                  -                     -                  -


Results at Time 4

During the period of Time 4 (11 to 17 October 2016), in total, 2112 edges, 1971

sentiment words, and 1910 hyperlinks in tweets were identified and analyzed. Of those detected

sentiment words (n = 1971), 36.6% (n=721) were words indicating positive sentiment and 63.4%

(n=1250) were words indicating negative sentiment. The top 10 hyperlink domain frequency in

tweets was 1716, followed by the number of top 10 accounts mentioned in tweets (n = 102) and top 10 hashtags embedded in tweets (n = 92).

As shown in the Table 13, the popular domain hyperlinks in Tweets were twitter.com (n

= 230), vox.com (n = 145), thehill.com (n = 139), latimes.com (n = 115), washingtonpost.com (n

= 96), newyorker.com (n = 87), newsweek.com (n = 59), and usatoday.com (n = 32). The top 10

hashtags were debate (n = 29), debates (n = 22), threeononedebate23 (n = 11), bluelivesmatter (n

= 7), foxnews2016 (n = 5), wsjmap (n = 5), breaking (n = 4), daddywillsaveus24 (n = 3),

voterfraudwhatvoterfraudohthatvoterfraud25 (n = 3), and waroncops (n = 3). For the popular

accounts mentioned by media Twitter accounts, @realdonaldtrump (n = 43) ranked first, followed by @hillaryclinton (n = 26), @abc (n = 8), @abcpolitics (n = 4), @elizacollins1 (n = 4), @mikememoli26 (n = 4), @wsj (n = 4), @djusatoday (n = 3), @coopallen27 (n = 3), and @heidiprzybyla (n = 3). Half of the most mentioned accounts (@elizacollins1, @djusatoday, @coopallen, and @mikememoli) were owned by political reporters working at news media organizations such as USA Today and the Los Angeles Times.

23 The candidate Donald Trump said during the debate that he felt like it was a "one-on-three" debate. #threeononedebate began trending during and after the debate.
24 "Daddy Will Save Us" was the first pro-Trump art show, opened in New York City. #DaddyWillSaveUs was used to promote the show.
25 @michellemalkin tweeted: "ICYMI: More #voterfraudwhatvoterfraudohTHATvoterfraud" on Oct. 12, 2016.
26 @mikememoli is owned by Mike Memoli, a reporter for the LA Times.
27 @coopallen is owned by Cooper Allen, a journalist at USA TODAY.


Table 13

Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweets (Time 4)

Rank  Domains              Hashtags                                   Mentioned Twitter Accounts
1     trib.al              debate                                     @realdonaldtrump
2     twitter.com          debates                                    @hillaryclinton
3     vox.com              threeononedebate                           @abc
4     thehill.com          bluelivesmatter                            @abcpolitics
5     latimes.com          foxnews2016                                @elizacollins1
6     dlvr.it              wsjmap                                     @mikememoli
7     washingtonpost.com   breaking                                   @wsj
8     newyorker.com        daddywillsaveus                            @djusatoday
9     newsweek.com         voterfraudwhatvoterfraudohthatvoterfraud   @coopallen
10    usatoday.com         waroncops                                  @heidiprzybyla

While the third and final debate date was approaching, the 2016 presidential election topped most discussions on Twitter and elsewhere, and the election campaigns grew more and more heated. The New York Times released a news report on October 12, 2016, quoting two women saying Donald Trump had touched them inappropriately. Trump's presidential campaign immediately denied the accusations, called The Times article "fiction," and specifically accused the newspaper of "coordinated character assassination" in an attempt to swing the race to his Democratic opponent, Hillary Clinton. The Palm Beach Post, People magazine, , and BuzzFeed also published articles reporting similar behavior by Donald Trump in the past. On the other hand, Reuters reported that a senior official at the US State Department tried to push the Federal Bureau of Investigation (FBI) in 2015 into dropping its insistence that an email from Hillary Clinton's private server contained classified information, according to summaries of interviews with FBI officials released by the agency on October 17, 2016. Donald Trump had accused her of jeopardizing national security while she was secretary


of state from 2009 to 2013. Table 14 provides a list of the most popular URLs in Tweets leading

to external news pages. Headline examples include "Donald Trump's problem isn't a

conspiracy. It's him.” (vox.com), “Mike Pence and The Revolution” (newyorker.com), and

“Michael Moore to debut surprise Trump film” (thehill.com).

Table 14

Top URLs in Tweet (Time 4)

Rank  URLs               Headlines
1     www.vox.com        Donald Trump's problem isn't a conspiracy. It's him.
2     www.newyorker.com  The New Yorker Radio Hour: Episode 52: Mikhail Baryshnikov, T. C. Boyle, and Germany's Kriegskinde
3     www.newyorker.com  Mike Pence And The Revolution
4     www.latimes.com    Campaign 2016 updates: Trump denies groping allegations, blames media and Clinton campaign
5     thehill.com        Michael Moore to debut surprise Trump film
6     graphics.wsj.com   Guess the 2016 Electoral College Map
7     www.vox.com        A competent woman just debated a man who has no idea what he's talking about
8     www.vox.com        27 charts that will change how you think about the American economy
9     www.vox.com        The Obamacare problem that Democrats don't want to talk about
10    www.vox.com        Hillary Clinton is proposing a policy to tackle deep poverty

Table 15 shows the word and word-pair frequency observed for the period of Time 4.

The top 10 words were trump (n = 971), clinton (n = 417), donald (n = 360), debate (n = 181), hillary (n = 181), new (n = 135), campaign (n = 124), trump’s (n = 102), obama (n = 96), and women (n = 78). For the top 10 word pairs on the Tweeting network, donald-trump (n = 328)


ranked first, followed by hillary-clinton (n = 116), paul-ryan28 (n = 37), clinton-campaign (n =

35), michelle-obama29 (n = 34), presidential-debate (n = 33), donald-trump's (n = 27), wsj-nbc (n

= 24), bill-clinton (n = 23), and trump-clinton (n = 19).

Table 15

Top 10 Word and Word-pair (Time 4)

Rank  Word      Count  Word Pairs           Count
1     trump     971    donald-trump         328
2     clinton   417    hillary-clinton      116
3     donald    360    paul-ryan            37
4     debate    181    clinton-campaign     35
5     hillary   181    michelle-obama       34
6     new       135    presidential-debate  33
7     campaign  124    donald-trump's       27
8     trump's   102    wsj-nbc              24
9     obama     96     bill-clinton         23
10    women     78     trump-clinton        19

In Table 16, the key media Twitter accounts ranked by three types of network centrality measures were as follows. The media Twitter accounts ranked by betweenness centrality were

@realdonaldtrump, @hillaryclinton, @nytpolitics, @latimespolitics, @abcpolitics,

@foxnewspolitics, @usatoday2016, and @abc. Most of the ranked accounts were print media

Twitter accounts (@latimespolitics, @usatoday2016, @nytpolitics) and television network

Twitter accounts (@abc, @abcpolitics, @foxnewspolitics). For closeness centrality,

@realdonaldtrump ranked first, followed by @hillaryclinton, @abcpolitics, @abc,

28 Donald Trump attacked Speaker Paul D. Ryan and other "disloyal" Republicans who opposed him on Twitter (e.g., paul-ryan).
29 Michelle Obama commented on Donald Trump's words about women while speaking at a Hillary Clinton event in New Hampshire on October 13, 2016.


@latimespolitics, @nytpolitics, @foxnewspolitics, and @usatoday2016. Notably, @cnnpolitics ranked in the top 10 when closeness and eigenvector centrality were measured. As shown in Table 16, the top 10 accounts remained the same when the eigenvector centrality measure was used: @realdonaldtrump ranked first, followed by @hillaryclinton, @abcpolitics, @abc, @latimespolitics, @nytpolitics, @foxnewspolitics, @usatoday2016, and @cnnpolitics.

Over the period of Time 4, almost half of the ranked media accounts were print media Twitter accounts (@nytpolitics, @latimespolitics, @usatoday2016) and the other half were television network accounts (@abcpolitics, @foxnewspolitics, @cnnpolitics). Online media, news magazine, and political commentator accounts were not on the list. @nytpolitics showed the highest betweenness centrality while @abcpolitics showed the highest closeness and eigenvector centrality among media Twitter accounts.


Table 16

Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness, and Eigenvector Centrality (Time 4)

Rank  Account            Betweenness Centrality  Account            Closeness Centrality  Account            Eigenvector Centrality
1     @realdonaldtrump   14.000                  @realdonaldtrump   0.111                 @realdonaldtrump   0.180
2     @hillaryclinton    7.000                   @hillaryclinton    0.091                 @hillaryclinton    0.166
3     @nytpolitics       0.333                   @abcpolitics       0.077                 @abcpolitics       0.123
4     @latimespolitics   0.333                   @abc               0.077                 @abc               0.101
5     @abcpolitics       0.333                   @latimespolitics   0.071                 @latimespolitics   0.095
6     @foxnewspolitics   0.333                   @nytpolitics       0.071                 @nytpolitics       0.095
7     @usatoday2016      0.333                   @foxnewspolitics   0.071                 @foxnewspolitics   0.095
8     @abc               0.333                   @usatoday2016      0.071                 @usatoday2016      0.095
9     -                  -                       @cnnpolitics       0.063                 @cnnpolitics       0.049
10    -                  -                       -                  -                     -                  -


Results at Time 5

During the period of Time 5 (18 to 24 October 2016), the third and final presidential

debate took place on October 19, 2016. In total, 2995 edges, 2540 sentiment words, and 3044

hyperlinks in tweets were identified and analyzed. Of those detected sentiment words (n =

2540), 40.5% (n= 1028) were words indicating positive sentiment and 59.5% (n=1512) were words indicating negative sentiment. The top 10 domain hyperlink frequency in tweets was

2286, followed by the number of top 10 hashtags embedded in tweets (n = 405) and top 10

Twitter accounts mentioned in tweets (n = 353).

Table 17 shows that the popular domain hyperlinks in Tweets were thehill.com (n = 466), twitter.com (n = 257), newsweek.com (n = 157), latimes.com (n = 129), vox.com (n = 122),

newyorker.com (n = 105), foxnews.com (n = 65), and washingtonpost.com (n = 63). The top 10

hashtags were debate (n = 214), foxnews2016 (n = 78), debatenight (n = 52), decision2016 (n =

34), debates (n = 6), alsmithdinner30 (n = 5), breaking (n = 4), trumpbookreport31 (n = 4),

adiosjeb32 (n = 4), and stopcommoncore33 (n = 4). @realdonaldtrump (n = 123) was the most popular media Twitter account mentioned on the Tweeting network, followed by @hillaryclinton (n = 98), @foxnews (n = 80), @abc (n = 10), @wsj (n = 10), @howardkurtz34 (n = 9), @wsjpolitics (n = 7), @foxbusiness (n = 6), @elizacollins1 (n = 5), and @djusatoday35 (n = 5).

30 #alsmithdinner indicates the Alfred E. Smith Memorial Foundation Dinner, commonly known as the Al Smith Dinner. In this annual fundraiser, Hillary Clinton and Donald Trump took turns roasting each other.
31 #trumpbookreport began trending after a St. Louis alderman and candidate for mayor, Antonio French, tweeted that Trump's foreign policy debate answers sounded like "a book report from a teenager who hasn't read the book."
32 The candidate Donald Trump retweeted a message mocking Jeb Bush, "ADIOS, JEB aka JOSÉ," from one of his followers. Then, on October 19, 2016, @michellemalkin tweeted: "Good riddance. #AdiosJeb #stopcommoncore #stopfeded"
33 #stopcommoncore indicates the Common Core opt-out movement, which is against the national Common Core State Standards for academics. The candidate Donald Trump said "Common Core is a total disaster. We can't let it continue." in a campaign ad on his website.
34 @howardkurtz is owned by Howard Kurtz, a Fox News analyst and the host of "MediaBuzz."
35 @djusatoday is owned by David Jackson, a White House correspondent for USA Today.


Table 17

Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 5)

Rank  Domains              Hashtags          Mentioned Twitter Accounts
1     trib.al              debate            @realdonaldtrump
2     thehill.com          foxnews2016       @hillaryclinton
3     twitter.com          debatenight       @foxnews
4     newsweek.com         decision2016      @abc
5     latimes.com          debates           @wsj
6     vox.com              alsmithdinner     @howardkurtz
7     dlvr.it              breaking          @wsjpolitics
8     newyorker.com        trumpbookreport   @foxbusiness
9     foxnews.com          adiosjeb          @elizacollins1
10    washingtonpost.com   stopcommoncore    @djusatoday

News media kept digging into Donald Trump's anti-woman agenda and Hillary Clinton's email scandal. In response to a heated political battle over these agendas and public sentiment, the third and final debate became a field for contesting the candidates' policy agendas. Donald Trump especially had to resolve his controversial positions on immigration, race relations, and foreign policy while he was embroiled in multiple sexual assault allegations and trailing Clinton in the polls. Consequently, the post-debate news coverage was dominated by Donald Trump and his words, including "locker room banter" in characterizing the taped conversation about women, "bad hombres" in talking about his immigration plan, and "such a nasty woman" in responding to

Hillary Clinton’s plan on tackling debt and entitlements if she were to become president.

Table 18 shows the popular URLs in tweets during the period of Time 5. The news page

headline examples include "Hillary Clinton's 3 debate performances left the Trump

campaign in ruins.” (vox.com), “Fight night in Las Vegas: High stakes for Trump, Clinton and

Chris Wallace” (foxnews.com), and “Why Clinton went for the kill” (thehill.com).


Table 18

Top URLs in Tweet (Time 5)

Rank  URLs             Headlines
1     www.vox.com      Hillary Clinton's 3 debate performances left the Trump campaign in ruins
2     www.foxnews.com  Fight night in Las Vegas: High stakes for Trump, Clinton and Chris Wallace
3     thehill.com      WikiLeaks releases messages from Obama
4     thehill.com      Why Clinton went for the kill
5     thehill.com      National political director steps back from Trump campaign
6     thehill.com      WikiLeaks posts vague warning shot at Kaine, Brazile
7     thehill.com      Huckabee to Biden: Trump can land a 'face kick'
8     thehill.com      Trump campaign advisers went to strip club with news staffers: report
9     thehill.com      PA GOP files lawsuit over election day poll watchers
10    thehill.com      Michael Moore: Anyone voting for Trump is a 'legal terrorist'

During the period of Time 5, as shown in Table 19, the top 10 popular words on the

Tweeting network were trump (n = 1258), clinton (n = 714), debate (n = 493), donald (n = 376), hillary (n = 277), election (n = 233), campaign (n = 145), new (n = 140), obama (n = 125), and

realdonaldtrump (n = 122). For the top 10 popular word-pairs, donald-trump (n = 329) ranked

first, followed by hillary-clinton (n = 181), presidential-debate (n = 72), clinton-trump (n = 56), debate-foxnews2016 (n = 48), final-debate (n = 47), foxnews-realdonaldtrump (n = 45), election-results (n = 39), trump-clinton (n = 37), and final-presidential (n = 35). The elicited words and word-pairs show that the third debate was at the center of the heated online news media network. Additionally, chris-wallace, foxnews-hillaryclinton, supreme-court, rigged-election, open-borders, nasty-woman, and accept-election did not rank in the top 10 but appeared to be mentioned frequently.


Table 19

Top 10 Word and Word-pair (Time 5)

Rank  Word             Count  Word Pairs                Count
1     trump            1258   donald-trump              329
2     clinton          714    hillary-clinton           181
3     debate           493    presidential-debate       72
4     donald           376    clinton-trump             56
5     hillary          277    debate-foxnews2016        48
6     election         233    final-debate              47
7     campaign         145    foxnews-realdonaldtrump   45
8     new              140    election-results          39
9     obama            125    trump-clinton             37
10    realdonaldtrump  122    final-presidential        35

The media Twitter accounts ranked by betweenness centrality were @hillaryclinton,

@realdonaldtrump, @latimespolitics, @huffpostpol, @abcpolitics, @wsjpolitics, @cnnpolitics,

@usatoday2016, and @foxnewspolitics. Most of the ranked accounts were print media accounts (@latimespolitics, @wsjpolitics, @usatoday2016) and television network accounts (@abcpolitics, @cnnpolitics, @foxnewspolitics) (see Table 20). @huffpostpol, an online partisan media account, was the only non-print and non-television network account ranked in the top 10.

For the top 10 accounts ranked by closeness centrality, @hillaryclinton ranked the highest, followed by @realdonaldtrump, @latimespolitics, @wsjpolitics, @huffpostpol, @abcpolitics,

@foxnewspolitics, @usatoday2016, @nytpolitics, and @michellemalkin. Note that another print media account, @nytpolitics, and one political commentator account, @michellemalkin, entered the list when the closeness centrality measure was used, while they were not listed under the betweenness centrality measure. As shown in Table 20, all the media Twitter accounts from the list of the top 10 ranked by closeness centrality remained in the same ranking when eigenvector centrality was calculated.

Over the period of Time 5, the official campaign account @hillaryclinton surpassed the rival Republican candidate Donald Trump's campaign account, @realdonaldtrump, in terms of popularity on the media Twitter network across all three types of network centrality. This time period was the only time that @hillaryclinton ranked higher than @realdonaldtrump. Another thing to note is that an online partisan media account, @huffpostpol, and a political commentator account, @michellemalkin, ranked alongside the campaign, print, and television network media Twitter accounts.


Table 20

Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness, and Eigenvector Centrality (Time 5)

Rank  Account            Betweenness Centrality  Account            Closeness Centrality  Account            Eigenvector Centrality
1     @hillaryclinton    69.500                  @hillaryclinton    0.036                 @hillaryclinton    0.160
2     @realdonaldtrump   55.500                  @realdonaldtrump   0.033                 @realdonaldtrump   0.111
3     @latimespolitics   32.000                  @latimespolitics   0.029                 @latimespolitics   0.080
4     @huffpostpol       18.000                  @wsjpolitics       0.028                 @wsjpolitics       0.078
5     @abcpolitics       18.000                  @huffpostpol       0.028                 @huffpostpol       0.078
6     @wsjpolitics       18.000                  @abcpolitics       0.028                 @abcpolitics       0.078
7     @cnnpolitics       16.000                  @foxnewspolitics   0.026                 @foxnewspolitics   0.073
8     @usatoday2016      2.000                   @usatoday2016      0.026                 @usatoday2016      0.073
9     @foxnewspolitics   2.000                   @nytpolitics       0.023                 @nytpolitics       0.043
10    -                  -                       @michellemalkin    0.023                 @michellemalkin    0.043


Results at Time 6

During the period of Time 6 (25 to 31 October 2016), in total, 2150 edges, 1717

sentiment words, and 1949 hyperlinks in tweets were identified and analyzed. Of those detected

sentiment words (n = 1717), 43.5% (n= 747) were words indicating positive sentiment and

56.5% (n=970) were words indicating negative sentiment. The top 10 domain hyperlink

frequency in tweets was 1754, followed by the number of top 10 Twitter account frequency in

tweets (n = 152) and the top 10 hashtag frequency in Tweets (n = 43).

The popular domain hyperlinks in Tweets were twitter.com (n = 231), thehill.com (n =

175), vox.com (n = 102), newyorker.com (n = 102), latimes.com (n = 102), usatoday.com (n =

78), washingtonpost.com (n = 51), and newsweek.com (n = 41). For the top 10 hashtags in

Tweets, tnyarchive (n = 8) ranked first, followed by climateofhate36 (n = 8), debate (n = 6), throwbackthursday (n = 6), ripvine (n = 3), namethattune (n = 3), handsoffmychildren (n = 3), debatenight (n = 2), wsjmap (n = 2), and 2forthepriceof1 (n =2). The most popular account mentioned by media Twitter accounts was @realdonaldtrump (n = 50), followed by

@hillaryclinton (n = 44), @wsj (n = 11), @elizacollins1 (n = 10), @dougmillsnyt37 (n = 8),

@wwcummings38 (n = 7), @abc (n = 7), @heidiprzybyla (n = 5), @djusatoday (n = 5), and

@hookjan39 (n = 5). More than half of the accounts was owned by journalists such as

@elizacollins1, @dougmillsnyt, @wwcummings, @heidiprzybyla, @djusatoday, and @hookjan

were included (see Table 21).

36 @michellemalkin tweeted “Hillary’s #climateofhate.”
37 @dougmillsnyt is owned by Doug Mills, a New York Times photographer.
38 @wwcummings is owned by William W. Cummings, a digital editor for USA TODAY.
39 @hookjan is owned by Janet Hook, a journalist for the Wall Street Journal.


Table 21

Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 6)

Rank  Domains              Hashtags             Mentioned Twitter Accounts
1     trib.al              tnyarchive           @realdonaldtrump
2     twitter.com          climateofhate        @hillaryclinton
3     thehill.com          debate               @wsj
4     vox.com              throwbackthursday    @elizacollins1
5     newyorker.com        ripvine              @dougmillsnyt
6     latimes.com          namethattune         @wwcummings
7     usatoday.com         handsoffmychildren   @abc
8     dlvr.it              debatenight          @heidiprzybyla
9     washingtonpost.com   wsjmap               @djusatoday
10    newsweek.com         2forthepriceof1      @hookjan

After the final debate took place on October 19, 2016, news media released coverage about how the debate affected undecided voters, widely ranging poll results, and maps of battleground states. Meanwhile, some news developments from the FBI and WikiLeaks had an important bearing on the presidential race. Multiple news media reported on October 28, 2016, that the FBI director had sent a letter to Congress saying the FBI had discovered emails in a separate investigation that could be connected to the now-closed probe of whether Clinton mishandled classified information, and that it would re-open the investigation into Hillary Clinton's email server. On October 31, 2016, WikiLeaks revealed another email from Democratic National Committee Chairwoman Donna Brazile to Hillary Clinton's camp, which showed that Brazile had provided questions in advance to Hillary Clinton's campaign during the Democratic primary debates on CNN. On the other hand, the New York Times reported on October 31, 2016, that Donald Trump used a "legally dubious" accounting maneuver to avoid reporting hundreds of millions of dollars in income.


Table 22 shows popular URL hyperlinks in Tweets during the period of Time 6. The news page headline examples include "Clinton's New Hampshire Lead Widens as Nevada

Remains a Dead Heat.” (wsj.com), “Poll: Clinton builds lead in divided nation worried about

Election Day violence” (usatoday.com), and “Ex-Miss Finland: Trump groped me” (thehill.com).

Table 22

Top URLs in Tweet (Time 6)

Rank  URLs              Headlines
1     www.facebook.com  Post not accessible (NYT Politics daily forecast on Facebook Messenger)
2     www.wsj.com       Clinton's New Hampshire Lead Widens as Nevada Remains a Dead Heat
3     www.usatoday.com  Poll: Clinton builds lead in divided nation worried about Election Day violence
4     thehill.com       Ex-Miss Finland: Trump groped me
5     www.wsj.com       Republicans Rode Waves of Populism Until They Crashed the Party
6     www.vox.com       21 maps and charts that will change how you think about the election
7     thehill.com       Johnson gets heated when pressed on tax policy
8     thehill.com       Report: Trump's son-in-law threatened Roger Ailes with Trump TV
9     www.vox.com       Trump booster Alex Jones: I'm not anti-Semitic, but Jews run an evil conspiracy
10    thehill.com       Trump aide reveals 'three major voter suppression operations'

As shown in Table 23, the word and word-pair frequency observed at Time 6 were as follows: The top 10 words were trump (n = 672), clinton (n = 544), donald (n = 220), election (n

= 201), hillary (n = 197), new (n = 150), campaign (n = 135), fbi (n = 106), up (n = 85), and email (n = 78). For the top 10 word pairs, donald-trump (n = 201) ranked first, followed by hillary-clinton (n = 151), election-day (n = 26), clinton-campaign (n = 26), hillary-clinton's (n =


26), clinton-email (n = 24), clinton-trump (n = 24), north-carolina (n = 23), nasty-women (n =

23), and 2016-election (n = 20). The results show a considerable amount of popular words and

word-pairs were related to the primary media agendas at the moment, such as fbi, clinton-email,

and nasty-women.

Table 23

Top 10 Word and Word-pair (Time 6)

Rank  Word      Count  Word Pairs         Count
1     trump     672    donald-trump       201
2     clinton   544    hillary-clinton    151
3     donald    220    election-day       26
4     election  201    clinton-campaign   26
5     hillary   197    hillary-clinton's  26
6     new       150    clinton-email      24
7     campaign  135    clinton-trump      24
8     fbi       106    north-carolina     23
9     up        85     nasty-women        23
10    email     78     2016-election      20

The top accounts ranked by betweenness centrality were @hillaryclinton,

@realdonaldtrump, @wsjpolitics, @abcpolitics, @usatoday2016, @nytpolitics, @abc, and

@cnnpolitics. All of the ranked accounts were either print media Twitter accounts

(@wsjpolitics, @usatoday2016, @nytpolitics) or television network accounts (@abcpolitics,

@abc, @cnnpolitics) besides the campaign accounts (see Table 24). On the other hand,

@cnnpolitics showed the highest closeness centrality, followed by @cnn, @latimespolitics,

@hillaryclinton, @realdonaldtrump, @abcpolitics, @nytpolitics, @usatoday2016, @abc, and

@wsjpolitics. Note that @cnnpolitics, @cnn, and @latimespolitics ranked higher than the


campaign accounts. As shown in Table 24, for eigenvector centrality, @realdonaldtrump ranked first, followed by @abcpolitics, @abc, @postpolitics, @hillaryclinton, @nytpolitics, @usatoday2016, @natesilver538, @wsjpolitics, and @salon. The results show that @abcpolitics, @abc, and @postpolitics ranked higher than @hillaryclinton. The accounts @natesilver538 and @salon entered the top 10 ranking for the first time since the beginning of the observation.

Over the period of Time 6, it was print media and television network accounts that connected each media account group to other groups in the network with high betweenness centrality, similar to the other time periods of observation. However, the ranking turned around between campaign accounts and media Twitter accounts when closeness centrality was measured. Media accounts like @cnnpolitics and @latimespolitics, rather than the campaign accounts, were at the center of the Twitter political news network, meaning that those accounts might have quicker access to information at other vertices. An account with a lower mean distance to other accounts (high closeness centrality) might find that its news or agendas reach others in the network more quickly than the news of sources with a higher mean distance. On the other hand, when it comes to media Twitter accounts tending to link to important other accounts, the ranking turned around once again: @cnnpolitics and @latimespolitics were replaced by @abcpolitics and @postpolitics, and two media accounts, @natesilver538 and @salon, newly entered the rankings.


Table 24

Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness, and Eigenvector Centrality (Time 6)

Rank  Account            Betweenness Centrality  Account            Closeness Centrality  Account            Eigenvector Centrality
1     @hillaryclinton    24.500                  @cnnpolitics       0.500                 @realdonaldtrump   0.203
2     @realdonaldtrump   18.000                  @cnn               0.333                 @abcpolitics       0.137
3     @wsjpolitics       9.000                   @latimespolitics   0.333                 @abc               0.110
4     @abcpolitics       8.667                   @hillaryclinton    0.059                 @postpolitics      0.103
5     @usatoday2016      3.667                   @realdonaldtrump   0.056                 @hillaryclinton    0.101
6     @nytpolitics       3.667                   @abcpolitics       0.056                 @nytpolitics       0.100
7     @abc               1.500                   @nytpolitics       0.053                 @usatoday2016      0.100
8     @cnnpolitics       1.000                   @usatoday2016      0.053                 @natesilver538     0.067
9     -                  -                       @abc               0.048                 @wsjpolitics       0.036
10    -                  -                       @wsjpolitics       0.042                 @salon             0.033


Results at Time 7

During the period of Time 7 (1 to 7 November 2016), in total, 2316 edges, 1967

sentiment words, and 2107 hyperlinks in Tweets were identified and analyzed. Of those detected

sentiment words (n = 1967), 45.7% (n= 898) were words indicating positive sentiment and

54.3% (n=1069) were words indicating negative sentiment. The frequency of the top 10 domain

hyperlinks in Tweets was 1754, followed by the frequency of the Top 10 Twitter accounts

mentioned in Tweets (n = 152) and the top 10 hashtags embedded in Tweets (n = 43).

As shown in Table 25, the popular domain hyperlinks in Tweets were twitter.com (n =

322), vox.com (n = 173), newyorker.com (n = 123), thehill.com (n = 116), latimes.com (n = 87),

usatoday.com (n = 66), newsweek.com (n = 51), and fivethirtyeight.com40 (n = 28). The most

popular hashtag in Tweets was foxnews2016 (n = 6), followed by demdirtytricks41 (n = 6),

thisweek (n = 5), hillarysmaidservices42 (n = 5), x27 (n = 3), bluelivesmatter (n =3), headsup (n =

3), cmaawards50 (n = 3), wsjmap (n = 3), and x2014 (n = 2). For the popular account mentioned

in Tweets, @hillaryclinton (n = 57) ranked first, followed by @realdonaldtrump (n = 50),

@heidiprzybyla (n = 18), @wsjpolitics (n = 13), @wsj (n = 13), @abcpolitics (n = 10),

@elizacollins1 (n = 10), @dougmillsnyt (n = 8), @natesilver538 (n = 8), and

@colleenmnelson43 (n = 5).

40 fivethirtyeight.com is a website that focuses on opinion poll analysis, politics, etc. The website was founded on March 7, 2008, as a polling aggregation website with a blog created by analyst Nate Silver. 41 @michellemalkin tweeted: “Flashback #DemDirtyTricks How the Left fakes the hate” on Nov. 2, 2016. 42 @michellemalkin also tweeted: “#HillarysMaidServices Bleach the bathtub. BleachBit the e-mails.” on Nov. 6, 2016. 43 @colleenmnelson is owned by Collen Nelson, White House correspondent at The Wall Street Journal.

123

Table 25

Top Domains, Hashtags, and Mentioned Twitter Accounts in Tweet (Time 7)

Rank Mentioned Domains Hashtags Twitter Accounts 1 trib.al foxnews2016 @hillaryclinton 2 twitter.com demdirtytricks @realdonaldtrump 3 vox.com thisweek @heidiprzybyla 4 newyorker.com hillarysmaidservices @wsjpolitics 5 thehill.com x27 @wsj 6 dlvr.it bluelivesmatter @abcpolitics 7 latimes.com headsup @elizacollins1 8 usatoday.com cmaawards50 @dougmillsnyt 9 newsweek.com wsjmap @natesilver538 10 fivethirtyeight.com x2014 @colleenmnelson

During the final days before the election, both candidates, Hillary Clinton and Donald

Trump, made a combined total of 50 stops in 14 states. Hillary Clinton was accompanied by

celebrities such as Jon Bon Jovi, Bruce Springsteen, Jay-Z, Beyonce, and Lady Gaga, along with

Bill Clinton, Mr. Obama and Mrs. Obama to attract larger crowds. Donald Trump made almost

two more stops per day, on average, than Clinton did. News media followed each candidate’s

final stops in the last week before the election and released follow-up reporting about on-going agenda continued at the same time. On November 4, 2016, Fox News anchor Bret Baier apologized on air for his report that Hillary Clinton faces an indictment as the result of a federal investigation into the Clinton Foundation and for his report that Clinton's private email server had been hacked by five foreign intelligence agencies, which were made on November 2, 2016.

In succession, FBI Director James Comey informed Congress on Sunday that a review of new emails found in relation to the bureau's investigation into Hillary Clinton's use of a private email server had not yielded any reason for charges against the Democratic presidential nominee, on

124

November 6, 2016. On the other hand, multiple news sources including Newsweek and

Huffington post reported about Donald Trump’s child sex abuse claims on November 2, 2016.

In Table 26, the news page headline examples, directed by the popular URLs in Tweets, are including “The Quiet Ruthlessness Of The Clinton Campaign.” (newyorker.com), “Nate

Silver's model gives Trump an unusually high chance of winning. Could he be right?”

(vox.com), and “A FIFTH Clinton presidency? Hill, no!” (conservativereview.com).

Table 26

Top URLs in Tweet (Time 7)

Rank URLs Headlines 1 www.newyorker.com The Quiet Ruthlessness Of The Clinton Campaign The logic behind the 538 model's relatively pro- 2 twitter.com (Ezra Klein’s tweet) Trump results, explained: Nate Silver's model gives Trump an unusually high 3 www.vox.com chance of winning. Could he be right? Weak parties and strong partisanship are a bad 4 www.vox.com combination Donald Trump’s success reveals a frightening 5 www.vox.com weakness in American democracy 6 www.newyorker.com What Bill Clinton Should Do As First Gentleman 7 www.conservativereview.com A FIFTH Clinton presidency? Hill, no! 8 www.wsj.com New Poll Finds Close Race in Georgia The Republican civil war starts the day after the 9 www.vox.com election 10 www.vox.com Print newspapers are dying faster than you think

As shown in Table 27, the top 10 most popular words in the network at Time 7 were trump (n = 646), clinton (n = 508), donald (n = 262), election (n = 226), hillary (n = 220), new (n

= 188), campaign (n = 146), voters (n = 115), more (n = 111), and fbi (n = 93). For the top 10 word pairs, donald-trump (n = 232) ranked first, followed by hillary-clinton (n = 177), clinton-

125

email (n = 42), election-day (n = 31), hillary-clinton's (n = 27), president-obama (n = 27), north- carolina (n = 26), presidential-race (n = 26), donald-trump’s (n = 25), and early-voting (n = 22).

Table 27

Top 10 Word and Word-pair (Time 7)

Rank Word Count Word Pairs Count 1 trump 646 donald-trump 232 2 clinton 508 hillary-clinton 177 3 donald 262 clinton-email 42 4 election 226 election-day 31 5 hillary 220 hillary-clinton’s 27 6 new 188 president-obama 27 7 campaign 146 north-carolina 26 8 voters 115 presidential-race 26 9 more 111 donald-trump’s 25 10 fbi 93 early-voting 22

For the top 10 accounts ranked by betweenness centrality, @realdonaldtrump ranked

first, followed by @hillaryclinton, @abcpolitics, @wsjpolitics, @michellemalkin, @huffpostpol,

@latimespolitics, @usatoday2016, @nytpolitics, and @abc (see Table 28). Along with print

media Twitter accounts (@wsjpolitics, @latimespolitics, @usatoday2016, @nytpolitics) and

television network accounts (@abcpoltics, @abc), a political commentator account,

@michellemalkin, and an online partisan media account, @huffpostpol, entered the ranking.

The account with highest closeness centrality was @realdonaldtrump, followed by

@hillaryclinton, @abcpolitics, @wsjpolitics, @usatoday2016, @nytpolitics, @latimespolitics,

@abc, @cnnpolitics, and @huffpostpol. The account of @michellemalkin, which ranked in the

top 10 account having high betweenness centrality, was replaced by @cnnpolitics in the

closeness centrality rankings. As shown in Table 28, @hillaryclinton ranked first with highest

126 eigenvector centrality, followed by @realdonaldtrump, @abcpolitics, @latimespolitics,

@wsjpolitics, @usatoday2016, @nytpolitics, @abc, @cnnpolitics, and @natesilver538. The account of @huffpostpol, which ranked in the top 10 account having high closeness centrality, was replaced by @ natesilver538 in the Eigenvector centrality rankings.

Over the period of Time 7, different presences and rank order of accounts on each list were observed. For example, @hillaryclinton ranked higher than @realdonaldtrump only when eigenvector centrality measured. Such results may indicate that @realdonaldtrump tends to connect actively various accounts and be located at the center of the network by being frequently connected to others while @hillaryclinton selectively links to some important accounts on the network. Similarly, among political commentator accounts, @natesilver538 only ranked by eigenvector centrality while @michellemalkin was only ranked by betweenness centrality.

@michellemalkin is better at connecting groups even with different ideologies and taking a role as a connector in the network while @natesilver538 is more likely to reach important accounts purposefully. The results may also imply that different positioning in the network and Twitter account running strategy. Lastly, @abcpolitics ranked highest among media Twitter accounts across all the three lists using three types of network centrality measures. With this, different types of Twitter use even in the same media category such as @abcpolitics vs. @nytpolitics. The former media, @abcpolitics, strategically run the account utilizing hyperlink practices while the latter, @nytpolitics, is media adhering to traditional Journalists’ writing norms such as originality of news report.

127

Table 28

Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness, and Eigenvector Centrality (Time 7)

Rank Account Betweenness Account Closeness Account Eigenvector Centrality Centrality Centrality 1 @realdonaldtrump 64.167 @realdonaldtrump 0.037 @hillaryclinton 0.133 2 @hillaryclinton 44.000 @hillaryclinton 0.033 @realdonaldtrump 0.127 3 @abcpolitics 23.583 @abcpolitics 0.033 @abcpolitics 0.118 4 @wsjpolitics 18.750 @wsjpolitics 0.031 @latimespolitics 0.080 5 @michellemalkin 15.000 @usatoday2016 0.029 @wsjpolitics 0.072 6 @huffpostpol 15.000 @nytpolitics 0.029 @usatoday2016 0.068 7 @latimespolitics 4.667 @latimespolitics 0.028 @nytpolitics 0.068 8 @usatoday2016 3.750 @abc 0.027 @abc 0.062 9 @nytpolitics 3.750 @cnnpolitics 0.027 @cnnpolitics 0.054 10 @abc 2.833 @huffpostpol 0.025 @natesilver538 0.047

128

Results of Research Questions and Hypotheses

Analysis of Tweeting Network Change

RQ1 and H1 were designed to graph the changes in the volume of Tweeting during the

seven weeks preceding the 2016 U.S. presidential election. Figure 1 shows a time series

documenting the aggregates of edges of media Twitter accounts included in the analysis. The

number of edges on the Tweeting network increased, but showed a trend of fluctuating after each

presidential and vice presidential debate as follows: Time 1 (1627 edges); Time 2 (3042 edges);

Time 3 (1853 edges); Time 4 (2112 edges); Time 5 (2995 edges); Time 6 (2150 edges); and

Time 7 (2316 edges). Therefore, H1 was supported.

The debate effects on news Tweets. RQ2 and H2a were designed to see if the four

debates had an impact on the volume of tweeting on the network across different times points.

The first presidential debate took place on September 26, 2016; the vice-presidential debate and

the second presidential debate took place on October 4 and October 9, 2016, respectively; and

the third presidential debate took place on October 19, 2016. As shown in Figure 1, there was a

greater number of edges referring to the presidential election in the days following the U.S.

presidential candidates’ debates than prior to the debates. Therefore, H2a was supported.

Figure 1 also displays that the state of the network after each debate reflected the impact of the debates. Post-debate Tweets generated during and just after each debate were as likely to bring up the agendas covered in the debate, attempt to evaluate each candidates’ performance, and predict the reaction of voters. A similar increase appeared repeatedly in the Tweeting

network during the seven weeks, particularly when each debate was held. For example, a search

for each media Twitter accounts’ Tweets for candidates and debates returned a sharply increased

number of edges in the Tweets as compared to the number of edges in Tweets from the day

129

before each debate. The increase in the total edges on the network was 63.7% a day after the first

presidential debate, 79.1% a day after the second presidential debate, and 70.2% a day after the

third presidential debate. The increase for the vice presidential debate was 46.0%.

Figure 1. Aggregated Time Series of Total Edges on Twitter by Day.

Hyperlink frequency on the network. The hypotheses (H2b and H2c) were designed to see if there is a greater number of hyperlinks (including domains, hashtags, and mentioned accounts) in the days following each U.S. presidential debate than prior to them. Figure 2

displays that the number of each hyperlink fluctuated during the period of Times 1 through7

(seven weeks total) until the day before the presidential election. The number of domain hyperlinks in Tweets spiked particularly around the first and third presidential debates (at Time 2 and Time 5). The most popular type of hyperlinks used by media Twitter accounts was domain hyperlinks. Three types of hyperlinks (domains, hashtags, and mentioned account names)

130 included in the analysis fluctuated following similar patterns and appeared to be highly event- sensitive. This shows that hyperlinks used by media Twitter accounts rise in reaction to specific events (such as the presidential debates at Time 2 and Time 5), and therefore, H2b and H2c were supported.

Figure 2. Aggregated Time Series of Total Hyperlinks on Twitter by Week.

However, looking closely at the top 10 hashtags and mentioned accounts in Figure 3, all of them spiked at Time 5, while each had their second-biggest spikes at different times. Top 10 hashtags spiked at Time 2, when the first debate took place on September 26, 2016, while top 10 mentioned accounts spiked at Time 3, when the vice presidential debate and the second presidential debate took place. Another difference noticed is that the top 10 hashtags decreased more rapidly than the top 10 mentioned accounts did as the election approached.

131

Figure 3. Aggregated Time Series of Top 10 Hashtags and Top 10 Mentioned Accounts by Week.

Sentiment word frequency on the network. H2d was designed to see if there was a greater number of sentiment words in Tweets in the days following the U.S. presidential candidates’ debates than prior to the debates. Figure 4 displays the aggregated number of positive and negative sentiment words, and the lines show a similar pattern in changes of total edge. This shows that positive and negative sentiment words used by media Twitter accounts rise in reaction to specific events such as the presidential debates at Time 2 (after the first presidential debate) and Time 5 (after the third presidential debate). Therefore, the results supported the debate effects around two presidential debates but did not follow the expectation about the debate effects on sentiment word usage by media Twitter accounts for the rest of debates, vice-presidential debate (Time 3) and the second presidential debate (Time 4).

132

Figure 4. Aggregated Time Series of Sentiment Words Frequency on Twitter by Week.

Prevalent sentiment on the network. H2e was designed to see if there is a greater number of negative sentiment words in the Tweeting network than positive sentiment words as the election approached and the presidential candidates’ victory became more closely contested.

Figure 5 shows both negative and positive sentiment words were following the total edge line, which increased after the debates. The results re-confirmed H2d. Further, the line graph of negative sentiment words in Tweets was consistently above the line graph of positive sentiment words. Therefore, H2e was supported.

133

Figure 5. Aggregated Time Series of Negative and Positive Sentiment Words by Day.

134

Across different time points the number of negative sentiment words surpassed the number of positive sentiment words, but the gap between them decreased as Election Day approached. Figure 6 illustrates that the gap between positive and negative sentiment word percentage widened around Time 3 and Time 4, but then narrowed to 8.6% at Time 7.

Figure 6. Aggregated Time Series of Positive and Negative Sentiment Word Percentages.

Time-series of three indicators. H2f was designed to see to what extent the three types of indicators (total edges, hyperlinks, sentiment words) moved together across different times.

Figure 7 displays that the three indicators were fluctuating and revealing several patterns. First, the line graph of sentiment words was more volatile and spiked more dramatically, while total edge and hyperlinks showed slightly less variation across different times. Second, there was a gradual trend of increased media attention (edges) and use of hyperlinks and sentiment words as the election approached (H2f was supported). Additionally, total edges, hyperlinks, and the

135 number of positive and negative sentiment words in Tweets spiked within time periods when the first and third U.S. presidential debates took place.

Figure 7. Aggregated Time Series of Total Edges, Hyperlinks, and Sentiment Words.

136

Cross-Linking Across Different Media Twitter Accounts

RQ3 and a set of hypotheses (H3a-H3c) were designed to graph the temporal changes of

cross-links from a media Twitter account to another media Twitter account (mentioning) during

the seven weeks preceding the 2016 U.S. presidential election as compared to tweets generated

by one media Twitter account posting its own content (Tweets).

Debate effects on cross-linking practices. H3a and H3b were designed to see if there is

a greater number of cross-linking in tweets in the days following the U.S. presidential debates

than prior to them. Figure 8 shows that the number of cross-links spiked when the second and

final presidential debates took place at Time 3 and Time 5 (H3a was supported). News media

kept digging into Donald Trump’s misogynistic agenda (at Time 3) and Hillary Clinton’s email scandal (at Time 5). Figure 8 also displays that the number of cross-linking practices increased rapidly after the second and third debates (H3b was supported).

The most common types of cross-links were edges between media Twitter accounts and campaign Twitter accounts, followed by edges among different accounts owned by the same media (e.g., @wsj and @wsjpolitics). The links across different media Twitter accounts were the least common.

Crosslinking & sentiment. H3c was designed to see if there is a greater number of

positive sentiment words in cross-linking edges (mentions) than non-cross linking edges

(Tweets). Figures 9a and 9b display that cross-linking edges have a greater percentage of

positive sentiment words than negative sentiment words in them as compared to non-cross

linking edges.

137

Figure 8. Aggregated Time Series of Tweets and Cross-links.

138

Figure 9a. Aggregated Time Series of Percentage of Positive and Negative Sentiment Words in Tweets by Day.

Figure 9b. Aggregated Time Series of Percentage of Positive and Negative Sentiment Words in Mentions (Cross-links) by Day.

139

Media Twitter Accounts’ Use of Sentiment Words

A series of one-way ANOVAs was conducted to address RQ4 regarding media Twitter account type differences that existed in using positive and negative sentiment words in Tweets.

H4a was designed to see if media Twitter accounts that belong to traditional media (print media, television networks, and news magazines) used a greater amount of positive sentiment words than did non-traditional media (online partisan and non-partisan media and political

commentator). On the other hand, H4b was designed to see if media Twitter accounts that

belong to nontraditional media used a greater amount of negative sentiment words than

traditional media.

For positive sentiment words, ANOVA results indicated a significant mean difference,

F(5, 15124)=24.31, p <.001, across the six media Twitter account types. Online non-partisan

media reported the highest level of positive sentiment words usage (M=0.83, SD=0.74), followed

by print media (M=0.78, SD=0.77), political commentators (M=0.77, SD=0.89), news magazines

(M=0.66, SD=0.79), television networks (M=0.65, SD=0.74), and online partisan media

(M=0.52, SD=0.73). H4a was partially supported. Additionally, there were significant differences between online partisan media Twitter accounts (returning the lowest positive sentiment usage) and all other types of media Twitter accounts (See Table 29).

For negative sentiment words, ANOVA results also indicated a significant mean difference, F(5, 15127)=15.57, p < .001, across the six media Twitter account categories.

Political commentators reported the highest level of negative sentiment word usage (M=0.62,

SD=0.83), followed by news magazines (M=0.54, SD=0.74), online partisan media (M=0.50,

SD=0.72), print media (M=0.49, SD=0.73), television networks (M=0.45, SD=0.68), and online non-partisan media (M=0.44, SD=0.65). H4b was partially supported. There were significant

140 differences between political commentators’ Twitter accounts and all other types of media

Twitter accounts except between political commentators’ and news magazines’ Twitter accounts.

The mean difference between news magazines and online partisan media was also significant

(see Table 29). Overall, the mean differences between all media Twitter account categories were more distinct in positive sentiment word usage rather than negative sentiment word usage in tweets.

141

Table 29

Media Category Differences in Sentiment Words Frequency

Media Twitter Account M SD F df p Post Hoc analysis 1 2 3 4 5 6 Positive 1 Print Media 0.78 0.77 24.31 15129 .00 ** *** *** Sentiment Words 2 TV Network 0.65 0.74 *** * *** ** 3 News Magazine 0.66 0.79 *** * *** ** 4 Online Partisan 0.52 0.73 *** * * *** *** 5 Online Non-Partisan 0.83 0.74 *** *** *** 6 Political Commentator 0.77 0.89 ** ** *** Negative 1 Print Media 0.49 0.73 15.57 15129 .00 *** Sentiment Words 2 TV Network 0.45 0.68 *** 3 News Magazine 0.54 0.74 * 4 Online Partisan 0.50 0.72 * 5 Online Non-Partisan 0.44 0.65 * *** 6 Political Commentator 0.62 0.83 *** *** * *** Note. 1 = Print media; 2 = TV network; 3 = News Magazine; 4 = Online Partisan; 5 = Online Non-Partisan; 6 = Political Commentator; Post Hoc analyses (Scheffe) illustrated by * (p<.05), ** (p<.01), or *** (p<.001) superscripts

142

Network Analysis of Media Twitter Accounts

Network analysis has been conducted to determine the key media Twitter accounts and

the nature of their popularity on the network and to see if there were partisan differences or

media category differences (RQ5). Table 30 provides a summary of media Twitter accounts

ranked by three types of network centrality measures when every relationship through Times 1

through 7 was graphed in one network.

Table 31 provides overall information about differences in the Twitter media account

type distribution for each network centrality. For betweenness centrality, almost half of the

accounts were print media accounts (48.9 %; n=22), followed by television network accounts

(40 %; n=18), online partisan media (6.7 %; n=3), and political commentator accounts (4.4%;

n=2). For closeness centrality, half of the accounts were print media accounts (50 %; n=27),

followed by television network accounts (40.7 %; n=22), online partisan media (5.6 %; n=3), and

political commentators’ accounts (3.7%; n=2). On the other hand, when eigenvector centrality

was measured, a half of the accounts were print media accounts (50 %; n=27), followed by

television network accounts (35.2 %; n=19), political commentators’ accounts (9.3%; n=5), and online partisan media (5.6 %; n=3). Therefore, H5a was supported.

The results show that print media accounts were the most prevalent media category across all three types of network centrality rankings, followed by television network accounts, online non-partisan media accounts, and political commentators’ accounts. Regarding the ranking order, there are some parallels between the two rankings of betweenness centrality and closeness centrality, but the composite of the accounts in the eigenvector centrality ranking is somewhat distinct from the other two.

143

Table 30

Top 10 Vertices in the Tweeting Network, Ranked by Betweenness, Closeness, and Eigenvector Centrality: Time 1-7

Rank Account Betweenness Account Closeness Account Eigenvector Centrality Centrality Centrality 1 @realdonaldtrump 110.986 @realdonaldtrump 0.033 @hillaryclinton 0.133 2 @hillaryclinton 67.860 @hillaryclinton 0.029 @realdonaldtrump 0.127 3 @michellemalkin 32.427 @michellemalkin 0.025 @abcpolitics 0.118 4 @huffpostpol 21.582 @wsjpolitics 0.024 @latimespolitics 0.080 5 @wsjpolitics 18.681 @natesilver538 0.024 @wsjpolitics 0.072 6 @nbcpolitics 5.750 @abc 0.023 @usatoday2016 0.068 7 @ezraklein 5.260 @ezraklein 0.023 @nytpolitics 0.068 8 @cnn 4.917 @abcpolitics 0.023 @abc 0.062 9 @abc 4.582 @latimespolitics 0.023 @cnnpolitics 0.054 10 @latimespolitics 3.515 @huffpostpol 0.023 @natesilver538 0.047

144

The proportion of political commentator Twitter accounts in the eigenvector centrality ranking was relatively high as compared to those in the betweenness centrality and closeness centrality rankings. Additionally, it is worth noting that online non-partisan media and news magazine accounts were not in any top 10 ranking. These results account for several possibilities. If two types of media accounts, online non-partisan media, or news magazine accounts, were isolated rather than being connected to other media Twitter accounts, the importance of the accounts on the network cannot be evaluated enough even though they publish numerous tweets. Another possibility is that if those accounts are more likely to stick to the traditional style of news writing, the results possibly indicate the two types of media accounts’ low engagement on their Twitter accounts, enabling various hyperlinking practices.

Table 31

Distribution of Media Twitter Account Category

Betweenness Closeness Eigenvector

Centrality Centrality Centrality Print 48.9 % (n=22) 50 % (n=27) 50 % (n=27) TV Network 40 % (n=18) 40.7 % (n=22) 35.2 % (n=19) Online partisan 6.7 % (n=3) 5.6 % (n=3) 5.6 % (n=3) Political commentator 4.4% (n=2) 3.7 % (n=2) 9.3 % (n=5) Online non-partisan - - - News Magazine - - - Total 100% (n=45) 100 % (n=54) 100 % (n=54)

145

Tables 32, 33, and 34 display the top 10 media Twitter accounts ranked by three types of

network centrality measures for Times 1 through 7. Different types of network centrality measures resulted in different composites of key media accounts and ranking orders. The results show that each media Twitter account, ranked by different network centralities, are likely to take different social roles on the network and could possibly be caused by their distinctive Twitter account management strategies or critical events happening at given moment.

For example, on October 12, 2016, the New York Times released an investigative news report quoting two women as saying that Donald Trump had touched them inappropriately, which is at Time 4. @nytpolitics ranked third in the betweenness centrality ranking (see Table

32) while @nytpolitics ranked sixth both in closeness centrality (see Table 33) and eigenvector centrality (see Table 34). Moreover, @nytpolitics’s ranking climbed from fourth (Time 3) to third (Time 4) in the betweenness centrality ranking, while rankings in closeness centrality and eigenvector centrality dropped from fourth (Time 4) to sixth (Time 4). The results indicate that

@nytpolitics became a connector, bridging news communities after the breaking news as other accounts tend to refer to the news coverage of the New York Times.

There is another example. After the final debate took place on October 20, 2016,

WikiLeaks revealed another email from Donna Brazile, the Democratic Party Committee chairwoman, to Hillary Clinton’s camp on October 31, 2016. The hacked emails showed that

Brazile, who was also working as a commentator for CNN, provided questions in advance to

Hillary Clinton's campaign during the Democratic primary debates aired on CNN. CNN reacted immediately and disclosed that Donna Brazile’s resignation from the network was accepted on

October 14, 2016. At Time 6, @cnnpolitics and @cnn ranked first by closeness centrality (see

Table 33) while @cnnpolitics ranked eighth by betweenness centrality (see Table 32) and did not

146 rank in the top 10 by eigenvector centrality. @cnn was not ranked by either betweenness centrality or eigenvector centrality. Such results indicate that @cnn and @cnnpolitics were located in the center of the network (high closeness centrality) because they had the exclusive news source that other media Twitter accounts may want to reach.

147

Table 32

Top 10 Vertices, Ranked by Betweenness Centrality in Tweet: Time 1-7

Rank Time 1 Time 2 Time 3 Time 4 Time 5 Time 6 Time 7

1 @realdonaldtrump @hillaryclinton @realdonaldtrump @realdonaldtrump @hillaryclinton @hillaryclinton @realdonaldtrump

2 @hillaryclinton @realdonaldtrump @wsjpolitics @hillaryclinton @realdonaldtrump @realdonaldtrump @hillaryclinton

3 @abcpolitics @wsjpolitics @hillaryclinton @nytpolitics @latimespolitics @wsjpolitics @abcpolitics

4 @huffpostpol @michellemalkin @abcpolitics @latimespolitics @huffpostpol @abcpolitics @wsjpolitics

5 @wsjpolitics @cnn @usatoday2016 @abcpolitics @abcpolitics @usatoday2016 @michellemalkin

6 @usatoday2016 @abcpolitics @latimespolitics @foxnewspolitics @wsjpolitics @nytpolitics @huffpostpol

7 @foxnewspolitics @foxnewspolitics @nytpolitics @usatoday2016 @cnnpolitics @abc @latimespolitics

8 @nbcpolitics @usatoday2016 - @abc @usatoday2016 @cnnpolitics @usatoday2016

9 @wsj - - - @foxnewspolitics - @nytpolitics

10 ------@abc

148

Table 33

Top 10 Vertices, Ranked by Closeness Centrality in Tweet: Time 1-7

Rank Time 1 Time 2 Time 3 Time 4 Time 5 Time 6 Time 7

1 @realdonaldtrump @hillaryclinton @realdonaldtrump @realdonaldtrump @hillaryclinton @cnnpolitics @realdonaldtrump

2 @hillaryclinton @wsjpolitics @abcpolitics @hillaryclinton @realdonaldtrump @cnn @hillaryclinton

3 @wsjpolitics @realdonaldtrump @hillaryclinton @abcpolitics @latimespolitics @latimespolitics @abcpolitics

4 @abcpolitics @abcpolitics @nytpolitics @abc @wsjpolitics @hillaryclinton @wsjpolitics

5 @usatoday2016 @usatoday2016 @latimespolitics @latimespolitics @huffpostpol @realdonaldtrump @usatoday2016

6 @foxnewspolitics @foxnewspolitics @usatoday2016 @nytpolitics @abcpolitics @abcpolitics @nytpolitics

7 @nbcpolitics @abc @abc @foxnewspolitics @foxnewspolitics @nytpolitics @latimespolitics

8 @huffpostpol @michellemalkin @wsjpolitics @usatoday2016 @usatoday2016 @usatoday2016 @abc

9 @postpolitics @cnn @wsj @cnnpolitics @nytpolitics @abc @cnnpolitics

10 @wsj @nytpolitics - - @michellemalkin @wsjpolitics @huffpostpol

149

Table 34

Top 10 Vertices, Ranked by Eigenvector Centrality in Tweet: Time 1-7

Rank Time 1 Time 2 Time 3 Time 4 Time 5 Time 6 Time 7

1 @realdonaldtrump @hillaryclinton @realdonaldtrump @realdonaldtrump @hillaryclinton @realdonaldtrump @hillaryclinton

2 @hillaryclinton @realdonaldtrump @abcpolitics @hillaryclinton @realdonaldtrump @abcpolitics @realdonaldtrump

3 @wsjpolitics @abcpolitics @hillaryclinton @abcpolitics @latimespolitics @abc @abcpolitics

4 @abcpolitics @wsjpolitics @nytpolitics @abc @wsjpolitics @postpolitics @latimespolitics

5 @usatoday2016 @usatoday2016 @latimespolitics @latimespolitics @huffpostpol @hillaryclinton @wsjpolitics

6 @foxnewspolitics @foxnewspolitics @usatoday2016 @nytpolitics @abcpolitics @nytpolitics @usatoday2016

7 @nbcpolitics @abc @abc @foxnewspolitics @foxnewspolitics @usatoday2016 @nytpolitics

8 @huffpostpol @michellemalkin @wsjpolitics @usatoday2016 @usatoday2016 @natesilver538 @abc

9 @postpolitics @nytpolitics @wsj @cnnpolitics @nytpolitics @wsjpolitics @cnnpolitics

10 @wsj @ezraklein - - @michellemalkin @salon @natesilver538

150

The Temporal Dynamics of Intermedia Agenda-Setting

Intermedia agenda-setting among different media types. RQ 6 sought to determine

the extent to which each media Twitter account exerted an intermedia attribute agenda-setting

impact on other media.

First, Table 35 summarizes the results of Granger causality tests regarding negative

sentiment contagion. The results show that four Granger causal relationships were significant:

negative sentiment words created by the online non-partisan media accounts Granger caused negative sentiment words created print media accounts (F= 2.78, p < .05); the online partisan media Granger caused television network (F= 2.89, p < .05) and the political commentator (F=

3.31, p < .05); and political commentator Granger caused news magazine (F= 4.14, p < .01).

Since all the Granger causal relationships were caused by media Twitter accounts included in nontraditional media types, H6a was supported.

Apart from these four relationships, Granger causality did not return significant casual relationships. Additionally, it appears that Granger causality runs one-way from online non- partisan media to print media, from online partisan media to television networks, from online partisan media to political commentator, and from political commentator to news magazine and not the other way.

151

Table 35

Pairwise Granger Causality Test Results: Negative Sentiment on Tweeting Networks

X Online Print TV Online News Political Non- media networks Partisan Magazine Commentator Y Partisan

Print media - 1.36 2.11 2.78* 1.00 1.62

TV networks 1.58 - 2.89* 1.51 1.15 1.91

Online Partisan 1.40 0.71 - 0.68 0.69 0.88

Online Non- 0.83 0.71 1.19 - 1.55 0.66 Partisan

News 2.02 1.94 1.52 1.16 - 4.14** Magazine

Political 0.71 0.71 3.31* 1.36 1.15 - Commentator

Note. Lags = 6. Causality Relationship direction: X Granger Cause Y. * p < .05, ** p < .01

Table 36 provides the results of the Granger causality tests regarding positive sentiment contagion. The results returned that two Granger causal relationships were significant: positive sentiment words created by print media accounts Granger caused positive sentiment words created news magazine accounts (F= 3.21, p < .05); and online non-partisan media Granger

152 caused print media (F= 2.34, p =.05). Thus, H6b was not supported. Apart from these two relationships, the Granger causality tests did not return significant casual relationships.

Table 36

Pairwise Granger Causality Test Results: Positive Sentiment on Tweeting Networks X Online Print TV Online News Political Non- media networks Partisan Magazine Commentator Y Partisan

Print media - 1.43 2.07 2.34† 0.55 1.02

TV networks 0.80 - 1.31 0.57 0.78 1.12

Online Partisan 1.23 1.58 - 1.53 1.98 0.77

Online Non- 0.43 1.42 1.80 - 1.10 0.78 Partisan

News 3.21* 1.13 0.65 0.38 - 1.22 Magazine

Political 1.43 0.49 1.19 0.98 0.69 - Commentator

Note. Lags = 6. Causality Relationship direction: X Granger Cause Y. † p = .05, * p < .05, ** p < .01

153

Exploring how the sentiment contagion in the media Twitter account network over time

may reveal media account type differences. The results of the Granger causality tests were evidenced for the intermedia agenda influence among media Twitter account types (print media, television networks, political commentators, online partisan media, online non-partisan media, and news magazine). The empirical evidence was further supported by Figure 10 and 11.

Regarding negative sentiment, it was evident that online non-partisan media moved slightly ahead of print media, and online partisan media moved ahead of television network and political commentator, and political commentator moved ahead of news magazine (See Figure

10). On the other hand, regarding positive sentiment, Figure 11 shows bi-directional relationships. For example, print media moved ahead of online partisan media, and online non- partisan media moved ahead of print media.

Adding to different directions of causal relationships established among various media

Twitter account types, it was also found that negative or positive sentiment exerted different patterns in sentiment contagion with a comparison of Figure 10 and 11.

154

Figure 10. Aggregated Time Series of Negative Sentiment Words by Media Type.

155

Figure 11. Aggregated Time Series of Positive Sentiment Words by Media Type.

156

Intermedia agenda-setting among individual media Twitter accounts. RQ 7 sought

to determine the extent to which each media Twitter account with varied political ideologies

exerted an intermedia agenda-setting influence on other media Twitter accounts.

Print media Twitter accounts. Sentiment contagion among individual print media

Twitter accounts with various ideologies (@wsjpolitics, @postpolitics, @nytpolitics, and

@usatoday2016) was investigated using the Granger causality tests and graphical time-series

approach (see Table 37). The results of the Granger causality tests returned five significant

Granger causal relationships: negative sentiments in @wsjpolitics Granger caused both positive

sentiments (F= 2.76, p < .05) and negative sentiments (F= 3.33, p < .05) in @postpolitics; negative sentiments in @usatoday2016 Granger caused both positive sentiments (F= 2.31, p

= .05) and negative sentiments (F= 2.32, p = .05) in @wsjpolitics; and negative sentiments in

@nytpolitics Granger caused negative sentiments in @wsjpolitics (F= 2.38, p = .05).

Additionally, positive sentiments in @wsjpolitics Granger caused negative sentiments in

@wsjpolitics (F= 2.81, p < .05).

Such results support that the intermedia agenda-setting influence is multi-directional among conservative (@wsjpolitics), liberal (@postpolitics, @nytpolitics), and the least-biased

(@usatoday2016) media Twitter account. Figure 12 further supports these causal relationships found in the Granger causality test results. Additionally, the multiple line graphs of positive and negative sentiment in each print media Twitter account provides evidence on the negative sentiment leading the positive sentiment.

157

Table 37

Pairwise Granger Causality Test Results: Negative and Positive Sentiment on Print Media Tweeting Networks

X @usatoday @nytpolitics @postpolitics @wsjpolitics @usatoday @nytpolitics @postpolitics @wsjpolitics Y negative negative negative negative positive positive positive positive @usatoday - 1.73 0.38 1.54 1.03 0.56 0.63 0.87 negative

@nytpolitics 1.80 - 0.27 0.18 1.37 0.70 0.36 0.73 negative

@postpolitics 0.99 1.11 - 3.33* 0.75 1.24 1.05 1.26 negative

@wsjpolitics 2.32† 2.38† 0.75 - 1.69 2.23 1.41 2.81* negative

@usatoday 1.55 1.46 0.30 1.06 - 0.52 0.51 0.67 positive

@nytpolitics 2.02 0.27 1.21 1.19 2.17 - 1.25 0.56 positive

@postpolitics 1.92 2.14 0.86 2.76* 1.19 1.10 - 1.06 positive

@wsjpolitics 2.31† 1.77 1.02 1.46 1.90 0.64 1.75 - positive Note. Lags = 6. Causality Relationship direction: X Granger Cause Y. † p = .05, * p < .05, ** p < .01

158

Figure 12. Aggregated Time Series of Negative Sentiment Words of @usatoday2016, @nytpolitics, @postpolitics, and @wsjpolitics.

159

Television network Twitter accounts. Individual media Twitter account with various

ideologies (@cnnpolitics, @foxnewspolitics, and @abcnewspolitics) were included for causality analysis. As shown in Table 38, negative sentiments in @cnnpolitics Granger caused positive

sentiments (F= 2.50, p < .05) and negative sentiments (F= 2.90, p < .05) in @foxnewspolitics while positive sentiments in @cnnpolitics Granger caused positive sentiments in

@abcnewspolitics (F= 2.47, p = .05). At the same time, negative sentiments in @cnnpolitics were Granger caused by both positive (F= 3.26, p < .05) and negative sentiments (F= 2.86, p

< .05) in @abcnewspolitics. Such results show that Granger causality runs two-way from

@cnnpolitics to @abcnewspolitics, while Granger causality runs one-way from @cnnpolitics to

@foxnewspolitics. Additionally, negative sentiments in @foxnewspolitics Granger caused positive sentiments @foxnewspolitics (F= 11.96, p < .001) and the casual relationship was significant vice versa (F= 13.59, p < .001). The results confirmed two-way causal relationships for both sentiments between two liberal television network Twitter accounts (@cnnpolitics and

@abcnewspolitics). However, across different political ideologies, only a one-way causal relationship was found, from negative sentiments in @cnnpolitics to positive and negative sentiments in @foxnewspolitics. As with the print media Twitter accounts, there was only a negative sentiment-initiated sentiment contagion across different political ideologies.

These causal relationships found that utilizing Granger causality tests were identified in the aggregated line graphs (Figure 13). The causal relationships between a positive and negative sentiment in the same account (e.g., @abc_negative-@abc_positive and @fox_negative-

@fox_positve, see Figure 13) were especially noticeable.

160

Table 38

Pairwise Granger Causality Test Results: Negative and Positive Sentiment on Television Network Tweeting Networks

X @cnnpolitics @foxnewspolitics @abcnewspolitics @cnnpolitics @foxnewspolitics @abcnewspolitics Y negative negative negative positive positive positive @cnnpolitics - 1.05 2.86* 1.09 1.29 3.26* negative

@foxnewspolitics 2.90* - 0.28 0.44 13.59*** 0.31 negative

@abcnewspolitics 0.83 0.59 - 1.23 0.40 1.14 negative

@cnnpolitics 2.16 1.87 0.32 - 2.12 1.06 positive

@foxnewspolitics 2.50* 11.96*** 0.22 0.40 - 0.26 positive

@abcnewspolitics 2.06 0.66 0.73 2.47* 0.43 - positive Note. Lags = 6. Causality Relationship direction: X Granger Cause Y. † p = .05, * p < .05, ** p < .01, *** p < .001

161

Figure 13. Aggregated Time Series of Negative Sentiment Words of @cnnpolitics, @foxnewspolitics, and @abcnewspolitics.

162

Online media Twitter accounts. The causal relationships across online media Twitter accounts were not returned much (see Table 39). Online media Twitter accounts with various political ideologies (@drudgereport, @huffpostpol, and @thehill) were entered, and the Granger causality test results indicated that positive sentiment in @huffpostpol Granger caused positive sentiments in @thehill (F= 2.66, p < .05), and negative sentiments in @thehill Granger caused positive sentiments in @thehill (F= 2.39, p = .05). No causal relationship involving the conservative media Twitter account (@drudgereport) was found. As shown in Figure 14, the line graphs tend to move together compared to print media Twitter accounts and television network Twitter accounts.

Table 39

Pairwise Granger Causality Test Results: Negative and Positive Sentiment on Online Media Tweeting Networks

X @drudgereport @huffpostpol @thehill @drudgereport @huffpostpol @thehill Y negative negative negative positive positive positive @drudgereport negative - 0.83 0.32 0.47 0.53 0.36 @huffpostpol negative 1.58 - 0.69 1.41 0.43 0.79 @thehill negative 0.87 1.96 - 0.58 1.96 1.62 @drudgereport positive 0.35 0.81 0.91 - 0.78 0.78 @huffpostpol positive 0.83 0.48 0.66 0.93 - 0.78

@thehill † positive 0.70 1.58 2.39 0.41 2.66* - Note. Lags = 6. Causality Relationship direction: X Granger Cause Y. † p = .05, * p < .05, ** p < .01, *** p < .001

163

Figure 14. Aggregated Time Series of Negative Sentiment Words of @drudgereport, @huffpostpol, and @thehill.

164

Political commentator Twitter accounts. Similar to online media Twitter accounts, there were no significant causal relationships within political commentator Twitter accounts among the liberal (@ezraklein and @natesilver538) and conservative political commentator

Twitter accounts (@michellemalkin). These results indicated that online media and political commentator Twitter accounts did not have as much influence on intermedia agenda-setting across different political ideologies as with the print and television network media Twitter accounts (see Table 40). In Figure 15, the line graphs present similar spiking patterns regardless of political ideologies. For example, at Time 5, both sentiments, negative and positive, for each political commentator Twitter account increased simultaneously.

165

Table 40

Pairwise Granger Causality Test Results: Negative and Positive sentiment on Political Commentator Tweeting Networks

X @michellemalkin @ezraklein @ natesilver538 @michellemalkin @ ezraklein @ natesilver538 Y negative negative negative positive positive positive @michellemalkin - 0.23 1.02 0.71 0.17 1.02 negative

@ ezraklein 0.60 - 0.99 0.63 0.38 0.87 negative

@ natesilver538 1.34 0.38 - 1.33 0.47 0.28 negative

@michellemalkin 0.96 0.46 0.83 - 0.29 0.92 positive

@ ezraklein 0.57 0.55 1.02 0.65 - 0.66 positive

@ natesilver538 0.61 0.96 1.14 0.73 1.06 - positive Note. Lags = 6. Causality Relationship direction: X Granger Cause Y. † p = .05, * p < .05, ** p < .01, *** p < .001

166

Figure 15. Aggregated Time Series of Negative Sentiment Words of @michellemalkin, @ezraklein, and @natesilver538.

167

Agenda-Setters of Public Sentiment on the Twitter News Network

RQ 8 sought to determine the extent to which each individual media Twitter account

with varied political ideologies exerted an intermedia attribute agenda-setting impact on other

types of media Twitter accounts. To determine the temporal order among multiple time-series

data, Granger causality tests were conducted. As shown in Table 41, there were two Causality

Relationship directions, either from an individual media Twitter account to a group of media

Twitter accounts or from a group of media Twitter accounts to an individual media Twitter

account.

Regarding negative sentiment, the results show intermedia influence from @nytpolitics to

news magazine (F= 3.71, p < .01); from @cnnpolitics to online non-partisan media (F= 2.66, p

< .05); from @drudgereport to print media (F= 2.77, p < .05); from @drudgereport to political

commentator (F= 3.45, p < .05); from @drudgereport to television network (F= 4.82, p < .01);

from @michellemalkin to news magazine (F= 2.38, p = .05); and from @ezraklein to news

magazine (F= 3.89, p < .01). These were significant Granger causality relationships caused by

individual media Twitter accounts. On the other hand, intermedia influence caused by media

groups toward individual media Twitter accounts were from print media to @newsweek (F=

2.55, p < .05), @cnnpolitics (F= 2.34, p = .01), and @huffpostpol (F= 2.95, p < .05); from

television networks to @wsjpolitics (F= 2.57, p < .05); from online non-partisan media to

@wsjpolitics (F= 3.49, p < .01); and from online partisan media to @michellemalkin (F= 2.43, p

< .05) and @ezraklein (F= 4.26, p < .01). Figure 18 visually represented such intermedia agenda-setting dynamics found across media Twitter accounts and media categories.

168

Table 41

Granger Analysis between Individual Media Twitter Accounts and a Media Category : Negative Sentiment

Intermedia influence

From (X) To (Y) F Statistics

@nytpolitics News magazine 3.71**

@cnnpolitics Online non-partisan media 2.66*

@drudgereport Print media 2.77*

@drudgereport Political commentator 3.45*

@drudgereport Television network 4.82**

@michellemalkin News magazine 2.38†

@ezraklein News magazine 3.89**

Print media @newsweek 2.55*

Print media @cnnpolitics 2.34†

Print media @huffpostpol 2.95*

Television networks @wsjpolitics 2.57*

Online non-partisan media @wsjpolitics 3.49**

Online partisan media @michellemalkin 2.43*

Online partisan media @ezraklein 4.26**

Note. Lags = 6. Causality Relationship Direction: X Granger Cause Y. † p = .05, * p < .05, ** p < .01

169

Figure 16. Directed Granger Causality Graph: Negative Sentiment. The edges represent significant relationships found in Granger causal tests for each media Twitter account-media group pair comparison. The arrows denote the direction of Granger causality relationship. The size of nodes was adjusted to reflect the out-degree centrality44 calculated in the network.

44 Degree centrality is a simple count of the total number of connections linked to a node. Degree is the measure of the total number of edges connected to a particular node. For directed networks, when the direction of influences exist, in-degree is the number of connections that point inward at a node while out-degree is the number of connections that originate at a node and point outward to other vertices.

170

Regarding positive sentiment, the results summarized in Table 42 show intermedia

influence from @postpolitics to online partisan media (F= 2.58, p < .05); from @thehill to print

media (F= 2.33, p = .05); from @huffpostpol to online non-partisan media (F= 2.74, p < .05); from @newyorker to online partisan media (F= 3.11, p < .05); and from @newsweek to television network (F= 3.52, p < .01). These were significant Granger causality relationships caused by individual media Twitter accounts. On the other hand, intermedia influence caused by media groups toward individual media Twitter accounts were from print media to @newyorker

(F= 2.96, p < .05); from online non-partisan media to @ cnnpolitics (F= 3.45, p < .05); from

Online partisan media to @newsweek (F= 2.73, p < .05); and from political commentator to

@newyorker (F= 2.42, p < .05). Figure 17 visually represents such intermedia agenda-setting effects found across media Twitter accounts and media categories.

By comparing Table 41 and 42, it can be seen, for each negative and sentiment, there are different sets of individual media Twitter accounts leading media account groups that were returned as Granger causes. Second, a greater number of Granger causal relationships regarding intermedia influence in the negative sentiment were identified rather than intermedia influence in the positive sentiment.

171

Table 42

Granger Analysis between Individual Media Twitter Accounts and a Media Category : Positive Sentiment

Intermedia influence

From (X) To (Y) F Statistics

@postpolitics Online partisan media 2.58*

@thehill Print media 2.33†

@huffpostpol Online non-partisan media 2.74*

@newyorker Online partisan media 3.11*

@newsweek Television network 3.52**

Print media @newyorker 2.96*

Online non-partisan media @cnnpolitics 3.45*

Online partisan media @newsweek 2.73*

Political commentator @newyorker 2.42*

Note. Lags = 6. Causality Relationship Direction: X Granger Cause Y. † p = .05, * p < .05, ** p < .01

172

Figure 17. Directed Granger Causality Graph: Positive Sentiment. The edges represent significant relationships found in Granger causal tests for each media Twitter account-media group pair comparison. The arrows denote the direction of Granger causality relationship. The size of nodes was adjusted to reflect the out-degree centrality calculated in the network.

173

CHAPTER VI. DISCUSSION

This study investigated the intermedia agenda-setting dynamics among various types of

media’s Twitter accounts with different political ideologies. With the advent of online news

media outlets, intermedia agenda-setting research has newly expanded the scope of studies on

media’s influence to other media across different platforms (e.g., from newspapers to blogs).

Social media have attracted attention particularly as news media platforms rather than as mere

social networks (e.g., more than 80% of Tweets are related to news, see Kwak, Lee, Park, &

Moon, 2010). In addition to the importance of intermedia influence across platforms, including social media, intermedia influence within social media requires further study, as social media

themselves have become a public sphere in which each attempts to set agendas. These media

have become a hub of information and news outlets have flocked to take advantage of the large

scale of their audiences and their connectivity. Nevertheless, research on intermedia agenda-

setting within social media is scant compared to that on intermedia agenda-setting across

platforms.

The time frame of Twitter account intermedia agenda-setting activity on which this study

focused were the seven weeks before the 2016 U.S. presidential election. It is important to know

the way in which media Twitter accounts influence each other and attempt to dominate the social

media sphere, as this new public sphere has become the most heated battlefield for winning

voters. More interestingly, because social media are platforms for communication via emotion

and sharing opinions, public sentiment deserves more attention. For example, previous studies

have found that public sentiment on Twitter often is influenced by the outcomes of media’s

intermedia agenda-setting attempts (e.g., Conway et al., 2015; Vargo & Guo, 2016), and

sentiment not only is transferable/contagious, but also can be a decisive factor influencing the

174

winner of a political race. Nonetheless, public sentiment itself is not merely a simple

consequence of one media outlet’s attempts to dominate a certain agenda. Twitter accounts

behave proactively to lead the news distribution cycle in Twitter, and sometimes react to other

media’s strategies, or seek alignments using others sources. In doing so, social media provide the most optimal platform for them to move strategically and influence each other. To reveal such intermedia agenda-setting dynamics among various media’s Twitter accounts, multiple

approaches were used to address questions about the temporal changes and sentiment contagion

in the Tweeting network during the seven-week period in question.

Media Twitter Accounts

Several findings identified the general characteristics of the key media Twitter accounts

in the Tweeting network. First, there were differences in the accounts’ positions within the

network. In general, print media and television network Twitter accounts demonstrated higher

network centrality than did other types of media Twitter accounts. The results showed that

traditional media Twitter accounts, such as print media and television networks, play roles in the

Tweeting network by bridging isolated media Twitter accounts, and are located in the center of

networks, so that information reaches them quickly; further, they are connected to other

important accounts.

It is worthwhile to notice that both campaign accounts largely were ranked highest. This pattern suggests that both campaign accounts were used as popular news sources for other media

Twitter accounts, and consequently connected different media Twitter accounts with varying platform orientations and political ideologies. Moreover, the results showed specifically that

@realdonaldtrump ranked higher overall than did @hillaryclinton. This corresponds to other

media reports about the success of the Trump social media campaign in dominating the social

175 media sphere (e.g., Lapowsky, 2016). Further, the results may indicate that @realdonaldtrump tended to connect actively to various accounts and was located at the center of the network, as it was mentioned frequently by others, while @hillaryclinton linked selectively to some important accounts on the network or was mentioned by a relatively limited number of people. In addition, when every relationship during Time 1-7 was graphed in one network (see Table 30),

@hillaryclinton ranked higher than did @realdonaldtraump only when the eigenvector centrality was measured. These results account for each campaign’s distinctive strategy in using social media accounts. For example, @realdonaldtrump, which had high betweenness and closeness centrality, was likely to reach wider audiences and deliver news at a faster rate than was

@hillaryclinton, while @hillaryclinton opted to narrow the target audience by selecting mediated news platforms.

Further, differences were found even within the same media Twitter account category.

For example, among political commentators’ accounts, @natesilver538, which had a high eigenvector centrality, was more likely to reach important accounts purposefully, while

@michellemalkin, which had high betweenness centrality, connected groups better, even those with different ideologies, and played a role as a connector in the network. This case also shows that each media Twitter account chose how to run its account selectively and compose and distribute news in Twitter as part of agenda-setting attempts, and the connectivity built into

Twitter facilitated interactions among media accounts. Further, some accounts tended to remain in the top 10 list throughout the time period, but some were newly entered or disappeared.

Taken together, although this study did not measure the agenda-setting influence each media

Twitter account exerted on others directly, their ranking and consistency in presenting high

176

network centrality demonstrated each account’s positioning strategy and significant roles as

agenda-setters in the Tweeting network.

The top 10 Twitter accounts mentioned by media Twitter accounts were further

categorized as three types: campaign Twitter accounts owned by vice- and presidential

candidates (i.e., @realdonaldtrump, @hillaryclinton, @mike_pence, @timkaine), affiliate media

Twitter accounts (e.g., @abc, @wsj, @foxnewssunday), and individual Twitter accounts that

involved certain events or issues related to the race (e.g., @atensnut, owned by Juanita

Broaddrick who accused Bill Clinton of sexual assault, was mentioned by media Twitter

accounts). These types of Twitter accounts that formed the election-related news Tweeting

network suggest which Twitter accounts were involved in agenda-building in Twitter, and how

much further the political news network can be expanded to analyze intermedia-agenda setting in

social media in future research.

Media Interest

In this study, different types of indicators reflected media attention to certain events, topics, or issues throughout the analyses. Together with the changes in the volume of Tweeting that signaled media interest, the set of popular URLs and keywords/word pairs in Tweets also served as sensors that detected media Twitter accounts’ interest about that time. The results of computer-assisted content analysis showed the set of popular URLs and keywords/word pairs that directed social media users to events or issues, and which media Twitter accounts paid attention to them at that moment. Thus, the frequency of URLs to other online media, such as websites and blogs, or popular keywords in the Tweets, were useful indicators to gauge media interest. This makes Twitter a significant source of information about emerging events. On the other hand, the number of hashtags used by individual media Twitter accounts in the sample was


not as high as expected. The total number of hashtags used was very low throughout the seven-

week data collection period, even including those times when the total number of Tweets in the

network spiked after the presidential debate. However, despite their low presence, the use of

hashtags clearly was a deliberate practice. Previous studies have indicated that hashtags are used

both to engage in a discussion on a particular topic and to promote oneself (Laniado & Mika,

2010). The popular hashtags observed in this study showed that media Twitter accounts tended

to use hashtags to promote their news about the election, campaign, or race, rather than to raise

controversial agendas or engage in a discussion led by someone else. For example, most of the

popular hashtags observed in the network regarded debates (#debates, #factcheckfriday), poll

results (#cnnkffpoll), or the election (#myvote, #electionday).
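As an illustration of how such indicators can be extracted, the following minimal Python sketch pulls hashtags, mentioned accounts, and linked domains out of Tweet text and counts their frequencies; the sample Tweets are invented, and this is not the content-analysis software actually used in the study.

import re
from collections import Counter
from urllib.parse import urlparse

tweets = [
    "Fact-checking the candidates tonight #debates https://example-news.com/live",
    "Poll update via @examplepaper #electionday https://example-news.com/polls",
]

hashtag_pattern = re.compile(r"#\w+")
mention_pattern = re.compile(r"@\w+")
url_pattern = re.compile(r"https?://\S+")

hashtags, mentions, domains = Counter(), Counter(), Counter()
for text in tweets:
    hashtags.update(h.lower() for h in hashtag_pattern.findall(text))
    mentions.update(m.lower() for m in mention_pattern.findall(text))
    domains.update(urlparse(u).netloc for u in url_pattern.findall(text))

# The most frequent items serve as the kind of "sensors" of media interest described above.
print(hashtags.most_common(10))
print(mentions.most_common(10))
print(domains.most_common(10))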

Debate Effects

The results of a time-series analysis used to graph changes in three indicators (the total

number of edges, hyperlinks, and sentiment words) supported the proposition that, as political

events, the debates affected the production and dissemination patterns of news. Not only did the

volume of edges produced spike immediately after each debate, but the three types of hyperlinks

and sentiment words used in Tweets increased as well, and the state of the network after each

debate reflected the effect of the debates (Figure 1). Tweets generated during and immediately

after each debate addressed the agendas covered in the debate, evaluated the candidates’

performances, and predicted voters' reactions. Media Twitter accounts' Tweets about candidates and debates increased greatly compared to the number of Tweets the day before the debate; a similar increase in the three indicators was observed repeatedly following each

debate. For example, the total edges on the network increased 63.7% the day after the first


presidential debate, 79.1% the day after the second, and 70.2% the day after the third. The total

edges increased 46.0% for the vice-presidential debate.
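The day-over-day changes reported here are simple percent changes in daily edge counts; the snippet below illustrates the calculation with invented counts (the percentages in the text come from the study's own data, not from these numbers).

# Invented daily edge counts; 1,200 -> 1,964 gives roughly the 63.7% increase cited above.
edges_debate_day = 1200
edges_day_after = 1964

pct_change = (edges_day_after - edges_debate_day) / edges_debate_day * 100
print(f"Edges increased {pct_change:.1f}% the day after the debate")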

The number of three types of hyperlinks (domains, hashtags, and accounts mentioned)

also increased more in the days following the debates than prior to them, but they fluctuated

according to similar patterns and appeared to be highly event sensitive. However, in looking

more closely at the graphs that showed the number of different types of hyperlinks in the

network, each had its second largest spike at different times (see Figure 2). For example, the top

10 hashtags spiked at Time 2, when the first debate took place on September 26, 2016, while the

top 10 accounts mentioned spiked at Time 3, when the vice-presidential debate and second

presidential debate were held. Moreover, the frequency of the top 10 hashtags decreased more

rapidly than did the frequency of the top 10 accounts mentioned as the election approached.

These results suggest that each type of hyperlinking was practiced in different circumstances or

to achieve different goals. In this respect, further investigation is required into how each hyperlink type represents a different agenda-setting strategy within social media.

The aggregated number of positive and negative sentiment words exhibited a pattern

similar to that of the total edge number. Both negative and positive sentiment words followed

the total edge line, which increased after debates, but the number of negative sentiment words

surpassed the number of positive sentiment words across different time points, and the gap between them decreased as the election approached (see Figures 5 and 6). Interestingly, among

the three indicators that fluctuated and revealed several patterns, sentiment words were more

volatile and spiked more dramatically, while total edges and hyperlinks showed slightly less variation over time. These results suggested that sentiment words are a more sensitive and expressive indicator, or predictor, for analyzing public opinion and sentiment in social media.


Lastly, the temporal changes in cross-links from one media Twitter account to another

(mentioning) also revealed debate effects. Cross-linking reflected a journalist's or a media account's act of giving credit to others and, in practice, increased the mentioned account's publicity. There was a larger number of cross-links in the Tweeting network in the days following the U.S. presidential candidates' debates than prior to them. The most common types of cross-links were established between media and campaign Twitter accounts, followed by those among different accounts owned by the same media (@wsj-@wsjpolitics),

with links across different media Twitter accounts the rarest. These results indicated that

traditional journalistic practices remained in Twitter news writing and that, as mentioned above, hashtags were seldom used. This supports previous studies' findings about

journalists’ behavior in social media. Journalists began to attend to social media as an outlet for

the news, but in writing social media pieces, they rarely reference other journalists or media sources, and even when they do, they tend to reference their own organization's sources (Russell

et al., 2016). Further, the results also showed that cross-linked Tweets included a greater

percentage of positive than negative sentiment words compared to non-cross-linked Tweets.

This seems logical, as media Twitter accounts are more likely to link to their affiliate accounts to

support them. However, various types of cross-linking practices can be expected to become more common as multimedia journalism develops. In this respect, questions about how each news organization manages its multiple news windows and uses them to increase publicity, and how hyperlinking is taken into consideration in journalists' writing, remain

interesting. Future research on those topics will be able to expand the scope of current agenda-

building and intermedia agenda-setting studies.


Sentiment Words

The range in the highest and lowest use also differed between positive and negative

sentiment words. The range of positive sentiment word use was 0.52-0.83 per Tweet, while the

range of negative sentiment words used was 0.44-0.62 per Tweet. Overall, media Twitter

accounts hesitated to use negative sentiment words. Moreover, the use of positive and negative

sentiment words differed across different media Twitter account categories. Online non-partisan media reported the highest use of positive sentiment words, followed by print media, political commentators, news magazines, television networks, and online partisan media. Interestingly, online partisan media in particular showed a distinctively low use of positive sentiment words compared to all other media Twitter accounts. However, the rank order changed drastically in the use of negative sentiment words. Political commentator accounts, news magazine accounts, and online partisan media accounts rose, while online non-partisan media and print media fell.45

Television network accounts maintained the same rank. Among the accounts, political

commentators reported the highest level of negative sentiment word use.
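For readers unfamiliar with dictionary-based sentiment counting, the sketch below shows the general form of the per-Tweet rates discussed here; the word lists and Tweets are invented stand-ins, not the sentiment dictionary or program the study actually used.

# Minimal dictionary-based counting of positive and negative sentiment words per Tweet.
positive_words = {"win", "great", "strong", "support"}
negative_words = {"attack", "scandal", "weak", "lies"}

tweets = [
    "Strong debate performance, great night for the campaign",
    "New scandal dominates coverage as rivals attack",
]

def sentiment_counts(text):
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    pos = sum(t in positive_words for t in tokens)
    neg = sum(t in negative_words for t in tokens)
    return pos, neg

totals = [sentiment_counts(t) for t in tweets]
avg_pos = sum(p for p, _ in totals) / len(tweets)
avg_neg = sum(n for _, n in totals) / len(tweets)
print(f"positive words per Tweet: {avg_pos:.2f}, negative words per Tweet: {avg_neg:.2f}")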

This study hypothesized that traditional media Twitter accounts would use a greater

number of positive than negative sentiment words. However, the results did not show any

general pattern indicating differences in use between traditional and non-traditional media

Twitter accounts. Instead, it suggested alternative standards that categorized media Twitter

accounts with respect to their use of positive and negative sentiment words. Partisanship was

one; interestingly, two different types of online media Twitter accounts, online partisan and non-

partisan media Twitter accounts, used sentiment words differently. Online non-partisan media

45 Political commentators rose from 3rd to 1st; news magazines rose from 4th to 2nd; online partisan media rose from 6th to 3rd; print media fell from 2nd to 4th; online non-partisan media fell from 1st to 6th; and television networks remained 5th across the two sentiment types.


Twitter accounts used more positive sentiment words and fewer negative sentiment words than

did partisan accounts. It is logical that partisanship resulted in more negative sentiment words,

as being critical of opponent candidates is a common strategy observed in political

communication contexts. In contrast, online non-partisan media's greater use of positive words might indicate that they remained neutral with respect to candidates in both

parties. However, to investigate such differences in more depth, future research needs to

consider the tone or intensity of each negative or positive word in messages. The difference in use of positive sentiment words between the two types of online media Twitter accounts was significant, but that in the use of negative sentiment words was not.

As well as partisanship, media characteristics are worth noting. For example, three types of traditional media showed different patterns in their use of positive and negative sentiment words. With respect to positive sentiment word use, print media ranked second, while news magazines ranked fourth. However, with respect to the use of negative sentiment words, news magazines ranked second, while print media ranked fourth. On the other hand, television networks remained fifth across the two types of sentiment words used. This difference can be

explained by each media’s standpoint in disseminating its opinions, which is associated easily

with emotions. Even within the same category of traditional media, each print, television

network, and news magazine differed from the other based on writing style and content.

Considering that news magazine accounts are highly likely to distribute opinions in a longer

format, their greater use of negative sentiment words than in print media that disseminate

relatively short and fact-centered reports seems reasonable. Thus, the results showed that the

characteristics of each media organization itself may affect their Tweet messages that use

sentiment.


Different Types of Agenda Setters

This study used the network analysis method to identify the key media Twitter accounts that attempt to set the media agenda in the social media sphere, and to determine whether there were partisan or media category differences in running their Twitter accounts and disseminating news strategically. Different types of network centrality were expected to reveal the nature of their popularity in the political news Tweeting network during the seven weeks of the presidential campaign.

With respect to traditional media Twitter accounts, it appeared that they occupied agenda-setting positions with high network centrality. Print media and television networks' accounts combined captured more than 85% of the top 10 ranked accounts for all three measures of network centrality (see Table 31). The results indicated that traditional media Twitter accounts are agenda-setters that connect isolated media Twitter accounts, control the news and information flow, and reach important sources. Print media accounts in particular were the most prevalent media category across all three types of rankings, followed by accounts for television networks, online non-partisan media, and political commentators. Political commentator accounts did not occupy a large proportion of the top network centrality rankings; interestingly, though, the proportion of political commentator accounts in the eigenvector centrality ranking (9.3%) was relatively high compared to those in the betweenness (4.4%) and closeness centrality (3.7%) rankings. In general, eigenvector centrality implies that some links are more valuable than others. In this respect, political commentator accounts might include few, but important, links, which could be part of the strategy they use to set their agendas in the social media sphere. In contrast, online non-partisan media and news magazine accounts did not appear in any top 10 ranking. In this case, it is possible that these media accounts are isolated rather than connected to

other accounts; if so, their influence on, or importance in, the network cannot be significant, even though they publish many Tweets. Such results show the existence of multi-layered intermedia agenda-setting attempts by media accounts within Twitter, which the confrontation between traditional and non-traditional media alone cannot explain.

Further, the results showed different composites of key media accounts and different rank orders for each type of network centrality measure. This indicates that individual media Twitter accounts are likely to play different roles in the network, which may be caused by their distinctive strategies for running their accounts or by critical events occurring at that moment. They are seldom static in the top network centrality rankings of media Twitter accounts, as managing these accounts requires response strategies that are time and event sensitive. For example, when the New York Times released an investigative news report on October 12, 2016 that quoted two women who stated that Donald Trump had touched them inappropriately, @nytpolitics moved up in the betweenness centrality ranking, and fell in both the closeness centrality and eigenvector centrality rankings. These results indicated that the New York Times' exclusive report led to changes in the intermedia agenda-setting dynamics in Twitter. @nytpolitics tended to become a connector that bridged news communities on the network after the breaking news, while other accounts turned to different strategies, such as being connected closely to @nytpolitics or referring rapidly to the New York Times news coverage. These results showed that intermedia agenda-setting dynamics within Twitter are even more complicated and multi-layered than those in other online platforms.

Sentiment Intermedia Agenda-Setting

This dissertation used two types of approaches, statistical and graphical time-series, to determine the extent to which each media Twitter account exerted an intermedia agenda-setting


influence on another. With respect to negative sentiment, two types of relationships were found.

First, negative sentiments in online media Twitter accounts Granger caused negative sentiments

in traditional media Twitter accounts. Negative sentiments in online non-partisan media Twitter accounts Granger caused negative sentiments in print media Twitter accounts, while negative sentiments in online partisan media Twitter accounts Granger caused negative sentiments in television network Twitter accounts. These results contrasted with several expectations. As explained above, past studies have found that the direction of the influence of intermedia agenda-setting is from traditional to less traditional media (e.g., Lee et al., 2005; Lim, 2006; Meraz, 2009; Roberts et al., 2002). However, the results of this study revealed the opposite relationship between online and traditional media Twitter accounts. Further, media partisanship resulted in different pairs of Granger causality relationships, from online non-partisan media to print media and from online partisan media to television network accounts.

Another interesting pattern in the negative sentiment contagion was the chain of one-way causal relationships initiated by online partisan media Twitter accounts. Negative sentiments in political commentators' accounts, led by negative sentiments in online partisan media Twitter accounts, influenced negative sentiments in the news magazine accounts' Tweets.

On the other hand, in the positive sentiment contagion, another series of continual one-way causal relationships was found. Positive sentiments created by online non-partisan media caused positive sentiments in print media, after which print media caused positive sentiments in the news magazine Twitter accounts. Taken together, these results demonstrated the influence of online media and partisanship on intermedia agenda-setting dynamics within social media. It also showed that different types of sentiment can lead to variations in intermedia agenda-setting dynamics. Moreover, considering the datasets collected during the presidential campaign, as


previous studies about negative political campaigns have found (e.g., Thelwall et al., 2011), the

effectiveness of using negative sentiments was also reconfirmed, not only by the spike in Twitter feeds, but also by the causal relationships facilitated among media Twitter accounts.

There also were causal relationships found among media Twitter accounts with different political ideologies in each category: print media, television networks, news magazines, online media, and political commentators. Within print media Twitter accounts, the intermedia agenda-setting influence was multi-directional among conservative media Twitter accounts

(@wsjpolitics), liberal media Twitter accounts (@postpolitics, @nytpolitics), and the least

biased media Twitter account (@usatoday2016). For example, negative sentiments in

@wsjpolitics Granger caused positive and negative sentiments in @postpolitics; negative

sentiment in @usatoday2016 Granger caused positive and negative sentiments in @wsjpolitics;

negative sentiments in @nytpolitics Granger caused negative sentiments in @wsjpolitics.

Interestingly, only negative sentiments initiated these causal relationships. Within television

network media Twitter accounts, two-way causal relationships were found for both sentiments

between two liberal media Twitter accounts (@cnnpolitics and @abcnewspolitics). However,

across different political ideologies, only a one-way causal relationship was found, from negative

sentiments in @cnnpolitics to positive and negative sentiments in @foxnewspolitics. As in the

print media Twitter accounts, negative sentiments led the intermedia agenda-setting influence

only across different political ideologies.

On the other hand, within online media Twitter accounts, no causal relationship involving

the conservative media Twitter account (@drudgereport) was found, while a one-way intermedia

agenda-setting influence was found, from positive sentiments in @huffpostpol to positive

sentiments in @thehill. Similarly, there were no significant causal relationships within political


commentator Twitter accounts among liberal (@ezraklein and @natesilver538) and conservative political commentator Twitter accounts (@michellemalkin). These results indicated that online

media and political commentator Twitter accounts did not have as much influence on intermedia

agenda-setting across different political ideologies as did print and television network media

Twitter accounts. More interestingly, negative sentiment contagion was prominent in the

intermedia agenda-setting influence among traditional media Twitter accounts, but such an

influence was not observed across accounts with different ideologies in non-traditional media

Twitter accounts.

Lastly, the intermedia agenda-setting influence each individual media Twitter account

exerted on the groups of media Twitter accounts was investigated and mapped. Most importantly, several media Twitter accounts were identified as media agenda setters that led sentiment contagion in certain media categories. The most notable media Twitter account that led negative sentiments in multiple media Twitter account categories was @drudgereport. As an online partisan media Twitter account, @drudgereport influenced print media, television networks, and political commentator categories. The results supported the findings of previous studies of the influence of media partisanship within the online news media-centered environment (e.g., Vargo et al., 2016). Another notable pattern was the existence of two political commentator Twitter accounts, @michellemalkin and @ezraklein, which led negative sentiments in news magazines, including @newsweek and @newyorker. The results supported the intermedia agenda-setting influence of online bloggers on traditional media. In addition, among traditional media Twitter accounts, @nytpolitics, which led negative sentiments in news magazines, and @cnnpolitics, which led negative sentiments in online non-partisan media, were identified as agenda-setters that influenced accounts in other media categories.


On the other hand, with respect to the intermedia agenda-setting influence on positive sentiments, @postpolitics and @newyorker, which led online partisan media, and @newsweek, which led television networks, were identified. Among non-traditional media Twitter accounts,

@thehill, which led print media, and @huffpostpol, which led online non-partisan media, were identified. Interestingly, no conservative media Twitter account played the role of agenda-setter in influencing positive sentiment in other media categories. However, there were more evident individual agenda setters that affected negative sentiment in multiple media categories, while in positive sentiment contagion, there was no media Twitter account with distinctive out-degree centrality that led different types of media Twitter accounts simultaneously. Further, a greater number of intermedia influences were identified on negative than positive sentiment.

News Marketing via Twitter Accounts

The most important implication of this study is that the intermedia agenda-setting dynamics, which have existed since long before the age of Twitter but were hidden behind editors' desks, were brought to light. News media editors used to pick and choose news stories while isolated from others, such as media competitors, but Twitter has now become a place where all these interactions converge and can be seen. Journalistic co-orientation, observing other journalists' behavior and then taking it as a standard to evaluate the quality of one's own work, helped reduce the uncertainty of new information and compensate for any lack of contact with the audience (Vonbun et al., 2016). But today, thanks to Twitter, journalists and the media no longer suffer from uncertainty about the information their competitors may have, or from limited channels to gauge audience reactions to their products. Almost all media now own and run their own accounts, and they are connected to each other simply through content, links, or even audiences following multiple news sources.


On the other hand, it is not surprising that the hyperactive engagement by various media entities on Twitter indicates that it is no longer all about political power games when seeking domination of the media agenda. Selling news stories, recruiting readers, and enhancing readership have become additional major goals that every media Twitter account pursues these days. Media Twitter accounts are like sales personnel directing audiences to the websites and similar platforms from which media organizations originated, and social media linkage can be considered a major inflow channel to and from news content.

Consequently, there is an increasing possibility that media Twitter accounts will be inclined to follow the public's interests and select easy ways to appeal to them by paying more attention to how the public thinks about certain issues. As with the negative sentiment contagion on the media Tweeting network shown above, they may use more negativity than positivity, as negativity is known to be a better strategy for attracting the public. The finding that online partisan media accounts providing opinionated pieces created a great sensation also supports how sentiment-intensive content can be influential and tempting for the media. The 2016 U.S. presidential election provided a preview of how sensational news and information can be promoted by the media through social media accounts.

As such, news media jumped into the open market of selling their stories via Twitter.

The means are hyperlinks, such as hashtags, and popular keywords that appeal to a large audience. They aggressively promote their product (news stories) when there is an opportunity, such as debates, elections, or candidates' scandals. As mentioned above, such behavior is not something that has suddenly appeared, but it is now becoming far more visible thanks to Twitter.

The media has dived into Twitter to make a breakthrough in business models, enhance media brand images, and market stories. They employ media strategies optimized to grow their

presence on social media and try to sell their stories to broader audiences. If Twitter has indeed fueled such media behavior, further research on political correctness and social media journalism ethics codes is necessary.

Limitations and Future Research

This study investigated intermedia agenda-setting dynamics among various media

Twitter accounts using multiple analysis methods, including network, hyperlink, and sentiment analyses. In addition, the temporal dynamics of intermedia agenda-setting influences in the

Tweeting network were analyzed using statistical and graphical time-series approaches. Because of its exploratory nature, there were some limitations in the study’s methods and results.

First, to collect Tweets and extract meaningful information from the corpora, a computer program that enables computer-assisted content and network analysis was used. Using such a method to gather Tweets allows researchers to collect Tweets periodically and automatically using predefined queries, and to archive the results. However, it also has limitations. Despite its methodological strength, it has a sampling limitation. Twitter may not return all Tweets that match researchers' requests (for example, all Tweets including a given search word). Moreover,

Twitter also returns a maximum number of results per query depending on the API used, so it may give particularly incomplete results under certain circumstances (Thelwall, 2014). Thus, the dataset collected should be treated as a sample rather than as a comprehensive set, as it did not include all the media Tweets during the seven-week data collection period.
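The kind of periodic, query-based collection described above typically pages through results and de-duplicates them, because each request returns only a limited batch. The sketch below shows that logic around a hypothetical fetch_tweets() stand-in rather than any particular API client, so the paging details are assumptions, not the study's actual procedure.

def fetch_tweets(query, max_id=None):
    """Hypothetical API call returning a list of dicts with an 'id' field."""
    return []  # placeholder; a real client would query Twitter here

def collect(query, pages=10):
    seen_ids, collected, max_id = set(), [], None
    for _ in range(pages):
        batch = fetch_tweets(query, max_id=max_id)
        if not batch:
            break  # no more results available for this query
        for tweet in batch:
            if tweet["id"] not in seen_ids:
                seen_ids.add(tweet["id"])
                collected.append(tweet)
        max_id = min(t["id"] for t in batch) - 1  # page backwards through older Tweets
    return collected

archive = collect("@examplemedia")  # run repeatedly (e.g., daily) and append to an archive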

The dataset itself, which was a collection of Tweets, also has limitations, as Tweets have unique characteristics. Previous studies have found that Twitter users tend to rely on different methods to adhere to the 140-character limit while conveying their intended meaning and emotions (Tao et al., 2014). However, the current computer-assisted content program cannot


analyze the variations that Twitter users adopt in writing their messages. For example, many

users employ abbreviations, eliminate vowels, drop articles, or use acronyms (Gouws, Metzler,

Cai, & Hovy, 2011). Additionally, URL shorteners (e.g., trib.al, dlvr.it) are often used to save

characters when users include links to sources outside Twitter. Consequently, such Tweet

writing and posting behaviors eliminated certain valuable information from the given datasets.

For example, in the dataset used for this study, truncated URLs expunged the URL’s domain

name from the hyperlinks, and the data screening process filtered out a considerable number of

words with sarcastic meanings, newly coined words, and acronyms.
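One way future work could recover the domains hidden behind shortened links is to follow the redirect chain and keep the final hostname, as in this minimal sketch; the shortened URL shown is hypothetical, and this is not part of the study's own pipeline.

import requests
from urllib.parse import urlparse

def resolve_domain(short_url, timeout=5):
    try:
        # Follow redirects without downloading the page body, then keep the final hostname.
        response = requests.head(short_url, allow_redirects=True, timeout=timeout)
        return urlparse(response.url).netloc
    except requests.RequestException:
        return None  # dead or unreachable link

print(resolve_domain("https://trib.al/abc123"))  # hypothetical shortened link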

In addition to a graphical approach, the Granger causality concept was used in this study

to investigate intermedia agenda-setting influence across media Twitter accounts. One media

account’s influence on another was determined statistically using the Granger causality

relationship. As explained above, Granger causality is calculated based on the relationship between past and future values for each x and y. If y can be predicted better from past values of x and y together than from past values of y alone, x is said to "Granger cause" y (Freeman, 1983). The strength of this method is that it allows researchers to use time-series analysis to determine the causal relationship's statistical significance. However, as this formulation shows, other external factors that might affect the relationship can be overlooked. For example, the nature of events could affect media interest at that moment, or the medium's bias could be contained in the news or be perceived by other media accounts. Such factors could not be included or controlled to determine causality in this study.
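As a concrete illustration of the test described above (not the study's own analysis), the following sketch applies statsmodels' Granger causality test to two invented daily sentiment series, asking whether past values of x improve the prediction of y beyond y's own past.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
x = rng.poisson(lam=20, size=49).astype(float)        # 49 invented daily counts (7 weeks)
y = np.roll(x, 1) + rng.normal(scale=2.0, size=49)     # y loosely follows x with a one-day lag

# The test asks whether the second column (x) Granger causes the first column (y).
data = pd.DataFrame({"y": y, "x": x})
results = grangercausalitytests(data[["y", "x"]], maxlag=2)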

The limitations described above highlight the importance of making the right design decisions for research that uses big data on a large scale, such as social media data. Because

Twitter does not provide complete access to its message archive, a design decision has to be


made with respect to the selective data collection process (Tao et al., 2014). For example,

depending on the research goals, researchers can collect a stream of Tweets that contain

particular keywords or hashtags, or those posted by users within a particular geographic area, or

all Tweets posted by a set of pre-defined Twitter users. In this study, the topic agendas and

agendas conveyed through hyperlinks were not analyzed directly because the focus was on sentiments, as attributes of the agenda, and on the dynamics of intermedia agenda-setting influence using network concepts. Accordingly, each media Twitter account name for political news

distribution was used to graph the media Tweeting network. However, when research questions

address topics or issues agendas, the search keywords should be issues or topics rather than each

media Twitter account name. If the goal of social media research is prediction of election

results, sentiment or opinions that represent agreement or disagreement with the agendas each

candidate sets need to be followed to compare the results with the poll results. Different research

designs are possible, but the decisions used to design data collection procedures must be

consistent with the research goals and questions. In addition, future research needs to consider

using more sophisticated big data analytics that allow researchers to code and program search queries or algorithms more freely and to extract more information from the datasets. For example, sentiment analysis using machine learning may be a better option than dictionary-based sentiment analysis for capturing social media language or hidden emotions.
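A machine-learning alternative of the sort suggested here could look like the following scikit-learn sketch, which trains a bag-of-words classifier on a tiny, invented labeled sample; real use would require a substantial hand-coded training set and proper validation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labeled Tweets; in practice these would be hand-coded for sentiment.
train_texts = [
    "great debate win for the candidate",
    "strong support and a positive night",
    "scandal and lies dominate the coverage",
    "weak performance draws harsh attacks",
]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["another strong night for the campaign"]))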

In this study, time-series analysis was one of the primary research approaches, while

content analysis was conducted only for descriptive statistics and was used in a limited way to

conduct other analyses. However, in future research, the combination of content analysis and

time-series analysis can be beneficial for researchers who want to collect more copious

information from time-series data. For example, post-event analysis is often employed with

content analysis to learn more about the dynamics of events (for example, how diseases spread), and as a learning tool for organizations and governments (Tao et al., 2014). Moreover, considering that social media data in particular contain a great amount of both quantitative and qualitative information, a multiple methods approach may be more effective.

The research framework of this study is worth advancing further. Social media research using non-traditional methods, such as time-series, sentiment, hyperlink, and network analyses, requires more work to develop the right forms of analysis and ask the right kinds of research questions. Because such research is by nature data- rather than question-driven, unlike traditional media research, much of the current quantitative research on Twitter focuses on measuring and comparing specific structural parameters in very large data samples, sometimes without theoretical foundations and explanations (Gaffney & Puschmann, 2014). To overcome such limitations, this study employed intermedia agenda-setting theory and investigated media influence using its related concepts with evidence found in the datasets. Nonetheless, the theory and method employed in this study require further research before application in future studies.

First, the results showed that each media Twitter account uses various strategies to run its account, distribute news, and recruit readers. It also seemed that traditional media ethics were employed in part, but practices and norms of journalistic conduct that apply only in social media also were noticeable. In this respect, future research needs to focus on the way in which the intermedia agenda-setting influence within social media differs from that in other types of online platforms with respect to journalistic practices and media strategies. Secondly, although the agenda-building process was not the focus of this study, the results showed that intermedia agenda-setting influences are intertwined with each medium's agenda-building process, which occurs on this platform and becomes more complicated. Further, although the study focused

only on the interaction among media Twitter accounts, the social agenda-building process in which individuals, i.e., non-media related Twitter users, are involved cannot be overlooked. The bi-directional interaction between media agendas and social or public agendas still remains important. In consideration of such gaps, intermedia agenda-setting within social media needs to be investigated further. Lastly, one of the goals of this study was to see whether sentiment is another type of attribute agenda that transfers across media. The results showed positive or negative sentiment contagion at various levels, with the intermedia attribute agenda-setting influence more evident in negative than positive sentiment. In future research, it will be beneficial to employ sentiment analysis with advanced scales that enable the intensity of sentiment or a wider range of sentiment orientations to be captured, rather than negative and positive sentiment alone.


REFERENCES

Ananny, M. (2014). Networked press freedom and social media: Tracing historical and

contemporary forces in press‐public relations. Journal of Computer-Mediated

Communication, 19(4), 938-956. doi: 10.1111/jcc4.12076

Banning, S. A., & Sweetser, K. D. (2007). How much do they think it affects them and whom do

they believe?: Comparing the third-person effect and credibility of blogs and traditional

media. Communication Quarterly, 55(4), 451-466.

http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/01463370701665114

Beckett, C. (2015, April 28). Our partisan press: Does it matter to journalism or politics?

blogs.lse.ac.uk. Retrieved from: http://eprints.lse.ac.uk/63847/1/blogs.lse.ac.uk-

Our%20partisan%20press%20does%20it%20matter%20to%20journalism%20or%20polit

ics.pdf

Blake, A. (2016). Welcome to the next, most negative presidential election of our lives.

Retrieved from: https://www.washingtonpost.com/news/the-fix/wp/2016/07/29/clinton-

and-trump-accept-their-nominations-by-telling-you-what-you-should-vote-against/

Boczkowski, P. J., & De Santos, M. (2007). When more media equals less news: Patterns of

content homogenization in Argentina's leading print and online newspapers. Political

Communication, 24(2), 167-180.

http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/10584600701313025

Broersma, M., & Graham, T. (2012). Social media as beat: tweets as a news source during the

2010 British and Dutch elections. Journalism Practice, 6(3), 403-419.

http://dx.doi.org/10.1080/17512786.2012.663626


Bruns, A., & Moe, H. (2014). Structural layers of communication on Twitter. In Weller, K.,

Bruns, A., Burgess, J. E., Mahrt, M., & Puschmann, C. (Eds.), Twitter and society (pp.

15-28). Peter Lang.

Budak, C., Goel, S., & Rao, J. M. (2016). Fair and balanced? quantifying media bias through

crowdsourced content analysis. Public Opinion Quarterly, 80(S1), 250-271.

https://doi.org/10.1093/poq/nfw007

Castells, M. (2013). Communication power. Oxford University Press.

Ceron, A. (2015). Internet, news, and political trust: The difference between social media and

online media outlets. Journal of Computer‐Mediated Communication, 20(5), 487-503.

doi: 10.1111/jcc4.12129

Ceron, A., Curini, L., Iacus, S. M., & Porro, G. (2014). Every tweet counts? How sentiment

analysis of social media can improve our knowledge of citizens’ political preferences

with an application to Italy and France. New Media & Society, 16(2), 340-358.

https://doi.org/10.1177/1461444813480466

Coleman, R., & McCombs, M. (2007). The young and agenda-less? Exploring age-related

differences in agenda setting on the youngest generation, baby boomers, and the civic

generation. Journalism & Mass Communication Quarterly, 84(3), 495-508.

https://doi.org/10.1177/107769900708400306

Conway, B. A., Kenski, K., & Wang, D. (2013). Twitter use by presidential primary candidates

during the 2012 campaign. American Behavioral Scientist, 57(11), 1596-1610.

https://doi.org/10.1177/0002764213489014


Conway, B. A., Kenski, K., & Wang, D. (2015). The rise of Twitter in the political campaign:

Searching for intermedia agenda‐setting effects in the presidential primary. Journal of

Computer‐Mediated Communication, 20(4), 363-380. doi: 10.1111/jcc4.12124

Crist, R. (2016). How the 2016 presidential candidates measure up on social media. Retrieved

from: http://www.cnet.com/news/2016-elections-comparing-presidential-candidates-on-

social-media/

Cushion, S., Kilby, A., Thomas, R., Morani, M., & Sambrook, R. (2016). Newspapers,

Impartiality and Television News: Intermedia agenda-setting during the 2015 UK

General Election campaign. Journalism Studies, 1-20.

http://dx.doi.org/10.1080/1461670X.2016.1171163

Diakopoulos, N. A., & Shamma, D. A. (2010, April). Characterizing debate performance via

aggregated Twitter sentiment. In Proceedings of the SIGCHI Conference on Human

Factors in Computing Systems (pp. 1195-1198). ACM. doi: 10.1145/1753326.1753504

Dimitrova, D. V., Connolly-Ahern, C., Williams, A. P., Kaid, L. L., & Reid, A. (2003).

Hyperlinking as gatekeeping: Online newspaper coverage of the execution of an

American terrorist. Journalism Studies, 4(3), 401-414.

http://dx.doi.org/10.1080/14616700306488

Effing, R., van Hillegersberg, J., & Huibers, T. (2011). Social media and political participation:

are Facebook, Twitter and YouTube democratizing our political systems?. In Electronic

participation (pp. 25-35). Springer Berlin Heidelberg. doi: 10.1007/978-3-642-23333-

3_3

Fahy, D., & Nisbet, M. C. (2011). The science journalist online: Shifting roles and emerging

practices. Journalism, 12(7), 778-793. https://doi.org/10.1177/1464884911412697


Freelon, D. (2014). On the interpretation of digital trace data in communication and social

computing research. Journal of Broadcasting & Electronic Media, 58(1), 59-75.

http://dx.doi.org/10.1080/08838151.2013.875018

Freeman, L. C. (1979). Centrality in social networks conceptual clarification. Social networks,

1(3), 215-239. Retrieved from: https://www.journals.elsevier.com/social-networks/

Freeman, J. R. (1983). Granger causality and the times series analysis of political relationships.

American Journal of Political Science, 327-358. doi: 10.2307/2111021

Gaffney, D., & Puschmann, C. (2014). Data collection on Twitter. In K. Weller, A. Bruns, J.

Burgess, M. Mahrt, & C. Puschmann (Eds.), Twitter and society (pp. 55-68). New York:

Peter Lang.

Ghanem, S. (1997) Filling in the Tapestry: the second level of agenda setting. In McCombs, M.

E., Shaw, D. L., & Weaver, D. H. (Eds.). Communication and democracy: Exploring the

intellectual frontiers in agenda-setting theory (pp. 3-14). Psychology Press.

Golan, G. (2006). Inter-media agenda setting and global news coverage: Assessing the influence

of the New York Times on three network television evening news programs. Journalism

studies, 7(2), 323-333.

http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/14616700500533643

Golan, G., & Wanta, W. (2001). Second-level agenda setting in the New Hampshire primary: A

comparison of coverage in three newspapers and public perceptions of candidates.

Journalism & Mass Communication Quarterly, 78(2), 247-259.

https://doi.org/10.1177/107769900107800203


Groshek, J. (2008). Homogenous agendas, disparate frames: CNN and CNN International

coverage online. Journal of broadcasting & electronic media, 52(1), 52-68.

http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/08838150701820809

Groshek, J., & Al-Rawi, A. (2013). Public sentiment and critical framing in social media content

during the 2012 US presidential campaign. Social Science Computer Review, 31(5), 563-

576. https://doi-org.ezproxy.bgsu.edu:9443/10.1177/0894439313490401

Groshek, J., & Groshek, M. C. (2013). Agenda trending: Reciprocity and the predictive capacity

of social networking sites in intermedia agenda setting across topics over time. Media

and Communication, 1(1), 15. doi: 10.12924/mac2013.01010015

Gouws, S., Metzler, D., Cai, C., & Hovy, E. (2011, June). Contextual bearing on linguistic

variation in social media. In Proceedings of the Workshop on Languages in Social Media

(pp. 20-29). Association for Computational Linguistics.

Guo, L., Vargo, C. J., Pan, Z., Ding, W., & Ishwar, P. (2016). Big Social Data analytics in

journalism and mass communication comparing dictionary-based text analysis and

unsupervised topic modeling. Journalism & Mass Communication Quarterly, 93(2), 332-

359. https://doi-org.ezproxy.bgsu.edu:9443/10.1177/1077699016639231

Hansen, D., Shneiderman, B., & Smith, M. A. (2011). Analyzing social media networks with

NodeXL: insights from a connected world. Elsevier Inc, Burlington.

Heim, K. (2013). Framing the 2008 Iowa Democratic Caucuses Political Blogs and Second-

Level Intermedia Agenda Setting. Journalism & Mass Communication Quarterly, 90(3),

500-519. https://doi.org/10.1177/1077699013493785

Hermida, A. (2010). Twittering the news: The emergence of ambient journalism. Journalism

practice, 4(3), 297-308. http://dx.doi.org/10.1080/17512781003640703


Himelboim, I., McCreery, S., & Smith, M. (2013). Birds of a feather tweet together: Integrating

network and content analyses to examine cross‐ideology exposure on Twitter. Journal of

Computer‐Mediated Communication, 18(2), 40-60. doi: 10.1111/jcc4.12001

Hollander, B. A. (2008). Tuning out or tuning elsewhere? Partisanship, polarization, and media

migration from 1998 to 2006. Journalism & Mass Communication Quarterly, 85(1), 23-

40. https://doi-org.ezproxy.bgsu.edu:9443/10.1177/107769900808500103

Hyun, K. D., & Moon, S. J. (2016). Agenda setting in the partisan TV news context: Attribute

agenda setting and polarized evaluation of presidential candidates among viewers of

NBC, CNN, and Fox News. Journalism & Mass Communication Quarterly, 93(3), 509-

529. https://doi.org/10.1177/1077699016628820

Jang, S. M., & Pasek, J. (2015). Assessing the carrying capacity of Twitter and online news.

Mass Communication and Society, 18(5), 577-598.

http://dx.doi.org/10.1080/15205436.2015.1035397

Jansen, H. J., & Koop, R. (2006). Pundits, ideologues, and the ranters: The British Columbia

election online. Canadian Journal of Communication, 30(4).

https://doi.org/10.22230/cjc.2005v30n4a1483

Jungherr, A. (2014). The logic of political coverage on Twitter: Temporal dynamics and content.

Journal of Communication, 64(2), 239-259. doi: 10.1111/jcom.12087

Kadushin, C. (2012). Understanding social networks: Theories, concepts, and findings. New

York, NY : Oxford University Press.

Kim, J. H., Barnett, G. A., & Park, H. W. (2010). A hyperlink and issue network analysis of the

United States Senate: A rediscovery of the web as a relational and topical medium.


Journal of the American Society for Information Science and Technology, 61(8), 1598-

1611. doi: 10.1002/asi.21357

Kinder, D. R. (1978). Political person perception: The asymmetrical influence of sentiment and

choice on perceptions of presidential candidates. Journal of Personality and Social

Psychology, 36(8), 859. http://dx.doi.org/10.1037/0022-3514.36.8.859

Kiousis, S., Bantimaroudis, P., & Ban, H. (1999). Candidate image attributes experiments on the

substantive dimension of second level agenda setting. Communication Research, 26(4),

414-428. https://doi.org/10.1177/009365099026004003

Kiousis, S., Mitrook, M., Wu, X., & Seltzer, T. (2006). First-and second-level agenda-building

and agenda-setting effects: Exploring the linkages among candidate news releases, media

coverage, and public opinion during the 2002 Florida gubernatorial election. Journal of

Public Relations Research, 18(3), 265-285.

http://dx.doi.org/10.1207/s1532754xjprr1803_4

Ku, G., Kaid, L. L., & Pfau, M. (2003). The impact of web site campaigning on traditional news

media and public information processing. Journalism & Mass Communication Quarterly,

80(3), 528-547. https://doi.org/10.1177/107769900308000304

Kwak, H., Lee, C., Park, H., & Moon, S. (2010, April). What is Twitter, a social network or a

news media?. In Proceedings of the 19th international conference on World wide web

(pp. 591-600). ACM. doi: 10.1145/1772690.1772751

Laniado, D., & Mika, P. (2010). Making sense of twitter. The Semantic Web–ISWC 2010, 470-

485. doi: 10.1007/978-3-642-17746-0_30


Lee, J. K. (2007). The effect of the Internet on homogeneity of the media agenda: A test of the

fragmentation thesis. Journalism & Mass Communication Quarterly, 84(4), 745-760.

https://doi-org.ezproxy.bgsu.edu:9443/10.1177/107769900708400406

Lee, B., Lancendorfer, K. M., & Lee, K. J. (2005). Agenda-setting and the Internet: The

intermedia influence of Internet bulletin boards on newspaper coverage of the 2000

general election in South Korea. Asian Journal of Communication, 15(1), 57-71.

http://dx.doi.org/10.1080/0129298042000329793

Lim, J. (2006). A cross-lagged analysis of agenda setting among online news media. Journalism

& Mass Communication Quarterly, 83(2), 298-312.

https://doi.org/10.1177/107769900608300205

Lim, J. (2011). First-level and second-level intermedia agenda-setting among major news

websites. Asian Journal of Communication, 21(2), 167-185.

http://dx.doi.org/10.1080/01292986.2010.539300

Liu, B. (2015). Sentiment Analysis: Mining Opinions, Sentiments, and Emotions. New York,

NY : Cambridge University Press.

Lopez-Escobar, E., Llamas, J. P., McCombs, M., & Lennon, F. R. (1998). Two levels of agenda

setting among advertising and news in the 1995 Spanish elections. Political

Communication, 15(2), 225-238. http://dx.doi.org/10.1080/10584609809342367

MacDonald, M. (2010). Access 2010: The Missing Manual. Sebastopol, CA: O'Reilly Media,

Inc.

Maireder, A., & Ausserhofer, J. (2014). Political discourses on Twitter: networking topics,

objects and people. In K. Weller, A. Bruns, J. Burgess, M. Mahrt, & C. Puschmann

(Eds.), Twitter and society (pp. 305-318). New York: Peter Lang.


McCombs, M. (2014). Setting the agenda: The mass media and public opinion. England: Polity

Press.

McCombs, M. (2005). A look at agenda-setting: Past, present and future. Journalism studies,

6(4), 543-557. http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/14616700500250438

McCombs, M., Llamas, J. P., Lopez-Escobar, E., & Rey, F. (1997). Candidate images in Spanish

elections: Second-level agenda-setting effects. Journalism & Mass Communication

Quarterly, 74(4), 703-717. https://doi.org/10.1177/107769909707400404

McCombs, M., Lopez‐Escobar, E., & Llamas, J. P. (2000). Setting the agenda of attributes in

the 1996 Spanish general election. Journal of Communication, 50(2), 77-92. doi:

10.1111/j.1460-2466.2000.tb02842.x

McCombs, M. E., Shaw, D. L., & Weaver, D. H. (2014). New directions in agenda-setting theory

and research. Mass Communication and Society, 17(6), 781-802.

http://dx.doi.org/10.1080/15205436.2014.964871

McLeod, D. M., Kosicki, G. M., & McLeod, J. M. (2009). Political communication effects.

Bryant, J., & Oliver, M. B. (Eds.). Media effects: Advances in theory and research.

Routledge.

McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social

networks. Annual review of sociology, 27(1), 415-444. https://doi-

org.ezproxy.bgsu.edu:9443/10.1146/annurev.soc.27.1.415

Mei, Q., Ling, X., Wondra, M., Su, H., & Zhai, C. (2007, May). Topic sentiment mixture:

modeling facets and opinions in weblogs. In Proceedings of the 16th international

conference on World Wide Web (pp. 171-180). ACM. doi: 10.1145/1242572.1242596


Meraz, S. (2009). Is there an elite hold? Traditional media to social media agenda setting

influence in blog networks. Journal of Computer‐Mediated Communication, 14(3), 682-

707. doi: 10.1111/j.1083-6101.2009.01458.x

Meraz, S. (2011a). The fight for ‘how to think’: Traditional media, social networks, and issue

interpretation. Journalism, 12(1), 107-127. https://doi.org/10.1177/1464884910385193

Meraz, S. (2011b). Using time series analysis to measure intermedia agenda-setting influence in

traditional media and political blog networks. Journalism & Mass Communication

Quarterly, 88(1), 176-194. https://doi.org/10.1177/107769901108800110

Newman, T. P. (2016). Tracking the release of IPCC AR5 on Twitter: Users, comments, and

sources following the release of the Working Group I Summary for Policymakers. Public

Understanding of Science, 1(11). https://doi.org/10.1177/0963662516628477

Neuman, W. R., Guggenheim, L., Jang, S. M., & Bae, S. Y. (2014). The dynamics of public

attention: Agenda‐setting theory meets big data. Journal of Communication, 64(2), 193-

214. doi: 10.1111/jcom.12088

O'Connor, B., Balasubramanyan, R., Routledge, B. R., & Smith, N. A. (2010). From Tweets to

Polls: Linking Text Sentiment to Public Opinion Time Series. ICWSM, 11(122-129), 1-2.

Olteanu, A., Castillo, C., Diakopoulos, N., & Aberer, K. (2015). Comparing events coverage in

online news and social media: The case of climate change. In Proceedings of the Ninth

International Conference on Web and Social Media, ICWSM 2015 (pp. 288-297).

Pak, A., & Paroubek, P. (2010, May). Twitter as a Corpus for Sentiment Analysis and Opinion

Mining. In LREC (Vol. 10, pp. 1320-1326).

Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and trends in

information retrieval, 2(1-2), 1-135. http://dx.doi.org/10.1561/1500000011


Pang, B., Lee, L., & Vaithyanathan, S. (2002, July). Thumbs up?: sentiment classification using

machine learning techniques. In Proceedings of the ACL-02 conference on Empirical

methods in natural language processing-Volume 10 (pp. 79-86). Association for

Computational Linguistics. doi: 10.3115/1118693.1118704

Papacharissi, Z., & de Fatima Oliveira, M. (2012). Affective news and networked publics: The

rhythms of news storytelling on# Egypt. Journal of Communication, 62(2), 266-282. doi:

10.1111/j.1460-2466.2012.01630.x

Park, H. W., & Jankowski, N. W. (2008). A hyperlink network analysis of citizen blogs in South

Korean politics. Javnost-The Public, 15(2), 57-74.

http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/13183222.2008.11008970

Park, S., Ko, M., Kim, J., Liu, Y., & Song, J. (2011, March). The politics of comments:

predicting political orientation of news stories with commenters' sentiment patterns. In

Proceedings of the ACM 2011 conference on Computer supported cooperative work (pp.

113-122). doi: 10.1145/1958824.1958842

Park, H. W. (2003). Hyperlink network analysis: A new method for the study of social structure

on the web. Connections, 25(1), 49-61. Retrieved from: http://insna.org/connections.html

Park, H. W., & Thelwall, M. (2003). Hyperlink analyses of the World Wide Web: A

review. Journal of Computer-Mediated Communication, 8(4), 0-0. doi: 10.1111/j.1083-

6101.2003.tb00223.x

Parmelee, J. H. (2013). The agenda-building function of political tweets. New Media & Society,

16(3), 434-450. https://doi-org.ezproxy.bgsu.edu:9443/10.1177/1461444813487955

Parmelee, J. H., & Bichard, S. L. (2011). Politics and the Twitter revolution: How tweets

influence the relationship between political leaders and the public. Lexington Books.


Pew Research Center. (2012). Social media and political engagement. Retrieved from:

http://www.pewinternet.org/2012/10/19/social-media-and-political-engagement/

Pew Research Center. (2015). State of the news media 2015. Retrieved from:

http://www.journalism.org/files/2015/04/FINAL-STATE-OF-THE-NEWS-MEDIA.pdf

Pew Research Center. (2016a). Voters’ perceptions of the candidates: traits, ideology and impact

on issues. Retrieved from: http://www.people-press.org/2016/07/14/voters-perceptions-

of-the-candidates-traits-ideology-and-impact-on-issues/

Pew Research Center. (2016b). Election 2016: Campaigns as a direct source of news. Retrieved

from: http://www.journalism.org/2016/07/18/election-2016-campaigns-as-a-direct-

source-of-news/

Pew Research Center. (2016c). In Their Own Words: Why Voters Support – and Have Concerns

About – Clinton and Trump. Retrieved from: http://www.people-press.org/2016/09/21/in-

their-own-words-why-voters-support-and-have-concerns-about-clinton-and-trump/

Roberts, M., & McCombs, M. (1994). Agenda setting and political advertising: Origins of the

news agenda. Political Communication, 11(3), 249-262.

http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/10584609.1994.9963030

Roberts, M., Wanta, W., & Dzwo, T. H. D. (2002). Agenda setting and issue salience online.

Communication Research, 29(4), 452-465. https://doi-

org.ezproxy.bgsu.edu:9443/10.1177/0093650202029004004

Robertson, S. P. (2011, January). Changes in referents and emotions over time in election-related

social networking dialog. In System Sciences (HICSS), 2011 44th Hawaii International

Conference on (pp. 1-9). IEEE. doi: 10.1109/HICSS.2011.97


Russell, F. M., Hendricks, M. A., Choi, H., & Stephens, E. C. (2015). Who Sets the News

Agenda on Twitter? Journalists’ posts during the 2013 US government shutdown. Digital

Journalism, 3(6), 925-943.

http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/21670811.2014.995918

Sayre, B., Bode, L., Shah, D., Wilcox, D., & Shah, C. (2010). Agenda setting in a digital age:

Tracking attention to California Proposition 8 in social media, online news and

conventional news. Policy & Internet, 2(2), 7-32. doi: 10.2202/1944-2866.1040

Shoemaker, P. J., & Reese, S. D. (1996). Mediating the message: Theories of influences on mass

media content. NY: White Plains.

Spinner, J. (2015). How journalists are using social media monitoring to support local news

coverage. Retrieved from:

http://www.cjr.org/united_states_project/social_media_geotagging_local_journalists.php

Statista (2016). Number of Twitter users in the United States from 2014 to 2020 (in millions).

Retrieved from: https://www.statista.com/statistics/232818/active-us-twitter-user-growth/

Stroud, N. J. (2011). Niche news: The politics of news choice. New York, NY: Oxford University

Press.

Sweetser, K. D., Golan, G. J., & Wanta, W. (2008). Intermedia agenda setting in television,

advertising, and blogs during the 2004 election. Mass Communication & Society, 11(2),

197-216. http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/15205430701590267

Tao, K., Hauff, C., Abel, F., & Houben, G-J. (2014). Information Retrieval for Twitter Data. In

K. Weller, A. Bruns, J. Burgess, M. Mahrt, & C. Puschmann (Eds.), Twitter and society

(pp. 195-206). New York: Peter Lang.


Tandoc Jr, E. C., & Jenkins, J. (2015). The Buzzfeedication of journalism? How traditional news

organizations are talking about a new entrant to the journalistic field will surprise you!.

Journalism, 18(4), 482-500. https://doi.org/10.1177/1464884915620269

Thelwall, M. (2014). Sentiment analysis and time series with Twitter. In K. Weller, A. Bruns, J.

Burgess, M. Mahrt, & C. Puschmann (Eds.), Twitter and society (pp. 83-95). New York:

Peter Lang.

Thelwall, M., Buckley, K., & Paltoglou, G. (2011). Sentiment in Twitter events. Journal of the

American Society for Information Science and Technology, 62(2), 406-418. doi:

10.1002/asi.21462

Thelwall, M., & Prabowo, R. (2007). Identifying and characterizing public science‐related fears

from RSS feeds. Journal of the American Society for Information Science and

Technology, 58(3), 379-390. doi: 10.1002/asi.20504

Tumasjan, A., Sprenger, T. O., Sandner, P. G., & Welpe, I. M. (2010). Election forecasts with

Twitter: How 140 characters reflect the political landscape. Social Science Computer

Review, 29(4), 402-418. https://doi-

org.ezproxy.bgsu.edu:9443/10.1177/0894439310386557

Thelwall, M., Sud, P., & Wilkinson, D. (2012). Link and co‐inlink network diagrams with URL

citations or title mentions. Journal of the American Society for Information Science and

Technology, 63(4), 805-816. doi: 10.1002/asi.21709

Vargo, C. J., Basilaia, E., & Shaw, D. L. (2015), Event versus Issue: Twitter Reflections of

Major News, A Case Study. In L. Robinson, S. R. Cotten, J. Schulz (Eds.),

Communication and Information Technologies Annual (Studies in Media and

Communications, Volume 9) (pp.215 – 239). Emerald Group Publishing Limited.


Vargo, C. J., & Guo, L. (2017). Networks, Big Data, and Intermedia Agenda Setting: An

Analysis of Traditional, Partisan, and Emerging Online US News. Journalism & Mass

Communication Quarterly, 1077699016679976. https://doi-

org.ezproxy.bgsu.edu:9443/10.1177/1077699016679976

Vargo, C. J., Guo, L., McCombs, M., & Shaw, D. L. (2014). Network issue agendas on Twitter

during the 2012 US presidential election. Journal of Communication, 64(2), 296-316.

doi: 10.1111/jcom.12089

Vliegenthart, R., & Walgrave, S. (2008). The contingency of intermedia agenda setting: A

longitudinal study in Belgium. Journalism & Mass Communication Quarterly, 85(4),

860-877. https://doi-org.ezproxy.bgsu.edu:9443/10.1177/107769900808500409

Vonbun, R., Königslöw, K. K. V., & Schoenbach, K. (2016). Intermedia agenda-setting in a

multimedia news environment. Journalism, 17(8), 1054-1073. https://doi-

org.ezproxy.bgsu.edu:9443/10.1177/1464884915595475

Wallsten, K. (2007). Agenda setting and the blogosphere: An analysis of the relationship

between mainstream media and political blogs. Review of Policy Research, 24(6), 567-

587. doi: 10.1111/j.1541-1338.2007.00300.x

Wortham, J. (2012, October 8). The Presidential Campaign on Social Media. The New York

Times. Retrieved from:

http://www.nytimes.com/interactive/2012/10/08/technology/campaign-social-

media.html?action=click&contentCollection=Technology&module=RelatedCoverage&p

gtype=article&region=EndOfArticle&_r=0

Williams, B. A., & Carpini, M. X. D. (2004). Monica and Bill All the Time and Everywhere The

Collapse of Gatekeeping and Agenda Setting in the New Media Environment. American


Behavioral Scientist, 47(9), 1208-1230. https://doi-

org.ezproxy.bgsu.edu:9443/10.1177/0002764203262344

Williams, A. P., Trammell, K. D., Postelnicu, M., Landreville, K. D., & Martin, J. D. (2005).

Blogging and hyperlinking: Use of the Web to enhance viability during the 2004 US

campaign. Journalism Studies, 6(2), 177-186.

http://dx.doi.org.ezproxy.bgsu.edu:8080/10.1080/14616700500057262

Yun, G. W., David, M., Park, S., Joa, C. Y., Labbe, B., Lim, J., ... & Hyun, D. (2016). Social

media and flu: Media Twitter accounts as agenda setters. International journal of medical

informatics, 91, 67-73. https://doi.org/10.1016/j.ijmedinf.2016.04.009