A Model to Evaluate and Detect Bot Behavior on Twitter


“IT DOESN’T MATTER NOW WHO’S RIGHT AND WHO’S NOT:” A MODEL TO EVALUATE AND DETECT BOT BEHAVIOR ON TWITTER

by Braeden Bowen

Honors Thesis submitted to the Department of Computer Science and the Department of Political Science, Wittenberg University, in partial fulfillment of the requirements for Wittenberg University honors, April 2021

On April 18, 2019, United States Special Counsel Robert Mueller III released a 448-page report on Russian influence on the 2016 United States presidential election [32]. In the report, Mueller and his team detailed a vast network of false social media accounts acting in a coordinated, concerted campaign to influence the outcome of the election and sow systemic distrust in Western democracy. Helmed by the Russian Internet Research Agency (IRA), a state-sponsored organization dedicated to operating the account network, the campaign engaged in "information warfare" to undermine the United States democratic political system.

Russia's campaign of influence on the 2016 U.S. elections is emblematic of a new breed of warfare designed to achieve long-term foreign policy goals by preying on inherent social vulnerabilities that are amplified by the novelty and anonymity of social media [13]. To this end, state actors can weaponize automated accounts controlled through software [55] to exert influence through the dissemination of a narrative or the production of inorganic support for a person, issue, or event [13].

Research Questions

This study asks six core questions about bots, bot activity, and disinformation online:

RQ 1: What are bots?
RQ 2: Why do bots work?
RQ 3: When have bot campaigns been executed?
RQ 4: How do bots work?
RQ 5: What do bots do?
RQ 6: How can bots be modeled?

Hypotheses

With respect to RQ 6, I will propose BotWise, a model designed to distill average behavior on the social media platform Twitter from a set of real users and compare that data against novel input. Regarding this model, I have three central hypotheses:

H 1: Real users and bots exhibit distinct behavioral patterns on Twitter.
H 2: The behavior of accounts can be modeled based on account data and activity.
H 3: Novel bots can be detected using these models by calculating the difference between modeled behavior and novel behavior (a minimal sketch of this distill-and-compare idea appears at the end of this excerpt).

Bots

Automated accounts on social media are not inherently malicious. Originally, software robots, or "bots," were used to post content automatically on a set schedule. Since then, bots have evolved significantly and can now be used for a variety of innocuous purposes, including marketing, distribution of information, automatic responding, news aggregation, or simply highlighting and reposting interesting content [13]. No matter their purpose, bots are built entirely from human-written code. As a result, every action and decision they are capable of taking must be preprogrammed and decided by the account's owner. But because they are largely self-reliant after creation, bots can generate massive amounts of content and data very quickly.

Many limited-use bots make it abundantly clear that they are inhuman actors. Some bots, called social bots, though, attempt to subvert real users by emulating human behavior as closely as possible, creating a mirage of imitation [13]. These accounts may attempt to build a credible persona as a real person in order to avoid detection, sometimes going as far as being partially controlled by a human and partially controlled by software [54].
The more sophisticated the bot, the more effectively it can shroud itself and blend into the landscape of real users online.

Not all social bots are designed benevolently. Malicious bots, those designed with an exploitative or abusive purpose in mind, can be built from the same framework that creates legitimate social bots. These bad actors are created with the intention of exploiting and manipulating information by infiltrating a population of real, unsuspecting users [13]. If a malicious actor like Russia's Internet Research Agency were invested in creating a large-scale disinformation campaign with bots, a single account would be woefully insufficient to produce meaningful results. Malicious bots can be coordinated with extreme scalability to feign the existence of a unified populace or movement, or to inject disinformation or polarization into an existing community of users [13], [30]. These networks, called "troll factories," "farms," or "botnets," can more effectively enact an influence campaign [9] and are often hired by partisan groups or weaponized by states to underscore or amplify a political narrative.

Social Media Usage

In large part, the effectiveness of bots depends on users' willingness to engage with social media. Luckily for bots, social media usage in the U.S. has skyrocketed since the medium's inception in the early 2000s. In 2005, as the Internet began to edge into American life as a mainstay of communication, a mere 5% of Americans reported using social media [40], which was then just a burgeoning new form of online interconnectedness. Just a decade and a half later, almost 75% of Americans reported using YouTube, Instagram, Snapchat, Facebook, or Twitter. In a similar study, 90% of Americans aged 18-29, the youngest age range surveyed, reported activity on social media [39]. In 2020, over 3.8 billion people across the globe, nearly 49% of the world's population, held a presence on social media [23]. In April 2020 alone, Facebook reported that more than 3 billion of those people had used its products [36].

The success of bots also relies on users' willingness to use social media not just as a platform for social connections, but as an information source. Again, the landscape is ripe for influence: in January 2021, more than half (53%) of U.S. adults reported reading news from social media and over two-thirds (68%) reported reading news from news websites [45]. In a 2018 Pew study, over half of Facebook users reported getting their news exclusively from Facebook [14]. In large part, this access to information is free, open, and unrestricted, a novel method for the dissemination of news media.

Generally, social media has made the transmission of information easier and faster than ever before [22]. Information that once spread slowly by word of mouth now spreads instantaneously through increasingly massive networks, bringing worldwide communication delays to nearly zero. Platforms like Facebook and Twitter have been marketed by proponents of democracy as a mode of increasing democratic participation, free speech, and political engagement [49]. In theory, Sunstein [47] says, social media as a vehicle of self-governance should bolster democratic information sharing. In reality, though, the proliferation of "fake news," disinformation, and polarization has threatened cooperative political participation [47].
While social media was intended to decentralize and popularize democracy and free speech [49], the advent of these new platforms has inadvertently decreased the authority of institutions (DISNFO) and the power of public officials to influence the public agenda [27] by subdividing groups of people into unconnected spheres of information.

Social Vulnerabilities

Raw code and widespread social media usage alone are not sufficient to usurp an electoral process or disseminate a nationwide disinformation campaign. To successfully avoid detection, spread a narrative, and eventually "hijack" a consumer of social media, bots must work to exploit a number of inherent social vulnerabilities that, while largely predating social media, may be exacerbated by the platforms' novelty and opportunity for relative anonymity [44]. Even the techniques for social exploitation are not new: methods of social self-insertion often mirror traditional methods of exploitation for software and hardware [54].

The primary social vulnerability that bot campaigns may exploit is division. By subdividing large groups of people and herding them into like-minded circles of users inside of which belief-affirmative information flows, campaigns can decentralize political and social narratives, reinforce beliefs, polarize groups, and, eventually, pit groups against one another, even when screens are off [31].

Participatory Media

Publicly and commercially, interconnectedness, not disconnectedness, is the animus of social media platforms like Facebook, whose public aim is to connect disparate people and give open access to information [58]. In practice, though, this interconnectedness largely revolves around a user's chosen groups, not the platform's entire user base. A participant in social media is given a number of choices: what platforms to join, who to connect with, who to follow, and what to see. Platforms like Facebook and Twitter revolve around sharing information with users' personal connections and associated groups: a tweet is sent out to all of a user's followers, and a Facebook status update can be seen by anyone within a user's chosen group of "friends." Users can post text, pictures, GIFs, videos, and links to outside sources, including other social media sites. Users also have the ability to restrict who can see the content they post, from anyone on the entire platform to no one at all. Users choose what content to participate in and interact with and choose which groups to include themselves in. This choice is the first building block of division: while participation in self-selected groups online provides users with a sense of community and belonging [5], it also builds an individual
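To make H 2 and H 3 concrete, the following is a minimal Python sketch of the distill-and-compare idea described in the Hypotheses section: average a handful of behavioral features across known-real accounts, then flag a novel account whose deviation from that average exceeds a cutoff. The feature names, toy values, and threshold are illustrative assumptions, not the actual features or mechanics of the BotWise model.

```python
# Hedged sketch of a distill-and-compare bot detector (H2/H3).
# Feature names, sample values, and the threshold are hypothetical.
from statistics import mean, stdev

FEATURES = ["tweets_per_day", "follower_following_ratio", "urls_per_tweet"]

def build_profile(accounts):
    """H2: model 'average' real-user behavior as a per-feature
    mean and standard deviation over known-real accounts."""
    profile = {}
    for f in FEATURES:
        values = [a[f] for a in accounts]
        profile[f] = (mean(values), stdev(values))
    return profile

def deviation_score(profile, account):
    """H3: the 'difference between modeled behavior and novel
    behavior', here an average absolute z-score across features."""
    total = 0.0
    for f in FEATURES:
        mu, sigma = profile[f]
        total += abs(account[f] - mu) / sigma if sigma else 0.0
    return total / len(FEATURES)

if __name__ == "__main__":
    # Toy data: three hand-made "real" accounts and one bot-like account.
    real_users = [
        {"tweets_per_day": 4.0, "follower_following_ratio": 1.1, "urls_per_tweet": 0.2},
        {"tweets_per_day": 7.5, "follower_following_ratio": 0.9, "urls_per_tweet": 0.3},
        {"tweets_per_day": 2.5, "follower_following_ratio": 1.4, "urls_per_tweet": 0.1},
    ]
    suspect = {"tweets_per_day": 180.0, "follower_following_ratio": 0.05, "urls_per_tweet": 0.9}

    profile = build_profile(real_users)
    score = deviation_score(profile, suspect)
    THRESHOLD = 3.0  # arbitrary cutoff; a deployed detector would tune this
    label = "bot-like" if score > THRESHOLD else "human-like"
    print(f"deviation score: {score:.1f} -> {label}")
```

A real detector would draw on far richer signals (posting cadence, content similarity, follower network structure) and a tuned or learned decision boundary rather than a fixed cutoff, but the core loop is the same: model typical behavior, then measure distance from it.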