
“IT DOESN’T MATTER NOW WHO’S RIGHT AND WHO’S NOT:” A MODEL TO EVALUATE AND DETECT BOT BEHAVIOR ON TWITTER

by Braeden Bowen

Honors Thesis
Submitted to the Department of Computer Science and the Department of Political Science
Wittenberg University
In partial fulfillment of the requirements for Wittenberg University honors
April 2021

On April 18, 2019, United States Special Counsel Robert Mueller III released a 448-page report on Russian influence on the 2016 United States presidential election [32]. In the report, Mueller and his team detailed a vast network of false social media accounts acting in a coordinated, concerted campaign to influence the outcome of the election and sow systemic distrust in Western democracy. Helmed by the Russian Internet Research Agency (IRA), a state-sponsored organization dedicated to operating the account network, the campaign engaged in "information warfare" to undermine the United States democratic political system.

Russia's campaign of influence on the 2016 U.S. elections is emblematic of a new breed of warfare designed to achieve long-term foreign policy goals by preying on inherent social vulnerabilities that are amplified by the novelty and anonymity of social media [13]. To this end, state actors can weaponize automated accounts controlled through software [55] to exert influence through the dissemination of a narrative or the production of inorganic support for a person, issue, or event [13].

Research Questions

This study asks six core questions about bots, bot activity, and disinformation online:

RQ 1: What are bots?
RQ 2: Why do bots work?
RQ 3: When have bot campaigns been executed?
RQ 4: How do bots work?
RQ 5: What do bots do?
RQ 6: How can bots be modeled?

Hypotheses

With respect to RQ 6, I propose BotWise, a model designed to distill average behavior on the social media platform Twitter from a set of real users and compare that data against novel input. Regarding this model, I have three central hypotheses (a brief illustrative sketch of the comparison described in H 3 follows the overview of social bots below):

H 1: Real users and bots exhibit distinct behavioral patterns on Twitter.
H 2: The behavior of accounts can be modeled based on account data and activity.
H 3: Novel bots can be detected using these models by calculating the difference between modeled behavior and novel behavior.

Bots

Automated accounts on social media are not inherently malicious. Originally, software robots, or "bots," were used to post content automatically on a set schedule. Since then, bots have evolved significantly and can now serve a variety of innocuous purposes, including marketing, distributing information, responding automatically, aggregating news, or simply highlighting and reposting interesting content [13].

No matter their purpose, bots are built entirely from human-written code. As a result, every action and decision they are capable of must be preprogrammed by the account's owner. But because they are largely self-reliant after creation, bots can generate massive amounts of content and data very quickly.

Many limited-use bots make it abundantly clear that they are inhuman actors. Some bots, called social bots, however, attempt to deceive real users by emulating human behavior as closely as possible, creating a convincing imitation of a genuine account [13]. These accounts may attempt to build a credible persona as a real person in order to avoid detection, sometimes going as far as being partially controlled by a human and partially controlled by software [54].
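To make H 2 and H 3 concrete, the following is a minimal, hypothetical sketch of the comparison idea: average the behavioral features of a set of known real users, then score a novel account by its distance from that average. The specific features (tweets per day, retweet ratio, follower/friend ratio), the Euclidean distance metric, and the threshold below are illustrative assumptions, not BotWise's actual design.

# Illustrative sketch only; not the BotWise implementation.
# Idea: model "average" real-user behavior as a mean feature vector (H 2),
# then flag novel accounts whose behavior lies far from that average (H 3).
import math
from dataclasses import dataclass
from statistics import mean

@dataclass
class AccountFeatures:
    tweets_per_day: float        # example behavioral feature (assumed)
    retweet_ratio: float         # fraction of posts that are retweets (assumed)
    followers_per_friend: float  # followers divided by accounts followed (assumed)

    def as_vector(self) -> list[float]:
        return [self.tweets_per_day, self.retweet_ratio, self.followers_per_friend]

def model_average(real_users: list[AccountFeatures]) -> list[float]:
    """Distill average behavior from a set of known real users."""
    columns = zip(*(u.as_vector() for u in real_users))
    return [mean(col) for col in columns]

def behavioral_distance(account: AccountFeatures, average: list[float]) -> float:
    """Difference between an account's behavior and the modeled average."""
    return math.dist(account.as_vector(), average)

def looks_like_bot(account: AccountFeatures, average: list[float],
                   threshold: float = 3.0) -> bool:
    # The threshold is arbitrary here; it would need calibration on labeled accounts.
    return behavioral_distance(account, average) > threshold

# Example usage with made-up numbers:
real = [AccountFeatures(4.2, 0.3, 1.1), AccountFeatures(6.0, 0.5, 0.9)]
avg = model_average(real)
suspect = AccountFeatures(tweets_per_day=180.0, retweet_ratio=0.95, followers_per_friend=0.01)
print(looks_like_bot(suspect, avg))  # True: far from the modeled average

In this framing, model_average stands in for the modeled behavior of H 2 and behavioral_distance for the "difference" of H 3; a real detector would choose richer features and calibrate the threshold on labeled data.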
The more sophisticated the bot, the more effectively it can shroud itself and blend into the landscape of real users online.

Not all social bots are designed benevolently. Malicious bots, those designed with an exploitative or abusive purpose in mind, can be built from the same framework that creates legitimate social bots. These bad actors are created with the intention of exploiting and manipulating information by infiltrating a population of real, unsuspecting users [13].

If a malicious actor like Russia's Internet Research Agency were invested in creating a large-scale disinformation campaign with bots, a single account would be woefully insufficient to produce meaningful results. Malicious bots can be coordinated with extreme scalability to feign the existence of a unified populace or movement, or to inject disinformation or polarization into an existing community of users [13], [30]. These networks, called "troll factories," "farms," or "botnets," can more effectively enact an influence campaign [9] and are often hired by partisan groups or weaponized by states to underscore or amplify a political narrative.

Social Media Usage

In large part, the effectiveness of bots depends on users' willingness to engage with social media. Luckily for bots, social media usage in the U.S. has skyrocketed since the medium's inception in the early 2000s. In 2005, as the Internet began to edge into American life as a mainstay of communication, a mere 5% of Americans reported using social media [40], which was then just a burgeoning new form of online interconnectedness. Just a decade and a half later, almost 75% of Americans reported using YouTube, Instagram, Snapchat, Facebook, or Twitter. In a similar study, 90% of Americans aged 18-29, the youngest age range surveyed, reported activity on social media [39]. In 2020, over 3.8 billion people across the globe, nearly 49% of the world's population, held a presence on social media [23]. In April 2020 alone, Facebook reported that more than 3 billion of those people had used its products [36].

The success of bots also relies on users' willingness to use social media not just as a platform for social connections, but as an information source. Again, the landscape is ripe for influence: in January 2021, more than half (53%) of U.S. adults reported reading news from social media and over two-thirds (68%) reported reading news from news websites [45]. In a 2018 Pew study, over half of Facebook users reported getting their news exclusively from Facebook [14]. In large part, this access to information is free, open, and unrestricted, a novel method for the dissemination of news media.

Generally, social media has made the transmission of information easier and faster than ever before [22]. Information that once spread slowly by word of mouth now spreads instantaneously through increasingly massive networks, bringing worldwide communication delays to nearly zero. Platforms like Facebook and Twitter have been marketed by proponents of democracy as a means of increasing democratic participation, free speech, and political engagement [49]. In theory, Sunstein [47] argues, social media as a vehicle of self-governance should bolster democratic information sharing. In reality, though, the proliferation of "fake news," disinformation, and polarization has threatened cooperative political participation [47].
While social media was intended to decentralize and popularize democracy and free speech [49], the advent of these new platforms has inadvertently decreased the authority of institutions (DISNFO) and the power of public officials to influence the public agenda [27] by subdividing groups of people into unconnected spheres of information.

Social Vulnerabilities

Raw code and widespread social media usage alone are not sufficient to usurp an electoral process or disseminate a nationwide disinformation campaign. To successfully avoid detection, spread a narrative, and eventually "hijack" a consumer of social media, bots must exploit a number of inherent social vulnerabilities that, while largely predating social media, may be exacerbated by the platforms' novelty and opportunity for relative anonymity [44]. Even the techniques for social exploitation are not new: methods of social self-insertion often mirror traditional methods of exploitation for software and hardware [54].

The primary social vulnerability that bot campaigns may exploit is division. By subdividing large groups of people and herding them into like-minded circles of users inside of which belief-affirming information flows, campaigns can decentralize political and social narratives, reinforce beliefs, polarize groups, and, eventually, pit groups against one another, even when screens are off [31].

Participatory Media

Publicly and commercially, interconnectedness, not disconnectedness, is the animus of social media platforms like Facebook, whose public aim is to connect disparate people and give open access to information [58]. In practice, though, this interconnectedness largely revolves around a user's chosen groups, not the platform's entire user base.

A participant in social media is given a number of choices: what platforms to join, whom to connect with, whom to follow, and what to see. Platforms like Facebook and Twitter revolve around sharing information with users' personal connections and associated groups: a tweet is sent out to all of a user's followers, and a Facebook status update can be seen by anyone within a user's chosen group of "friends." Users can post text, pictures, GIFs, videos, and links to outside sources, including other social media sites. Users also have the ability to restrict who can see the content they post, from anyone on the entire platform to no one at all. Users choose what content to participate in and interact with and choose which groups to include themselves in.

This choice is the first building block of division: while participation in self-selected groups online provides users with a sense of community and belonging [5], it also builds an individual