Journal of Design and Science • Issue 6: Unreal

On the Internet, Nobody Knows You're a Bot: Pseudoanonymous Influence Operations and Networked Social Movements

Brian Friedberg, Joan Donovan
Published on: Aug 10, 2019
DOI: 10.21428/7808da6b.45957184
License: Creative Commons Attribution 4.0 International License (CC-BY 4.0)

Brian Friedberg is an investigative ethnographer whose work focuses on the impacts that alternative media, anonymous communities, and popular cultures have on political communication and organization. Brian works with Dr. Joan Donovan, who heads one of the world's leading teams focused on understanding and combating online disinformation and extremism, based at Harvard's Shorenstein Center on Media, Politics and Public Policy. In this essay, Brian and Joan explore a challenge the Unreal has presented for the study of activism online: the question of whether an online actor is real or synthetic. They examine what happens when politically motivated humans impersonate vulnerable people or populations online to exploit their voices, positionality, and power.
— Ethan Zuckerman, Editor

"It is not true that in order to live one has to believe in one's own existence. There is no necessity to that. No matter what, our consciousness is never the echo of our own reality, of an existence set in 'real time.' But rather it is its echo in 'delayed time,' the screen of the dispersion of the subject and of its identity — only in our sleep, our unconscious, and our death are we identical to ourselves."
— Jean Baudrillard

Introduction

When facts become hard to establish, distortion takes over. Social media has given rise to countless operators who assume false identities, infiltrating political networks and using them to influence public opinion.
And while social media platforms say they are working to stop this malfeasance, they remain open to all manner of abuse by bad actors. The outcomes are varied, but in almost every instance a con game is operating — one that manipulates other social media users and online social movements with reams of posts, a firehose of unprovable "facts" that influences public debate on key social issues in measurable ways.

For example, with each election cycle a tremendous amount of online content is generated: users digest and analyze candidates' words and images, and politicians themselves produce a waterfall of social media posts as they seek to connect with the electorate. From petitions and polling to meme campaigns and the interventions of marketing firms like Cambridge Analytica, social media and networked communication have created enormous opportunities for professionals and amateurs alike to attempt to influence public opinion. At the same time, a robust practice of punditry provides exhaustive analysis of these campaigns.

The prevalence and visibility of these influence operations have also intensified American tendencies toward skepticism and conspiracism. A tacit coalition of journalists, activists, and academics now tries to make sense of these rival perceptions of polarization, with pessimistic analyses suggesting that discourse as we know it is dead, or perhaps never existed. The cacophonous debate about the role social media played in election outcomes leaves most of us with two options: belief or indifference. Since the 2016 U.S. presidential election, we have placed tremendous importance on the influencing of political participation and attitudes, as well as on measuring this influence.
What we see as the study of disinformation may itself be an exercise of power: social and material harms to marginalized groups are often deprioritized in these analyses in favor of party politics and the interests of big data. Algorithmic anti-blackness (Noble, 2018), relentless gendertrolling (Mantilla, 2015), digital redlining (Gilliard & Culik, 2016), the exploitation of gig economy workers (Rosenblat, 2018; Gray, 2019), and a resurgence of white nationalist organizing in the new public commons present immediate threats to minoritized groups, many of which have been mounting their own resistance to attacks that benefit from the cloak of online pseudoanonymity. Our understanding of disinformation, therefore, must also include an understanding of these harms, not just their effects on party politics.

Historically, activists have adopted anonymization techniques to ensure operational security when organizing on open or closed networks. Despite Facebook leaders' protestations to the contrary, anyone may at any time create a new Facebook account not linked to their verified legal identity. And while a user's reach depends on access to amplification, users with strong networks, or those assisted by artificial tools that add automated followers, are favored by algorithmic recommendation mechanisms. Leveraging these mechanisms creates a newfound hypervisibility for individuals who regularly create content with high rates of interaction. It is a double-edged sword, however: political figures, pundits, and social movements all enjoy newfound reach while simultaneously facing intense scrutiny. Others, outed as activists online, have lost their jobs or other membership status. Alongside this hypervisibility of celebrities, candidates, and elected officials arise new opportunities and incentives to influence online conversations anonymously.
We have seen foreign and domestic actors leverage amplification opportunities and algorithmic vulnerabilities using political wedge issues, such as immigration, LGBT rights, anti-Black racism, religious discrimination, and more. It is these divisive politicized positions concerning identity, authority, and justice that fuel news cycles and electoral campaigns. Politicians deploy these issues during campaigns, amplifying and iterating them across mainstream and independent media alike. These issues are frequently at the heart of operations designed to influence elections, culture, and policy, with operators accomplishing these goals through social media activity and the manipulation of media coverage. By artificially creating the impression of broad support and legitimacy, inauthentic actors can wield disproportionate influence, giving their fabrications a solid platform on social media and making it appear that they have large numbers of supporters.

Platforms such as Facebook call these operations "coordinated inauthentic behavior" (CIB), though they term the same practices authentic when "legitimate" users, rather than political operatives, paid partisans, or bots, perform them. For example, when instances of artificial amplification (bot networks) or misrepresentation (sockpuppet accounts) were covered in the news or discussed by social media users after the 2016 US election, platform companies labelled this activity as CIB based on their own internal data analysis. Because of public outcry and political pressure, major platforms are now removing detectable automated accounts and reporting these removals publicly. At the same time, the accusation that an account is a "Russian bot" has become a common dismissive reply online, especially to those calling attention to trolling campaigns or online harassment.
Instead of focusing on the notion of authenticity, our analysis centers on "pseudoanonymous influence operations" (PIO), wherein politically motivated actors impersonate marginalized, underrepresented, and vulnerable groups to malign, disrupt, or exaggerate their causes. Because PIOs travel across platforms, researchers cannot rely on any one platform's designation of CIB to assess the complexity of an operation. PIOs are more subtle than botnets, and their data traces can be difficult to track. Rather than relying on volume for amplification, as bots do, their inauthenticity comes from the adoption of a minority persona by a human seeking political influence. PIOs play with and upon political polarization, confirming our most deeply held beliefs with their messaging, either reinforcing preexisting stereotypes or creating a convenient straw man of the (approximated) positions of political opponents.

In order to investigate the phenomenon of PIOs, we ask: "What happens when a bot is not a bot?" We compare the residual artifacts, impacts, and press coverage surrounding four pseudonymous (using a pen name) or pseudoanonymous political actors who were charged with inauthentic behavior online. Our analysis reveals how a tradition of anonymity online became a tactic for appropriating the style and techniques of networked social movements, groups of loosely affiliated individuals who use the Internet to coordinate action (Donovan 2018). For example, the Occupy Movement in 2011 used social platforms, conference calls, and the Internet to coordinate protests in hundreds of public parks around the world (Donovan 2016). And it is not only that networked social movements used social media to coordinate public protests; by repurposing search engine optimization techniques for movement goals, they also learned to "play the algorithms" in order to draw online attention to specific issues.