
MIAMI UNIVERSITY

The Graduate School

Certificate for Approving the Dissertation

We hereby approve the Dissertation

of

Bridget Christine Gelms

Candidate for the Degree

Doctor of Philosophy

______Dr. Jason Palmeri, Director

______Dr. Tim Lockridge, Reader

______Dr. Michele Simmons, Reader

______Dr. Lisa Weems, Graduate School Representative

ABSTRACT

VOLATILE VISIBILITY: THE EFFECTS OF ONLINE HARASSMENT ON FEMINIST CIRCULATION AND PUBLIC DISCOURSE

by

Bridget C. Gelms

As our digital environments—in their inhabitants, communities, and cultures—have evolved, harassment, unfortunately, has become the status quo on the internet (Duggan, 2014 & 2017; Jane, 2014b). Harassment is an issue that disproportionately affects women, particularly women of color (Citron, 2014; Mantilla, 2015), LGBTQIA+ women (Herring et al., 2002; Warzel, 2016), and women who engage in social justice, civil rights, and feminist discourses (Cole, 2015; Davies, 2015; Jane, 2014a). Whitney Phillips (2015) notes that it's politically significant to pay attention to issues of online harassment because this kind of invective calls "attention to dominant cultural mores" (p. 7). Keeping our finger on the pulse of such attitudes is imperative if we are to understand who is excluded from digital publics and how these exclusions perpetuate and work to "preserve the internet as a space free of politics and thus free of challenge to white masculine heterosexual hegemony" (Higgin, 2013, n.p.). While rhetoric and writing as a field has a long history of examining myriad exclusionary practices that occur in public discourses, we still have much work to do in understanding how online harassment, particularly that which is gendered, manifests in digital publics and to what rhetorical effect.

In this dissertation, I critically examine how harassment is enabled and circulated by digital platforms as well as the effects it has on people, online cultures, and design and policy. I outline a theory of what I call "volatile visibility," the correlation between a woman's circulation online and the amount of harassment she experiences. To document and analyze the effects of volatile visibility, I conducted a survey and in-depth interviews with women who have experienced severe forms of online harassment. Their stories reveal how online harassment works to maintain existing cultural boundaries that exclude women from public discourses. Therefore, I argue online harassment, in its influence on how we exist and interact online, dampens women's rhetorical influence and limits their opportunities for expression.

This work has implications for social media rhetorics, circulation, and digital methodologies. I also present pedagogical implications, arguing we should incorporate concerns of online harassment into our digital writing courses to help students understand how harassment influences who can safely engage in public discourse online and how they can do so. I conclude with advice for what we can do as researchers, designers, and citizens to intervene in cultures of online harassment.

VOLATILE VISIBILITY: THE EFFECTS OF ONLINE HARASSMENT ON FEMINIST CIRCULATION AND PUBLIC DISCOURSE

A DISSERTATION

Presented to the Faculty of

Miami University in partial

fulfillment of the requirements

for the degree of

Doctor of Philosophy

Department of English

by

Bridget C. Gelms

The Graduate School

Miami University

Oxford, Ohio

2018

Dissertation Director: Dr. Jason Palmeri

©

Bridget C. Gelms

2018

TABLE OF CONTENTS

List of Tables ...... iv

List of Figures ...... v

Dedication ...... vi

Acknowledgements ...... vii

Introduction: Online Harassment and Why Being a Woman Online is Really, Really Hard ...... 1

Chapter 1: Online Harassment is an Issue of Social Justice ...... 15

Chapter 2: Volatile Visibility and the Methodological Problem of Harassment ...... 40

Chapter 3: The High Stakes of Online Harassment: Threats to Women and Feminist Action ...... 59

Chapter 4: Tactics of Avoidance: How Harassment Makes Women Disappear ...... 90

Chapter 5: Avenues for Change: Policies and Pedagogies of Online Harassment ...... 114

References ...... 144

Appendix A: Survey Questions ...... 167

LIST OF TABLES

Table 2.1: Responses to the survey question, "what are your racial and/or ethnic identifications?" ...... 54

Table 2.2: Responses to the survey question, "how do you describe your sexuality?" ...... 55

Table 2.3: Responses to the survey question, "how do you describe your gender?" ...... 56

Table 3.1: Stories of how harassment affects behavior and well-being ...... 84

Table 3.2: Stories of identities and topics that arouse online harassment ...... 85

LIST OF FIGURES

Figure 1.1: @femme_esq's controversial tweet ...... 2

Figure 1.2: @femme_esq's avatar ...... 26

Figures 1.3-1.6: Imani Gandy's tweets about @femme_esq ...... 26-27

Figures 1.7-1.8: Jamilah Lemieux's tweets about @femme_esq ...... 28

Figure 1.9: Leslie Jones leaves Twitter ...... 32

Figure 5.1: Twitter's form for reporting harassment ...... 116

Figure 5.2: PragerU's video, "Are 1 in 5 Women Raped at College?" ...... 124

Figure 5.3: Pinterest's terms of service for posting content ...... 137

Figure 5.4: Pinterest's terms of service for jurisdiction ...... 138

DEDICATION

For all of the women who have been silenced or menaced by harassment.

ACKNOWLEDGEMENTS

There are many people who made this work and my ability to get to and through graduate school possible. I'd first like to thank Jackie Grutsch McKinney for helping me see my own potential as a student. I'd also like to thank the faculty at Miami who have had a hand in my success. Heidi McKee has taught me so much, and her vested interest in my progress through the program and field has helped me take my work to new heights. For that I'm grateful. Thanks also go to my committee: Michele Simmons, who introduced me to Troubling the Angels, a book that changed the way I think about person-based research and community work; Lisa Weems, whose class on youth culture and education gave me occasion to see Stand By Me for the first time (!). Her work continues to inspire me; Kate Ronald, who was part of my committee the year before her retirement. In my preliminary exam, I wrote, "I came to feminism through Twitter," to which Kate responded, "I came to feminism through books and hideous men." I will never forget that; and Tim Lockridge, who designed a graduate course that nurtured my interest in digital rhetoric and writing. It was in his class that my dissertation project began to take shape, and his feedback in that class, during my preliminary exam, and throughout my writing of this dissertation has had profound effects on my thinking about this topic and its place in our field.

The biggest thanks goes to my chair, Jason Palmeri. Jason is the kind of person who inspires you to be a better, more compassionate teacher, mentor, and advocate. His dedication to his work and students is unparalleled, and I feel very fortunate to have worked with him in the classroom and on this project. My dissertation would not be what it is without his robust and transformative feedback and support.

I also want to thank my Miami grad school buds, without whom I never would have made it through: Catherine Tetz, Cynthia Johnson, Kathleen Coffey, Matt Young, Erin Brock Carlson, Hua Zhua, Caleb Pendygraft, Enrique Paz, and Kyle Larson. I am also so grateful for cross-institution friends: Rich Shivener, Johnson, Zarah Moeggenberg, Ann Burke, and Kat Greene. And finally, I don't know where I would be without my friend and collaborator Dustin Edwards, who has sent me an unprecedented amount of Tom Hardy and painted nails throughout my dissertation writing process. Working with him has made me a better thinker and writer. I can't thank him enough. This dissertation was written mostly in the company of dogs and a one-eyed cat. Thank you goes to Blue, Pepper, Ripley, and Wendy.

Of course, I also thank my human family: my sisters Ginny and Caryn for their endless support and cheer, and my parents, who took kind of a while to figure out I wasn't a literature major but have been excited for me every step of the way. To my third parent Ralph, thank you for showing me how exciting higher education can be. Most of all, I owe thanks to Sean, my partner in all things, who has unconditionally supported me in everything I've tried since we met eleven years ago. Sean's commitment to speaking up and helping others who face injustices reminds me every day that we must use our time on this planet for good in our communities. I'm not sure how many hours he has spent listening to me talk through ideas about my project, but his approving nods have done more for me than he probably knows. His being by my side at every turn during this strange ride is what made this possible.

Introduction

Online Harassment and Why Being a Woman Online is Really, Really Hard

In June of 2016, a toddler was snatched by an alligator in front of his parents at a pond on a Walt Disney World Resort property in Orlando, Florida. Divers later recovered the body of the boy, who was killed during the incident. A month prior, in May of 2016, a toddler climbed into a gorilla enclosure at the Cincinnati Zoo. A gorilla, Harambe, picked up the child and began carrying him around the enclosure. Zoo officials determined that the gorilla was too unpredictable to assume the child would be unharmed and that tranquilizing Harambe would only aggravate him, possibly putting the child in greater danger. They ultimately shot and killed the gorilla. Both of these stories drew commentary on social media platforms about child negligence and parental responsibility, but the parents in the Cincinnati Zoo case drew much wider vilification, with critics going so far as to make connections between their pasts and their abilities to responsibly parent. What wasn't covered so widely in the media was Child Protective Services' determination that the parents were not negligent and that the Cincinnati Zoo, while compliant with national enclosure requirements, only had a small wall and some shrubbery between patrons of the zoo and the gorillas, making it extremely easy for the child to access the enclosure. A noticeable difference in these two cases: the parents of the child at the zoo are Black, while the parents of the child at Disney World are white.

It may seem strange to begin a dissertation about online harassment with these stories, but they set the scene for a remarkable example of how women, particularly feminist women, are consistently attacked online and systematically driven off social media. Former Twitter user @femme_esq, or Brienne of Snarth,1 is a lawyer and, as many would categorize her, an online activist. She used Twitter exclusively for calling out unjust practices she saw happening in everyday culture and used her vantage point as a lesbian in an interracial marriage as well as her experience in law to speak on social issues directly related to sexuality, race, gender, class, and law. @femme_esq was prolific. She had thousands of followers and garnered a reputation for her intellect and quick wit. She was also known for starting conversations rather than participating in them. When I started following her, I found myself becoming, via her feed, privy to issues before they exploded into a larger cultural moment and national conversation. I looked forward to reading her opinions and ideas throughout the day, and for me, she was an important usher into the intersectional feminist community.2 I learned a lot from her.

1 After the incident I describe throughout this introduction, @femme_esq's real name became known. I choose not to use her real name here because she never wanted it to be known. She adopted her pseudonym for a reason, and I'm honoring these wishes.

2 Intersectionality, a term coined and a theory brought to greater public consciousness by Kimberlé Williams Crenshaw, refers to how intersecting identities inform one's life experiences and the social inequalities they may face. Intersectional feminism, therefore, acknowledges social categories beyond gender such as race, ability, class, sexuality, age, ethnicity, and so on, and the role they play in shaping how one experiences gendered injustices. The concept of intersectionality will be discussed in greater detail in chapter one.

Around the time of the Disney World and Cincinnati Zoo incidents, @femme_esq was tweeting frequently about white privilege and entitlement, usually couched in relation to the 2016 Democratic primary, an election marked heavily by gendered issues as Hillary Clinton became the first woman to become the Presidential nominee of a major political party. On June 15, 2016, @femme_esq tried to make a point about race and male privilege by tweeting, "I'm so finished with white men's entitlement lately that I'm really not sad about a 2yo being eaten by a gator bc his daddy ignored the signs."

Figure 1.1: A screenshot of @femme_esq’s controversial tweet from June 15, 2016.

This brash and insensitive tweet was not well received, to say the least, and was the starting place for an all-day battle on @femme_esq's feed. She explained that she was trying to draw attention to the ways in which people received and commented on the Orlando incident differently from the Cincinnati incident, highlighting the racial politics involved, particularly in how each set of parents was portrayed in the media. The white parents were covered in a sympathetic light while the Black parents were heavily scrutinized and often blamed for the death of Harambe. The day after @femme_esq posted her tweet, WIRED magazine published an article by Brian Raftery titled "We Wish We Could Unsee this Vile Tweet about the Alligator Attack," which pointedly called out @femme_esq for the tweet but effectively removed it from the context of the larger conversation she evoked about race, gender, and entitlement.3 When the magazine tweeted the link to the article, they called @femme_esq's tweet "the worst tweet of all time," sparking outrage among transgender and cisgender women who were quick to point out the sustained harassment and death threats they've been subjected to daily on Twitter. Replies to WIRED's tweet soon filled up with objections to the assertion that @femme_esq's tweet was the worst of all time, and some included screenshots of tweets they've received from users threatening to rape or kill them.

3 Historically, WIRED has been criticized for its male-heavy slant. Paulina Borsook (1996), for example, describes the magazine's inception as a trendy counter-culture magazine for the new techno-era but its execution as a magazine written by men, for men, with a heavy emphasis on things that culturally skew male, contributing to the ongoing gender disparity within the technology industry and its circles.

Shortly after WIRED published their article about her, @femme_esq, inundated with abuse and threats, locked her account and not long after that deleted it completely. People even went so far as to demand a response from Hillary Clinton, of whom @femme_esq was a vocal supporter during the 2016 primary election season, beginning when Pulitzer Prize winning journalist Glenn Greenwald, best known for publishing documents leaked by former NSA contractor Edward Snowden, posted a link to screenshots of @femme_esq's tweet and tweeted, "From a prominent, fanatical Clinton supporter. Has the Clinton campaign officially commented on this yet?"4 Eventually, she was doxxed. Doxxing, derived from "dox," shorthand for "documents" or "docs" (Quodling, 2015), is the act of releasing private information about someone in an effort to shame or harm them personally and professionally. In @femme_esq's case, her harassers published and circulated her full legal name, her address, where she worked, and even the name of her spouse. Once people knew her offline identity, many of those enraged by her tweet emailed and called her employer insisting she be fired.

I write this paragraph on the day @femme_esq deleted her account. And as I follow the conversations about this in real time, I'm feeling many emotions. I'm upset and disappointed that @femme_esq, a consistent source of intersectional feminism for many people, has been threatened and driven into silence. On a broader scale, I'm angry and horrified that our culture's orientation to the systematic harassment of women is, often, to blame the harassed. Rarely do we stop to examine the systemic reasons for this constant devaluation of women's public voices. @femme_esq's tweet was in poor taste, though not nearly as violent and horrific as the tweets she received in response, and her story is important because of its entanglements that reach multiple aspects of online harassment—namely visibility, feminist identities, race, and gender.

There is no denying that social media has upended the way people come in contact with information and social movements of all kinds (Brock, 2012; Dietel-McLaughlin, 2009; Penney & Dadas, 2014). I, for one, came to feminism through Twitter, having joined the site in 2008, before harassment was as widespread a problem as it is today. While my uses of the platform have changed over time in nuanced ways, its primary function for me has been to keep track of news, current events, and cultural trends. This is how I found the large feminist presence that exists on Twitter, but some of the feminist sources I used to follow are since sporadic, silenced, or altogether gone, as in the case of @femme_esq. I can't help but think about new or uninformed feminists on Twitter who will miss out on the important discourses @femme_esq brought to me and others through the platform. Of course, there are still many women who are just as prolific as @femme_esq, and as new voices emerge, the community of feminists using Twitter to learn, grow, and connect gets larger. As the community expands, we must consider how the feminist potentials of Twitter are significantly hindered by the presence of online harassment.

4 Greenwald, G. [@ggreenwald]. (2016, June 16). Retrieved from https://twitter.com/ggreenwald/status/743410634956615681

Locations of Online Harassment: Social Media Platforms

Online harassment has much to do with social media—their populations, circulation mechanisms, and designs. "Social media" can refer to many different spaces of interaction, and the rapid pace of proliferation makes it a topic that's difficult to keep up with critically, pedagogically, and methodologically (boyd, 2012; Madden, 2014; Postill & Pink, 2012). I favor danah boyd's (2012) definition of social media as "sites and services that emerged during the early 2000s, including social network sites, video sharing sites, blogging and microblogging platforms, and related tools that allow participants to create and share their own content" (p. 6). I would extend this definition to also note the rhetorical significance and potential social media hold for users, "not only because of the ways in which [social media] connect people to one another discursively but also because of their greater cultural role as a form of self-expression" (Warnick & Heineman, 2012, p. 102). The creation and sharing of content is key, because in my view, the chief potential of social media is the ability to express oneself coupled with the ability to share those expressions with others.

Along with this understanding, I argue it's crucial that we acknowledge how the human elements of social media, our expressions and circulations, are constrained by the platforms themselves. By platforms, I draw on Tarleton Gillespie (2018) to mean "sites and services that host public expression, store it on and serve it up from the cloud, organize access to it through search and recommendation, or install it onto mobile devices" (p. 254). Platforms, then, encapsulate a complex convergence of cultural, political, ideological, and economic practices that greatly influence the very human (inter)actions in and of social media. Gillespie notes that platforms act as intermediaries in that they host and carefully curate content that is produced by users, not the platform. It's the platform that decides what content it will host, how that content is distributed, who will see it, and how user interactions about content are mediated. Therefore, Gillespie argues that a platform can align itself with an assortment of aims, catering to multiple, and sometimes competing, stakeholders at the same time. For instance, a platform can allow advertisers to reach potential buyers while at the same time allow everyday users to critique capitalism, which helps the platform maintain an illusion of neutrality. Although platforms host and circulate content they do not produce, it would be a mistake to see them as mere intermediaries given that they set the policies and conditions of the platform. Users must express themselves within the confines of those policies and conditions, and are therefore shaped by them.

It is essential we continue to take up research questions about social media and individual platforms, especially given that social media have become "the new public square" where "knowledge sharing, public discussion, debates, and disputes are carried out" (Smith et al., 2014, n.p.). However, when it comes to online harassment, much of the existing research discusses the blanket of "social media" without fully considering how individual platforms, such as Twitter, influence the proliferation of online harassment. Twitter, one of the most popular and widely-used social media platforms to date, has become a hub for online harassment largely

through its policies and conditions,5 complicating the potentials for self-expression. Most of my discussion throughout this dissertation focuses solely on Twitter because of the platform's deep embroilment with harassment and ongoing struggle to meet the evolving needs of users experiencing severe abuse. Twitter faced much scrutiny in the wake of GamerGate, which brought a surge of harassment to the platform.6 Many GamerGate members had their accounts suspended or revoked for violent harassment, causing them to accuse Twitter of First Amendment rights infringement, a rhetorical tactic that, historically, has been used by other oppressive hate groups.7 Not long after, Twitter released an outline of proposed changes to its policies regarding harassment, admitting their current approaches didn't do enough to protect users from abuse. The proposed changes to their policy didn't do much for the immediate abuse many users suffered, and functional changes to the platform only seemed to enhance the opportunity for harassment (Eveleth, 2014; Molina, 2017). Even more recently, the surge of white nationalism on Twitter has prompted arguments over how the policies allow for hate groups to easily congregate and harass, especially in light of the uptick in hate crimes in the first month after Donald Trump was elected President of the United States8 (Garcia, 2016; "Update: 1,094 bias-related incidents," 2016). I see Twitter specifically as a rich site for examination given that it is currently at the epicenter of the debate about online harassment and how it should be dealt with from both policy and platform functionality standpoints. Although my discussion is primarily contained to a single platform, there are broader implications for how we think about the use and circulation of harassment, social media design and user experience, as well as policy and law meant to curb the problem, which will be discussed in subsequent chapters.

"Online harassment" can mean many things and is generally used as a blanket term to describe the vitriol that is present on most social platforms. However, there are so many varieties that using a general phrase like "online harassment" can muddy the ways in which many of these transgressions are distinctly violent, misogynistic, racist, homophobic, and transphobic. When I

5 The policies and design that facilitate and uphold harassment on Twitter will be discussed further in chapters three and five.

6 GamerGate is a common case to examine in relation to a variety of issues surrounding online harassment, and while it's a touchstone in chapters three and four, it's not a main focus of this dissertation. For further reading about its influence and rhetorical dimensions, however, I recommend Potts & Trice's (2018) "Building Dark Patterns into Platforms: How GamerGate Perturbed Twitter's User Experience."

7 For example, see Jessie Daniels' (2009) Cyber Racism: White Supremacy Online and the New Attack on Civil Rights for information on the KKK's use of First Amendment rights. In it, Daniels contends free speech is foundational to American culture but our value of it as a concept differs along racial axes, particularly among young people, and she argues that First Amendment protection helps to uphold white supremacy in the U.S., especially in online environments (p. 176). This will be discussed more fully in chapter 5.

8 Not only was Trump endorsed by the KKK (Detrow, 2016) and other white supremacy hate groups (Piggott, 2016), but his campaign was centered around xenophobia, misogyny, and racism (Foran, 2016). His win is marred with controversy as he lost the popular vote to Hillary Clinton by almost 3 million votes, the largest margin in American history (Kentish, 2016), and evidence continues to surface, as of July 2017, about Russian interference with the election in Trump's favor (Miller, Nakashima, & Entous, 2017) and his campaign's collusion with such efforts (Becker, Goldman, & Apuzzo, 2017).

use the phrase "online harassment" throughout this dissertation, it'll be in instances when describing, to borrow Danielle Citron's definition, "online expression [...] targeted at a particular person that causes the targeted individual substantial emotional distress and/or the fear of bodily harm" (2014, n.p.). When I use the phrase "sexist online harassment," however, I'm specifically talking about the kinds of online harassment that employ existing cultural attitudes about women that aim to shame, silence, and victimize them both online and off.9 Therefore, perpetrators of sexist online harassment engage in covert and overt sexism and misogyny, much of which has been institutionalized, for a political and ideological purpose—to police women and feminist ideologies out of public spaces and discourses. This kind of harassment carries long-term effects on individual victims as well as the social makeup of online spaces in that it dictates who is able to easily and safely contribute to the conversations that happen online. In short: online harassment has a massive impact on who and what circulates on social media platforms. Therefore, online harassment matters to composition and rhetoric as a field. So why aren't we talking more about it? For the remainder of this introduction, I will describe the ways that composition and rhetoric already concerns itself with issues related to online harassment, arguing that online harassment is clearly within the purview of what we engage as a field and therefore, we should pay closer attention to the effects it has on people and digital spheres.

Online Harassment Matters to Composition and Rhetoric

Composition and rhetoric has a rich history of taking up research questions that directly intersect with spaces, identities, and concerns of online harassment in a variety of ways. Notably, computers and writing scholars have examined:

● Digitally networked environments and their relationship to writing, rhetorical practice, and pedagogy (Bowden, 2014; Buck, 2012; Shepherd, 2015; Vie, 2015).

● Social media's design, infrastructures, politics, and cultures and how they influence the proliferation and circulation of public writing and rhetoric (Arola, 2010; Herbst, 2009; Selfe & Selfe, 1994; Wysocki & Jasken, 2004).

● Identity and how it influences/is influenced by digitally networked environments (Banks, 2006; Kolko, 2000; Nakamura, 2008; Noble, 2013).

● Online antagonism and exclusionary tactics and their effects on writing, rhetorical practice, and pedagogy (DeWitt, 1997; Gruwell, 2015; McKee, 2002).

Our work in these areas has uniquely positioned us to connect these topics more pointedly to the current state of online harassment and the effects it creates on digital environments and rhetorical action. It is beyond the scope of this dissertation to finely detail all of the work our field has done to reveal social media's potential for the teaching and research of writing, but it's important to note the diverse ways we've engaged this topic within the last fifteen years, especially its

9 In chapter one, I’ll go into greater detail about the language complexities involved when talking about and describing online harassment.

pedagogical aspects. For example, Stephanie Vie's "Digital Divide 2.0: 'Generation M' and Online Social Networking Sites in the Composition Classroom" (2008) took on questions of how new generations of learners engage with writing online, and how compositionists might incorporate social media in their pedagogies in order to tap into this engagement. Vie's follow-up study from 2015 demonstrates that teachers continue to find new uses for social media in the classroom in relation to writing and rhetoric. An expansive amount of research has been done on the continued uses of social media in both first-year composition and the professional writing major (Bowden, 2014; Buck, 2012; Fife, 2010; Maranto & Barton, 2010; Patrick, 2013; Reid, 2011; Shepherd, 2015 & 2016), and we don't show signs of slowing in our pursuit of innovative digital writing pedagogies. Common among this research is the argument that we should be using social media because it gives students the chance to engage in public writing with immediate and kairotic audiences, and therefore we shouldn't "ignore the opportunities for learning, for social and political engagement, that online networking affords" (Maranto & Barton, 2010, p. 44). An extension of this argument positions the use and research of social media as a recognition of the changing nature of our world in light of digital networks, and therefore, social media enables us to meet students where they already are with their everyday use of technologies.

A 2010 special issue of Computers and Composition edited by Michael Day, Randall McClure, and Mike Palmquist addresses these concerns. In it, for instance, Elizabeth J. Clark writes, "To ignore the [digital] imperative of the now is to create a dangerous paradigm for the future" (2010, p. 34) in that students are living and working in vastly different contexts than they were, say, fifteen years ago in light of new digital environments. Clark calls for composition and rhetoric scholars, as a result, to alter not just our pedagogies but our very thinking about writing and literacy. She writes,

We need to work to help the profession embrace digital rhetoric not as a fad, but as a profound shift in what we mean by writing, by literacy, and by cultural communication. And what then? We need to be ready to morph—from a book culture, to an online culture, to whatever comes next—so that our students are ready to meet the challenges that lie ahead, in whatever form they appear. (p. 35)

Clark is purposefully vague about these challenges in order to draw attention to the fact that digital rhetoric is complicated, and we can't possibly anticipate all of the ways in which students will be challenged by it. We can, however, prepare ourselves to be more malleable in our digital rhetoric pedagogies in order to account for those unanticipated challenges. One way to do this, as Erin Frost (2011) has argued, is to give students more agency to port the digital skill-sets they bring to the classroom into the structure of the writing course. In this model, the teacher becomes a co-learner, working to decenter themself in the classroom. Frost writes, "this shift in the locations of classroom control also gives instructors the chance to observe digital natives as users and to develop pedagogies that might better reflect the

implications of the Web for those users' futures" (p. 275).10 Bolstering the argument that the field should do more to meet students where they are, Jessie L. Moore et al.'s (2016) study of 1,366 students across seven different institutions found that students use a wide variety of technologies for a wide variety of composing purposes. While some of the findings weren't entirely surprising (for instance, using word-processing software to write a research paper), others show that students "push the boundaries of traditional technology use, demonstrating the flexibility of composing technologies," while "reimagining how they can be used in daily writing" (p. 8). The authors are quick to caution against using empirical data such as that collected for this study to make sweeping generalizations about students' technology use in relation to writing. Rather, they recommend we synthesize these findings about materiality with additional research about "the deeply social and rhetorical nature of composing" (p. 10). In other words, we must pursue research that gets at the social and contextual dimensions of digital composing practices and spaces in order to make new meaning about technology use and purpose.

These digital composing practices and spaces are complicated by the presence of antagonism, hate, and harassment. "Flaming" is a term that has long been used in our field to represent the "heated, emotional, sometimes anonymous venting" (Selfe & Meyer, 1991, p. 170) that occurs online. It is often prompted by a disagreement within an otherwise civil conversation but usually stems from an emotional reaction to something that's been said. Heidi McKee's (2002) analysis of an online discussion forum reveals how intense, disruptive, and damaging these outbursts can be. However, McKee contends flaming in action is difficult to categorize "without careful consideration of the specific rhetorical situation in which it occurs, including an examination of the wider cultural and social positionings of the participants involved" (p. 413). It may be easy to label an outburst as flaming, but the complexities of the exchanges that lead up to the act have to be acknowledged in order to understand what separates flaming, an act of venting, from a different type of reaction. As McKee points out, reactions like flaming don't occur in a vacuum and are often embedded in deeper cultural contexts and power structures than the act may initially reveal. McKee also highlights one of the most important aspects of understanding these kinds of exchanges: the underpinnings of identity contribute to

10 The use of the phrase "digital native" supposes that students, particularly millennial students, have grown up with and are therefore adept at using computers, the internet, and social media. Yet, categorizing these students as digital natives ignores how digital literacy acquisition occurs along axes of gender, race, and class. Henry Jenkins (2007) writes, "Talking about youth as digital natives implies that there is a world which these young people all share and a body of knowledge they have all mastered, rather than seeing the online world as unfamiliar and uncertain for all of us" (n.p.). danah boyd (2014) also critiques the idea of digital natives, arguing that "a focus on today's youth as digital natives presumes that all we as a society need to do is be patient and wait for a generation of these digital wunderkinds to grow up. A laissez-faire attitude is unlikely to eradicate the inequalities that continue to emerge. Likewise, these attitudes will not empower average youth to be more sophisticated internet participants" (p. 196). I align myself with Jenkins and boyd in that thinking of students as necessarily savvy at all things digital erases the complicated dimensions of digital rhetoric, particularly those I take on in this dissertation, such as internet citizenship, digital sexism, and online harassment.

the power dynamics, reactions, and effects of flaming in meaningful ways, just as they do in the creation and use of digital spaces. Ann M. Bomberger (2004) notes that flame wars can easily erupt in our digital classroom spaces, such as online discussion boards, and she argues that online conditions might make conflict more likely to happen than it does in a traditional physical classroom.11 Angela Laflen and Brittany Fiorenza (2012) found in their analysis of online forum posts that students oftentimes use emotionally-charged language as a stand-in for non-verbal cues expressed in physical settings, causing upset. They write, "Although some conflict in [Computer-Mediated Communication] can be useful, flaming can, in many cases, have a chilling effect on course discussion and create a hostile environment" (p. 297). As Laflen and Fiorenza gesture towards, Kristine Blair (1998) cautions us against seeing all flaming as unproductive because conflict, specifically cultural conflict, can work "as a means for breaking down the binaries between the utopic and the heterotopic because online conflicts provide a more realistic sense of multivocality than dialogues privileged within a strictly utopic view of Computer-Mediated Communication (CMC) in which everyone 'gets along'" (p. 318). Blair offers an important reminder that ultimately, some of the most radical learning and understanding is born out of conflict. The trick, then, is for teachers to develop skills that allow them to recognize the difference between meaningful conflict and that which is more aligned with venting or antagonism, like flaming. But where and how do we make these distinctions? And what happens when antagonism moves beyond flaming and into the realm of more serious and injurious forms of communication like hate speech or violent threats?

It's important to contextualize flaming and harassment as one way that digital environments, in both design and use, work to (re)create the marginalization, dominance, and oppression of people of color, LGBTQ people, and women (Banks, 2006; Barrios, 2004; Dibbell, 1994; Kolko, 2000; Nakamura, 2008; Noble, 2013). Because these groups are at a disadvantage online by design, they can easily become targets of antagonism, hate, and harassment in digital spaces. Pamela Takayoshi's influential piece "Building New Networks From the Old: Women's Experiences With Electronic Communications" (1994) notes that at that point in time, the field had not done enough to account for how women are harassed online and that we were, perhaps, over-relying on the argument that "computers are potentially empowering tools for disenfranchised groups and marginalized students, particularly female students who have had few tools with which to carve out a space for themselves in the male world of the academy" (p. 21). Instead, she argues, we should look at how digitally networked environments facilitate "patterns of interaction deeply entrenched within a patriarchal system" and therefore "cannot be undermined simply by offering access to a new technology" (p. 21). In collecting firsthand accounts of women who had been harassed online in academic contexts, as Takayoshi herself was, she realized this was a problem larger than the field may think. She writes,

11 Some of the reasons Bomberger puts forth as to why online conditions prime a setting for conflict have also been researched by the behavioral science community. This discussion will be surveyed more fully in chapter one.

When we establish electronic mailing lists, newsgroups, and bulletin boards for our classes, many of us require our students' participation. These issues of harassment are certainly not something we wish our female students to encounter. But what happens when they are engaged in an activity required of them as intellectual participants in an educational setting, and they encounter once again their commodification as "merely" female in a patriarchal society? What messages are sent to the class as a whole when female students are responded to in the same sexist codes that consider women the playthings of men, but not equal intellectual participants? (p. 26)

While Takayoshi's piece is over 20 years old, her sentiments continue to ring true. The fact remains that our online environments continue to privilege certain identities while upholding oppressive structures against others. Scott Lloyd DeWitt's (1997) research on the harassment of gay, lesbian, and bisexual students who maintain websites reveals that for members of the LGBTQ community, it's not a question of whether or not they'll be harassed, it's a question of when. One of DeWitt's participants, for instance, hadn't been harassed at the start of the ethnographic study but revealed to DeWitt that he was surprised by this and waiting for it to happen. By the conclusion of the study, however, he had received a threatening, offensive, and vitriolic message from a visitor to his website regarding his political support of Bill and Hillary Clinton. DeWitt aims to represent the ways in which the internet can be a positive place for LGBTQ students to work through their feelings about coming out, but at the same time, he calls for us to approach the internet with a "critical sensibility." DeWitt posits,

If we see it as part of our professional responsibility to introduce our students to the Web as a vast resource for writers and as an exciting new medium for writing, is it not also our responsibility to talk frankly to our students about the fact that white supremacists, misogynists, homophobes, and religious and anti-religious zealots are all on the Web? (p. 242)

DeWitt inspires us to ask, what is our responsibility to students writing on the web? And where do those responsibilities intersect with notions of fostering ethical netizenship, especially in the face of hate students might encounter while writing online? Like DeWitt, Susan Claire Warshauer (1995) argues that it is our responsibility to talk frankly with students about the potentially odious audiences they might encounter online. Additionally, Warshauer says, "responsible instructor pedagogy in the networked classroom would hold instructors more accountable for confronting and limiting the dynamics which alienate class members in online discussion" (p. 97), particularly class members who are predisposed to marginalized status, such as LGBTQ students. Julia Ferganchick-Neufang (1997) takes Warshauer's idea a step further, arguing that while we as teachers hold responsibility for how we position ourselves and participate in digital spaces with our students, it is also within our purview to teach students how to be good netizens. She writes, "each of us must take the responsibility to teach students appropriate behavior for webbed environments. Teaching the

ethics of online communication must be a part of any course in which students participate in computer mediated instruction" (n.p.). Again, while social media can indeed be productive environments for students learning about writing and rhetoric, that value is rendered moot when we don't develop pedagogies that fully account for the ways internet users habitually reflect dominant structures that oppress people of color, LGBTQ people, and women.

The reflection of oppressive structures is heavily influenced by a platform's interface design (Arola, 2010; Herbst, 2009; Kolko, 2000; Selfe & Selfe, 1994; Wysocki & Jasken, 2004), business and financial interests (Beck, 2015; Beck et al., 2016; Vee, 2010), and online community norms/cultures (Almjeld, 2014; Fleckenstein, 2005; Haas, 2009), all of which have great effect on the creation and circulation of public rhetoric. Of course, these aspects of social media make them ripe for oppressive tactics that replicate those that exist offline as well. As a field, we have long been concerned with how oppression and exclusion play out in the design of writing technologies and digital spaces, and how oppressive structures influence the types of interactions that can and do take place in networked environments. One of the earliest examples of this work is Richard Ohmann's 1985 article, "Literacy, Technology, and Monopoly Capital." In it, Ohmann makes clear the close relationship technology has to "political questions of dominance and equality" (p. 675). He argues that historically, writing and its technologies evolved the way they did to gatekeep, not liberate, in order to advance oppressive agendas. Ohmann's work paved the way for us to think more critically about how exclusion manifests in new tools for writing. Joseph Janangelo (1991) also vehemently argues that we must look past all of the ways digital tools enhance the teaching of writing in order to examine how they inflict "technoppression." He writes, "By celebrating the technology, we avoid the challenge of having to rethink and restructure the powerfully hegemonic social milieu in which it is embedded and employed" (p. 48). Janangelo likens our classroom uses of networked environments to "an electronic police state" (p. 50) and urges us to develop a "rigorous and healthy skepticism" of our technology use (p. 60). While perhaps hyperbolic, Janangelo's argument is congruent with Ohmann's in the sense that while we often think of and position digital environments as ones that are liberating or neutralizing of power dynamics, they can often work in the exact opposite way, reinforcing oppressive power imbalances, especially when used uncritically. To quote Cynthia Selfe,

As composition teachers, deciding whether or not to use technology in our classes is simply not the point—we have to pay attention to technology. When we fail to do so, we share in the responsibility for sustaining and reproducing an unfair system that [...] enacts social violence and ensures continuing illiteracy under the aegis of education. (1999, p. 415)

Selfe reminds us that without paying critical attention to the technologies we bring into the classroom and the issues that arise as a result of their uses, we risk becoming complicit in a system that can subjugate or prey on vulnerable populations.

Also questioning oppressive practices in the digital realm, Cynthia Selfe and Richard Selfe's influential article "The Politics of the Interface: Power and Its Exercise in Electronic Contact Zones" (1994) underscores how exclusion is built into composing technologies and how interface design can be used to exercise power online. Because interface design reflects "dominant tendencies in our culture," asking students to work within these interfaces contributes "to a larger cultural system of differential power that has resulted in the systematic domination and marginalization of certain groups of students," namely women, people of color, and international students (p. 481). More recently, Leigh Gruwell's (2015) work on the politics of exclusion in the community standards and practices of Wikipedia carries on the tradition of examining electronic spaces that perpetuate societal norms of excluding women from public discourses. Through a textual analysis of Wikipedia's policies as well as interviews with women editors of the site, Gruwell finds that "Wikipedia's discourse community is organized around an epistemology that presumes a stable, knowable truth," which is clearly gendered (p. 127). She cautions us that we must "be attentive to the public, digital spaces where knowledge is made" like Wikipedia because "non-dominant viewpoints," like those that exist outside of patriarchal epistemologies, "can thus be silenced" (p. 127). Selfe and Selfe as well as Gruwell remind us that the power differentials that exist and thrive online drive much of the exclusion and knowledge-production that happens in digital spheres. It's up to rhetoricians and digital compositionists to put a spotlight on these power differentials in an effort to inspire a cultural shift towards more savvy engagements with and in digital spheres.

Clearly, composition and rhetoric scholars are keenly positioned to intervene in research and conversations about online harassment more fully. I'll point out here that much of our work within this topic is pedagogical in nature, which is obviously important and reflects our values as a field. However, keeping our research about online harassment confined to a classroom context limits our understanding of it. I see great opportunity to continue our work and apply our questions to contexts that have different social and rhetorical purposes than those of the writing classroom. To again echo Takayoshi's concerns, "talking with women about their actual experiences with computerized communication reveals a dissonance between theories of education and the reality female students live" (1994, p. 25). Her call inspires us to do more to examine the material effect online harassment has on marginalized people and digital spheres. This is what I aim to do in this dissertation.

Overview of Chapters

In chapter one, "Online Harassment is an Issue of Social Justice," I discuss matters of definition related to online harassment, describing the rhetorical complexities of the language we use to describe it. I critique the limitations of applying terms such as "trolling" and "cyberbullying" to sexist online harassment for the negative impact they have on the perceived seriousness of the issue and the ways in which they infantilize the people experiencing the abuse:

women. I argue that by understanding how language has evolved to account for new varieties of harassment, we can better see what might be missing in our current conceptions of these actions, positioning us to intervene and disrupt harassment's standing as the status quo. I also argue that online harassment is used as a method of policing women's ideas and behaviors, particularly feminist ideas and behaviors, and has extreme negative effects on women, especially women of color who experience sexist online harassment in the context of racism.

Chapter two, "Volatile Visibility and the Methodological Problem of Harassment," builds a methodology for examining feminist uses of social media and ensuing harassment. I offer a methodology for interrogating what I call volatile visibility, or the correlation between a woman's visibility online and the amount of harassment she's likely to experience. Volatile visibility impedes the circulation of feminist identities and voices, and my own story of conducting research about online harassment is evidence that volatile visibility also acts as a barrier to the study of harassment through visible means (i.e., distributing a survey via social media). In light of this problem, I argue online harassment researchers must attune themselves to a multitude of ethical concerns beyond those that social media research already poses.

Chapter three, "The High Stakes of Online Harassment: Threats to Women & Feminist Action," discusses Twitter as a place where feminist community building and action happens despite the platform's enabling of sexist online harassment through policy and inattention to the harassment epidemic. I highlight interviews with two women who use Twitter for feminist ends and have experienced volatile visibility and severe sexist harassment as a result. Further, experiences of harassment shared by women through open-ended survey questions are discussed to reveal the profound and myriad effects online harassment has on women's daily lives, both online and off.

Chapter four, "Tactics of Avoidance: How Harassment Makes Women Disappear," continues to highlight the stories shared with me by interview and survey participants that speak to tactics women employ to avoid harassment that also effectively cause them to disappear from view. These tactics include self-censorship, locking their social media accounts, decreasing the span of their social network, and/or hiding or suppressing their female gender identity by adopting a male avatar or abstaining from discussing anything perceived to be too women-focused. In light of these learned instincts, I argue that harassment functions as a silencing mechanism for women, especially those speaking out about political issues, and that their erasure from public discussion and view will contribute to a setback in feminist and social justice efforts.

Chapter five, "Avenues for Change: Policies and Pedagogies of Online Harassment," takes on questions of what is currently being done about online harassment from a policy standpoint and what more can be done to protect victims. I discuss how platforms play a role in sustaining abuse through their design, policies, and governance procedures. In order to further build theory with the women who participated in my study, I discuss responses from participants about what they think can and should be done to curb the problem of online harassment from

both a cultural and policy standpoint. The chapter concludes with discussion of pedagogies of harassment, offering suggestions of how we might more pointedly interrogate issues of online harassment with students in our digital writing courses.

Chapter One

Online Harassment is an Issue of Social Justice

“The hatred of women in public spaces online is reaching pandemic levels, and it’s time to end the pretense that it’s either acceptable or inevitable.” —Laurie Penny, 2013, n.p.

Online harassment, particularly in mainstream circles, is often framed as a generic problem—one that all people face because of how common antagonism has become on social media platforms. I argue, however, that online harassment should be framed as an issue of social justice given how much influence it has over who can safely speak in public forums, largely influenced by hierarchies of gender, race, and sexuality. In this chapter, I'll describe the challenges we face in shifting our thinking about online harassment as a social justice issue in light of the language commonly used to define it. As I'll discuss, much of the current terminology does not reflect the varieties of online harassment that are deeply entrenched in sexism, racism, misogyny, homophobia, and transphobia, and therefore, hides this issue's connections to equality disparities online. For example, I'll illustrate how race influences the severity of harassment, noting the correlation between women of color and more severe forms of online harassment, despite our culture's propensity to centralize white women in conversations about online harassment. I'll also discuss how gendered senses of spatial ownership contribute to this problem, arguing that online harassment is used as a method of policing women's ideas and behaviors, particularly feminist ideas and behaviors, making it a significant hurdle we face in our fight for social justice.

Difficult to Define: How Do We Talk about Online Harassment?

As I touched on in the introduction, "online harassment" can be difficult to define given the multidimensional complexities that encompass the term. However, as I'll show throughout the rest of this chapter, online harassment is an issue that disproportionately affects women and to varying degrees based on social categories beyond gender, such as race and sexuality (Citron, 2014; Herring et al., 2002; Jane, 2014a; Jane, 2014b; Mantilla, 2015; Phillips, 2015b). Yet, online harassment is commonly misconceived as something that affects everyone evenly: it's everywhere, everyone sees it, and everyone experiences it. This may be true in the sense that online harassment is pervasive and has become commonplace, affecting the way people use social media and how they interact online (Brock, 2012; Lanier, 2010; Penny, 2013). The Pew Research Center categorizes harassment as a "common part of online life," and a 2014 survey conducted by their Internet, Science, and Technology branch found 73% of respondents have, at the very least, witnessed harassment online (Duggan, 2014, n.p.); however, the kinds of harassment that people experience change across social categories.

A recent survey of 2,700 Twitter users about the kinds of abusive tweets they've received shows that the most common type employs "misogynistic language" (Warzel, 2016), demonstrating a distinct rhetorical flavor of targeting women, both directly and indirectly, in online harassment. Of course, men experience online harassment too, even the kind that uses misogyny in its construction, particularly gay, bisexual, queer, and transgender men (Citron, 2014), and in fact, the second most common type of abusive tweet users experience as reported in the survey used "homophobic or transphobic slurs" (Warzel, 2016). Yet the kinds of harassment men receive are vastly different from what women are subjected to. A Pew Research Center study of online harassment, for instance, demonstrates that young women are the most vulnerable population to "sexualized forms of abuse," as "21% of women ages 18-29 experience sexual harassment online" as compared to the 9% of men in the same age range (Duggan, 2017, n.p.). Further, while men are more likely to be called names, women are far more likely to be stalked or sexually harassed, which come with greater long-term effects (Duggan, 2014). These findings are corroborated by the now defunct organization Working to Halt Online Abuse (WHOA), which worked to support victims of online abuse. In 2012 alone, WHOA handled 394 cases, and in 316 of them, or 80%, the victims were women ("2012 Cyberstalking," 2012). Pew's findings also show women are more likely to suffer more serious after-effects of their harassment than men, and others have noted that women are more likely to be victims of sustained and prolonged attacks by groups of people rather than singular incidents perpetrated by a lone harasser (Jane, 2014b; Mantilla, 2015), increasing the intensity and potential for emotional damage. As Danielle Keats Citron puts it so simply, "men are more often attacked for their ideas and actions," while women are attacked for their very being (2014, pp. 14-15).

Perhaps these gendered differences in how online harassment is experienced contribute to the significant differences in how men and women perceive the seriousness of online harassment. For example, "35% of women who have experienced any type of online harassment describe their most recent incident as either extremely or very upsetting," while only 16% of men categorize their most recent experience the same way (Duggan, 2017, n.p.). What's more, 70% of women see online harassment as "a major problem," as compared to 54% of men who feel the same way. This gap widens even further in the 18 to 29 age range, where 83% of women classify online harassment as a "major problem" as compared to 55% of men. Additionally, 50% of women say offensive content online is "too often excused as not a big deal," while 64% of men (and 73% of men ages 18-29) say offensive content is "taken too seriously." The gendered differences in attitudes towards the seriousness of online harassment are also reflected in what men and women deem as being important when it comes to the tenor of online space. For example, 63% of women feel it's more important that people "feel welcome and safe online," while 56% of men think being able to speak one's mind freely online is more important (Duggan, 2017, n.p.). Despite clear patterns of gendered difference in experiences of and attitudes towards online harassment, the public perception that online harassment does not necessarily occur along

16 gendered lines persists. Experts in online harassment have worked hard to illuminate this fallacy by debunking narrow claims that men receive more abuse than women. Soraya Chemaly, for example, is the Director of the Women's Media Center Speech Project, an online harassment research group with the stated goal of increasing the “understanding of the nature, scope and costs of online misogyny and abuse in order to contribute to new frameworks that will ensure that free speech is a right that extends equally to all” (“About Us,” 2016). Her 2014 article “There's No Comparing Male and Female Harassment Online” systematically breaks down the argument that men are targeted more than women. In it, she argues that the harassment of women is often presented in the form of “discriminatory harms rooted in our history,” and that people who “defy rigid gender and sexuality rules” (n.p.) are at greater risk as well. Chemaly argues, “For girls and women, harassment is not just about ‘un­pleasantries.’ It’s often about men asserting dominance, silencing, and frequently, scaring and punishing them” (n.p.). She goes on to present study after study that concluded digitally­mediated acts of , rape threats, death threats, cyber mob attacks, , rape videos, and human trafficking are more likely to be aimed at women and girls by men. Of course, as Chemaly notes through statistical evidence, threats mean something different to women, who are far more likely to experience acts like rape and physical assault in their lifetime than men. For example, she cites the Bureau of Justice Statistics in noting that 1 in 5 women will experience rape as compared to 1 in 71 men. “The harassment men experience,” she writes, “also lacks broader, resonant symboli sm,” because “ the objectification and of women is central to normalizing violence against us,” (n.p.). Chemaly concludes with a strong call to action to stop ignoring online harassment and its relationship to cultural norms of . Chemaly also advises us to pay greater attention to the role online harassment plays in suppressing free speech, as it is often used as a method of policing the boundaries of who and what is circulated among mainstream discourses. Jessica Megarry (2014) writes, “Equality online is dependent not only on the ability to occupy a space, but to be able to influence it and speak without fear of threat or harassment” (p. 46). The very real likelihood of women experiencing or witnessing harassment while on social media influences how they occupy, influence, and interact in online spaces. Mainstream utopic visions of the internet highlight its democratizing potential, but Megarry’s assertion reminds us that we are far from women holding an equitable place online. Harassment is a major part of this problem, yet our popular terminology about it doesn’t fully gesture to the extent that harassment is gendered. “Online harassment” as a term is too broad given all of the gender differences in how it is experienced. There are many types of harassment under this umbrella term, making it difficult to fully realize how identity influences both severity and effects. Yet, it’s imperative that the harassment acts which disparage women have a distinct name. When victims lack the lexicon to be able to put language to their experience, the effects can be amplified, both in terms of the individual experience and the culture as a whole. Without a distinct name, it’s difficult to enact

17 change through law and policy (MacKinnon, 1987, p. 104). Consider, how might a victim go about formally verbalizing their pain let alone accuse the perpetrator without knowing what to call the crime? Catharine MacKinnon (1987) was instrumental in coining the phrase “sexual harassment” and bringing the issue to the Supreme Court, arguing sexual harassment is a violation of The Civil Rights Act of 1964. In discussing the reverberations of that work, MacKinnon notes that victims of sexual harassment “have been given a name for their suffering and an analysis that connects to gender. They have been given a forum, legitimacy to speak, authority to make claims, and an avenue for possible relief” (p. 104). MacKinnon argues that without the words to put to an experience, it’s easy for society to ignore a very real problem.12 Using the phrase “sexual harassment” took culture from a place of describing these acts with what she calls “primitive language” to recognizing them as “as experience with a form, an etiology, a cumulativeness” (p. 106). Developing the language to accurately talk about sexual harassment made it possible to do something about it, and in turn “it became possible for its victims to speak about it” (p. 106). Karla Mantilla maintains, “clearly recognizing and naming harassment and abuse of women [...] is an indispensable step towards enacting laws and policies that will protect women online as well as making the legal cultural and social changes to ensure women’s full equality and participation” (2015, p. 16). Therefore, it’s crucial we are specific in our articulations and that we continue to interrogate how generalities fail to accurately communicate how harassment is inflicted and experienced differently across myriad identities. Currently, because of how complex and contextual online harassment is, words are sometimes attributed to types of harassment that should belong in a different, more definite category. For instance, “cyberbullying” is frequently used to talk about all varieties of online harassment. A keyword search in online search engines or in academic journal databases for “cyberbullying” yields a high volume of results for information about online harassment. However, cyberbullying by definition only affects adolescents and teens (boyd, 2014; Collier, 2012). Granted, this term too has definitional problems because of the constantly changing nature of where cyberbullying occurs: the internet. Anne Collier (2012) explains that users, content, and interactions of the internet are always shifting, and therefore, “one­size­fits­all solutions do not exist” (p. 3), which means combating cyberbullying comes with its own unique challenges, complicated by the involvement of parents, teachers, and other caretakers of young people. For example, current interventions of cyberbullying usually involve adults using technologies to surveil the whereabouts and discourses of adolescents on social media platforms (boyd, 2014). While cyberbullying is a very real and serious problem,13 it’s an ineffective term for naming online harassment as it pertains to

12 MacKinnon has been rightfully criticized for her negative stances towards sex work and pornography (see Vance, 1984), providing an oppositional standpoint to sex-positive feminism. Nevertheless, her work on sexual harassment, especially as it pertains to law, remains foundational. 13 For a comprehensive look at how cyberbullying affects teens' online writing processes, digital literacies, and their relationships with adults, see chapter five in danah boyd's It's Complicated: The Social Lives of Networked Teens (2014).

those outside of adolescent and teen demographics, because cyberbullying is a form of harassment that's inflicted on and by adolescents and teens. What's more, when we use "cyberbullying" to name the misogynistic vitriol aimed at women online, we infantilize those victims.14

"Trolling" is possibly the most recognizable term within the broader public conversation about online harassment and is often erroneously ascribed to general acts of online aggression. Whitney Phillips, author of This is Why We Can't Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture (2015b), explains that there are many types of online aggressions, most of which are both difficult to define and often mistaken for trolling. Trolls, however, "partake in highly stylized subcultural practices" (p. 2) that involve antagonizing a target for "lulz," or "amusement at other people's distress" (p. 27). Lulz are the crux of trolling culture and act as a kind of currency within troll communities, which sets trolling apart from other forms of online harassment in its intentionality: by design, trolling aims to upset a person or group of people for fun or for credibility within the larger community. It's a troll's intention to provoke a heated response by inserting themself into any conversation aggressively and offensively, meaning they don't always have a personal stake in the topic or even agree with what they're saying—they simply say it to upset those who have an opposing opinion. This is an important distinction to make, one that gets at the complexities of language associated with online harassment, particularly that which is sexist and misogynistic: trolls "don't mean, or have to mean, the abusive things they say" (Phillips, 2015b, p. 26), while perpetrators of sexist online harassment who intend to police, shame, and silence women have deep-seated beliefs about women that manifest through their harassment. The relationship between sexist online harassment and trolling is complicated because some of the transgressions I discuss in this dissertation do align with trolling behaviors; however, I'm cautious to use "troll" to describe these perpetrators because of popular conceptions of who trolls are and why they troll. Phillips (2015a) advocates describing antagonistic online behavior in ways that acknowledge the effects it has on those who receive it. The word troll "implies a level of playfulness that tends to minimize their antagonistic behaviors, or at least establish a firewall between the embodied person and their digitally mediated actions" (n.p.). Social and cognitive research on personality helps researchers of trolls and trolling behaviors draw conclusions about the type of person who is likely to engage in trolling. Fichman & Sanfilippo (2015), for instance, align trolling with "deviant and antisocial online behavior in which the deviant user acts provocatively and outside of normative expectations within a particular community" (p. 163). Buckels et al. (2014) determined that trolling highly correlates with what behavioral

14 As I'll discuss further in chapter two, an issue I ran into in the design of a survey of online harassment was that the Institutional Review Board wanted me to provide participants with a link to a hotline or resource website for coping with harassment. I knew that few resources for victims of harassment existed, and my in-depth search for a space online where victims could get help or access resources revealed that most, if not all, of these websites are geared towards teens and tweens.

psychologists call the Dark Tetrad of personality: narcissism, sadism, psychopathy, and Machiavellianism. The association is so strong, the researchers claim "online trolls are prototypical everyday sadists," because trolls, unlike other deviant figures online, enjoy antagonizing others and tend to be repeat offenders (p. 101). Again, trolling, fundamentally, is about drawing out a response through repeated and intentional disruptions in order to cause conflict within an online community (Fichman & Sanfilippo, 2015, p. 163), often motivated by boredom or a need for attention (Shachaf & Hara, 2010, p. 357). Trolling, then, is calculated and highly devious.

Social science researchers have also considered how behaviors such as trolling are influenced by anonymity: does anonymity actually make people meaner, bolder, and more reckless, creating conditions that encourage antagonism like trolling? Does digitally-mediated communication facilitate a detachment from our offline identity? Theorized by John Suler (2004), the "online disinhibition effect" refers to the idea that our online selves are less inhibited than our offline selves, causing us to speak and act more impulsively and freely online, where we're seemingly farther away from the social mores that dictate our behavior during offline interactions (p. 321). Suler argues, "anonymity is one of the principle factors that creates the disinhibition effect" (p. 322), which can manifest in two ways: benign disinhibition, which causes people to be more positively disinhibited in the sense that they "show unusual acts of kindness and generosity," or toxic disinhibition, which is more representative of vitriolic forms of online interaction (p. 321), like harassment or trolling. Toxic disinhibition is, in part, a result of the disembodiment we feel when we communicate through technologically mediated modes, where we can't physically see those with whom we are interacting. Our ability to create an online persona that is separate from our in-person identity helps us to feel less vulnerable within these networks, and "the opportunity to be physically invisible amplifies the disinhibition effect" (p. 322). In other words, when we think no one is watching, we are more likely to behave in ways we normally wouldn't in an offline setting.

The disinhibition effect, as Suler notes, can be flagrantly amplified when anonymity is involved. For example, studies about online commenting show that anonymous commenting is indeed largely more antagonistic and abusive than non-anonymous commenting, such as Arthur D. Santana's (2013) analysis of comments left on news stories about immigration. Santana found that 53.3% of anonymous comments were uncivil in tone as compared to 28.7% of non-anonymous comments (p. 27). The correlation between anonymity and trolling has begun to influence media outlets' policies on anonymous commenting, and many outlets have gone so far as to disable it. In 2013, ESPN's website, for example, switched to a model that tethers a commenter's post to their Facebook account (Goldenberg, 2013). Not long after, The Huffington Post put an end to anonymous accounts, meaning comments can still be left anonymously, but users can only do so after creating an account and verifying their identity. The Huffington Post released a statement explaining its position, saying that it's the "tension between anonymity and accountability" that drove the decision (Soni, 2013). Jimmy Soni, the managing editor at the time, released a statement in which he wrote,

At HuffPo, we publish nearly 9 million comments a month, but we've reached the point where roughly three-quarters of our incoming comments never see the light of day, either because they are flat-out spam or because they contain unpublishable levels of vitriol. And rather than participating in threads and promoting the best comments, our moderators are stuck policing the trolls with diminishing success. (2013, n.p.)

His statement points to the copious amounts of labor involved in monitoring trolls in comments sections and concludes by pointing out that vitriolic words often come at real human cost, citing two relatively famous cases of online abuse—one in which a British politician was threatened with rape numerous times over her petition to put a woman on the ten pound note, and one in which a young boy committed suicide after being cyberbullied. Some outlets have opted to end commenting altogether, anonymous or not, specifically citing trolls as the reason why. Popular Science, one of the first places to do so, released a statement saying, "we are as committed to fostering lively, intellectual debate as we are to spreading the word of science far and wide. The problem is when trolls and spambots overwhelm the former, diminishing our ability to do the latter" (LaBarre, 2013). Similarly, the feminist outlet The Establishment did away with comment sections in 2015 because of how commonly they housed abuse and trolling. The writers and editors noted that comment sections "rarely provide the thoughtful feedback they were designed for," "legitimize abusive language," and are counter to "an environment that appeals to our better natures" (Oluo, 2015, n.p.). Again, it's difficult to say that all of the abuse in comment sections can be considered trolling, as trolling can be difficult to distinguish from other forms of online harassment, but trolling has certainly helped create an atmosphere online where we've come to expect malevolence, prompting platforms to take preemptive measures to protect against it.

Yet trolling doesn't always have negative associations and is sometimes even glorified, due in part to trolls' frequent appearance in pop culture as Robin Hood figures. For example, the global hacker group Anonymous, while considered cyber terrorists by some (Rawlinson & Peachey, 2012), are often positioned in the media as executing trickery that exposes corruption among corporations and governments for the good of the masses. In 2011, the group stole credit card information from multiple corporate entities and used it to donate money to charities. News outlets covering the story painted Anonymous as heroes who use trolling as a means to neutralize wealth disparities (Williams, 2011). Phillips also notes ways in which trolling can be perceived as "good" or moral behavior, namely in the practice of trolling trolls. People who troll trolls, or "antis," essentially turn deviant trolling methods onto trolls themselves. Antis, then, "troll as many trolls as possible" (Phillips, 2015b, p. 24) in an effort to denounce trolling as a harmful act. Public perceptions of trolls as either a.) trite nuisances or b.) do-gooder Robin Hoods have shaped the cultural definition of trolling into one that perhaps makes light of an issue that's far more serious than the name "trolling" suggests. Gabriella Coleman, for example, an anthropologist known for her work on hacker culture and Anonymous, questions what the alignment of trolls with a "trickster" figure does for the perception of the very real harm they cause. She wonders if this label "act[s] as an alibi, a defense, or an apology for juvenile, racist, or misogynist behavior" (2012, pp. 115-116). Part of the problem stems from how the brand dilution of "trolling" has resulted in its use to describe multiple kinds of online antagonism. Whitney Phillips (2015a) argues, "referring to nasty online behaviors as 'trolling' frames online antagonism as a game only the aggressor can win" (n.p.), which thereby privileges the aggressor within the exchange and creates a culture in which everyone should simply acquiesce to their abuse. The familiarity that trolling has within popular culture is powerful, but it is incredibly important to distinguish the ways in which using "trolling" to describe all forms of online aggression can be limiting and dangerous.

Karla Mantilla works to draw on the familiarity of "trolling" while also narrowing its scope in her book, Gendertrolling: How Misogyny Went Viral (2015). Mantilla proposes the term "gendertrolling" in order to more specifically gesture towards the harassment leveled at women online. There are seven common features of gendertrolling, though Mantilla points out that not all instances of gendertrolling include all of them. Gendertrolling 1.) is prompted by women participating in online public discourses, 2.) uses "graphic sexualized and gender-based insults," 3.) uses credible rape and death threats by evoking offline targeting, 4.) spans across many social media sites, 5.) is highly intense and frequent, 6.) is sustained across long periods of time, months or years, and 7.) is perpetuated through an organized campaign that involves many attackers (p. 21). Gendertrolling is a "more threatening online phenomenon than the generic trolling," because while generic trolling is largely motivated by a desire to upset the target, gendertrolling "often expresses sincere beliefs held by the trolls" (2013, p. 564). The difference, then, not only has to do with who the target is, but with how the perpetrator uses harassment as a means to vocalize misogynistic or sexist ideologies. Gendertrolling as a term, then, does more to specifically draw attention to the gendered aspects of online harassment while capitalizing on the cultural familiarity with trolls, but I worry this lessens the perceived seriousness of these offenses because of our propensity to write trolls off as mean-spirited annoyances. "Trolling," as discussed in Phillips' work, connotes an anonymous prankster, while gendertrolls are often onymous, presumably because they staunchly stand by what they say in their systematic hatred and harassment of women. Gendertrolls, Mantilla explains, have a very specific motivation: to ensure that women are kept from voicing their opinions "freely and without consequences in public venues such as the internet" (2015, p. 94). Their actions prompt a wide range of responses from the women they target, primarily withdrawing from online spaces, adopting a male avatar, self-censoring, and remaining silent on particular issues (pp. 107-113). But the effects extend offline as well. Mantilla's extensive interviews with victims of gendertrolling reveal that many women suffer from anxiety and even PTSD as a result. What's more, women who receive credible threats from gendertrolls are often forced to make significant changes in their lives, such as finding a new job or moving altogether in order to remain safe (p. 115). These threats and their effects result in profound shifts in dominant public discourses online, which come to reflect only the patriarchal identities that are safe to express themselves, as Mantilla says, freely and without consequences.

Mantilla's description of gendertrolling strongly evokes connections to toxic masculinity, a socially-constructed model of gender that posits men as hyper-violent and sexually aggressive. Toxic masculinity entails asserting dominance over other groups, such as women and LGBTQ people, and "valorizes violence as the way to prove one's self to the world" (Marcotte, 2016, n.p.). Mantilla's work reminds us that unless we do more to draw attention to and curb the very real and very serious problem of sexist online harassment, we'll continue to see toxic masculine culture reflected in the common areas of the internet. Mantilla also addresses the rhetorical importance of naming varieties of online harassment accurately. She contends that when we fail to fully define what behaviors like harassment do and cause, it is incredibly difficult to act in ways that productively address or even counter the problem (2015, p. 155). Like MacKinnon, Mantilla is also concerned with naming these acts in order to create social recognition that will lead to action. She writes,

The effect of not acknowledging or recognizing widespread, common, and patterned abusive and harassing behaviors that women experience is that those behaviors are rendered effectively invisible, so their harm is not recognized, and the behaviors are therefore tolerated, albeit not explicitly but rather by overlooking, ignoring, or dismissing their existence. (2015, p. 155)

Her call to action here asks us to examine the ways in which we've failed to acknowledge gendertrolling as patterned behavior that is indicative of a larger cultural problem. When we can name sexist online harassment, it becomes more visible and is able to linger longer in the public consciousness.

Emma Jane, too, notes the pervasiveness of online harassment and proposes the term "e-bile" to mean the "extravagant invective, the sexualized threats of violence, and the recreational nastiness that have come to constitute a dominant tenor of Internet discourse" (2014b, p. 532). Evoking the idea that hostility has become a cultural norm online, Jane observes, "toxic and often markedly misogynist e-bile no longer oozes only in the darkest digestive folds of the cybersphere but circulates freely through the entire body of the Internet" (p. 532). Because of this, she argues academia must do more to recognize e-bile as a field of research, and to do so, we must first use nomenclature that allows us to study e-bile from broad points of inquiry (p. 532). Therefore, Jane opts to keep e-bile's definition relatively broad in order to be able to analyze a "variety of related phenomena that are sometimes—mistakenly, in [her] view—differentiated with nigh neo-scholastic attention to minutiae." To that end, she defines e-bile as "any text or speech act which relies on technology for communication and/or publication, and is perceived by a sender, receiver, or outside observer as involving hostility" (p. 533).

Jane argues using a broad category like "e-bile" allows us to see recurring patterns across contexts and interlocutors. I see some potential value in keeping the terminology of online harassment broad in order to avoid the academic black hole we sometimes find ourselves in when it comes to defining our terms, and Jane points out that working towards an indisputable definition of online harassment would prevent us from ever entering an analysis stage in our research (p. 540). But given all of the ways various forms of online harassment, like cyberbullying, trolling, and gendertrolling, differ in intention and participants, how much can we truly understand when we lump them all together? I argue that more precise language that zeroes in on how styles of harassment fracture across various identities will provide snapshots that, frankly, we lack and that are necessary for understanding the nuanced ways identities such as gender and race influence the relationship between harassers and the harassed.

Despite Jane's assertion that we should keep the definition of "e-bile" broad, she speaks across her work to the gendered patterns in the rhetorical construction of e-bile; notably, e-bile frequently occurs in the form of violent sexualized threats, with Jane going so far as to say that "threatening rape has become the modus operandi for those wishing to critique female commentators" (2014b, p. 535). She also notes that "women are more likely to be the targets and less likely to be the authors of this type of discourse" (p. 536), and e-bile becomes more severe and persistent towards those who expose misogyny, call out harassment when it happens, and/or make a feminist identity visible. Most often, it's women who are visible in the public sphere who are the prime targets of e-bile (2014a, p. 560). Jane does draw attention to the fact that men receive e-bile too, but oftentimes threats received by men are gendered or sexualized through the use of misogynistic threats and homophobic language. Corroborating the conclusions of Citron (2014) and Chemaly (2014), Jane's research demonstrates men are less likely to believe their harasser would make good on a threat of violence, making it easier for them to move on from harassment, whereas women are more likely to suffer long-term emotional distress (2014b, p. 536). More often than not, the end goal of e-bile is to silence a public female voice through the threat of violence; it is "rarely about winning an argument via the deployment of coherent reasoning, so much as a means by which discursive volume can be increased" (Jane, 2014b, p. 534). Part of e-bile's sexism and power lies in its flooding mechanism, drowning out everyone in opposition to the aggressor and silencing them. When discourses like the ones Jane describes as e-bile are present, women are silenced or erased, and the violent commentary about them, their bodies, voices, and being, suddenly becomes the dominant discourse. For these reasons, I'm reluctant to encourage the use of terminology that erases or hides these gendered dimensions, because we can reclaim power when we draw attention to the use of sexist online harassment to subjugate women. Modifying phrases like "online harassment" with "sexist" or "misogynistic," while a small step, works to make visible the ways in which online harassment (who enacts it, its intentions, how it's rhetorically constructed, and who mainly experiences it) is by and large gendered.

Jane argues that academia has a responsibility to study online harassment because it "has become such a dominant tenor of Internet discourse" that it is causing "suffering and is likely reducing the inclusivity of the cybersphere" (2014a, p. 567), a point that significantly echoes the concerns of scholars in composition and rhetoric such as Selfe (1999), Takayoshi (1994), DeWitt (1997), and Gruwell (2015). Jane observes that critical examination of e-bile provides us "insight into the degree to which misogynist views are still held by many in the community" (p. 567). These producers of e-bile are not outliers or random trolls who exist only in scary corners of the internet. They are members of our communities, our neighbors, our coworkers. In what follows throughout the remainder of this dissertation, I respond to Jane's call to fill knowledge gaps through research into how online harassment affects women and our online environments, and I extend her lines of inquiry by attending to how identities such as gender, race, and sexuality influence sexist and misogynistic online harassment. Rather than broadening our perceptions of online harassment to make room for all kinds, my goal is to provide more in-depth accounts of the sexist varieties of online harassment that reflect systemic cultural attitudes about women, their participation in public discourse, and their presence in public spaces. This kind of sexist online harassment targets women specifically in order to police their voices and presences online in an effort to silence, shame, and scare them. Additionally, while enacted in online environments and affecting women's online lives, sexist online harassment has perpetrators, consequences, and effects that extend into offline spaces as well, and we must take care to acknowledge that. One of the interventions I aim to make in the following section is to elucidate the intersectional dimensions of sexist online harassment, which have considerable offline implications. An intersectional approach is necessary because it allows us to see how sexist online harassment is also, for example, vehemently racist when experienced by women of color. Thus, I turn now to unpacking the racial dimensions of sexist online harassment.

Sexist Online Harassment is an Issue of Racial Justice

Race and racism have major implications for how people use internet technologies and interact online (Banks, 2006; Brock, 2012; Chun, 2016; Daniels, 2009a; Krogstad, 2015; Nakamura, 2008), and many of the inequitable practices and racist attitudes that flow freely in offline environments are replicated in online ones (Daniels, 2009b; Higgin, 2013; Nakamura, 2012; Young, 2014). As I alluded to in previous sections, racism intersects with sexist online harassment in prominent and overt ways. I return here to @femme_esq as an example because, despite her being white, her harassment brought up broader discussions and observations about how race influences the treatment of women, online and off. It's important to note that @femme_esq's avatar was a silhouetted outline of her body—neither her face nor her skin was visible.


Figure 1.2: @femme_esq’s Twitter avatar of a silhouette.

After the WIRED article came out, the ensuing commentary indicated that most people thought @femme_esq was a Black woman, presumably because of her adamant support for Black Lives Matter and other anti-racist justice movements. She vocally participated in these conversations and, as they came up, called attention to instances of White Fragility, "a state in which even a minimum amount of racial stress becomes intolerable [for white people], triggering a range of defensive moves" (DiAngelo, 2011, p. 54). Consistent readers of @femme_esq's feed knew that she openly claimed her whiteness and examined it in relation to the issues she tweeted about, particularly in conjunction with her relationship to her wife, who is a Black woman. It didn't take long for feminists of color to mark the assumption that @femme_esq was Black as an important detail that shouldn't go ignored. On the same day @femme_esq locked and then deleted her account, Imani Gandy (@AngryBlackLady), a political journalist, women's rights activist, and co-host of the show This Week in Blackness, tweeted,

I'm seeing people who are rejoicing in femme_esq's fate saying that she tweeted like a Black woman… So… So is that an admission that you think Black women on here deserve the sort of treatment she got? Because that's FUCKED UP. And if you think she tweeted like a Black woman, you never followed her. She acknowledged her whiteness ALL THE FUCKING TIME. Just admit you think women, and women of color in particular, deserve the incessant harassment they are subject to on here.15

15 Retrieved from https://twitter.com/AngryBlackLady/status/743692506232066049 and https://twitter.com/angryblacklady/status/743692626478534658


Figures 1.3-1.6: Imani Gandy's series of tweets about @femme_esq leaving Twitter because of harassment.

Similarly, on June 17th, 2016, Jamilah Lemieux (@JamilahLemieux), a writer and cultural critic, also weighed in, speaking directly to the controversy surrounding the WIRED article, which many people blamed for driving hostile traffic to @femme_esq's feed, ultimately resulting in her harassment and doxxing. Specifically, Lemieux critiqued an article that defended @femme_esq without engaging with the racial dimensions that were obvious to Lemieux and other women of color. She wrote, "Femme Esq's tweet was really bad, but I'm not dismissing all that I admired about her or the positive exchanges we had bc of it. BUT… the author of the piece defending her failed to articulate why it seems the Wired piece happened: the assumption she was a WOC."16, 17

Figures 1.7-1.8: Jamilah Lemieux's tweets about how race was involved in @femme_esq's harassment.

Two days after @femme_esq's tweet, Michelle Taylor, also known as Feminista Jones (@FeministaJones), another prominent feminist of color with a large following on Twitter, participated in an online conversation about @femme_esq's perceived race. In a reply to a tweet deeming it "telling" that so many thought @femme_esq was Black, Jones tweeted, "It's telling that people don't expect WW [white women] to question privilege, challenge WM [white male] toxicity, stand up for BW [Black women], etc."18 Jones brings to light the immense inequalities in who is held responsible for challenging white supremacy, toxic masculinity, and their intersections. Currently, far more of that burden is put on women of color, and perhaps this expectation is a contributing factor in why women of color are at greater risk of receiving severe and relentless harassment online. Yet despite these clear inequalities, sexist online harassment isn't recognized as an issue of racial justice.

16 Retrieved from https://twitter.com/jamilahlemieux/status/743789568298786816 and https://twitter.com/jamilahlemieux/status/743789845298876418 17 “WOC” is an acronym for “women of color.” 18 Retrieved from https://twitter.com/FeministaJones/status/743812798879272964

"Women" as a social category is too often incorrectly positioned as an all-encompassing group that points to a singular experience, which only works to negate and conceal how identities apart from gender, such as race, influence how women experience patriarchal oppression. This positioning has been contested within feminist movements because it affects who benefits from feminism and who makes the decisions about which issues receive attention from the cause. The concept of solidarity, a focus on shared oppression based solely on gender, has been widely critiqued as a mechanism that allows white women to co-opt the oppression of Black women in their fight for equality without examining how race impacts gendered experiences. In the words of Audre Lorde, "Refusing to recognize difference makes it impossible to see the different problems and pitfalls facing us as women" (2007, p. 118). bell hooks (1981) contextualizes this issue by pointing out that white women within 19th century women's liberation efforts often relied on the idea of common oppression, rightfully disaffecting women of color. She writes:

Whenever black women tried to express to white women their ideas about white female racism or their sense that the women who were at the forefront of the movement were not oppressed women they were told that "oppression cannot be measured." White female emphasis on "common oppression" in their appeals to black women to join the movement further alienated many black women. Because so many of the white women in the movement were employers of non-white and white domestics, their rhetoric of common oppression was experienced by black women as an assault, an expression of the bourgeois woman's insensitivity and lack of concern for the lower class woman's position in society. (p. 144)

hooks' retelling of this crucial moment in feminist history encapsulates why claims of solidarity can be deeply problematic and understood as a means to use the experiences of women of color to bolster white feminism. Marsha Houston explains, "women of color do not experience sexism in addition to racism, but sexism in the context of racism; thus, they cannot be said to bear an additional burden that white women do not bear," but rather they "bear an altogether different burden from that borne by white women" (1992, p. 49). As such, claims of solidarity from white women, even those who mean well (Ortega, 2006), can be "self-centered and motivated by [their] own opportunistic desires" (hooks, 1981, p. 144).

Intersectionality is a term coined by Kimberlé Williams Crenshaw (1989) to critique how theoretical frameworks of gendered oppression obfuscate race, thereby erasing or distorting Black women's experiences. Frameworks of unitary gender oppression, Crenshaw points out, operate under a "single categorical axis," and therefore contribute "to the marginalization of Black women in feminist theory and in antiracist politics" (p. 140). She argues,

Black women are sometimes excluded from feminist theory and antiracist policy discourse because both are predicated on a discrete set of experiences that often does not accurately reflect the interaction of race and gender. These problems of exclusion cannot be solved simply by including Black women within an already established analytical structure. Because the intersectional experience is greater than the sum of racism and sexism, any analysis that does not take intersectionality into account cannot sufficiently address the particular manner in which Black women are subordinated. (p. 140)

By examining how white feminist movements, theories, and practices are built around the single axis of gender, void of an acknowledgment of racial difference, Crenshaw exposes how equality and social justice, for many women, can never truly be achieved. When feminist movements ignore racial difference, they uphold the very power structures they purport to resist. Crenshaw writes,

Feminist efforts to politicize experiences of women and antiracist efforts to politicize experience of people of color have frequently proceeded as though the issues and experiences they each detail occur on mutually exclusive terrains. Although racism and sexism readily intersect in the lives of real people, they seldom do in feminist and antiracist practices. And so, when the practices expound identity as women or person of color as an either/or position, they relegate the identity of women of color to a location that resists telling. (1991, p. 1242)

Crenshaw's work reveals that factors beyond gender shape how women live and move within the world. When we fail to recognize social divisions like race, class, ethnicity, nationality, sexuality, ability, and age (Berger & Guidroz, 2009, p. 1), we fail to understand how social injustices operate in complex ways that do recognize identity difference. By extension, a failure to develop intersectional understandings of harassment and the violence it inflicts on women is a failure of efforts to end these injustices.

Despite the racial divide among those who experience sexist online harassment (Citron, 2014; Mantilla, 2015; Nakamura, 2012), most widely-publicized instances centralize white women (see Jones as qtd. in Mantilla, 2015, pp. 26-27). This phenomenon is observed, for example, in some of the most recent well-known cases of sexist online harassment:

● Lindy West: a women's rights and body positive activist, West has a reputation as someone who engages with and calls out her online harassers. She was featured on a 2015 episode of This American Life about online aggression in which she tracked down a person who created a social media account posing as her deceased father in order to harass her. Because she is so outspoken about the amount of abuse she's subjected to online, she's become a central figure in mainstream attention to sexist online harassment. She left Twitter in 2017 because of harassment.

● Jennifer Lawrence: an actress who was the victim of doxxing. Edward Majerczyk, who later pleaded guilty to the crime and was sent to prison (Meisner, 2017), hacked Lawrence's iCloud account and released stolen nude photos of her online. Lawrence wasn't Majerczyk's only victim, but she gained a lot of attention after using the phrase "sex crime" to describe what happened to her ("Cover Exclusive," 2014).

● Jessica Valenti: a feminist activist and founder of the feminist blog Feministing. Much like Lindy West, Valenti is a consistent target for online harassers, presumably because of her notoriety as a feminist figure. In July of 2016, Valenti temporarily quit Twitter after her harassers posted credible rape threats against her then-five-year-old daughter (Piner, 2016).

What does it say that the most widely known instances of sexist online harassment centralize white women's experiences? In the case of @femme_esq, while she is white, her story represents a lot of the complexities of online harassment given her perceived status as a woman of color, her visibly feminist ideals, and her sexuality as a member of the LGBTQ community. It's impossible to say with any certainty whether her story might have played out differently had any of these things been different, but it's not a stretch of the imagination to presume that if @femme_esq had been perceived as white, straight, and male, her harassment wouldn't have been as severe or had such dire consequences. Therefore, we have to ask ourselves: what do we fail to understand about sexist online harassment when we ignore the roles that race and gender play?

A notable exception to the pattern of centralizing white women's stories of online harassment is that of Leslie Jones (on Twitter as @LesDoggg), an actor and stand-up comedian who gained notoriety as a cast member on Saturday Night Live and later as one of the four Ghostbusters in the all-women reboot of the original 1980s movie. Jones has been harassed off of Twitter twice.19 She first left Twitter after her harassers, many of whom targeted her because of her involvement with the Ghostbusters reboot,20 created dummy accounts posing as Jones and used them to tweet homophobic slurs and harass others, making it look like these were her actions and beliefs. Jones left Twitter the second time after her harassers doxxed her, releasing stolen nude photos of her. These harassment campaigns were organized by Milo Yiannopoulos, an infamous sexist and white supremacist who has a history of using Twitter to police people of color and women through the use of overtly racist and misogynistic language, images, and threats (Robinson, 2016). Prior to Jones' fleeing of Twitter, Yiannopoulos called on his followers to harass her explicitly because of her involvement with Ghostbusters, which they did en masse. Both Mantilla (2015) and Jane (2014b) note that "cybermobs," as Mantilla calls them, are comprised mainly of men who are adept at mobilizing in order to orchestrate large-scale attacks on a single target (Mantilla, 2015, pp. 92-94), which is exactly what happened to Jones. It's no accident that the other three stars of the film, who are all white women, weren't subjected to the same treatment, because again, women of color are systematically harassed at greater lengths and to more severe degrees than white women are (Citron, 2014; Mantilla, 2015). Misogynoir, a term coined by feminist scholar Moya Bailey, refers to the combination of sexism and racism that Black women face. It is a different kind of sexism than the kind white women experience because it is predicated on anti-Blackness. As Bailey writes, "What happens to Black women in

19 I should specify that she has been harassed off of Twitter twice so far at the time of writing this in March of 2018, but I suspect it may happen again in the future. 20 Amy Zimmerman provides a comprehensive breakdown of how Jones' harassment is representative of both 1.) the misogyny brought out by the mere existence of a Ghostbusters film starring all women, and 2.) the extreme racism women of color in popular culture experience. Her article, "The Hacking of Leslie Jones Exposes Misogynoir at Its Worst," was retrieved from: http://www.thedailybeast.com/articles/2016/08/24/the-hacking-of-leslie-jones-exposes-misogynoir-at-its-worst.html

public space isn't about them being any woman of color. It is particular and has to do with the ways that anti-Blackness and misogyny combine to malign Black women in our world" (2014, n.p.). The intense misogynoir against Jones, brought on by Yiannopoulos' cybermob of racist and misogynistic harassers, pushed her off of Twitter. Before leaving she tweeted, "I leave Twitter tonight with tears and a very sad heart. All this cause [sic] I did a movie. You can hate the movie but the shit I got today...wrong."21

Figure 1.9: Leslie Jones tweets about her decision to leave Twitter.

The racial dimensions of what happened to Jones are situated within white culture's history of positioning Black women as objects for public consumption. Deborah Douglas (2016) notes the historical connections between Jones' doxxing and the treatment of Black women since the 1600s, and quotes gender and race scholar Lisa Thompson as saying, "The attention to [Jones'] body in particular speaks to a centuries-old disparagement and mistreatment of black women" (n.p.). Through being doxxed, Jones, a Black woman, was stripped bare in public and harassed into silence. Rachel Charlene Lewis (2016) speaks to the racism and colorism present in Leslie's case, writing, "Leslie is not being targeted because she's a woman. She's being targeted because she's a black woman, a dark-skinned black woman more specifically" (n.p.). Lewis goes on to argue that conversations about online harassment that are devoid of race are diluted to the point that we are unable to pinpoint what happened and why. She contends, "we regularly take pride in highlighting trauma faced by white women, but we don't do the same for black women like Leslie" (n.p.). For many Black women, the signal boosts that white women like Jennifer Lawrence are afforded simply aren't there, and if they are, it takes whiteness for people to listen. In Jones' case, her harassment didn't receive much recognition outside of the Black community on Twitter until the hashtag #LoveForLeslieJ, started by Ghostbusters director and cisgender heterosexual white male Paul Feig, went viral. Twitter later permanently suspended Yiannopoulos from the

21 Retrieved from https://twitter.com/lesdoggg/status/755271004520349698

platform, prompting the hashtag #FreeMilo, where his followers congregated to tweet their support for him. Claims that Yiannopoulos' right to free speech was violated sparked much discussion about Twitter's responsibilities as a private platform, especially in relation to the nature of free speech and censorship.22 The #FreeMilo campaign, which centralized the plight of a white supremacist, also drew more attention to Jones' harassment and inspired more support for her, but it unfortunately drew a lot of sympathetic attention to Yiannopoulos as well. Despite the frequency with which cases like Jones' occur, our culture habitually fails to fully recognize the ways in which race influences the machinations, severity, and types of harassment women experience, both online and off. Jones' story is just one of many in which harassment exists at the vile intersection of misogyny and racism—a combination that inspires a unique set of concerns, effects, and cultural impacts. A commitment to racial justice means attending to these concerns, understanding the effects, and highlighting the cultural impacts so as to support women of color who experience the racism that undergirds so much of sexist online harassment.

Ownership of Space: How Harassment Is Used to Police Women Both Offline and On

The harassment of women is not a new practice and, as evidenced by the cases I discussed in the last section, has consistently been used as a method of policing where women go and how they exist in public space (Day, 2001; Padmaja, 2016; Thompson, 1993; Tran, 2015; Vera-Gray, 2016). Commonly referred to as "street harassment," the harassment of women in physical public space acts as a way to ascribe "other" status to them as they experience harassment in the form of verbal and nonverbal behaviors like "wolf-whistles, leers, winks, grabs, pinches, catcalls, and rude comments" (Kissling & Kramarae, 1991, pp. 75-76). Harassment, then, serves as a toll that women must pay as they move about in public. A study conducted by the anti-harassment organization Hollaback and Cornell University found that 85% of U.S. women experience street harassment for the first time before they turn 17 years old, and 11.6% of women experience street harassment for the first time before the age of 11. The study also found that 50% of women under the age of 40 have experienced unwanted groping or fondling in public, and 77% of women have been followed by a man or a group of men in a way that made them feel unsafe. What's more, the researchers conclude, street harassment has a major impact on women's behavior. For instance, 85.6% of women surveyed reported taking an alternate route home or to wherever they were traveling in order to avoid harassment. 72.8% of women reported taking a different mode of transportation to their destination (i.e., a cab instead of walking or public transport). 69.8% of women reported not going out at night at all because of street harassment. Further, findings show that street harassment has negative impacts on women's abilities to attend work and school, as 34.1% of women surveyed reported being late to work or school as a result of street harassment ("Cornell

22 The use of the First Amendment as an argument defending harassment will be discussed in chapter five.

International," 2015, n.p.).23 These results point to the disturbing ripple effects of street harassment and its impacts on women's personal, professional, and academic success. Street harassment not only makes public space unsafe for women, but rhetorically, it communicates that they don't have the same right to be there as men do. When women aren't afforded the same level of access to public space as men, they're unable to achieve the same amount of rhetorical influence.

The relationship between gender and public space has been examined by scholars such as Nan Johnson (2002), who explores "how resistance to women's participation in public rhetorical space was deployed" in the late 19th and early 20th century. Largely, women were relegated to domestic spheres, rendering them underrepresented in more public ones as "access to the powerful public rhetorical space of the podium and the pulpit" was limited (p. 14). For example, Johnson describes the backlash to women's pursuit of "education, the vote, property rights, and mobility in public life" in the form of what's called conduct literature—a genre that pushed the ideological agenda that the woman is "rhetorically meek," quiet, and domestic (pp. 48-49). Keeping women in the parlor, so to speak, kept them out of public life and therefore denied them access to greater rhetorical influence. Tracing just how far back these discriminatory frameworks go, Cheryl Glenn (2004) argues that it's no surprise "women and other traditionally disenfranchised groups have been systematically and consciously excluded from public speaking," because "these groups have, since before Aristotle's time, been excluded from full participation in the production of all Western canonized cultural forms, including the production of rhetorical arts" (pp. 23-24). Women's exclusion from these productions, then, means that their modes of communication, epistemologies, and bodies are often rejected in traditional models of rhetoric, forcing them to adapt, subvert, or be overlooked. Karlyn Kohrs Campbell (1989) examines the contentious relationship women have had with public space in her assemblage of historical feminist rhetorics representative of the struggles women have faced in the fight for access to public platforms and the right to speak. Campbell notes, "The obstacles early women persuaders faced persist, although in altered form, in the present" (p. 15), reminding us that new forums and modes of communication reproduce age-old barriers for women rhetors.

For example, the design and cultures of the internet recreate gendered inequalities that are prohibitive to women's rhetorical opportunities. Claudia Herbst (2009) argues that men have a greater sense of "ownership" of online spaces because they are the ones who build them. She notes that "code literacy is a predominantly male phenomenon," and that "the internet is testimony to this gender imbalance; it was built and maintained by mostly male programmers" (p. 136). It's important to note that, historically, the role women have played in

23 16,607 women were surveyed for this research, making it “the largest analysis of street harassment to date” (“Cornell International...,” 2015, n.p.).

computer science and programming is far from insignificant.24 Yet, as Janet Abbate (2012) notes, our histories of computer programming and coding, particularly early ones, feature only men because these accounts are focused on hardware, which "women rarely had the chance to participate in building" (p. 6). Further, Abbate argues, gendered hierarchies of labor in professional computer programming settings made it "harder for women to perform up to their potential, reinforcing the idea that they are less capable than men" (pp. 20-22). These hurdles all contributed to a cultural narrative about computer programming that favored men and disadvantaged women, which worked to widen a gender gap in the field. Prior to the 1980s, the number of women enrolling in computer science programs was rising, mirroring a similar trend in other traditionally male-dominated fields such as law, medicine, and the physical sciences (Henn, 2014). However, while the number of women enrollees in those latter majors continued to grow, the number of women computer science majors dramatically declined in the mid-1980s and continued to do so for more than two decades before leveling off; it still has yet to increase (Henn, 2014). The beginning of this decline coincided with the rise of personal computing (Fessenden, 2014), when early versions of the personal computer were almost exclusively marketed to boys and men. Steve Henn (2014) writes, "This idea that computers are for boys became a narrative. It became the story we told ourselves about the computing revolution. It helped define who geeks were, and it created techie culture" (n.p.). Despite the contributions women have made to computer science, the fact remains that it has been and still is a largely male-dominated space.

Jessica Megarry (2014) notes that "contemporary online norms can be seen as products of the hyper masculine domination of the medium" (p. 49), giving perceived authority to men over women online. Misogyny and rape culture are upheld even in hacker discourse communities—words like "tits," "gangbang," and "rape" are part of the lexicon used to describe hardware and technical processes, and while "many hackers are likely to find such terminology offensive," we can't ignore how language informs "computer technology and cultural spaces that spring from it" (Herbst, 2009, p. 147). Herbst argues, "Those who write code and create virtual spaces inadvertently have a different relationship to cyberspace than those who merely visit it after the construction has been completed" (p. 138). Therefore, "code is a salient constituent in the creation of women's reality online" (Herbst, 2009, p. 147). In their report on misogyny on Twitter, Bartlett et al. (2014) write, "While the internet was seen as a utopian platform for free speech and equality when it began to become popularly used in the 1990s, it was evident from the very start that the inequalities that structured 'real-world' society had been transferred online" (p. 3). Many of these inequalities are sustained through algorithms, which uphold a variety of offline biases (Akhtar, 2016; Miller,

24 For example, mathematician Ada Lovelace is considered the founder of computer programming, as she created the first algorithm (Lewin, 2015). Inventor and actress Hedy Lamarr paved the way for what we now know as wifi (Cowan, 2012). Margaret Hamilton wrote the software that led to the moon landing (McMillan, 2015). These are just a few of the many women who have made consequential contributions to the field of computer science.

Safiya Noble (2013), for example, discusses how code sets algorithms into motion that reflect racial and gender biases that exist offline, particularly pertaining to Black girls and women. Her study of keyword searching on Google demonstrates that “hegemonic discourses about the hypersexualized Black woman, which exist offline in traditional media, are instantiated online” (n.p.). As seen in Noble’s work, the narratives we build about women, embedded in the very code that upholds the structures of the internet, only continue to negatively impact women’s social standing, both online and off, making it difficult for them to move about and speak in public safely and with the same confidence or authority as men.
Further historicizing the exclusion of women from public discourse, Roxanne Mountford (2003) turns attention towards the role that the female body plays in rhetorical authority, arguing that women’s bodies have long been excluded from public space, and if not excluded then deemed lesser or a threat. Mountford asks the important and persistent question, “How does a woman earn the respect of an audience conditioned to regard her body itself as symbolic of lack (of authority, eloquence, power, substance)” (p. 13). There is no easy answer, obviously, and we’ve seen the dilemma that Mountford points to play out recently in American politics among women elected officials who, despite their authority, eloquence, power, and substance, are undermined and silenced in public forums. For instance, during the 2017 Senate confirmation proceedings for Jeff Sessions as U.S. Attorney General, Senator Elizabeth Warren was interrupted as she read a letter Coretta Scott King had written to the Senate Judiciary Committee in 1986 stating King’s stance that Sessions was unfit to serve as a federal judge because of his pattern of undermining civil rights. Senator Steve Daines, the presiding Chair, interrupted Warren to cite a rule about casting another Senator in a negative light on the Senate floor (Garber, 2017), but allowed Warren to continue. She resumed reading the letter until she was again interrupted, this time by Senate Majority Leader Mitch McConnell, who objected and forced a vote resulting in Warren being silenced for the remainder of the hearing. It should be noted that later in the hearing, two male senators, Cory Booker and Jeff Merkley, were allowed to read and comment on Coretta Scott King’s letter without interruption or objection (Shapiro, 2017).25 Another notable example is that of Senator Kamala Harris, the only woman of color on the Senate Intelligence Committee, who was interrupted by more than one male member of the committee during two separate hearings regarding Russian interference in the 2016 Presidential election (Rogers, 2017). Another oft-cited example of a woman politician being interrupted while having the floor is the first 2016 Presidential debate, in which Donald Trump interrupted Hillary Clinton upwards of 51 times26 (Lush, 2016).

25 After the vote to silence Warren, McConnell said, “Senator Warren was giving a lengthy speech. She had appeared to violate the rule. She was warned. She was given an explanation. Nevertheless, she persisted,” unintentionally sparking what would become a feminist motto and internet meme, “Nevertheless, she persisted” (Garber, 2017; Victor, 2017).
26 Clinton, in contrast, interrupted Trump an estimated 17 times (Lush, 2016).

Further, during the second Presidential debate, Trump exhibited physical behavior that many described as menacing as he appeared to stalk Clinton while she moved around the stage (Leight, 2016), sometimes idling immediately behind her as she spoke. Body language experts characterized Trump’s movements as aggressive, intimidating, and in the style of an impending physical attack (Uhrmacher & Gamio, 2016). One such expert, Ruth Sherman, noted that Trump used proximity to assert dominance over and threaten Clinton (Rappeport, 2016). These examples go far beyond gendered styles of communication. Instead, they point to the persistent sexism in assumptions about who has the right to speak in public without being subjected to silencing mechanisms like interruption, intimidation tactics, and physical threats.
Mary Beard (2014) reminds us that threats made towards women have been present in our culture for centuries and carry on today. She argues, “it doesn’t much matter what line you take as a woman, if you venture into traditional male territory, the abuse comes anyway” (p. 13). Perceivably, “traditional male territory” is synonymous with anywhere public, meaning women aren’t afforded access to the sense of ownership that leads to authoritative and safe participation in public discourse. Attorney and feminist activist Jill Filipovic (2007) argues harassment is used to send a message to women that when they’re in public, it’s only because men allow it. She writes, “At the heart of this aggression seems to be a more generalized offense at women's public presence in ‘men's’ spaces” (p. 3). Street harassment “is leveled at women as a reminder that they do not have the same right as men to move through public space,” creating a significantly different experience for women in public than for men (Filipovic, 2007, p. 298). Mantilla (2013) also speaks to harassment as a means of maintaining male ownership of public space in that it’s about “patrolling gender boundaries and using insults, hate, and threats of violence and/or rape to ensure that women and girls are either kept out of, or play subservient roles in, male-dominated arenas” (p. 569). Beard argues it’s the simple act of making one’s voice public that evokes such abuse: “It’s not what you say that prompts it, it’s the fact you’re saying it” (p. 13).
Further, harassment intensifies not only when a woman simply participates in conversations, but when she deviates from traditional gender norms and styles of interacting. Jennifer Berdahl’s (2007) study of sexual harassment in the workplace overwhelmingly supports the notion that women who are outspoken, assertive, and independent are disproportionately targeted. Her research finds that while many women are subjected to sexual harassment at work, “women who violate feminine ideals” are the group at the greatest risk (pp. 430 & 434). Berdahl further argues that the prominence of sexual or gendered harassment in the workplace “is not about romancing or seducing women,” but rather it’s “about getting them out of there or punishing them for being there” (qtd. in Cassell, 2007, n.p.).
The effects of harassment, whether online or off, are vast and wide-ranging. As I discuss further in chapters three and four, online harassment especially causes fear, anxiety, and silence. Sexist harassment’s prominence in both offline and online spaces is facilitated by an underlying patriarchal ideology that women should not be able to exist in public, let alone participate in public discourses, without being regulated in ways that privilege male bodies and patterns of discourse.

Women’s public actions and ideas have long been surveilled and policed as a way to establish that they don’t have the same rights as men to move about public spaces freely and safely. In order to understand how this policing affects public discourse and thereby epistemology, we must continue to interrogate how women are kept out of public life by harassment that protects patriarchal styles of communication and thought.

Reclaiming the Digital: Anti-Harassment Efforts as a Feminist Social Justice Movement
Returning to Jane’s (2014a) call for academics to produce theoretical frameworks that will help us understand how harassment influences the presence of women online (p. 558): she also notes that feminist identities are at great risk for harassment because a feminist identity is threatening to sexist harassers’ ideologies about who inherently has the right to speak and what issues should be discussed freely and widely. This threat to feminism as an identity, embodied orientation to the world, and mode of social justice action should not be ignored. Nancy Baym’s influential book Personal Connections in the Digital Age (2010) reminds us, “We are still standing on shifting ground in our efforts to make sense of the capabilities of [digital media] and their social consequences. [...] Who is excluded from or enabled by digitally mediated interaction is neither random nor inconsequential” (p. 21). These capabilities and their social consequences have not gone unnoticed. Feminist activists are able to use social media for a variety of purposes, namely engaging in community-building practices that are essential for social change and action. However, complicating these benefits are the harassment tactics that have become the status quo on the internet. While composition and rhetoric as a field has a long history of examining myriad exclusionary practices that occur in public discourses, we still have much work to do in understanding how these practices manifest in digital publics.
Much of the work that has been done in the realm of online harassment seeks to answer the questions: who trolls, and for what purpose (Fichman & Sanfilippo, 2015; Phillips, 2015b)? What are their behaviors, motivations, and language uses (Buckels et al., 2014; Shachaf & Hara, 2010)? Instead, I’m interested in understanding who it is they target and the everyday effects of their transgressions, especially from a rhetorical lens that gives us insight into how this issue affects online environments, cultures, and platforms, and how users contribute to and participate in these spaces. I want to give voice to those who are victimized, not the victimizers. Because much of the existing person-based research on this topic centers on the harasser and not the harassed, we often see online harassment framed as something that isn’t that harmful or is innocuous. Jane’s (2014b) survey of contemporary scholarship on trolling exposes a bias in which “e-bile producers are rarely reprimanded for engaging in insensitivity or cruelty, while recipients and outside observers are frequently chastised as hypersensitive or humourless for failing to make the supposedly easy move of reframing a flame as funny, innocuous, or transgressive” (p. 539). Such representations of the actors within these exchanges fail “to address the fact that the agency enjoyed by citizens in their new roles as media user-producers can be used—not only transgressively—but oppressively and injuriously” (p. 539).

Phillips (2015b), too, is candid about critiques of her early work, which argued that she was too easy on the trolls she studied, with some critics going so far as to call her a “troll apologist.” Phillips admits that she “found certain forms of trolling funny, interesting, and in some cases justifiable, a position further complicated by [her] relationship to the trolls [she] was working with” (p. 46). Here we see how starting from a place of wanting to understand harassers themselves immediately puts researchers in a position where they must empathize with their subjects. In her reflection on this dilemma, Phillips exposes an important issue to consider in researching this topic: if it’s the researcher’s goal to understand a population like harassers, they must learn the customs and language within that community, making it easier to defend their actions (p. 47). With all of the comprehensive and tremendous work that’s been done by Phillips and others on understanding online harassment from the harassers’ point of view, it’s time now to expand our knowledge of those on the other side of these exchanges.
Phillips (2015b) notes that it’s politically significant to pay attention to issues of harassment because online invective calls “attention to dominant cultural mores” (p. 7). Keeping our finger on the pulse of such cultural attitudes is imperative in order to understand who is excluded and how these exclusions perpetuate racism and sexism to “preserve the internet as a space free of politics and thus free of challenge to white masculine heterosexual hegemony” (Higgin, 2013, n.p.). Through online harassment, the right to speak and participate in public discourse is policed in such a way that women, people of color, and LGBTQ people are relegated to niche corners of social media platforms or, in some cases, pushed offline altogether. As a field, we assume interest and responsibility in researching and defining what it means to be a writer and rhetor, especially in light of all the affordances and environments that digital technologies have given us. Further, as a field that values pedagogical practices that help emerging rhetors make meaning of the world through writing, we must turn our attention more fully towards how online harassment has become a norm in the environments our students frequent and how it alters how and where they can express themselves openly and safely. The research we as a field have already done related to online communities, interaction, and social media has primed us to enter the discussion about online harassment more fully, because we still have more work to do in understanding how online harassment contributes to and shapes the culture of digital writing environments.

Chapter Two
Volatile Visibility and the Methodological Problem of Harassment

I want to begin this chapter with a content warning that points to an ever-present methodological dilemma in researching and writing about online harassment. As you continue reading, you will encounter discussions of rape and other violent acts against women. There are times when I replicate the language used by sexist online harassers—language that is at best crass and at worst deeply unsettling, traumatic, and oftentimes violent. There are many sticky ethical considerations regarding whether or not to directly quote violent and offensive language, and I’m sensitive to concerns that putting these illocutions into further circulation helps maintain their power; however, a key part of my argument is that not enough attention is paid, scholarly or popular, to the complexities surrounding sexist online harassment, and I believe that censoring the language of sexist online harassment functions as a cloaking mechanism, keeping the realities of what women experience out of sight and, as the saying goes, out of mind. As Jane (2014a) argues, sexist online harassment is difficult to write about and discuss in academia because by nature it is “heavily laced with expletives, profanity and explicit imagery of sexual violence: it is calculated to offend, it is difficult and disturbing to read, and it falls well outside the norms of what is usually considered ‘civil’ academic discourse” (p. 558). Our academic “politeness,” as it were, draws lines around what we can and cannot talk about in the pages of our journals, at the podiums of our conferences, and in the hallways of our departments. The language of sexist online harassment places it well outside of these boundaries. Jane fears, as do I, that “a less explicit and more polite way of discussing [online harassment] may have the unintended consequence of both hiding from view its distinct characteristics and social, political and ethical upshots, and even blinding us to its existence and proliferation—of implying that it circulates only infrequently and/or only in the far flung fringes of the cybersphere” (p. 558). When the vitriolic language of online harassment is excerpted or “cleaned up,” we risk losing sight of its existence and the degree to which it’s tolerated and accepted in communities outside of academia. My allegiance in this research is to the victims of sexist online harassment, and I try to reflect the language they themselves shared with me by incorporating as much of their own words as possible in my writing of their experiences, even when those words are impolite. I feel strongly that censoring the language and threats women encounter daily would only obscure the exigence of this issue. Please proceed with caution.

~ ~ ~

On the night of August 4th, 2016, just two days after publicizing my survey about sexist online harassment and feminist identities, I received a strange email. It was from a man saying he was a “well connected male troll” who had been forwarded the link to my study and had passed it on to several women he knew who might be able to contribute to the research. It was a bizarre email.

I hadn’t yet received any email inquiries about the survey (I had gotten a lot of engagement on Twitter—more on that later), and I was taken aback by the sender’s strange tone. He claimed to want to help,27 but he was also strangely antagonistic about the women he suggested would be suitable for me to interview. For instance, he called one a “whackobird” and diagnosed another as schizotypal. At the end of the email, he offered to be interviewed before writing, “I am fairly civic minded and interested in policy, despite my poisonous public reputation.” It seemed he wanted my research to capture another side to these stories, and he wasn’t threatening in any overt way. But I couldn’t help lingering on the phrase “poisonous public reputation.” What did he mean by that?
I read the email after waking up in the middle of the night with anxiety about the increased attention I was receiving on Twitter as a result of publicizing the survey. In the two days it had been up, it was not uncommon for me to return to my computer or phone to find upwards of 30 or 40 notifications. Each time this happened, I experienced a twinge of nervousness that when I clicked on my notifications, I would find that I had been doxxed. Part of me was simply waiting for it to happen. These were (and are) legitimate concerns given all of the accounts of women, particularly vocally feminist women, who have been doxxed for even less than what I was doing. Danielle Citron (2014) describes the case of a female tech blogger who was doxxed after being vocal about experiencing harassment, with one of the men behind her doxxing explaining that he did it “because he did not like her ‘whining’ about the abuse she faced” (p. 196). My project was not only identifying harassment as a rampant problem but was designed to give voice to those who have experienced it.
I thought about all of the stories I had read over the past several months of women who had been swatted for less. Swatting is another violent tactic perpetrated by harassers in which they make “a fake emergency call to trigger an enormous police response at the target's home—often a heavily armed SWAT team expecting a gunman or a hostage situation” (Levintova, 2016). At the very least, a confrontation with a SWAT team results in damage to the target’s personal property, potentially at their own expense, and at the very worst, it results in bodily injury or, in some cases, even death. Swatting has become a big enough problem that legislation has been drafted to prevent it in the form of the Interstate Swatting Hoax Act of 2015, co-sponsored by Congresswoman Katherine Clark (D-MA) and Congressman Patrick Meehan (R-PA). Perhaps unsurprisingly, after the introduction of the bill, Congresswoman Clark, who has been an active advocate for anti-online harassment measures, was swatted herself. Her co-sponsor, Congressman Meehan, a man, was not.
Lying in bed, the light from my phone illuminating only my face, I immediately tensed after reading this email, and my eyes rescanned “poisonous public reputation.” I googled the sender’s name.

27 Though, I feel compelled to note here that this air of seemingly wanting to help is how many instances of mansplaining begin.

The top results were articles about his involvement in litigation over doxxing a woman through what’s called revenge porn: publishing sexually explicit pictures or videos of someone without their consent. Looking through some of the search results exacerbated my anxiety, as my fears were potentially being confirmed. This man, as promised, indeed had a poisonous public reputation, and he also had skills—skills he used to harm women. Paranoia set in, and I touched the tiny X in the corner of my phone’s web browser. I had read enough, and I didn’t want to click on any more results in case he or whoever forwarded him my name was monitoring my search history. Some of the questions I had considered before I even started working on this project re-entered my mind. Did I really want to be going down this rabbit hole? Was I willing to experience this kind of stress and anxiety for potentially years to come as I built a research agenda around online harassment? Wouldn’t I feel better if I didn’t get involved? Shouldn’t I just stay silent about it?
Nothing I’d read about academic research or methodologies prepared me for this. What had primed me was everything I knew about women who had been swatted or doxxed for much less than what I was doing, and by primed I mean it at least alerted me to the possibility that it might happen to me. Thinking about all of the women I had read about or met who had been swatted or had men show up at their houses in the middle of the night to threaten or assault them, I popped out of bed to triple-check that all of my doors were locked. I went to the sliding glass door off of my dining room, confirmed that it was indeed locked, and idled there staring out into the darkness of my backyard. I imagined what it might look like if a group of law-enforcement officers, dressed in all black and carrying rifles, crept across my property in the night on a phony tip that I was, maybe, a bomb-maker. And then the motion light on my back deck came on, and I skittered back to bed to wake up my partner. It’s not uncommon for that light to be triggered by the nocturnal creatures who pass through our yard in the night. But I was in no mood for this logic. Instead, the logic that governed my fear was the simple fact that women are swatted for speaking up about gender injustices. I was not swatted, nor did anyone show up at my house to threaten or assault me. Nonetheless, it was not unreasonable for me to feel like these things could very well be right around the corner.

But let’s back up.

~ ~ ~

Long before August of 2016, I decided to take up online harassment as a dissertation topic, and in planning my project, I had two goals in mind:
1. To create a space for women to share their stories of sexist online harassment, and
2. To make meaning from these stories in ways that would lead to larger networks of support and advocacy for women.

I didn’t (and, admittedly, still don’t) fully know what those networks of support might be or look like, but I knew I wanted to use this research of women’s lived experiences to bring about meaningful change to circumstances of online harassment—not in the sense of figuring out what women can do to simply avoid harassment, but what we can do as a collective to support each other and draw greater attention to the problem and its impacts, a vital first step in cultural change. One of the ways in which I wanted to build support with participants was through dialogue and story-sharing. Gathering stories and building theory with women rather than about them through their individual experiences was a core goal from the outset of this project.
This chapter will detail the ongoing and iterative choices I made regarding methodologies for my person-based research and how those choices were influenced by the unique challenges I faced as an online harassment researcher. I’ll also outline the methods used for collecting survey data and describe what I call “volatile visibility”: the methodological problem of researching harassment through visible means (i.e., publicizing or distributing a survey via social media) without experiencing harassment and/or driving it towards participants or interested parties. In short, volatile visibility means that the more visible a project about online harassment is, the more likely it is to attract harassment, increasing the chances that the researcher will experience harassment herself. Further, and equally important, in conducting inquiry into online harassment, a researcher is likely to expose herself to shocking, depressing, and triggering stories or language. Both of these inevitable aspects of online harassment research ensure that the researcher will have to take on a significant amount of emotional labor or strain. My governing methodological choices, then, are attuned to concerns of not only participant safety but the researcher’s as well, something I hadn’t considered before facing harassment myself as a result of this very research project.

Survey Design: Ascribing Value to the Undervalued and Underexamined
Approaching person-based research about the gendered dimension of online harassment must begin with a consideration of experience. Patti Lather (1988) writes, “The overt ideological goal of feminist research in the human sciences is to correct both the invisibility and distortion of female experience in ways relevant to ending women's unequal social position” (p. 571). Lather points to two exceedingly important aspects of feminist research: 1) revealing what is often left covered up through examining the everyday, lived experiences of women, and 2) using research as an avenue to fight unjust practices against women. “Feminist researchers,” she writes, “see gender as a basic organizing principle which profoundly shapes/mediates the concrete conditions of our lives” (p. 571). Those conditions become more widely known when we use research to centralize the voices and experiences of women, allowing us to critique and change unjust conditions. Again, for me it has always been important to build theory with women rather than about them. Part of this process means not just listening to women but ascribing value to their experiences. Jessica Megarry (2014) argues that “believing women,” especially “their experiences and descriptions of male violence, is a significant component of the feminist research tradition,” pushing back against the centuries-old bias “towards letting men speak for women or ignoring women's ideas entirely” (p. 48).

In designing the survey, I wanted to build in opportunities, wherever I could, for women to speak for themselves. Therefore, while there were some closed-ended questions, I privileged open-ended ones in order to make room for participants’ own voices and languages. There were 18 questions total;28 the first three asked about the participant’s gender, racial and/or ethnic identity, and sexuality. They were open-ended so as to empower participants to self-identify without being held to predetermined categories or forced to mark their being as “other.” The next three questions asked how often they tweet,29 how they have their settings configured on Twitter, and what other social platforms they use on a regular basis. These questions were multiple choice with an open-ended write-in option. The next eight asked about harassment experiences on Twitter in a variety of ways and, unless otherwise noted here in brackets, were multiple-choice with an open-ended write-in option. These questions were:
● Have you ever experienced harassment while using Twitter?
● How many times have you experienced harassment while using Twitter?
● To what degree would you rate the severity of this harassment? [rating scale of 1-5, with 1 designated as “not severe” and 5 designated as “extremely severe”]
● Does harassment alter how you use Twitter?
● What strategies have you used to deal with harassment on Twitter? [check all that apply]
● In your experience, what identities, actions, or discussions provoke harassment? [open-ended]
● Have you experienced harassment on Twitter that you would consider to be based, at least in part, on your gender?
● What do you think can and/or should be done to curb the problem of harassment on Twitter? [open-ended]
In addition to asking about the kinds and amount of harassment, I also thought it important to inquire into feminist identities, given the conclusions of previously discussed research suggesting a high correlation between a visible feminist identity and sexist online harassment (Jane, 2014b; Mantilla, 2015). Therefore, the next four questions asked about feminist identities and the role they play in the respondent’s life and use of Twitter. They were:
● Do you consider yourself a feminist? [multiple-choice with an open-ended write-in option]
● If you do identify with feminism, do you make this identity known on Twitter? [multiple-choice with an open-ended write-in option]
● If you identify as a feminist, in what ways, if any, has Twitter influenced your feminism? [open-ended]
● If you identify as a feminist, in what ways, if any, has your feminism influenced your use of Twitter? [open-ended]

28 The survey is presented in full in appendix A. All of the questions included on the survey went through several iterations, incorporating feedback from academics as well as from women outside of the academy who are active on Twitter and/or active in feminist activism and communities.
29 As discussed in chapter one, I decided to focus the survey on Twitter specifically, given the platform’s reputation as being exceedingly bad when it comes to online harassment. These questions were written in a way that assumed the participant has a Twitter account because I contained the survey distribution to Twitter itself.

And finally, I wanted to build in a mechanism on the survey that would allow participants to share anything related to the scope of the study that the other questions didn’t necessarily address. Therefore, the survey concluded with an open-ended space to “please describe any personal experiences or stories you wish to share about online harassment.” In service of my goal to prominently feature women’s own stories and words, it was important to me to let the participants speak for themselves while communicating rhetorically, through the design of the survey questions, that their experiences are meaningful and valuable. Upon submitting the form, respondents were asked to indicate whether they were interested in receiving more information about a possible follow-up interview.
One of my concerns from the outset of the project was about academic biases that frequently put feminist work like this at a disadvantage. My concerns were intensified at several points throughout initial explorations of this project when I started to hear the common refrain that this kind of work belongs in other fields or that feminist research agendas like this one rarely lead to a “good” job.30 I also heard, many times: is online harassment even a problem? This question points to a realization that perhaps my heavy steeping in feminist circles prevented me from seeing earlier—that many people outside of women’s circles, especially digital women’s circles, really have no idea that sexist online harassment is going on at the rate and intensity that it does, day in and day out. This question also helped me realize that I was on the right track in attending to a knowledge gap in the field, but I had to consider that academia is not inoculated against sexist bias and a tendency to devalue work that is feminist in nature.
Experiencing resistance to my taking up of feminist research questions is, unfortunately, not uncommon. Feminist researchers regularly encounter questions of validity, as feminist inquiry and methodologies often deviate from the patriarchal research traditions of the academy. Jen Almjeld and Kristine Blair (2012), for example, highlight a systemic problem within academia: that feminist research and its methodologies are at risk of being deemed not academic enough to “count” as “real” research. Feminist theorists such as Jacqueline Jones Royster and Gesa Kirsch (2012), Adrienne Rich (1995), and Mary Daly (1985) have noted this problem as well, with Daly going so far as to say that academic methodologies are tyrannical in that they reject women’s questions, and therefore studying women’s experiences becomes almost impossible. Ahmed (2017) speaks to this problem by drawing on Nirmal Puwar’s (2004) notion of the “space invader”: a figure who enters “spaces that are not intended for them” (Ahmed, 2017, p. 9). Given how the mechanisms of the academy are patriarchal in nature, often deeming feminist research questions and methodologies inferior, feminist researchers must become “space invaders.”

30 I don’t mean to suggest that these sentiments wholly dominated conversations about my work, though I’d be lying if I said I didn’t encounter them frequently. However, I’ve gotten a tremendous amount of support from a lot of people in the field, both at my home institution and at others. I’m grateful for the network of scholars who advised me to keep pursuing this topic and who have helped me to address validity concerns more pointedly in my work.

Ahmed says we can occupy this role simply by referring to the wrong texts or by “asking the wrong questions” (Ahmed, 2017, p. 9). “Wrong,” of course, in this case refers to the types of questions that disrupt patriarchal foundations. Trying to make these otherwise patriarchal methodologies fit our research questions and designs does our work a disservice because they inherently cover up what we might be trying to see. In her book Radical Feminism, Writing, and Critical Agency: From Manifesto to Modem, Jacqueline Rhodes (2005) argues that feminist work, particularly that which examines political and feminist communities outside of academia, runs the risk of mishistoricizing important events and spaces “because of the drive to legitimize ourselves within academe” (p. 20). The “legitimacy” or validity question is one that can easily become the central force behind the research, relegating the real work of feminist inquiry—intervening in unjust practices against women—to the backseat. Almjeld and Blair (2012) see feminist research as political in the sense that capturing and understanding women’s social activities in their everyday lives is paramount to breaking down barriers to the validity of inquiry into feminist spaces and communities.
As I discussed in chapter one, I know firsthand about the powerful feminist communities on Twitter and the threats to them as a result of harassment. My survey, and later interviews, sought to tap into these phenomena through the individual experiences of women who use Twitter, like I do, on a daily basis. Royster and Kirsch (2012) note that as a field we have relied on examining conventional public forums as sites for understanding rhetoric and communication despite the fact that these are places where women have long been excluded. They argue we must expand our scope to research women’s ways and sites of participation and communication beyond traditional public sites (pp. 99-100)—to examine “the social circles within which [women] have functioned and continue to function as rhetorical agents” (p. 24). Increasingly, many of these social circles reside online. As I’ll detail more fully in chapters three and four, Twitter has become an exceedingly important discursive space for women. The social activities and everyday lives of women that feminist methodologies encourage us to examine happen, for many women, myself included, on Twitter, and they are being threatened by the presence of harassment. This alone makes the experiences and effects of sexist online harassment worthy of scholarly attention.

Survey Distribution: The Messiness of Social Media
Social media spaces are, as Postill & Pink (2012) call them, messy, and studying them is even messier. Frankly, we have yet to fully outline methodological frameworks that address not only the clutter of social media but also the mayhem of harassment itself. At the outset of designing my research plans, I knew that I would have to think carefully about the choices I made in terms of where and how to publicize my project in order to gather participants. Collecting data about harassment within the very network that is rife with harassment is a risky choice but a necessary one. Again, I was most interested in containing my discussion to Twitter for reasons outlined in chapter one, so it made the most sense to limit the survey’s circulation to that specific platform.

Also, Twitter is the platform I’m most familiar with, and I wanted to draw on that knowledge in order to circulate the survey widely and manage its promotion as carefully as one can in networks as vast as Twitter’s. Digital ethnography, sometimes also referred to as netnography (Kozinets, 2010) or virtual ethnography (Hine, 2000), allows for an examination of individual experiences within chaotic and messy environments such as social media, where it’s easy for the individual to get lost in the crowd (Beaulieu, 2004; Markham, 2013; Postill & Pink, 2012). This approach positions the researcher as a participant and active member of the community being researched because this is an efficient way to understand the norms and values of the community, corroborating Blair’s (2012) participant-observer approach to technofeminist research. In many ways, the researcher-as-participant model helps uphold researcher transparency, as researchers integrate their own experiences and community-member knowledge into their research narrative (Hine, 2000; Kozinets, 2010; Markham & Baym, 2008). My existing knowledge of a.) Twitter, b.) feminist Twitter, and c.) harassment on Twitter informed how and where I distributed the survey. Further, because I was already embedded in these communities, I was able to put my existing Twitter ethos to good use.
I have two different Twitter accounts, both of which I used to distribute and publicize the survey. One is used mainly for professional purposes and the other is more personal in nature. At the time of distributing the survey, my professional account had around 700 followers and was set to public, meaning anyone can follow me, see my tweets, retweet my tweets, and direct message me. I’ve had this account since around 2011 and maintain it as a means to live-tweet academic conferences and connect with other scholars in composition and rhetoric, feminist theory, and social media studies. The majority of my followers there are academics. My personal account, which I’ve maintained since 2008, has almost always been set to private, meaning only pre-approved users can follow me and see what I tweet. At the time of distributing the survey, this account had around 130 followers, consisting mostly of internet friends, activists, and street feminists unaffiliated with academia. I decided to unlock this account and use it in addition to my professional account to publicize the survey. Unlocking meant other people could follow me and retweet my tweets.
Christine Hine (2000) says the digital researcher’s own interaction in the environment they study “is a valuable source of insight” (p. 65), because it helps the researcher reflect on her positionality within the community she studies, drawing attention to her networks, in all of their advantages and limitations, as well as her position within them. Considering I was more entrenched in feminist and public communities on my personal account, I thought it important to draw upon that ethos, despite my concern that unlocking would sully this network. In outlining a methodology that triangulates feminism, activism, and technological literacy, Blair (2012) describes technofeminist research as connecting the personal and the political, noting that this feminist triangulation incorporates the subjectivity of the researcher because technofeminist researchers “are often personally and politically connected to the groups they study” (p. 67).

Therefore, the researcher’s own story becomes crucial to the transparency of methodologies and methods, as she draws on her insider/outsider knowledges relevant to the community of study. My insider knowledge has grown as a result of actually doing this work. The story that I opened this chapter with taught me a lot about some of the more severe effects women might experience as a result of even just the idea that harassment might be coming. But what I haven’t yet discussed is what led up to the night that I received the strange email from Poisonous Public Reputation—what happened when the survey went live.
I opened the survey on Tuesday, August 2nd, 2016 and distributed it by tweeting about the project along with a link to the survey, which was built and housed via GoogleForms. During the first two days of the survey being live, I would tweet about it (along with the link) sporadically, hoping to catch people who check Twitter at different times of the day. I would also ask my followers to retweet it into their own networks. I also tweeted it at specific people with large networks of followers and asked them if they would retweet it. It was retweeted by some big names, including Jessica Luther, a sports journalist and author of Unsportsmanlike Conduct: College Football and the Politics of Rape, and Soraya Chemaly, who I introduced in chapter one as the Director of the Women’s Media Center Speech Project, a foundation dedicated to raising awareness about online harassment and chaired by anti-harassment activist and actress Ashley Judd. Other prominent figures, such as racial justice activist and author Jessie Daniels, didn’t just retweet it but took it a step further, writing their own tweets or quoting my tweet to plug my research and link to the survey in their own words.
Distributing via Twitter, then, proved to be advantageous because of the ability to move information from my personal networks into additional and larger networks so easily. While my own personal networks are relatively small, the ability for my tweets to be shared by others allowed the survey to spread farther than it might have using another method or another site of distribution. However, this is where the disadvantages of Twitter’s “messiness” come in—it’s incredibly difficult to track all of the times the link to the survey was retweeted or how many people actually saw tweets about my survey, given that some users shared the link without retweeting or tagging me at all. Another limitation of this method of distribution is that the networks on both of my accounts are skewed in distinct ways, and while I try to diversify my networks as much as possible in terms of race, gender identity, and sexuality, I can’t help but wonder how much the survey failed to circulate in communities important to women of color or LGBTQ women, given my identity as a cisgender white woman. I made sure to send the survey to women from communities outside of my own identities, but the truth remains that my personal networks aren’t as diverse as they could be.
I wasn’t entirely sure what to expect in terms of participation, but by the end of the first day, I had collected almost 40 responses. Additionally, and unsurprisingly, by the end of the first day, I also started being harassed.

The first few instances were relatively innocuous—the occasional “sea lion” would show up in my mentions (a “sea lion” is a harasser who masks their harassment as civil discourse and then gaslights31 their target into thinking they’re the one being rude and uncivil). But as the day drew on, I started to receive more severe forms of harassment from people who were agitated as to why I would research harassment and propagate the idea that feminists receive severe forms of it. Most of the harassers expressed these sentiments through my mentions. After reporting one of the users to Twitter, I noticed that several hours later he was back in my mentions using a new but similar account—he had created a new account to work around the fact that I had blocked him. Another user, a self-identified “men’s rights activist” (MRA),32 spammed me with link after link to bogus studies about how men are the “real” victims of online harassment, suffered at the hands of “femi-nazis” like me. I was added to a few new Twitter lists with titles that led me to believe they functioned as surveillance mechanisms set up by harassers wishing to target feminists.33
I also noticed an uptick in new followers on both of my accounts. Most seemed to be people genuinely interested in my research, and I was excited to see activists and feminists from all over the world take interest in my work. But some were clearly accounts set up with the sole purpose of surveilling me, evidenced by their being newly created with no followers, no picture, no personal details about the user, and only following one account: mine. Some of my new followers were less anonymous than this, displaying their anti-feminist and anti-woman stances proudly in their account details. I spent a lot of my afternoon blocking users or scouring profiles in an attempt to determine whether the random man who popped up in my mentions meant serious harm and disparagement.
These events rattled me. One could make the argument that I was (and still am) overly sensitive to unwanted interaction on Twitter given my research and how many narratives of harassment I’ve encountered that ended in significant upheavals of women’s lives. But to this I say: shouldn’t we be overly sensitive in these instances? While these harassment experiences were incomparable to some of the horror stories other women carry with them, it doesn’t change the fact that they cost me time and significant emotional labor. Having only witnessed harassment like this in the past, my orientation at the start of the project was a strange amalgamation of knowing it was likely I would experience harassment and uncertainty that it would actually happen. What sincerely surprised me was how fast all of it began. Literally within the first half hour of the first tweet promoting the study, a man sent me a series of tweets explaining his position that inquiries into sexist online harassment against women are inherently sexist against men.

31 Gaslighting is a manipulation tactic used to gain power in which a person causes another person to question their reality, perception, memory, and sanity (Dorpat, 1994).
32 It’s difficult to define the men’s rights movement, as it’s comprised of many individuals with sometimes competing ideologies. Arthur Goldwag (2012) dissects some of these complexities as he explains that the men’s rights movement was born out of what’s known as the fathers’ rights movement and “is made up of a number of disparate, often overlapping, types of groups and individuals. Some most certainly do have legitimate grievances, having endured prison, impoverishment or heartrending separations from genuinely loved children” (n.p.). However, Goldwag notes, current robust branches of the movement, particularly those that exist online, have spawned distinctly anti-feminist and anti-women coalitions.
33 More on how the list feature of Twitter can be used as a harassment vehicle is discussed in chapter three.

While this person didn’t threaten me or call me names, he did spam my notifications—a tactic I don’t hesitate to align with those that mean to drown out opposing voices, as discussed in chapter one. When the first instances of being called a cunt or a bitch rolled in, I had a panicked feeling of “it’s starting,” knowing that in most instances, there’s no telling how harassment like this will escalate or when it will stop. I believe a significant reason for my feelings of panic and uncertainty lies partially in the fact that harassment, in its entire diverse range of experiences, isn’t discussed or challenged enough. By the time I read the email from Poisonous Public Reputation, I was convinced that I now appeared on multiple sinister lists circulating in dark corners of the internet, populated by men’s rights activists ready to orchestrate attacks on individual women doing work towards gender equality. I couldn’t shake the feeling that this project would end with me being doxxed or something even worse that I hadn’t even considered. It’s not paranoia if there’s precedent. And there is plenty of precedent.
On the morning of August 5, 2016, about two and a half days after opening the survey, I called my advisor, Jason Palmeri, to talk through my concerns, and with his support I decided to take the survey down and lie low online for a while until my visibility decreased. I locked both of my Twitter accounts, changed all of my passwords, and even disconnected my laptop from the internet for a few days, just to be safe. We talked about the possibility of re-opening and circulating the survey again later on, but I ultimately never did. For a few weeks after this experience, I tensed up whenever I looked at my phone to see a large number of Twitter notifications or opened my email to find a message from an unknown sender. I don’t mean to over-dramatize what turned out to be a pretty standard harassment experience, but the fact remains: it happened, and the effects—the fear, anxiety, lost sleep—were all real.
How can harassment researchers, then, a.) stay engaged while experiencing the negative effects of harassment, and b.) use their own harassment experiences to inform methodologies and practices that must both protect participants and understand what they’re going through or have gone through? I don’t have all of the answers to these questions, but I do have two suggestions. First, researchers must know going into this kind of research that it will take time. I’ve had to take many unanticipated breaks throughout all stages of this project, not just during data collection, in times when emotional exhaustion set in and I just didn’t have the stamina to read another terrifying and heartbreaking doxxing narrative. Not only experiencing harassment but reliving those moments through other women’s stories caused some significant emotional distress that called for letting my work rest for long periods of time. It is, to say the least, difficult to encounter research and narratives that can be triggering in that they’re about systematic hatred and opposition to your very being. Second, documenting and reflecting on my own harassment experiences, while challenging, allowed me to create reminders for myself about what harassment is like throughout the lengthy data collection process. This in turn helped me think about best practices in formulating interview questions or even approaching interview volunteers, which will be discussed further in the next chapter.
In fact, this experience, the deterioration of the survey as the focus of my data collection, helped me to rethink my plans and shift the focus to in-depth interviews with women.

At the start, I thought collecting women’s narratives via the survey would be a good way to capture a range of experiences and let participants speak to the aspects of online harassment that suited their narratives best, but after the project’s visibility hindered this plan, I recalibrated to put greater emphasis on interviews. After all, individual stories were what I was after in the first place, and one-on-one interaction would allow me to ask questions that are more contextual and pointed to participants’ individual experiences. Individual stories, in this case both the participants’ and mine, are a necessary part of this work because emphasizing narratives of lived experience helps us to reconstruct our understanding of the world to include women’s diverse ways of meaning and knowledge making (Hemmings, 2011; hooks, 1984; Rich, 1995; Rhodes, 2005), which are unfortunately too often undervalued by mainstream and academic epistemologies. Blair (2012) argues that emphasizing narratives of lived experience is “a potentially powerful form of technofeminist, activist research” (p. 68), because women’s stories are so woefully undervalued in many arenas.
Yet it’s important we understand the line between using individual stories as a representation of what it is we’re researching and using individual stories as a representation of the macro experience of a marked group. bell hooks (1981) cautions us about relying too heavily on a singular experience as a stand-in for a group as a whole. For example, she writes, “All too often in our society, it is assumed that one can know all there is to know about black people by merely hearing the life and story and opinions of one black person” (p. 11). She goes on to note that hearing these individual stories is clearly important, as storytelling is a method that can reveal what otherwise would have remained hidden; however, hooks notes that too often a singular experience has been used as a justification for, say, white women to claim they wholly understand the collective experience of Black women. This important point must absolutely inform our approach to centralizing individual stories in our research, especially when drawing conclusions about what these stories mean for groups of people as a whole.
I don’t mean to suggest that what happened to me during my data collection process will necessarily happen to other researchers. But I do mean to suggest that if it does, there will be real consequences, both ones that can be anticipated and ones that are impossible to predict. In the next section, I’ll talk through what this problem of volatile visibility means for researchers and how it sets extensive limitations on methods such as surveys.

Volatile Visibility: Methodological and Institutional Challenges of Harassment
If I hadn’t already made this point clear, let me reiterate: for women, particularly feminist women, an increase in visibility can very often lead to an increase in harassment. This volatile visibility, the correlation between being “seen” online and the amount of harassment experienced, has implications for how we might rethink virality and circulation. A feminist theory of volatile visibility, then, considers how circulation of feminist identities and voices often involves harassment and, by extension, also considers how harassment impedes circulation. Further, a feminist theory of volatile visibility considers how one can even do research on and with social media safely.

51 with social media safely. More meta­aspects of my experience beg the question, how can research about online harassment using social media be done in ways that don’t leave participants or the researcher vulnerable to harassment themselves? Volatile visibility obviously presents several challenges to research of this kind. Namely, it significantly impedes the ability to research harassment through visible means without the researcher experiencing harassment herself, which comes at a cost of physical, psychological, and emotional distress. Further, data collection or even publicizing our work about harassment on social media, the place where these events occur, can alert harassers to the research, ultimately compromising the quality of the data in, for example an open survey form. This was brought to my attention by one of my readers during the design phase when he asked, “what will you do if your survey is trolled?” I didn’t know the answer then and I don’t know the answer now. But I do know that this is a legitimate concern for research about online harassment. It’s a risk in any kind of person­based research that the information provided may not be truthful, but that risk is enhanced under volatile visibility. Clearly dilemmas such as this bring about important and tricky questions as to how we should design our studies to collect robust and accurate data without compromising the safety of those involved. Many times, concerns of participant safety are chiefly guided by our institutional review boards (IRB), but compelling arguments have been made about the value of going beyond those minimum standards of ethics, especially when engaging in digital research. Take here, for example, the sticky concerns of open versus close social networks. IRBs generally treat public and private as diametrically opposed forces, but, as Malin Sveningsson Elm (2008) argues, we should see “public” and “private” not as a dichotomy but as a continuum, and make ethical decisions accordingly (p. 75). Heidi McKee and Jim Porter (2009) also advocate for a more complex understanding of social networks than an either/or orientation, especially when it comes to participant consent. They point out that there are very little hard­and­fast notions of “public” or “private” online, and by thinking of it as more of a scale, we can make more informed ethical decisions about aspects like participant consent (p. 136). Because of grey areas such as this one, it’s not enough to justify our methodological decisions based on academic or IRB norms, especially considering IRBs have been critiqued for underestimating risk at times (p. 45) and being slow to understand the complexities of internet­based research (p. 36; see also Banks & Eble, 2007). Instead, McKee and Porter argue, researchers should make ethical judgements based on interactions with myriad entities, including people outside of their fields and, in some cases, even participants themselves (p. 15). Going beyond IRB requirements can help ensure we treat digital environments and events as users understand them to be. This is why digital ethnographic and (techno)feminist methodologies are important: because they allow the researcher’s position within the community of study to inform the research. Researchers who are already familiar with community norms can rely on their subjectivity to make ethical decisions informed by insider knowledge in a way an “objective” researcher might not be able to. All this to say, while we

52 should follow IRB standards, they don’t always go as far as they need to in order to address all of the safety concerns at play. While IRBs are a necessary and important function of the university, they can’t possibly consider the whole of the vast and multidimensional context of a research topic, especially one as complicated as online harassment. As McKee and Porter write, “circumstantial details matter” (p. 28). After submitting my proposal to my IRB, I was found to be exempt from additional screening as long as I agreed to two modifications. First, I needed to garner interview participants via a form independent from the survey (originally, a request for interview participants was embedded as a question in the survey itself). I remedied this by creating two separate GoogleForms, one for my main survey and one where participants could enter their contact information should they care to receive follow­up information about a possible interview. I then linked to that form in the confirmation message participants received after submitting their survey responses. Second, the IRB reviewer asked that I provide survey respondents with “contact information for non­commercial voluntary counseling,” and a “hotline” was encouraged as a fitting solution. I appreciated the keen concern for the ways in which recounting experiences of harassment might trigger participants through reliving emotional harm. But I ran into a snag here: hotlines specific to online harassment don’t exist. I thought about including contact information for a general emotional distress hotline, but given the unique context surrounding online harassment and the wide­range of ways one can experience it, I wanted to try and find a more appropriate avenue for help. I thought about providing a link to resources or guides on protecting oneself from online harassment, but again, part of why I wanted to do this research in the first place is because resources for addressing online harassment, and specifically sexist online harassment, are scarce and many, frankly, are written in a way that victim­. With that said, there are several online resources that do a good job of providing tips and support in ways that recognize online harassment as a varied and serious problem. I opted to provide participants with a link to a foundation started by Zoë Quinn, the first target of GamerGate, called Crash Override Network, an “advocacy group and resource center for people who are experiencing online abuse” (Crash Override Network, 2015). The network works with private citizens, law­enforcement officials, and tech companies to help them combat online abuse through training and education. Their website has a whole host of resources visitors can read or download that gives them information about how to stay safe online. This example of extending the IRB reviewer’s suggestion for participant safety is representative of why we must consider from the outset of a project how the IRB can’t possibly know all of the ethical dimensions of any given topic. My experiences with harassment after posting the survey also revealed to me a shortcoming of the review board: they hadn’t considered my safety, and frankly, neither had I. 
Everything I know about online harassment told me that taking this up as a research topic was risky and that making my feminist research agenda visible to a far-reaching audience using social media was even riskier, but I hadn’t done any conceptual work at the start to consider how I might handle harassment or doxxing should I encounter it myself. And I might have, had there been institutional mechanisms that account for these kinds of risks. But there weren’t. Ultimately, my failure to reflect more intensively on researcher safety played a part in the survey having to be taken down after only two and a half days of circulation. This obviously limited its reach. The small amount of time the survey was up and circulating, coupled with my aforementioned relatively small personal networks, led to a significant lack of diversity among survey participants in terms of racial and ethnic identifications. Respondents are overwhelmingly white. Of the 77 responses to the open-ended question “what are your racial and/or ethnic identifications,” 61 people (79.2%) listed white or caucasian. The next most frequent response, written in by three people (3.9%), was “white Jewish.” The entirety of the results to this question is presented in table 2.1, and responses are presented exactly how they were written by participants:

Q2: What are your racial and/or ethnic identifications? (open-ended; 77 responses)

Response                     Number   Percentage
white/caucasian                  61        79.2%
white Jewish                      3         3.9%
Latina                            2         2.6%
Hispanic                          2         2.6%
European-American                 2         2.6%
Native American                   1         1.3%
Asian American                    1         1.3%
American                          1         1.3%
Asian/Pacific Islander            1         1.3%
Biracial (black & white)          1         1.3%
Middle Eastern/Italian            1         1.3%

Table 2.1: Responses to the survey question, “what are your racial and/or ethnic identifications?”
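Because tables 2.1 through 2.3 are simple tallies of verbatim open-ended responses, readers pursuing similar survey work may find it useful to see how such counts and percentages can be computed. What follows is a minimal, hypothetical sketch in Python; it is not the script used in this study, and the response strings are illustrative stand-ins for an exported spreadsheet of survey answers.

from collections import Counter

# Illustrative stand-ins for exported open-ended survey responses;
# a real export would be read from a spreadsheet or CSV file.
responses = [
    "white/caucasian", "white Jewish", "Latina",
    "white/caucasian", "Hispanic", "European-American",
]

# Tally each verbatim response, preserving participants' own wording.
counts = Counter(r.strip() for r in responses if r.strip())
total = sum(counts.values())

# Print each response with its count and share of the total,
# mirroring the Response/Number/Percentage layout of the tables.
for response, n in counts.most_common():
    print(f"{response:<25} {n:>3} {n / total:.1%}")

Tallying verbatim strings, rather than collapsing them into predetermined categories, mirrors the decision above to present responses exactly as participants wrote them.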


The majority of respondents identify as heterosexual, though responses show a greater diversity of sexuality than of race and ethnicity, as seen in table 2.2, which presents the responses to the question, “how do you describe your sexuality?” Again, responses are presented exactly how they were written by participants.34

Q3: How do you describe your sexuality? (open-ended; 75 responses)

Response                     Number   Percentage
Heterosexual                     45        60%
Bisexual                         16        21.3%
Queer                             3         4%
Pansexual                         2         2.7%
Gay                               1         1.3%
Asexual/Demisexual                1         1.3%
Lesbian                           1         1.3%
Heteroromatic asexual             1         1.3%
I generally don’t                 1         1.3%
Female                            1         1.3%
Fierce                            1         1.3%
Cis                               1         1.3%
Straight but open                 1         1.3%

Table 2.2: Responses to the survey question, “how do you describe your sexuality?”

34 Two responses, “female” and “cis,” are commonly understood to refer to gender identity, not sexuality. It’s possible these respondents wrote their answers in the wrong field, but rather than omitting these responses, I think it’s important to present the responses in full, especially when discussing individuals’ self-described identities.

Additionally, while “female” or “woman” was the gender listed with the most frequency, no one explicitly identified themselves as “transgender,” a population that is significantly impacted by online harassment, and only two participants explicitly claimed gender identities outside of the cisgender binary system. The responses to the question, “how do you describe your gender” are displayed in table 2.3, and responses are presented exactly how they were written by participants:

Q1: How do you describe your gender? (open-ended; 76 responses)

Response                                              Number   Percentage
Female; woman                                             48        63.1%
Cisgender female; cisgender woman; cisgender girl          9        11.8%
Male                                                      13        17.1%
Cisgender male                                             2         2.6%
Lady type                                                  1         1.3%
Non conforming, non binary                                 1         1.3%
Cishet                                                     1         1.3%
Usually a woman, sometimes an enby35                       1         1.3%

Table 2.3: Responses to the survey question, “how do you describe your gender?”

35 “Enby” is a term derived from the abbreviation N.B., or non-binary.

There are some serious limitations here brought about, I believe, by the volatile visibility of the project. By taking the project out of circulation so early, the survey failed to gain the diverse perspectives that online harassment research so desperately needs. This is a key methodological point for those interested in pursuing person-based research into online harassment using online methods: we have to actively seek out circulations that will put our research in touch with diverse populations, in terms of race, ethnicity, gender, sexuality, and otherwise, and further, we should think through methods that make transparent both the real reciprocity involved and the importance of their participation in the study. By “real,” I mean reciprocity that goes beyond the academic trap of claiming that this research will help the communities they study. This is not to say that doing research with the goal of helping the communities we study is not a form of reciprocity. But what’s the plan beyond publishing the research in the pages of our closed journals for academic readers only? What’s the action to be spurred by the outcomes of such research? How does this action extend to communities beyond the academy? Sometimes that’s unknown at the outset of a project, as was the case here. But research of this kind must begin with an awareness of what the researcher hopes to accomplish through taking stock of the project goals and considering how these goals enact action for the betterment of the communities the stated issue impacts the most.

A pool of only 77 responses means the results are not statistically generalizable in any quantifiable way. However, despite these limitations, the survey does do important qualitative work, and as I see it, serves three significant functions that warrant discussion in subsequent chapters:

1. The circulation of the survey and the ensuing harassment reveal the methodological complexities involved when doing research about online harassment, especially as a researcher with both personal and professional identities and goals that are likely to offend sexist and misogynistic harassers. There’s much to be gleaned here, as I’ve discussed in this chapter, in terms of methodologies of inquiry into online harassment and a feminist theory of volatile visibility.

2. The open-ended questions allowed the women who have experienced harassment to share their individual stories in their own words. When Pamela Takayoshi (1994) collected stories from women about their experiences with email harassment, she noted that these responses do not represent “all women’s experiences with electronic media” but do raise “important issues [...] that have not yet been adequately addressed in the literature” about electronic media (p. 25). Similarly here, while these narratives may not represent the whole of “women,” individual women’s stories matter and allow us to reach a level of specificity that’s necessary for this kind of work. By collecting narratives of lived experience from women whose daily lives are affected by online harassment, we can start to see the magnitude of the problem and the material effects it has on people, culture, and discourse. Further, we can start to see how the problem fractures along diverse lines, only visible through an examination of individuals. Documenting these stories also helps to reveal commonalities as experiences map onto one another in key ways.

3. The survey enabled me to gather volunteers for interviews, which became the greater focus of my research. This, in a way, was a blessing in disguise, because again, gathering stories and building theory with women rather than about them through their individual experience was a core goal from the outset of this project, and being able to gather interview participants through the survey put me in touch with people who were eager to share their stories on a more personal level than the survey.

Sexist online harassment is too often viewed as inconsequential and apolitical (Filipovic, 2007; Jane, 2014b; Megarry, 2014), but collecting these stories and amplifying the very consequential and political ramifications of online harassment can change that. The stories shared with me by the women I interviewed, which I discuss in the next chapters, are illuminating, affirming, heartbreaking, and brave. The interview process involved tears, bonding, relating, and confirmation that while no two stories are exactly alike, there are clear and distinct patterns in the perpetration of online abuse against women as well as in its fallout. Women who experience online harassment belong to a community of other women who have experienced similar harassment and fallout, even if that community is unbeknownst to them at the time. I hope that by sharing these stories that community is made more visible to these women.

Chapter Three
The High Stakes of Online Harassment: Threats to Women & Feminist Action

For women, particularly feminist women, interacting and circulating publicly online comes with high stakes, as seen in the previous chapters. On Twitter, while the potential for sexist harassment is high, so too is the potential to participate in feminist community and action. Twitter is a platform that bridges the personal with the political (Papacharissi, 2014, p. 119), an intersection that has long been important to feminist movements (Rhodes, 2005, p. 78). This hybridity helps women connect with each other to share their experiences in a community setting while challenging patriarchal dominance. For many women, these online spaces are where they live, work, and play. What happens there not only has personal implications, but political ones as well. So what happens if and when online harassment impedes access to feminist communities and action? This chapter begins by discussing how Twitter can be used as a platform for feminist community building and action, highlighting survey responses that speak to the ways in which Twitter has proven to be influential for learning about intersectional issues. Despite these important uses, however, Twitter also enables sexist harassment, complicating feminist engagement with the platform. Therefore, I discuss how abuse is proliferated through Twitter’s policies and design. Then, I share the stories of two women who reflect the tension between Twitter as a feminist space and a space that, simultaneously, can be hostile towards women. “Tracy” is a reproductive justice activist who uses social media for her feminist work but experiences severe harassment both online and off as a result of living out loud as a feminist. Unfortunately, Tracy’s story is not uncommon, as evidenced in “Kate’s” story. Kate has also experienced high-volume targeted harassment after having several tweets that are feminist in nature go mildly viral. Tracy and Kate’s stories give insight into the prevalence and severity of online harassment and how it influences women’s lives on a day-to-day level. I’ll conclude the chapter by discussing more of the stories shared through the survey that map onto what Tracy and Kate describe, all of which demonstrate the high stakes of online harassment as it affects women personally and politically, individually and collectively.

Twitter as a Hub for Diverse Feminist Action

In recent years, we’ve seen how Twitter can function in ways that are akin to the consciousness-raising groups central to the women’s liberation movement in the 1960s and 70s. These groups worked to bring more people to feminism and increase the public consciousness about women’s issues by creating a space for women to share their stories and analyze personal experiences in relation to gendered oppression. The discussions that took place in consciousness-raising groups acted as a precursor to direct action “in an effort to change social conditions” (Rhodes, 2005, p. 35), often taking the form of collaboratively composed public writing meant to make the concerns of the feminist movement more visible to the general public (Rhodes, 2005, p. 26). Compare this kind of public writing to that which exists on Twitter and we see how feminist spaces on the platform, like certain hashtags, act as a place where personal expression happens at the same time that public consciousness is raised. For instance, the hashtag #ShoutYourAbortion was started by Amelia Bonow in response to legislation passed in the House of Representatives to suspend federal funding to Planned Parenthood, the U.S.’s largest abortion provider. Bonow posted the story of her abortion with the tag, and #ShoutYourAbortion quickly turned into a space where women shared their own abortion stories, working to fight the stigma attached to abortion (Syfret, 2015). The tag went viral within 24 hours of its first use, drawing larger public attention to the harmful effects of abortion stigma. Another example of one of the most widely used and sustained feminist hashtags is #YesAllWomen, a tag that surfaced after Elliot O. Rodger killed six people and injured fourteen before committing suicide on the University of California, Santa Barbara campus in May of 2014. Rodger left behind a manifesto and digital footprint that detailed his misogynistic and racist ideologies. In the years leading up to the attack, Rodger frequently posted videos online describing his deep-seated misogyny (Lovett & Nagourney, 2014), and his manifesto outlined a proposed “War on Women,” which involved putting women in concentration camps and starving them to death “for the crime of depriving me of sex” (Alcindor & Welch, 2014). One of the primary targets in his attack was a sorority house on UCSB’s campus, as sororities, Rodger said, “represent everything I hate in the female gender” (Beekman, 2014). This sparked a larger conversation about rape culture and the harmful effects of toxic masculinity, and hence the tag #YesAllWomen was born. The tag’s grammatical construction is in reference to “not all men,” a phrase often relied on in conversations about sexism and toxic masculinity to point out that in fact not all men are sexist or represent toxic masculinity. Of course, as many have noted, “not all men” acts as a distraction from the systemic issue at hand by redirecting attention to the men who aren’t behaving in sexist and misogynistic ways rather than to those who are. As Kelsey McKinney explains, “When a man (though, of course, not all men) butts into a conversation about a feminist issue to remind the speaker that ‘not all men’ do something, they derail what could be a productive conversation. Instead of contributing to the dialogue, they become the center of it, excluding themselves from any responsibility or blame” (2014, n.p.). “Not all men” eventually became an internet meme, the origins of which are difficult to trace, but Shafiqah Hudson is often credited as the originator of its first viral use (McKinney, 2014). On February 20th, 2013, she tweeted, “ME: Men and boys are socially instructed to not listen to us. They are taught to interrupt us when we- RANDOM MAN: Excuse me. Not ALL men.”36 Part of the problem, Hudson says, is that using “not all men” as a qualifier “undermines your argument and recenters [men’s] feelings as the central part of the dialogue” about sexism and misogyny (McKinney, 2014). The conversation surrounding the UCSB killings turned towards how “some of the attitudes toward women expressed by the gunman [...] reflect some views that are echoed in the mainstream culture,” causing men to start using #NotAllMen, a tag “used to argue that men should not be universally portrayed as sexist aggressors” (Medina, 2014).

36 Hudson, S. [@sassycrass]. (2013, February 20). Retrieved from https://twitter.com/sassycrass/status/304432121223716864

In response, #YesAllWomen was started by a woman who had to delete her Twitter account due to persistent harassment for inventing the tag (Medina, 2014). Despite the pushback from sexist online harassers and #NotAllMen supporters, #YesAllWomen was tweeted an estimated 1.2 million times in the four days following the attack (Grinberg, 2014). Women used the tag to share their range of experiences with sexism and misogyny, arguing that yes, all women are affected by rape culture and by the pervasive attitudes about women that were reflected in Rodger’s manifesto. Similar to #YesAllWomen, the tag #MenCallMeThings was also started in order to raise public consciousness about pervasive cultural sexism and misogyny. First tweeted by journalist and feminist blogger Sady Doyle, #MenCallMeThings was used to discuss sexist abuse women writers face online (Gibson, 2011). In her study of the tag, Jessica Megarry (2014) observed,

[While women] did not specifically refer to the hashtag conversation as a form of consciousness raising, the discussion nevertheless displayed a sense of inclusivity, the breaking down of barriers between private suffering and public political relevance, and the ability to formulate a commonality between experiences that was characteristic of the consciousness raising groups of second wave feminism. (p. 51)

In other words, the tag brought women together while also bringing their stories to the fore of a larger public discourse. The consciousness-raising groups that Megarry references here were not without problems, as Jacqueline Rhodes (2005) points out. Feminist activists regularly disagreed about the “content and purpose within discussion groups” (p. 26), leading to varied approaches to the practice. Furthermore, these were generally spaces made up of “mostly white, mostly middle-class women” (Rhodes, 2005, p. 35), meaning the concerns of these identities were usually the focus, and this phenomenon, in many ways, has translated onto Twitter. In 2013, Courtney Martin and Vanessa Valenti of Barnard College’s Center for Research on Women published what would become known as the #FemFuture Report, a look at how feminism is and can be enacted and sustained online. The report was written after a diverse group of women engaged in online feminist practices gathered to talk about the issues they face in their work. In a section titled “What is Online Feminism,” Martin and Valenti note that the consciousness-raising groups of the 1960s are now online, only “instead of a living room of 8-10 women, it’s an online network of thousands” (p. 6). They go on to outline what they see to be the most important aspects of online feminism and how the internet can be used for feminist means. Once the report was released, it immediately received backlash from the online feminist community via the tag #FemFuture for “what appears to be U.S.-centric, mainstream, feminist and historical erasure of radical women of color spaces and communities” (Loza, 2014, n.p.). While the initial gathering of women to discuss the future of online feminism was made up of “a racially diverse group of feminists engaged online, a fact mentioned often to defend the report as inclusive,” #FemFuture ultimately focused on “the vision of Martin and Valenti” (Daniels, 2016, p. 22), both of whom are white.

By centralizing white women and white feminism in the writing of and content within the report, #FemFuture revealed that longstanding fissures along axes of race within feminist movements remain in the new digital era. Part of why the #FemFuture report’s inattention to race is so troubling is because digital platforms, such as Twitter, have introduced new potentials to feminist movements in the form of increased visibility and amplification mechanisms for women of color, who may be excluded in other arenas. Twitter specifically has been a uniquely diverse environment since the platform launched in 2006—it is culturally more diverse than the U.S. population (Papacharissi, 2014, p. 94), with proportionally more African-American and Latino users than white users (Duggan & Brenner, 2013, p. 4), and has “a more equal distribution” of races among users than other social networks such as Facebook and Pinterest (Krogstad, 2015). Racial and ethnic diversity contributes to the affordances that Twitter provides to women of color, who, as pointed out by feminist activist Mikki Kendall, achieve increased visibility through the connection to larger audiences by way of hashtags, leading to amplification of diverse feminist voices online (qtd. in Tobin, 2013, n.p.). Mychal Denzel Smith writes that the use of hashtags for feminist ends brings us to “a critical moment of community building among women of color that speaks to an unfortunate truth. Across movements, women of color are still being silenced, their concerns are going unaddressed, and their work is being policed in a way that leaves them with Twitter hashtags as the most visible means of fighting back” (2013, n.p.). Smith’s point is an important one: hashtags provide the necessary infrastructure that women of color need to both build community and draw attention to intersectional issues online. One such issue is the harmfulness of white feminism, exemplified in the #FemFuture report. The tag #SolidarityIsForWhiteWomen, created by Mikki Kendall, helped draw attention to, critique, and start a dialogue about the harmful effects of white feminism. Kendall created the tag during what was described by NPR as “the digital self-immolation of Hugo Schwyzer,” a prolific blogger and self-identified feminist who admitted to deliberately attacking other bloggers, primarily women of color, for critiquing his work. Kendall’s tag was meant to convey her frustration that Schwyzer was given an outlet to berate women of color through feminist blogs run by white women. #SolidarityIsForWhiteWomen turned into “an impassioned debate about the continued exclusion of [women of color] from mainstream feminism” (Loza, 2014, n.p.) and white feminism’s frequent appeal to an illusory solidarity. The tag was tweeted over 75,000 times in just four days (Loza, 2014, n.p.), enough for it to appear on the trending topics list on Twitter, and the tag’s longevity is evidenced by its continued use today. Access to spaces where women and other marginalized groups can build support networks and draw greater attention to the issues that concern them the most is essential, and the Federal Communications Commission’s vote on December 14, 2017 to repeal net neutrality protections is predicted to significantly impede that access. Commissioner Mignon Clyburn, one of two women on the commission, both of whom cast the only votes to protect net neutrality, noted in her dissent why access to platforms is critical, particularly for people of color to build networks of support and activism. She wrote,

Particularly damning is what today’s repeal will mean for marginalized groups, like communities of color, that rely on platforms like the internet to communicate, because traditional outlets do not consider their issues or concerns, worthy of any coverage. It was through social media that the world first heard about Ferguson, Missouri, because legacy news outlets did not consider it important until the hashtag started trending. It has been through online video services, that targeted entertainment has thrived, where stories are finally being told because those same programming were repeatedly rejected by mainstream distribution and media outlets. And it has been through secure messaging platforms, where activists have communicated and organized for justice without gatekeepers with differing opinions blocking them. (qtd. in Lecher, 2017, n.p.)

In other words, platforms can be used to fill in the holes left by mainstream culture that often favors hegemonic perspectives and identities. The instances Clyburn cited in her dissent point not just to how intersectional networks of support and activism can rise and sustain on platforms, but also to the frequency with which marginalized groups turn to these platforms when they aren’t being heard elsewhere. The case of #SolidarityIsForWhiteWomen and other such movements points to how Twitter’s affordances as a platform make it useful for feminist discourse and action. Kitsy Dixon (2014) says Twitter allows women to amplify and share narratives, circulate educational resources, and support one another on a global scale, which can expose users to diverse feminisms and activisms, changing the nature of feminism itself (p. 39). This exposure to diverse modes of feminist thought and action is a key benefit of Twitter that many of the people who responded to my survey made note of in their answers to the question, “If you identify as a feminist, in what ways, if any, has Twitter influenced your feminism?” One man, who identified himself as white and heterosexual, responded,

Twitter has greatly increased my exposure to wider variety of feminist (and anti-racist, pro-LGBT, etc) thought and viewpoints, and in doing so has deepened and expanded my own thinking on feminism (and racism). That is, I was raised to believe in equality, but only in recent years (say 5, and I'm 35) I have been more aware of all the ways inequality is manifested in our society and thought more deeply about what believing in equality really entails. What I've seen via Twitter has been a big part of that.

He suggests that the exposure to multiple viewpoints through Twitter specifically is what has helped him to confront more fully issues of inequality, and perhaps these kinds of opportunities are scarce for him elsewhere. For some, their upbringing was void of much diversity, and Twitter puts them in touch with identities and viewpoints they may not have opportunities to interact with offline. For example, one white woman noted that growing up in a small coastal town limited her “exposure” to racial diversity. She said that her feminism “has definitely become more intersectional as a result of Twitter,” because it’s a space where people can share “their lived experience.” Other women noted, “It has broadened my worldview,” “I have been provided a range of perspectives on politics, social issues, and social movements that I did not get from other social media or news sources,” “It's made me more aware of diverse and intersectional feminist voices,” and that Twitter “will bring topics to my attention that I might have overlooked, specifically when it comes to intersectionality (since I'm a cis white woman).” These results point to a key benefit of the platform: exposure to diverse viewpoints, if used in ways that allow for such exposure. Given how many white women responded to the survey, I can’t help but wonder how social media like Twitter influences white women’s awareness of intersectional issues, and in turn, can be used to counteract the whitewashing problem consciousness-raising groups, both on and offline, have had in the past. Some women who responded to this question cited specific intersectional Twitter spaces and offline movements as having an influence on their feminism. For example, one woman, who self-identified as queer and white, said that “Black Twitter has influenced my feminism because it gives me a greater sense of conversations that people are having who aren't specifically in my geographic area/discipline/friend group.” Another woman, who identified as white, said that Twitter makes her “more aware of contemporary intersectional feminist issues” in ways that she can’t understand by reading about them in books. She said that she’s currently attending college, and while she has learned about intersectionality in her courses at school, she’s never been “confronted with these issues” in her daily life. Twitter, however, allows her to be “more aware of the Black Lives Matter movement and other movements in marginalized communities (like people with disabilities) and the specific issues facing their communities right now.” This kind of awareness is the first step in inspiring women, especially white women, to get involved. One woman, who identified as heterosexual and white, said that Twitter keys her into local grassroots movements in her area, making it easier for her to “participate in movements to help dismantle misogyny, racism, Trans misogyny, misogynoir, and men's hate groups.” Greater consciousness of intersectional issues can reveal to white women the ways that white feminism functions as a gatekeeping mechanism and is harmful to social progress. And Twitter helps some to realize that. Another white woman noted that Twitter revealed to her the fractures within feminism and how those fractures operate along axes of race. She wrote, “Watching the way that wealthy white feminists use feminism as a shield to hide from criticism has given me significant pause when considering how I interact with feminism on a daily basis.” Other women noted that exposure to intersectional issues on Twitter has caused them to examine their own practices and reorient themselves to better reflect an intersectional approach to feminism. For instance, one woman, who identified as bisexual and white, noted that being exposed to a larger, more diverse group of people has helped her “to be more aware and conscious of my own practices.” Being cued into diverse feminist communities and issues on Twitter has also helped reveal to some the prevalence of systemic aversions towards feminism.
For instance, one woman who identified herself as white and Asian/Pacific Islander explained that Twitter has increased her awareness “of men's opinions about women who identify as feminists. It's shocking to me that people would go as far as they do to defend their dominant role in society.” Another woman, who identified herself as heterosexual and white, responded that seeing sexism and misogyny on Twitter has discouraged her from visibly engaging in feminism, both online and off, “because I never knew how much men hate and disrespect women until Twitter. I know, of course, it existed,” she wrote, “but not to this extent. Same with racism.” For others, seeing this misogyny and racism enacted online has renewed their belief that feminism helps to combat these acts, as evidenced by a response in which a woman noted that Twitter has reinforced “my view that sexism and and body shaming is rampant and needs more people standing up against it all.” Another woman, who self-identified as Jewish and bisexual, said that while our culture has made some strides towards gender equality, Twitter “has made very clear how widespread misogyny still is.” Another respondent who identified as a heterosexual white male noted, “It's very easy to forget that there really are some mean and spiteful men out there who will almost instantly call a woman some very vile things.” Twitter, however, despite all of the ways in which it can be good for intersectional feminist practice, reminds us that sexist harassers are there watching and waiting.

Proliferation of Abuse Through Twitter Policy & Design

Harassment infiltrates Twitter in a variety of manifestations. And yet, the platform has consistently made design and policy decisions that devalue the safety of users, evidenced not only by their lack of action to implement changes that decrease harassment but also by changes they make that increase the abusive potential of the platform’s features. For example, in 2017 Twitter made a sudden change related to “lists,” a feature that allows users to curate lists of other users and share those lists publicly. Ostensibly, this is useful for users who want to group their family independently from, say, coworkers, or for users to be able to organize accounts they follow by topic or interest. Harassers, however, use the feature to create lists of people they want to track or target that they can easily share with broader harassment networks, making mass abuse easy and organized. Usually, harassment lists have some sort of threatening title, so the harassment is oftentimes executed in the simple act of adding someone, who is then notified that they’ve been added to this list with a menacing name. Twitter, in a move that indicates ignorance of how harassment is carried out and circulates on the platform, decided to eliminate the function that notifies a user when they have been added to a list, suggesting that it’s the notifications that are a problem for users, not the abusive lists themselves. This change resulted in immediate backlash from everyday users and big names in the tech world, causing Twitter to reverse their decision the same day they implemented the change (Perez, 2017).

Even more recently, Twitter responded to the longstanding association that its default profile picture, an egg, has with harassment by replacing it with a silhouette akin to Facebook’s default profile picture (Matsakis, 2017). One of the reasons they gave for making the change is that the association between the egg and harassment accounts “isn't fair to people who are still new to Twitter and haven't yet personalized their profile photo” (“Rethinking Our Default…,” 2017), in yet another example of the company failing to make decisions that address the systemic causes of harassment. While these seemingly under-informed decisions about the connections between interface design and user experience baffle users who very clearly understand what sort of platform and policy changes need to take place to curb harassment, most don’t account for how it is in Twitter’s best interest financially to sustain abuse cultures and structures because of how many users they would lose if they cracked down on harassment in a serious way. The company has yet to become profitable since its inception in 2006 (Goldman, 2016), and most of that profitability relies on a growing user base (McArdle, 2015), something Twitter has struggled to achieve over its 12+ years of existence. Further, new research suggests that a significant number of active accounts aren’t actually people but instead are bots. Researchers at Indiana University and the University of Southern California found that as many as 15%, or 48 million, of active Twitter accounts are bots,37 defined as “accounts controlled by software, algorithmically generating content and establishing interactions” (Varol et al., 2017, p. 1). Bots can be a positive presence on the platform, such as those designed to send out emergency disaster alerts with people’s safety in mind or, in the case of Kevin Munger (2017), who designed bots to respond to racist language, in ways aimed at changing a user’s behavior for the better. They even have pedagogical applications for students wishing to engage in activist rhetorics (Lussos, 2018). But we’ve also seen how bots can have malevolent implementations. Evidence suggests, for instance, that bots were used to influence the outcome of the United States’ 2016 presidential election (Markoff, 2016), and Varol et al. (2017) note bots can “emulate human behavior” for a variety of reasons, including recruitment for terrorist organizations, stock market manipulation, and the circulation of dangerous conspiracy theories (p. 1). Bots have strong associations with harassment. For instance, Guilbeault and Woolley’s (2016) analysis of how bots were used throughout the 2016 presidential election found that they altered the democratic process through harassment in that they would “silence people and groups who might otherwise have a stake in a conversation. [...] This spiral of silence results in less discussion and diversity in politics. Moreover, bots used to attack journalists might cause them to stop reporting on important issues because they fear retribution and harassment” (n.p.). With the rise of bots as harassment tools, anti-harassment organizations and entities have started using bots themselves to fight back (Adee, 2016), targeting harassment accounts in order to distract them from human targets.

37 For practical tips on how to decipher whether or not a Twitter account is a human user or a bot, visit user @RVAwonk’s multi-tweet thread here: https://twitter.com/RVAwonk/status/853804561726918656

Needless to say, mass suspensions or bans of accounts associated with the practice of harassment, human or bot, would lead to a significant decrease in Twitter’s active user numbers, thereby affecting the company’s monetary value. Twitter has, in the past, privately acknowledged that their failure to deal with harassment from a functionality and policy standpoint also results in a loss of users. In a 2015 memo to Twitter employees, then-CEO Dick Costolo wrote, “We suck at dealing with abuse and trolls on the platform and we've sucked at it for years. It's no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day.” He went on to say he was “ashamed” at how poorly they handle harassment and pledged, “We're going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them” (Tiku & Newton, 2015). Within four months of this memo’s circulation, Costolo was replaced as CEO by Twitter co-founder Jack Dorsey. Dorsey has spoken about Twitter’s harassment problem, and he publicly defended the decision to ban Milo Yiannopoulos38 in July of 2016 after Yiannopoulos orchestrated a targeted abuse campaign against actress Leslie Jones (see chapter one). Dorsey said, “At its best, the nature of our platform empowers people to reach across divides and build connections to share ideas and to challenge accepted norms. As part of that, we hope—and we also recognize that it's a high hope—to elevate civil discourse.” However, Dorsey noted, “Abuse is not part of civil discourse. It shuts down conversation and prevents us from understanding each other. Freedom of expression means little if we allow voices to be silenced because of fear of harassment if they speak up. No one deserves to be the target of abuse online and it has no place on Twitter” (Wang, 2016, n.p.). Dorsey said dealing with harassment would become a top priority for the company over the course of the following year. Ten months later, in another interview, Dorsey admitted that despite making harassment intervention a priority, the company had made very little headway. He explained, “We recognized that the very nature of the product was giving unfair advantage to people who wanted to harass. So we needed to change the product experience. We made it a priority last year, but to be very frank and honest, we only shipped one meaningful thing all year. So our progress is not something that we are proud of.” When asked why that happened, Dorsey acknowledged that they had developed systems that “put a lot of burden on the victim instead of taking the burden upon ourselves. So we learned a bunch in that past year around how slow we were, and we just completely shifted our mindset.” But he never gave a concrete answer as to how that shifted mindset would lead to more direct intervention in the growing harassment cultures on the platform. He said,

There’s not going to be an endpoint where we can say we’re done. But the progress we’ve made in the past few months has just been phenomenal. It just took a mindset shift, and we had to go through that year of really learning that and the previous years before that. We didn’t prioritize it in the right way, but now we have. So I feel like we have a real strong handle on what it is and, most importantly, how to bring it into a steady state instead of it being an emergency state. (Levy, 2017, n.p.)

38 While it’s difficult to prove, there has been widespread speculation that despite being banned from the platform, Yiannopoulos still maintains accounts on Twitter through aliases and anonymized accounts (Ennis, 2017).

The very next question of the interview asked Dorsey how Twitter planned to grow its user base, and despite having just said they now have the “shifted mindset” to prioritize dealing with harassment, he makes no mention of how harassment relates to the growth (or decline) of Twitter’s user base. He makes no mention of any anti-harassment plans in the works. He makes no mention of harassment at all for the remainder of the interview. In 2015, Twitter did take steps to better assess how harassment affects the platform by partnering with Women, Action & Media (WAM), a nonprofit focused on enhancing gender equality in the media, which collected data on myriad aspects of harassment on the platform (types, reporting practices, Twitter’s responses, etc.). WAM’s analysis notes that “while Twitter is a platform that allows for tens of millions of people to express themselves, the prevalence of harassment and abuse, especially targeting women, women of color, and gender non-conforming people, severely limits whose voices are elevated and heard” (Matias et al., 2015, n.p.). The WAM report notes that diversification of Twitter’s leadership would be an important first step in untangling the harassment issue (Matias et al., 2015), given that Twitter’s leadership is overwhelmingly white and male, two categories that WAM’s data found aren’t as seriously affected by harassment on the platform as others. In 2014, the year before the WAM study, Twitter’s leadership team was 72% white and 79% male, and the technology team was 90% male (Ferenstein, 2014). This reflects a standard in corporate leadership demographics. Among Fortune 500 companies, for example, 80% of executive and senior officials or managers are men, 72% of whom are white men. The gender gap widens even further among CEOs, 93.6% of whom are men (Jones, 2017). Technology companies specifically have a reputation for sexist workplace cultures, as “sexism and sexual harassment appears to be systemic in the tech industry, which has made headlines repeatedly for workplace issues involving mistreatment of women or outright sexual harassment” (May, 2017). Many are quick to point out the undeniable relationship between sexist culture at technology companies and the gender gap in who they have as investors and on their leadership teams (Benner, 2017). Twitter set diversification goals for 2016 which, in many cases, they surpassed. The overall number of underrepresented minorities39 at the company rose from 10% to 11%, and underrepresented minorities in leadership positions rose significantly from 0% to 6%. The overall number of women working at the company rose from 34% to 37%, with women in leadership positions rising from 22% to 30% (Siminoff, 2017). Twitter also, in 2016, started collecting data from new hires regarding sexual orientation and gender identity, which they hadn’t done before.

39 Defined as Black/African-American, Hispanic/Latinx, and Multiracial.

Jeffrey Siminoff, then Vice President of Inclusion and Diversity,40 noted, “of employees answering, 10% identified as LGBTQ. As more employees respond in 2017, we expect to have a more complete picture” (Siminoff, 2017, n.p.). Twitter set new goals for 2017 to continue to grow the number of women and underrepresented minorities at the company,41 and while the data they published shows small growth for both women and underrepresented minorities (Castleberry-Singleton, 2018), an investigation done by Recode into the company’s methods for measuring these numbers found some discrepancies, such as the fact that a portion of the percentage classified as underrepresented minorities includes people who declined to identify their ethnicity on the survey (Wagner & Molla, 2018). When Recode presented Twitter with their findings, Twitter issued an apology and took the data visualization down. Of their investigation, reporters Kurt Wagner and Rani Molla write, “Twitter is clearly trying to diversify its workforce and touted internal groups and programs for minority employees, as well as efforts to reach underrepresented minorities still in college as ways it’s trying to diversify. But having consistent, transparent numbers should be part of that process” (2018, n.p.). Indeed, it’s obvious that Twitter heeds the recommendations of WAM in their efforts to diversify the company, but blind spots in their leadership still seem to prevent them from fully understanding how harassment functions and circulates on the platform. The WAM report also made recommendations that Twitter more clearly define “what constitutes online harassment and abuse,” based on findings that 19% of reports of abuse submitted to Twitter were more complex than the categories Twitter offered. Further, they recommend that Twitter do more to acknowledge how harassment is experienced and enacted and use that understanding to make changes. For instance, they should redesign the reporting interface using trauma-response design methods and “develop new policies which recognize and address current methods that harassers use to manipulate and evade Twitter’s evidence requirements.” Perhaps most important, WAM recommends that Twitter do more to “hold online abusers accountable for the gravity of their actions,” particularly in light of findings that a significant portion of users who experienced the behaviors listed in Twitter’s Abusive Behavior Policy went through the proper reporting channels with little to no resolution (Matias et al., 2015, n.p.). Whether Twitter’s failure to reasonably resolve these reports is because of the sheer volume of reports, a lack of appropriate processes and guidelines, or because Twitter simply doesn’t care, the fact remains that abusive behaviors are governing much of what happens on the platform.42

40 Siminoff only served as the Vice President of Inclusion and Diversity at Twitter for a short time. He was brought on in 2015 to improve racial and gender diversity at the company, but his hiring was met with some skepticism, as he himself is a white man. He departed the company after only a year and three months, leaving shortly after the 2016 diversity report was published. Siminoff was replaced by Candi Castleberry Singleton, who remains the VP of Inclusion and Diversity as of March 2018.

41 Specifically, they set goals to increase the overall number of women from 37% to 38%, the overall number of underrepresented minorities from 11% to 13%, the number of women in leadership positions from 30% to 31%, and the number of underrepresented minorities in leadership positions from 6% to 8% (Siminoff, 2017).

Twitter has updated their Abusive Behavior Policy in recent years, but the language is still vague enough that it’s difficult to prove certain tweets as “abusive.” The policy states, “we do not tolerate behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user’s voice” (“The Twitter Rules,” 2018, n.p.). The policy goes on to identify four broad categories that organize how Twitter understands abuse:

● violence and physical harm,
● abuse and hateful conduct,
● private information and intimate media,
● and impersonation.

This is a more streamlined version of the previous policy, which contained seven classifications of abusive behavior: violent threats, harassment, hateful conduct, multiple account abuse, publishing others’ private information, impersonation, and self-harm. Collapsing some of these together in the new version of the policy means many of these categories link to more detailed sub-policies, which flesh out specifics about what constitutes abuse that is likely to result in temporary or permanent suspension from the platform. These specifics are in contrast to the previous version of the policy, which left most categories undefined or described in just a single sentence. The current version of the hateful conduct policy, for instance, explains the policy, provides examples of what forms violations might take, and describes how the enforcement of the policy works. It states,

You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories. (“Hateful Conduct Policy,” 2018, n.p.)

With regards to enforcement, the policy notes two defining criteria: “context matters,” and “we focus on behavior.” The information about context is a bit vague and seems to, at first, protect abusers but then focuses more on the safety of victims. It reads,

Some Tweets may seem to be abusive when viewed in isolation, but may not be when viewed in the context of a larger conversation. While we accept reports of violations from anyone, sometimes we also need to hear directly from the target to ensure that we have proper context. (“Hateful Conduct Policy,” 2018, n.p.)

By indicating that some tweets that are reported only seem abusive, the platform suggests that those who report this kind of activity have incorrectly interpreted the tweets as hateful or abusive. On one hand, it’s good that they pledge to investigate a claim fully rather than viewing abuse in isolation, but on the other, the phrasing of this information seems to err on the side of protecting hate.

42 More specifics about Twitter’s reporting process are discussed in chapter five.

Of their focus on “behavior,” the policy states, “We enforce policies when someone reports behavior that is abusive and targets an entire protected group and/or individuals who may be members. This targeting can happen in any manner (for example, @mentions, tagging a photo, and more)” (“Hateful Conduct Policy,” 2018, n.p.). As a reader, I’m confused as to what “behavior” is being juxtaposed with in their assertion that they focus solely on behavior. Behavior, to me, is about conduct, but it’s also about patterns of conduct. Potentially, their focus on “behavior” allows them to ignore or take less seriously conduct that isn’t specifically targeted or is enacted by someone whose entire Twitter ethos, on a cursory look, isn’t devoted to abusive behavior. While Twitter has taken steps to alter and add language to their harassment policies, they still muddy the waters of who should be reprimanded for abusive behavior and to what extent. The incredible potential for diverse feminist uses of Twitter is complicated by abusive cultures and behaviors, and the feminist potential is rendered unachievable when sexist online harassment impedes access to the feminist spaces and discourses that do exist on Twitter. The two women I highlight in the remainder of this chapter, Tracy and Kate, both grapple with the tension that exists between the feminist potentials of Twitter and the visceral experience of sexist online harassment. Before I share their stories, I’ll discuss some of the background of how I came to meet these women, and I’ll further explain the methods I used to interview them in ways that valued conversation and story-sharing.

Methods for Dialogic, Collaborative Interviews

Pamela Takayoshi (1994) argues that there’s a significant gap between theory and the “reality [women] live” (p. 25), meaning we haven’t done enough to actually talk to and with women about their realities. This is certainly true for harassment research, and therefore, I want to use interviews as a means to document their stories. Of the 77 survey respondents, 15 agreed to receive more information about a follow-up interview. Of those 15, 11 agreed to be interviewed. Ultimately, I interviewed eight of those women (three dropped out for various reasons). Interviews were conducted either face to face, on the phone, or via email, depending on the participant’s preference. Initially, I was hesitant to incorporate email interviews out of concern that opportunities for follow-up questions and organic conversation would be hindered by the nature of asynchronous email communication. However, many of my participants noted that email exchanges would be the most convenient because of time constraints, and two expressed that they were hesitant to give out their phone numbers. In the interest of making my participants as comfortable as possible, I allowed email interviews to take place. Of the eight interviews, one was conducted in person, three were conducted over the phone, and four were conducted via email. In the case of in-person and phone interviews, the conversations were recorded and later transcribed by me. In the case of email interviews, the email chains were downloaded together so as to maintain continuity of our conversation. I took a grounded theory approach (Charmaz, 2008; Glaser & Strauss, 2017) to analyzing these transcriptions, which allowed me to identify emerging themes as I reviewed (and re-reviewed and re-reviewed) the interviews. The presentation of the data in both this chapter and the next is organized around those themes. All of the interviews were semi-structured around three main topics or questions. First, I prioritized giving participants an open-ended space to elaborate on anything related to the scope of the study, mainly any personal experiences they have had with online harassment. Second, I asked participants how harassment, either experiencing it or knowing that it's a problem, has impacted or altered how they use social media. And third, I asked whether or not harassment affects them emotionally and/or physically, acknowledging to participants that this is a sticky dichotomy, as emotional and physical well-being are so closely intertwined. During in-person and phone interviews, these main questions were generally followed up with other more specific questions geared towards the participant’s answers. Obviously, a back and forth dialogue was easier to achieve in phone and in-person interviews. For the interviews conducted via email, I noted at the outset that participants were welcome to ask questions or deviate from the topics I asked them to speak to. I also noted that I might reach out with follow-up questions after receiving their initial responses. For all of the interviews, regardless of how they were conducted, it was important to me that I leave the door open for further conversation beyond the one-hour time commitment, should the participant want to keep sharing. I concluded every interview by letting the participant know that she could reach out to me afterwards if anything else came to mind that she wanted to say. My interview style was influenced by Selfe and Hawisher’s (2012) approach, which notes that conventionally structured interviews have significant limitations, especially when it comes to feminist research. They maintain that less structure allows for a more reciprocal conversation where participants help to shape the research questions. This approach gives more agency to the participants in how they’re represented through the research, a crucial aspect of my dissertation given the personal and sensitive nature of harassment. Further, Patricia Lather (1988) notes “interviews conducted in an interactive, dialogic manner that entails self-disclosure on the part of the researcher foster a sense of collaboration” (p. 574). Creating an environment where the women that I interviewed felt safe was important to me, and I tried to do that through being transparent about my own experiences with online harassment. The dialogic nature of the interviews helped establish a reciprocal conversation, whereby participants became active shapers of the questions and the conversations could unfold more organically, tailored to the nature of the conversation with any one participant, honoring their individual experience. I was also inspired by Jacqueline Jones Royster and Gesa Kirsch (2012), who note that listening is an important part of what they deem “strategic contemplation.” Strategic contemplation, they write, “involves engaging in a dialogue, in an exchange with the women who are our rhetorical subjects, even if only imaginatively, in understanding their words, their visions, their priorities” (p. 21).

visions, their priorities” (p. 21). In their ethnography Troubling the Angels: Women Living with HIV/AIDS (1997), Patricia Lather and Christine Smithies describe a methodology of person-based research in which researchers should become comfortable with “both getting out of the way and getting in the way” (p. xiv). By this they mean that while sometimes it's important for us to participate along with our participants, we have to recognize when it’s time to get out of the way and simply listen to the stories being told. To that end, in writing about these conversations, I rely on quoting participants directly more often than paraphrasing or summarizing. I want these women to tell their own stories in their own words, and therefore, I try to centralize their voices as much as I can. All of the women I discuss are identified by pseudonyms and were given the option to select their own; in the event that they didn’t want to, I randomly assigned one to them. Further, once I drafted a write-up of the interviews, participants were sent copies and invited to make amendments or suggest changes.

Tracy’s Story: Feminist Action, Volatile Visibility, and Harassment

“So, I had an abortion when I was nineteen,” Tracy says. This is the first thing she says to me in our interview. “I’ve always used my experience as a way to reach other women and help them and make other women’s lives... not shit. That’s my passion now, repro justice.”43 Tracy says she wanted to participate in my research because she wants more people to know about what feminist activists go through when they’re harassed as severely and consistently as she has been. Tracy, like many activists, has a lot of passion and energy for what she does, but she also struggles with the emotional labor that such work requires. For Tracy, this labor is compounded by the harassment she experiences on a regular basis. “I’ve always written about [my abortion] and have been outspoken about it. And my car was egged shortly after posting something about it on social media,” she says. I ask her if she believes the egging was a result of having the abortion. “Yeah,” she nods before adding, “and not being apologetic about it.” Tracy pauses, thinking back on the occurrence before saying, “I mean…” She pauses again. “It was also…” Another long pause. “People wrote ‘killer’ on my car with those window pens, and it happened one time, and then that didn’t happen again, but I would get all kinds of threatening, you know, people would tweet at me. Twitter is the worst for the trolls.”

43 “Repro” is an abbreviation of “reproductive.” The phrase “reproductive justice” was introduced at a 2003 conference put on by the SisterSong Women of Color Reproductive Justice Collective. It is different from other terms in the women’s health justice communities in that it’s “an intersectional theory emerging from the experiences of women of color whose multiple communities experience a complex set of reproductive oppressions. It is based on the understanding that the impacts of race, class, gender and sexual identity oppressions are not additive but integrative, producing this paradigm of intersectionality” (Ross, 2011, n.p.). Essentially, reproductive justice “links sexuality, health, and human rights to social justice movements by placing abortion and reproductive health issues in the larger context of the well-being and health of women, families and communities because reproductive justice seamlessly integrates those individual and group human rights particularly important to marginalized communities” (Ross, 2011, n.p.).

I ask her why she thinks that is, and without hesitation she says, “Hashtags. They troll the hashtags.” She explains, “You know, if you use #reprojustice, these anti-abortion advocates are trolling that hashtag to comment terrible things.” The surveillance and sometimes co-opting of hashtags functions as a way for harassers to weaken feminist communities and police gendered boundaries of public space. This is known as hashtag hijacking, the use of a hashtag for a purpose other than the one intended.44 While hashtag hijacking is often used for social protest,45 it can also be used as a tactic for sexist online harassment, as Tracy points out with #reprojustice. I tell Tracy about negative experiences I’ve had witnessing severe harassment on the #feminist and #feminism tags, which are frequently patrolled by Men’s Rights Activists and others hostile to the feminist movement who are looking for women to add to lists of harassment targets or to attack in sustained campaigns. These harassers hijack the tags to spread misinformation about feminism or degrade women. I was told very early on by a feminist activist I met through Twitter to be careful with how I tag my tweets using a public account because many feminist channels are surveilled and co-opted, and it’s difficult to know which ones without spending some time there. This is something Tracy learned the hard way. She tells me that it’s the hashtags that make Twitter “worse than Facebook” when it comes to harassment, and our conversation takes a turn to the functional differences among social platforms, namely Facebook and Twitter.46 I bring up the idea that the networks on Facebook and Twitter are different from one another in that, in my experience, people I was

44 Hashtag hijacking is closely related to the “bashtag,” which is “a hashtag whose original positive meaning has been appropriated by the public and is now used with a negative connotation” (Hayes, 2017, p. 119). A hashtag hijack, however, isn’t always negative. 45 Many of the most well-known cases of hashtag hijacking revolve around the infiltration of a marketing campaign in order to draw attention to consumer issues or unjust practices, as with #MyNYPD, a tag created by the New York Police Department in a public relations effort to show police officers interacting with community members in a positive light. This short-sighted attempt at community outreach backfired, and New Yorkers hijacked the hashtag to draw attention to the police brutality they or people they knew had suffered in New York. Tracey Hayes (2017) argues that in this case and others, hijacking a hashtag can act as a form of DIY protesting in that people can gather across space and time “without any gatekeepers determining an agenda” (p. 127). 46 I haven’t had a Facebook account since 2010 for a number of reasons, one of which being a general discomfort with men commenting on pictures of me, I tell Tracy. This was a frequent issue I had with MySpace, too, which I got rid of in 2007 for the same reason. I have always been sensitive about posting photos of myself online after someone lifted a picture of me off of Facebook in 2008 and used it as the basis of a meme that circulated among the network of undergraduates I was going to college with at the time. Another experience, too, left me with a general distrust not just of my lack of privacy on Facebook but of how others use content on it. As a film studies major in my undergraduate work, for example, my media ethics professor, in a grand gesture to teach the class a lesson about privacy, would find embarrassing pictures from students’ Facebook accounts and pepper them into his PowerPoint presentations. Most pictures were of people drinking or out at bars, but the one he used of me was a simple image of myself and two of my girlfriends from when I was studying abroad in Australia. It was a relatively innocuous picture—we were standing in the kitchen of my apartment on a sunny afternoon before heading to the beach, all three of us smiling and making the peace sign with our fingers. There was nothing scandalous about it except for the fact that we were in bikini tops and shorts. At the time, I didn’t understand this, but in retrospect I realize his use of that picture had the distinct flavor of slut-shaming, and as one of the few women in the class and the major, it felt invasive in a way that was, I imagine, different from what the men in the class felt seeing their pictures projected to the rest of the class.

friends with on Facebook were people I had met offline, which isn’t often the case for me with Twitter. “Do you think that has an influence on the way that harassment is circulated or perpetrated?” I ask Tracy. She nods vigorously and responds, “Oh yeah. I feel like I get less [harassment] on Facebook because I do know most of the people on my Facebook.” She goes on to explain that she uses each platform for a different purpose. Facebook, Tracy says, is mainly used to keep in touch with family and friends, especially since she has been living in the U.S. since grade school but is from another country. It helps her keep up with what’s going on with friends from her home nation. “Twitter is more for my activism,” she explains. “And because my Twitter is public, because it has to be if you’re going to do anything on it, really, anyone can say anything because they don’t know me.” It’s at this point in the conversation that Tracy brings up the idea that anonymity is explicitly linked to one’s propensity to harass. She says more often than not, her harassers on Twitter are anonymous, which she says is almost worse to experience. “They have that egg picture47 or some random name. There’s no way of knowing who they are.” Tracy makes an important point regarding anonymity and disinhibition. To reiterate the discussion from chapter one, anonymity has, if anything, a reputation for being linked to harassment behaviors, and it’s no wonder Tracy alluded to that when talking to me about her anonymous harassers on Twitter. She told me, “I think that cover of anonymity makes them feel like they can just say whatever they want. Because these things that people say, I just highly doubt would ever be said in person, because they’re that bad. I think people would say things like this in person, but the level that it gets online… it’s different.” When I ask Tracy how often she is harassed on Twitter, she notes that the more active she is, the more harassment she gets, but she also describes a correlation she’s observed between her visibility online and the amount of harassment she receives. Tracy has published articles about reproductive justice at big-name online outlets, and she says whenever an article comes out, it never fails that “people find me through that,” and the harassment is sustained across platforms and time. Her most recent article at the time of our conversation had been out for several weeks, but Tracy says she was still receiving “shitty tweets” in response to it as well as new comments on the article itself. She said, “Actually, this past week there were new comments on it. I read half of one, and I just didn’t read the rest because I started to not read comments on my online writing… just because it’s too much.” In addition to harassers finding Tracy through her articles, she mentioned that sometimes harassment was driven to her account through her use of feminist hashtags as well as by harassers surveilling other feminist accounts. For example, Tracy holds a leadership position in her state’s chapter of the National Organization for Women (NOW) and therefore follows all of NOW’s affiliated Twitter accounts. She said it’s not uncommon for people hostile to NOW’s mission to systematically harass all of the accounts who follow NOW.

47 At the time of our interview, eggs were still the default profile image on Twitter.

I ask her about the nature of the harassment she’s experienced and whether or not she’s ever been or felt physically threatened. She says that she has, and immediately the hashtag campaign #ShoutYourAbortion comes to her mind. Tracy shared her own abortion story using the tag the day #ShoutYourAbortion went viral, resulting in violent threats lobbed at her, something that happened to many women who used the tag, including its creator, Amelia Bonow, who was severely harassed and doxxed. In an interview the week after #ShoutYourAbortion went viral, Bonow said she was “inundated with threats and emotional terrorism,” before going on to say, “I’m scared. I mean, I do not actually think that an anti-life militia person is going to murder me—knock on wood!—but I’m not exactly overreacting,” because such threats “are the kinds of things that no human being should ever have to accept as collateral damage for being a woman who speaks publicly in a way that challenges male power.” Bonow said, “being doxxed and subsequently harassed has chilled me to the bone. I do not know if I will ever feel totally safe again, and this is a harsh toke for a woman living in a world that has never been a safe place for women, not even close” (qtd. in Davies, 2015, n.p.). Participating in #ShoutYourAbortion produced similar revelations for Tracy, who didn’t disclose the exact nature of the threats she experienced but did say they were violent and that she still, more than a year after her use of #ShoutYourAbortion, receives threats about it regularly. She says, “I used to get really upset about it and now I just don’t look at it as much. I just kind of force myself to numb myself to it because I don’t want it to stop me from doing the work.” It is incredibly important to Tracy that her feminist action persists, but that persistence can wane in the face of threats and harassment. She says, “I’m not gonna stop. They won’t stop trying to make me stop but I’m not going to stop. You just have to have really thick skin to do this work, and it’s not fair but it’s just another effect of the patriarchy and rape culture.” It becomes clear to me that Tracy’s #ShoutYourAbortion participation was a turning point in her activism. I ask her, “When you say that you used to get really upset about it, what effect did it have on you?” She replies, “It just made me feel like shit. I stopped doing the work for a while. I would get really depressed and second guess my own choices and my own subjectivity as a feminist. The things people say really… they are so hurtful and so personal that it’s hard not to let that affect my daily life.” There’s a bit of a pause here before Tracy returns to the story of having her abortion, telling me that it wasn’t having the abortion that was hard on her—it was the subsequent harassment that was the most difficult to deal with. She says all of the harassment she’s received online and offline following her abortion made her “really depressed,” and her depression was only made worse by her friends and family downplaying the seriousness of harassment. “Everyone was like, ‘why do you care what they’re saying, these random people, and letting it affect you so much?’ Everyone was like, ‘they’re not threatening you.’ And I would be like, ‘but they are though!’” So, it wasn’t just the harassment itself that contributed to the fallout Tracy experienced but the entire environment in which harassment exists.
People she trusts and loves had the attitude of “what’s the big deal,” making Tracy second-guess whether or not she was overreacting. I too have experienced this exact phenomenon, not just through people making

light of harassment I’ve experienced but also in how people react to the topic of this very dissertation. I can’t even count how many times I’ve told someone in casual conversation that I’m researching sexist online harassment and they tell me that women should just log off.48 I tell Tracy this and she makes the comparison between these reactions and rape culture, “because people are always just blaming the victim and trying to… they minimize the damaging effects of it.” She expresses frustration that more conservative family members in particular have a general orientation towards her feminist work that positions her as the aggressor. She says they’ll ask her questions like, “why can’t you just not talk about it? Why do you always have to start something?” Again, Tracy touches on platform differences and how readers of her Facebook are people she knows personally. She says when people comment on reproductive justice articles she posts on Facebook, “I’ll fight with them, because it’s not like they’re a random person. I know them personally. I’ve met them.” She says knowing someone personally is a powerful motivator for her to engage with their comments because, Tracy says, this might make it more likely she can change their mind. She brings up an instance where she was successful in changing someone’s mind about abortion: “One of my really good friends in my undergrad was brought up Catholic and was super against abortion but after we became friends and he saw the work I was doing, he kind of changed his mind.” This is why it’s important to Tracy that she remain outspoken in her feminist work on social media: it raises people’s awareness. “I know that people are always like, ‘oh, social media doesn’t do anything,’” she says, “but it really is powerful, especially within the feminist movement. So, yeah, I’ll keep arguing with people who feel the need to comment on my posts on my Facebook. I never seek a fight but if they start it, then I’m gonna call them out.” Tracy explains, “That’s why I really don’t like that rhetoric of saying ‘just ignore the trolls’ because then you’re just enabling them. It’s not easy to argue with people who are so hateful and sexist and misogynist, but sometimes you can kind of make a breakthrough there. And if not with them then with other people.” As she acknowledged throughout our interview, however, this doesn’t come without significant costs to her own emotional health. I was struck by Tracy’s assertion that she has forced herself to “become numb” to harassment in order to be able to continue her feminist activism. I ask, “How do you do that? Are you even cognizant of it? How are you able to make that shift?” Tracy pauses and thinks about my questions for a few beats before responding, “I feel like it’s not a healthy shift honestly because it’s kind of just me like…” And she stops to think again, this time for an almost uncomfortable amount of time. “Take the beating?” I ask, finishing her sentence for her. “Yeah, and just forcing myself to be numb to it. The more work I do, the less affected I am just because I care so much about the work so I’m not gonna let these trolls stop me from doing it.” She notes the significant “emotional drain” harassment inflicts on her, which affects her mood. She says sometimes after experiencing sexist online harassment, she gets “hateful” and won’t want to

48 The popular yet problematic advice we give to women about how to deal with online harassment will be discussed further in chapters four and five.

leave her house. She acknowledges that she’s on an upswing with her motivation to continue her feminist work, but says she has good days and bad days. Tracy tells me about some of the strategies she’s developed through trial and error to curb the depression she feels associated with harassment. She says, “Some days I just have no-social-media days to give myself a break. I’ve also turned off all of the notifications and alerts on my phone and on my computer, so the only way I get the notifications is if I click on it and look at it myself. The pop-up still comes up but it doesn’t vibrate or whatever. That way if I want to engage it I can, but if I don’t, I don’t.” I tell her that the vibration of my phone became triggering for me after having tweets about my research get retweeted 30 or 40 times. I would always worry, when I received multiple notifications in a row, that I was being doxxed. Tracy said she could relate. Tracy also notes that while there are harassers who surveil certain hashtags, as we talked about earlier, “there are some feminist groups who also troll those hashtags. So like, sometimes you’ll get a response to the [harassing] response from other feminists in the community berating the person who is trying to harass you.” She says this is a tactic she’s observed only recently, but she sees it more and more. “I feel like there is more awareness of how [harassment] happens and the frequency of how often it happens than there was in the past,” she says, which is encouraging to her. But she also says it’s her feeling that there’s nothing an individual can really do to curb the problem, “especially on Twitter,” and especially individuals doing feminist work. She says, “You can’t prevent it if you’re going to do this work. You just have to come to terms with that and make the decision if you can do that or not, and I feel like if you can’t, then that’s fine. It’s not every kind of person who can do it. You have to be thick-skinned.” Tracy is clearly action-oriented and relies on social media, particularly Twitter, to engage in discussion and promote feminist organizations, offline direct actions, and her own writing. The woman you’ll meet in the next section, Kate, is similar to Tracy in the sense that she also uses Twitter for feminist ends, largely in the form of participation in feminist discourse and consciousness-raising through the wide circulation of feminist stances on current events. While these two women use Twitter for slightly different ends, they’ve experienced similar forms of harassment and have much in common in terms of the correlation between visible feminism and harassment as well as the effects sexist online harassment has on well-being.

Kate’s Story: Feminist Circulation and Volatile Visibility

Kate begins our interview saying, “I've been on Twitter for almost seven years now… oh my gosh…” and notes, “I definitely don't experience [harassment] on a daily basis, but it's not so irregular to be harmless.” She tells me, “the instances when I've experienced harassment have seemed to fall into a few pretty clear categories.” Kate’s case is different from Tracy’s in that while both women have experienced significant sexist online harassment, much of which has been violent, Kate has seen a lot of targeted dogpiling, a term used to describe droves of people engaging in sustained denigration of someone they disagree with online, which drives

harassment to her account in hordes in a single day before tapering off over time. In many instances she shared with me, the harassment she’s experienced has been highly concentrated. This kind of harassment, the kind that is deployed in high volumes over a short period of time, is one of the “clear categories” she tells me about throughout our interview. Kate, a lawyer and mom, explicitly notes in her Twitter bio that law, politics, and feminism are her major interests. She has cultivated a lot of followers in the law community and frequently tweets about public policy, feminism, and politics. Immediately at the start of her interview she tells me that primarily, if she experiences “abusive tweets,” it’s when one of her tweets “gets legs,” as she calls it. She defines this as her tweet being retweeted by a big name or, occasionally, quoted in an article. Instances of wide circulation, Kate observes, almost always result in sexist online harassment. The first story she tells me is of when she had a tweet go mildly viral49 in April of 2016, when she joked about the backlash surrounding putting Harriet Tubman on the twenty-dollar bill. Specifically, her commentary was aimed at those who felt Harriet Tubman didn’t look happy enough to be on our currency. “I was getting replies to that for weeks,” she tells me. Kate told me about the variety of responses she got, ranging from general scoffing and antagonism to full-on vitriol and violent threats, but she doesn’t downplay the impact even the perceivably lesser offenses had on her. “I got quite a few replies from men [telling me] to stop whining and making a big deal about expecting women to smile,” she says. “It definitely wasn't the most abusive I've seen, but the number of randoms replying was huge because of how many people saw it and from how defensive men get when we say not to tell us to smile.” Even though she doesn’t categorize these kinds of tweets high on the abusive scale, they still affected her, primarily because of the sheer number of them. Kate doesn’t go into detail about the tweets she received that were more violent in nature, but I’m reminded of the aforementioned politician who received countless rape threats over wanting to put a woman on British currency. The Harriet Tubman tweet wasn’t the only time Kate has experienced high-volume harassment, and she gives me many other examples. For instance, once during the Democratic presidential primary season, she posted several tweets criticizing presidential candidate Bernie Sanders’ negative comments about Planned Parenthood,50 triggering harassment that “was pretty

49 There are a lot of definitions of what exactly “viral” means and how it should be counted. Laurie Gries (2015) explains, “going viral” can commonly be understood as a “means of explaining how ideas, trends, objects, videos, and so forth spread quickly, uncontrollably, and unpredictably into, through, and across human populations” (p. 2). But we lack a continuum that tells us when content crosses the threshold into viral territory, and how content’s virality is influenced by genre. Do we consider a video with 100,000 views “viral?” At how many retweets do we consider a tweet to have “gone viral?” Kate’s tweet, in this instance, may not constitute full-on viral, but I would say it had virality in the sense that it spread relatively wide, and well beyond Kate’s own follower count, and fast. Her tweet was retweeted 5,402 times and liked 4,968 times, and Kate told me most of that happened within the first 24 hours. 50 In an interview with Rachel Maddow, Sanders expressed his displeasure that Planned Parenthood endorsed his opponent Hillary Clinton, and in response he labeled the organization “part of the establishment,” the establishment referring to elite entities that hold unjust political power over the majority of the population (Carmon, 2016). Sanders ran much of his campaign on the idea that he was the anti-establishment candidate.

nonstop” and from both sides of the political aisle, and it caused her to lose sleep. The night that the harassment started, Kate said she “was up constantly stressed out about the replies I was getting. I kept checking my phone and worrying another person would be calling me things. I'm generally an anxious person so when waves like that happen they really wear on me.” The harassment was so overwhelming, she said, that she ended up just deleting her tweets “because it was getting so obnoxious.” Two other separate but related instances had to do with her commentary on high-profile rape charges and accusations brought against comedian Bill Cosby51 and NFL player Ben Roethlisberger.52 In both of these instances, she didn’t necessarily have high visibility in the sense that her tweets didn’t circulate as widely as they have in other instances. Instead, Kate speculates, harassers found her because they were patrolling keyword searches of “Bill Cosby” and “Ben Roethlisberger” for criticism, looking for people to fight with. Kate suspects this is how harassers found her because her tweets weren’t replies to anyone, didn’t include any hashtags, and didn’t link to Cosby’s or Roethlisberger’s usernames. She was told by harassers that “Bill wouldn't even want to rape me,” and the harassment surrounding her comments on Roethlisberger was relentless. She tells me, “the Roethlisberger backlash was especially bad,” and she had to spend an inordinate amount of time blocking people. Twitter’s blocking mechanism didn’t always stop blocked users from attacking her, however. “One guy,” she says, “manually retweeted me, even after I blocked him and deleted my tweets,” meaning he typed out her handle and the words of her original tweet, as opposed to what’s known as a “true retweet,” in which a user uses the “retweet” button to recirculate the tweet, intact and from the original user. His manual retweet functioned as a call for his followers to harass Kate, since he was now blocked, and they did so in droves. They all “started tweeting at me, calling me gendered insults and weirdly mocking the fact that I was a lawyer and didn't understand ‘innocent until proven guilty.’ One guy told me to quit law and ‘hit the pole,’” she tells me. In this instance, Kate said the sheer amount of abusive comments is what rattled her the most. “That was one of the few times I felt the need to lock my account for a while until the wave of replies was done,” she said. And while she had experienced sexist online harassment in the past, she explains, “It was the first time I had experienced targeted harassment from people, and it really overwhelmed me.” Both of these experiences taught Kate something: that she should be “more wary of using people's names or certain phrases [...] to avoid keyword searching trolls.” One of the strategies she uses is to obscure her discussion to make it less likely a search would lead harassers to her. Now, “if I ever talk about Roethlisberger,” she explains, “I say ‘Ben R’ or something along those lines. GamerGate is another example of that. When that was at its peak, I was one of those

51 Cosby has been accused of sexual assault by over 60 women (Daly, 2017) and stood trial in 2017, which ended in a mistrial when the jury deadlocked. Prosecutors say they plan to retry the case (Redden & Pengelly, 2017). 52 Roethlisberger has been accused of sexual assault by two women, one of whom filed a civil suit that Roethlisberger settled out of court. The other woman did not press charges (“Ben Roethlisberger,” 2015).

tweeting about it but replacing letters in the word with symbols or garbling the words, like G*mer&ate or GoberGote.”53 It’s unknown who first developed the tactic of intentionally misspelling words so as to avoid a harassment network, but it’s a strategy that came in handy during, for instance, the 2016 presidential election, as women wanted to participate in public discourse about the many gendered issues that came up throughout the election cycle without experiencing the inevitable sexist harassment. When tweeting about candidates who had supporters known for targeted online harassment tactics, such as Donald Trump (Alderman, 2017; French, 2016; Jones, 2016) and Bernie Sanders (Borchers, 2016; McMorris-Santoro, 2016; Millhiser, 2016), users might use a pseudonym or jumble the letters in the name so as to decrease the likelihood that the tweet could be found by a simple search. Intentional misspelling is also a tactic used by harassers themselves to circumvent Twitter functions that can be used as anti-harassment mechanisms. For example, the “mute” function allows users to censor certain words and phrases so that they will not appear in their mentions or feeds, again placing the responsibility on the user herself to put measures in place that will deal with abusive language. But harassers picked up on this use of “mute” quickly after Twitter implemented the feature and began intentionally misspelling slurs and other terms that women may preemptively mute (Ehrenkranz, 2017) to ensure the damage could still be done. As Kate points out, intentionally misspelling “GamerGate” was a literacy many women developed out of necessity because of how severe and dangerous GamerGate-related harassment became. Arguably the most well-known event in relation to sexist online harassment, GamerGate has origins that, much like those of other internet-based events, are difficult to track. Many agree that the controversy began in August 2014 when video game developer Zoë Quinn was severely and systematically harassed by members of the gaming community after a defamatory blog post written by an ex-boyfriend went viral (Levy, 2014). The campaign against her drew increased attention to the sexism and misogyny rampant in video game culture, yet members of the gaming community who don’t see misogyny as a widespread problem were upset by these charges, deeming them a conspiracy by progressives and feminists to inflict censorship and regulations on video games (Elise, 2014). Operating under the guise of fighting for high ethical standards in video game journalism, the group developed the hashtag GamerGate to organize their resistance, establishing a collective ethos as aggressive, relentless, and antagonistic, as evidenced by their reliance on rape and death threats to silence opposing viewpoints (Wu, 2014). Some of the most vocal women in opposition to GamerGate were so severely harassed, they were forced to delete their social media accounts, flee their homes, quit their jobs, and/or, in cases of more well-known figures, cancel public appearances (Ahmed & Marco, 2014; Totilo, 2014). While participating on Twitter at all can potentially leave you vulnerable to harassment, Kate points out that GamerGate was especially fear-inducing because of how common doxxing

53 This was something I also did at the height of GamerGate, even from behind a locked account. I know women who forgot about a single tweet from years past containing the phrase “GamerGate,” only to be found and harassed by GamerGate supporters years later.

became for women who were simply talking about it online. As Andrew Quodling (2015) notes, doxxing “provides an avenue for the perpetuation of [the victim’s] harassment by distributing information as a resource for future harassers to use” (n.p.). In other words, it provides the necessary first step for harassment to graduate to potentially more violent and severe forms, like swatting and offline stalking, both of which typically require that harassers know personal information about a target, like their name and address. Doxxing, then, often acts as a prerequisite for unequivocally sinister acts. Like Tracy, Kate has also experienced sexist online harassment after participating in a feminist hashtag. In fact, Kate says that “tweeting in any hashtag about feminism, sexism, or misogyny” is the “most reliable [way] for bringing abuse” to her account. Kate tells me, “I tweeted in the #YesAllWomen tag about comments from men at work or in school and immediately had people tweeting at me calling me a bitch or some version of stupid, saying I was making a big deal out of nothing and should take things as a compliment.” She says her participation in a tag about street harassment produced similar results, as she “got replies from people saying I was making it up, or just simply ‘shut up, bitch.’” As with other cases discussed throughout this dissertation, here we see how the mere presence of a woman sharing experiences she has had with sexism and misogyny has resulted in sexism and misogyny. Kate notes here too the immediacy of the harassment. By utilizing a hashtag, her visibility increased, putting her comments in front of more people than just her followers, and clearly, the tag was being surveilled by those who weren’t there to participate in the tag’s intended use or sympathize with its cause. Kate says that even though sexist harassment stemming from hashtag participation tends to fade relatively quickly, it causes her to use feminist hashtags less often, and, she says, she’s more inclined to just retweet someone else rather than tweet in her own voice. Kate says the silver lining to experiencing context-specific harassment, like that which exists within a tag meant to discuss sexism such as #YesAllWomen, is the built-in community that’s there, because other women who use the tag will understand what she’s going through. In instances where she experiences sexist harassment as a result of hashtag use, Kate says, “I feel pretty empowered to share the abuse with everyone else tweeting about [the hashtag]. I'll tweet a screenshot of the abusive tweet with the tag and usually that helps to release the negativity I feel. In those situations I know I'm not alone and the solidarity with other people experiencing it, naming it, shaming it, is cathartic.” Kate shares other strategies she has for dealing with sexist online harassment and the “negativity” that comes along with it. Some of her go-to responses are to utilize Twitter’s platform features. For instance, she says, “I've become much more likely to block than I used to be. For some reason I had an aversion to it for a while, but now when Trump supporters or anti-abortion crusaders pop up they're blocked pretty much immediately.” In addition to blocking, Kate has also made use of Twitter’s “quality filter,” a feature that was added in late 2016 and, according to Twitter, “can improve the quality of Tweets you see by using a variety of signals, such as account origin and behavior. Turning it on filters lower-quality

content, like duplicate Tweets or content that appears to be automated, from your notifications and other parts of your Twitter experience” (“New Ways to Control,” 2016). This feature works as a gatekeeping mechanism to prevent replies sent from bots or spam accounts from showing up in a user’s notifications. In theory, it helps to limit the amount of harassment someone might see, but there are significant limitations. As Kate points out, there’s no way to tell what it filters out or if it has filtered anything at all, and therefore, Kate has “no idea how much that has impacted things.” Like Tracy, Kate has also taken measures beyond Twitter’s functionality to protect herself from the harmful effects of online harassment. For instance, she has taken to temporarily removing the Twitter app from her phone during certain times of the day, which limits how many notifications she can actually receive. She said otherwise, she’s “tempted to obsessively check.” Doing so has helped her, as she puts it, “put some space between me and Twitter. Since I'm a stay-at-home mom now, it's easy to be checking it at all hours through the day and that hasn't been good for me.” Of course, as Kate touched on throughout our interview, Twitter is incredibly important to her because it’s where she has a community and where she can participate in feminist discussions. She has experienced a wide range of sexist online harassment styles, but it’s the moments of increased visibility that produce the kinds that have stuck with her. Both Kate and Tracy have had to make choices about how to deal with harassment and the negative effects it has had on their lives. Clearly, they have both decided to continue using the platform, but their experiences have resulted in their withdrawal, even if sometimes temporary, from public conversation, pointing to a key impact that sexist online harassment has on public discourse: it limits who participates and what is discussed. In the next section, I turn towards more of the survey responses, which closely reflect multiple aspects of what Tracy and Kate described in their stories, particularly in terms of what seems to trigger harassment and the effects harassment has on the women who experience it.

The Long Reach of Online Harassment

What we can learn from Tracy, Kate, and others is that women are quick to make the connection between visible feminism and harassment, pointing to a need for a feminist theory of volatile visibility that understands circulation as something that can potentially amplify harassment. Once women witness or experience severe forms of sexist online harassment, they learn quickly that removing themselves from view online significantly reduces the chances that they will have to go through such ordeals. Highlighting the dilemma of Twitter as a simultaneously positive space for exposure to intersectional issues and a negative space because of the presence of harassment, one woman said that while Twitter “made me stronger in my beliefs and in my activism, it has also made me more susceptible to online harassment,” causing her to, at times, disengage. Similarly, another woman responded that Twitter

has made me at once more inspired, by putting me in contact with and exposing me to other feminists of all [kinds]. It has also made me more afraid to speak openly about

being a feminist or post too strongly or retweet anything too militant or strongly worded on being a feminist, for fear of trolls, swatting, doxxing, etc. I even feel uncomfortable posting about issues not directly related to feminism, for these same fears, simply because I am concerned I will be harassed because I am a woman saying these things.

In light of these sentiments, we must ask: what does this rational fear do to women’s engagement with feminist and social justice discourses? What effect is online harassment having on public discourse and the types of issues that are brought to the fore of public consciousness? The final question on the survey asked respondents to share any personal experiences they’ve had with harassment. Again, I wanted to give participants the open space to describe anything that the other areas of the survey didn’t touch on or allow them to expand upon. The results are heartbreaking and could probably fill an entire dissertation on their own, but what’s particularly striking to me about the responses is how many of them speak to two aspects of online harassment: 1.) how online harassment affects behavior and well-being, and 2.) identities or topics that arouse the abuse. I’ve presented these stories here, grouping them into these two categories and noting what I see to be the distinctive feature of each story. By drawing attention to these features, I hope to reveal additional layers of overlap among the shared experiences and bear witness to how online harassment comes in many forms and at many costs.

How online harassment affects behavior and well-being

Distinctive Feature: Becoming uninvolved or silent

“A publishing house retweeted an extremely offensive body shaming meme. I responded that it was inappropriate and got a number of responses shaming me, the person in the picture, and being generally horrible. The publishing house blocked me. I reported the incident to Twitter and was told the situation didn't fall under their definition of harassment. I took a break from Twitter for a little while. Now I don't get involved if I can help it.”

“Tweeting about Bahrain during the Arab Spring which I had been following closely, when a pro-government stooge suddenly appeared and told me to 'shut up cunt', the language was bracing and not what I wanted to be receiving. I never tweeted about Bahrain again.”

Distinctive Feature: Panic, Fear, and Terror

“The only truly panic-inducing experience I had was about a year ago. I got piled on after making a sarcastic comment about anti-vaxxers. My sarcasm apparently didn't translate, and from what I could tell, a group of people assumed I was supporting anti-vaxx sentiments and flooded my mentions with bile. I had to go locked for a day and block a ton of accounts and I think that was about when I finally decided to remove my name from my account. A cautionary tale for me, because harassment is definitely not limited to one side or another of any social issue.”

“No matter how much you read about it, my initial response to having five obviously fake accounts created solely to antagonize me over a single tweet was terror. You block one account and then another account which has only two previous tweets—both retweets of the just-blocked account—comes after you.”

“Nothing too scary, just standard schoolyard insults and comments (positive or negative) about my looks. But I do definitely fear becoming the target of coordinated harassment from some Gamergate-style mob, which is on my mind whenever I tweet.”

“One word; GamerGate. I have not felt safe on Twitter since. My story would only be another in the endless pile from people that have been personally victimized by those terrorists.”

“I had a stalker situation. He stalked me on several social media sites and I ended up closing out a few of them because I didn't get any support from the sites. I would love to have a public twitter and engage on the public forum but I am terrified he or someone like him could find me again.”

Table 3.1: Stories shared on the survey that address how online harassment affects behavior and well-being.

Identities or topics that arouse online harassment

Distinctive Feature: Identity

“I honestly can't say how many times I've gotten awful tweets for being a Muslim woman, especially an LGBTQA one. Some are petty and not that bad but some border on violent and I feel like most people just don't take it seriously enough.”

“I’ve gotten really ugly anti-Semitic language and images from Sanders & Trump trolls.”

Distinctive Feature: Feminism

“I attracted a lot of attention when I tried speaking to the creator of minecraft about feminism. He called me a gendered slur, which led to many others tweeting the same at me for several weeks. I still get one or two tweets like this every now and then.”

“My girlfriend uses Twitter much more than I do and she has been facing continued and concerted harassment for expressing her feminist opinions there. For example she had to stop using Twitter for a while because hundreds of men were tweeting ‘cunt’ at her over and over again for weeks.”

Distinctive Feature: Rape

“Because I have an anonymous account followed mostly by people I know, I think my experiences of harassment on twitter are relatively harmless/tame compared to some. However, I have still been told it was my fault I was raped, been mansplained to countless times, and been made to feel generally unsafe about being a woman and having thoughts, none of which, mild though it may be, is acceptable.”

“Once I spoke of woman and rape. A male user was angry that I did not include men and spoke of men as the rapists, and was therefore being sexist. I blocked the user, who then said ‘good riddance to soapboxy cunts.’”

Table 3.2: Stories shared on the survey that address identities and topics that arouse online harassment.

While these stories are brief and only begin to scratch the surface of the nuanced ways online harassment influences well-being and intersects with identity, it’s my hope that their compilation here reveals both the range and sheer volume of harassment women experience online as well as its effects. Again, like with Tracy and Kate, here we see how women are quick to understand what harassment can do and what, perhaps, prompts it. Several of the women noted the normalization of online harassment in their open-ended responses. For

example, one woman shared her experiences in writing, “I was once sworn at, I've been belittled and had my arguments thrown away. I received an email one time that told me to kill myself and I've been called a bitch with many adjectives many times. The usual.” The fact that she deemed these experiences “usual” is deeply disturbing and simultaneously telling about the state of online harassment, especially that which is sexist and misogynistic. Another woman expressed frustration at how the normalization of harassment against women contributes to culture’s apathy towards making a change. She wrote,

I have witnessed hideous, blatant harassment of cis- and Trans- women who are peers and friends, to the point where they were personally subjected to rape threats, personal threats including knowledge of their whereabouts and threats of rape and harm to their children, and many of these women have made the understandable personal choice to leave Twitter. It is vile and it makes me angry. It also makes me scared to express my opinion, and wonder if I should leave Twitter. It makes me angry that police and others suggest that women “just stay offline” as if that is a viable option in the 21st Century, and as if they accept harassment of women as normal and tolerated. What's even worse is that these women's experiences are mocked and dismissed as if they should just ignore it or tolerate it. It is infuriating.

There is much to unpack in this response, and it touches on some of the most important aspects of online harassment—its gendered dimensions, its violence, and its institutionalization—and this woman’s anger and fear is echoed in many other responses across multiple questions on the survey. Many women who responded to the survey described their fear of being doxxed, which was often linked to why they remain silent about feminist issues. One woman said she is “very cautious with what I tweet or choose to retweet to limit the opportunities for harassment. A few years ago I wouldn't have been as concerned, but with the rise of doxxing, swatting, and the mob-like intensity that groups can harass with, it is worth keeping in mind.” At least three of the survey respondents mentioned they had personal experiences with doxxing, and for one woman, “it was a case of mistaken identity.” She wrote that her potential doxxers thought she was someone else and “spent more than a year trying to dox me,” though they were fortunately unsuccessful. They did, however, threaten “to falsely identify me as someone who tweeted a bomb threat to an FBI office,” so she immediately filed a police report “with all my screenshots and then emailed the local FBI office to explain this was about to happen.” It’s worth noting here that she self-identified as a Hispanic woman, said Twitter has caused her to “become more committed to promoting & supporting women's rights & causes,” and makes this identity known online. She didn’t reveal the outcome of the attempted doxxing. There are obviously a variety of strategies one can use to deal with harassment, but utilizing the block function on Twitter was the most popular among survey respondents: 70% said they block as a strategy to deal with harassment, though one respondent said they are “scared” to block, without elaborating further. I too hesitate to block at times precisely because

of what happened to Kate when she blocked the man who harassed her about Ben Roethlisberger—because blocking someone can agitate them and draw out even more harassment than before. I’m not sure if this is what the survey respondent meant by “scared,” but it’s definitely a concern for me. One survey respondent shared a story that positioned blocking as a mode of harassment itself, which I had never considered before. She shared:

When I started my current account, I engaged with and RTd a successful woman in my industry who doesn't kiss up to the (white, young-ish) guys who dominate the community around our industry on Twitter. They'd mounted a campaign of targeted harassment at her for some time at that point. Because I had the audacity to befriend someone they did not approve of, they all blocked me. Now, being blocked, as opposed to active harassment (death/rape threats, etc.) is nothing, and I know that. But in a professional context, it means I am a.) not amplified by the people in the core of my industry who talk to and support each other (a core group of white males and the "cool girls" they allow on the margins), and b.) I am shut out of core industry discussions. Not the end of the world in the context of online harassment, but definitely a situation that has real world professional implications.

By being mass-blocked because of her social interactions with another woman on Twitter, she was essentially erased from her professional community, making it difficult for her to network or even maintain an awareness of what was going on in her industry’s circles and cutting her off from interactions that are necessary in order to succeed in her field. The orchestrated blocking sent a message to her that she did not have the same right to be there as others. When asked, “Does harassment alter how you use Twitter,” 68.8% of survey respondents said “yes,” while 3.9% said they were “not sure,” and 9.1% responded “other” and wrote in a response. Among these write-in responses, respondents largely noted that simply knowing that harassment is a problem or witnessing the harassment of others influences what they do, or do not, say. One woman wrote, “I've avoided obvious forms of harassment due to the small circle of people I interact with, but yes, who I engage with and how is influenced by concerns of harassment.” Another wrote, “I read a lot about harassment, and I plan my tweets carefully in hopes of avoiding it.” Another responded that while she herself isn’t dealing with any immediate concerns of harassment, “it is something that is in the back of my mind every time I post.” In short, women are developing literacies that are reactive to the presence of sexist harassment online. When asked, “In your experience, what identities, actions, or discussions provoke54 harassment,” survey respondents had a lot to say. Most often, participants said that simply being

54 In retrospect, I should have phrased this question differently, as “provoke” suggests the target is responsible for the harassment. This kind of framing, as I’ll discuss further in chapter five, plays into a very dangerous aspect of rape culture that is reflected in a lot of what we tell women about how to behave: victim-blaming, as Tracy talked about in her interview. The fact that no one among the many different types of people (academics, non-academics, and feminist activists) who read drafts of my survey questions flagged this word might, I think, be an indication of how encultured we are to victim-blaming language.

a woman or having a female avatar elicited a lot of harassment (more on this in chapter four). Other frequently mentioned identities, actions, and discussions that draw out harassment were:

● Critiquing anything skewed traditionally male, especially white male
● Politics
● Feminism
● Rape and sexual assault
● Support of progressive politics, issues, and movements like Black Lives Matter, abortion, and LGBTQ rights
● Using heavily surveilled hashtags; two mentioned by multiple respondents were #everydaysexism and #BlackLivesMatter
● Using easily searchable trigger phrases; two mentioned by multiple respondents were “Bernie” and “GamerGate”

Among all of the topics and identities women mentioned, many also noted that the harassment these draw out is often centered on gender, sexuality, or looks. One white woman who responded that she experiences harassment at least once a week said,

Most of my harassment has come from my support of Black Lives Matter or criticism of police or Trump/other tweets about race. The harassers often don't reply about my thoughts or beliefs, but about my looks, what they imagine I do sexually etc. Calling me an ugly bitch, a whore, stupid, say graphic and vulgar things about me being with black men—“You must lay on your back all day, f—king your n—” or talk about how my “black baby daddy must give me black eyes.” In a police brutality discussion, I was told that they'd be happy when my biracial son (who doesn't exist) would be killed like Trayvon.55 But most of it centers on me sexually or my looks.

Unfortunately, these kinds of experiences are all too common—the ones that are so personal, like we heard about with Tracy, and so visceral, like we heard about with Kate, that it becomes simultaneously paralyzing and normalized. Through stories shared by Tracy, Kate, and the women who participated in the survey, we see how online harassment has become a major influence on how women exist and interact online. Further, their stories unmask the scope of harassment’s consequential and potentially irreversible impact on women and our digital public spheres. In the next chapter, I’ll highlight the stories of two more women, “Olivia” and “Ella,” who have both gone to great lengths to limit their visibility online because of online harassment experiences, resulting in their silence and erasure from public discourse.

55 Trayvon refers to Trayvon Martin, a Black teenager who was shot and killed by George Zimmerman in 2012.

Chapter Four

Tactics of Avoidance: How Harassment Makes Women Disappear

“Women must first invent a way to speak in the context of being silenced and rendered invisible as persons” —Joy Ritchie and Kate Ronald, 2001, p. xvii

One of the ways women can actually control the amount of online harassment they experience is through silence and self-erasure. As discussed in chapter three, women who have experienced or witnessed harassment are quick to learn that removing oneself from view is the fastest way to avoid harassment. Self-censorship, deliberately staying uninvolved in public discourses, staying quiet, or even abandoning online spaces altogether are all learned strategies that are making women disappear from online spaces and public discourses. This chapter attends to these concerns. I begin by discussing silence as a rhetorical move, as it’s one that has been made with purpose and agency in the past, often to great effect. However, when many of the women who have experienced sexist online harassment are driven into silence, their agency is removed and silence becomes the last resort in attempts to curb the problem, as evidenced by survey results. In the latter part of this chapter, I present conversations I had with two women who bring up issues of self-censorship and the abandoning of online spaces. “Olivia” crafts her online identity and interactions carefully around silence and self-censorship after years of seeing friends and strangers beaten down by online harassment. Now, she does little to insert her own voice into online spaces and simply lurks in order to avoid experiencing harassment herself. Similarly, “Ella” has gone to great lengths to ensure that her digital footprint is as small and anonymized as possible in order to avoid online harassment. Past experiences with digital stalking and doxxing have made her rightfully cautious about “existing” anywhere online. Through their stories, we see how women’s behavior changes as a result of sexist online harassment, even if such harassment isn’t experienced directly.

Silence, Abandonment, and Hiding as Avoidance of Harassment

The devaluation of women's voices is an embedded part of our culture, acting as a foundational precedent for sexist online harassment. Mary Beard (2014) points out that the history of Western culture provides evidence of women being consistently shut out of public discourse. Speaking to a larger cultural problem in how men are socialized, Beard notes that, historically, “An integral part of growing up, as a man, is learning to take control of public utterance and to silence the female” (p. 11). Even when they are not silenced, women who choose to speak in public forums “still have to pay a very high price for being heard,” in the form of suffering sustained harassment (p. 12). Not only does harassment have a silencing effect in the sense that it drowns out the voices of women in public, but harassment often causes women to stop speaking

altogether, as they learn that speaking up often results in even more harassment. Jane (2014b) explains, “The tyranny of silence associated with [online harassment] has parallels with that associated with offline harassment. Many female commentators report feeling reluctant to speak openly about receiving sexually explicit online vitriol, and hesitant to admit to finding such discourse unsettling” (p. 536). Women's reluctance to challenge online harassment arises partially because, Jane argues, speaking out against this type of abuse has long-standing connections to accusations of being opposed to free speech, a popular justification for abusive discourse.56 Elsewhere in this dissertation, I've addressed the ways in which women can be and are silenced as a result of their audacity to speak in public forums. While many of the women I've spoken to throughout the course of this research have felt forced into silence, this kind of silence does, in a sense, serve a rhetorical purpose: avoiding the negative consequences of speaking in public while a woman. Yet silence has been adopted by rhetors throughout time as a strategy that serves other rhetorical purposes as well. Before I more fully discuss silence as a result of harassment, I want to briefly address the ways in which silence has been taken up as a rhetorical strategy not by force but as one with purpose and agency that empowers the rhetor. In her book Unspoken: A Rhetoric of Silence, Cheryl Glenn (2004) describes silence as, apart from being necessary for speech on a fundamental linguistic level, a powerful and meaningful act. She writes,

    A rhetoric of silence has much to offer, especially as an imaginative space that can open possibilities between two people or within a group. Silence, in this sense, is an invitation into the future, a space that draws us forth. There is not one but rather many silences, and like the spoken or written, these silences are an integral part of the strategies that underlie and permeate rhetoric. (p. 169)

Glenn, throughout her work, notes that silence, while often “perceived as emptiness,” can be used as an act of resistance in some rhetorical contexts (p. 75). She argues, “employed as a tactical strategy or inhibited in deference to authority, silence resonates loudly along the corridors of purposeful language use, of rhetoric” (p. 15). Of course, throughout history and today, there are cultural uses of silence that are often left out of western conceptions of rhetoric. For example, Arabella Lyon (2004) notes the Confucian tradition of silence as a basis for deliberation. Here, silence is used for a variety of purposes. Lyon writes, “Silence is more than absence of quiet; it is a constitutive part of interactions, communication, and even making of fulfillment, knowledge, choice, and commitment. Silence can even indicate questions, promises, warning, threats, requests, command, deference, and intimacy” (p. 137). In The Analects, for example, silence is posited as something that helps to build the character of the rhetor, emphasizing “the wisdom of not engaging what cannot be changed” (Lyon, p. 137). But it would be a mistake to understand

56 Intersections of online harassment and debates about First Amendment Rights will be discussed further in chapter five.

Confucian silence as inaction. To the contrary, it supersedes bloviation as the rhetor acts rather than articulates their plans to act. As Stuart Sim (2007) documents in Manifesto for Silence: Confronting the Politics and Culture of Noise, silence can be, in many ways, a political act. Sim details many of the ways in which noise is a prominent feature of our world and argues that this noise is a threat to our humanity in that it affects us physically, emotionally, and cognitively. He argues that we must examine the deep connections between thought and silence: “Thought is an essentially silent activity and it is difficult to sustain in a noisy society—and certainly is likely to become superficial when competing with other stimuli,” he writes. “This cannot be good for our collective cultural health” (p. 39). Sim argues that silence matters because, as an antithesis to noise, it bucks all of the ideological power that noise can hold. Silence has had long associations with listening as well, an area taken up by Krista Ratcliffe (2005) in Rhetorical Listening: Identification, Gender, and Whiteness. Listeners, Ratcliffe notes, are usually marked as feminine because listening is traditionally thought of as a passive role (p. 11). The feminization of listening, then, means it is often unfairly devalued. In our own field, for instance, “Reading and writing reign as the dominant tropes for interpretive invention; speaking places a respectable third; but listening runs a poor, poor fourth” (p. 18). However, she's quick to point out the distinctions between silence and listening, and that rhetorics of silence can be enacted in dysfunctional ways that often fortify gendered and racial biases. Ratcliffe's theory, instead, defines rhetorical listening as “a stance of openness that a person may choose to assume in relation to any person, text, or culture; its purpose is to cultivate conscious identifications in ways that promote productive communication, especially but not solely cross-culturally” (p. 25). Rhetorical listeners, then, should invert “understanding” to read “standing under.” As she explains, “Standing under the discourses of others means first, acknowledging the existence of these discourses; second, listening for (un)conscious presences, absences, unknowns; and third, consciously integrating this information into our world views and decision making” (p. 29). Such a stance then leads to more productive discourses and moves us to a place where listening is valued as an important part of the rhetorical process. Jacqueline Jones Royster and Gesa Kirsch (2012) also discuss silence's associations with listening, an important component of what they deem strategic contemplation, a method for understanding the world around us, as described in chapter two. Strategic contemplation, they write, involves recognizing—and learning to listen to—silence as a rhetorically powerful act. It entails

    creating a space where we can see and hold contradictions without rushing to immediate closure, to neat resolutions, or to cozy hierarchies and binaries. The intent of such strategic contemplations is to render meaningfully, respectfully, honorably the words and works of those whom we study, even when we find ourselves disagreeing with some of their values, beliefs, or worldviews. (pp. 21-22)

While their discussion of strategic contemplation is situated as a methodology for doing scholarly work, there's much that can be applied to our thinking about dialogic exchanges, both online and off. Silence can often be misconstrued as an indication that a rhetor has nothing to add, when quite the opposite can be true. Silence, as Royster and Kirsch point out, is a necessary pathway towards contemplation, understanding, and thoughtfulness in how we choose to participate or respond. In the kinds of rhetorical silences discussed by these theorists, silence is not something foisted upon a rhetor by way of harassment or opposition. In the context of online harassment, and particularly sexist online harassment, victims are often forced to adopt a silent stance out of fear or avoidance, not out of contemplation or strategic listening. Sometimes, this kind of forced silence occurs in the form of abandoning online spaces for fear of sustained or additional harassment, while those who stay can suffer “significant emotional harm,” such as severe damage to self-esteem, confidence, and the ability to express themselves freely (Barak, 2005, pp. 78-79). Jill Filipovic (2007) recounts her experiences as a target of online harassment perpetrated by colleagues while attending law school. Almost all of the women in Filipovic's class were sexually harassed through a digital message board affiliated with the program, and as a result opted not to continue using the space—a decision that no doubt left these women at professional and academic disadvantages. When Filipovic and many others were interviewed about their experiences, “all of the women [...] asked to remain anonymous out of fear that the [harassing] posts on the message board might have negative consequences for their employment prospects” (p. 295). This type of online harassment is not unlike offline harassment in its intention, as it “systematically targets women to prevent them from fully occupying public spaces” (Mantilla, 2013, p. 569), and works towards silencing. When women are made to feel, through harassment, that they don't have ownership of a space, they are unable to participate in conversations, let alone occupy any space, physical or digital, without suffering some type of adverse effect. For many women, the solution is not to abandon a space but to eschew a female gender identity in order to avoid harassment, as embodying a male or gender-neutral avatar lessens the amount of harassment one might experience. This became evident to Julia Enthoven, co-founder of an online video editing website, after she suffered significant harassment while operating the customer service chat widget on her site, which used her picture as an avatar. In trying to come up with ways to cut down on the amount of harassment she experienced, Enthoven decided to conduct an experiment by changing the chat widget avatar from her picture to that of her male co-founder, Eric Lu. Prior to changing the avatar, Enthoven would experience harassment, on average, twice a day through threats, name calling, sexual harassment, unwanted photos, and suggestive comments on her looks (Griffith, 2018). Once she started using the picture of Lu, the harassment stopped completely. After some time, Enthoven changed the avatar again, but this time used the picture of another woman. Predictably, the harassment started up again in full force. Out of curiosity, Enthoven decided to change the avatar one last time: to that

of the company's cartoon cat mascot, and just like with Lu's picture, the harassment stopped completely. Enthoven's experiment resulted in the company making the decision to use the cat picture as their customer service avatar indefinitely. Of this decision, she said, “In some ways, it makes me sad that it's harder if I represented myself online, but I also think [using the cat image] is just one easy way to get around it” (Griffith, 2018, n.p.). One might say that the context of a customer service chat widget has something to do with the preponderance of harassment, as well as the people using the site, whom Enthoven identifies as mainly teenage boys; however, these aspects of her situation shouldn't overshadow the fact that it's safer to present oneself online as a man or a cat than it is to present oneself as a woman. Claudia Herbst (2008) argues that harassment “of women on the internet is a form of censorship through which female-identified content has been suppressed” (p. 144). In describing women using male names or avatars in order to protect themselves against harassment, Herbst writes,

    As women are surrendering their female identities online, often in the name of safety, they are in essence wearing a digital cloak that renders them invisible. In the process, women yield the floor and abandon their topics of interest lest too much is revealed. The diluting, or outright denial, of female identity on the internet suggests that women's increased presence on the internet by no means ensures their free and audible expression: The surrendering of one's voice is the emblem of subordination. (pp. 143-144)

The surrendering of one's voice, as Herbst puts it, through silence and the suppression of female gender identity are topics that came up in responses to the survey. When asked to identify what strategies they use to deal with harassment on Twitter, fifty-seven percent of respondents said they self-censor. Many respondents used the write-in option on this question to elaborate and noted strategies that directly correlate to silence or suppressing a female gender identity, including:

● “avoidance by being low-profile.”
● “Use a profile picture that is not a photo of me and is not used on other social media platforms affiliated with these or my legal name.”
● “I do not use a human picture as an avatar; removing visual cues that I am female reduces harassment.”
● “I have watched so many friends get harassed into silence. That is the main reason I sometimes self-censor.”
● “I self-censor and keep account locked partially for fear of harassment, and I use a gender-neutral avatar.”

This sampling of responses shows a concern for how the projection of a female gender identity might intersect with the omnipresent issue of online harassment. Such a concern leads some, out of fear, to a reluctance to speak about certain issues that are known to attract harassment. One survey respondent noted that they avoid “tweeting about topics that get harassment,” and another

wrote that they take “preventative measures,” which include not tweeting “on issues/hashtags that trolls frequent like #gamergate or #everydaysexism,” both tags having deep connections to gender politics. When asked to share stories of either experiencing or witnessing harassment, women responded in ways that indicate a desire to stay silent or out of view for fear of harassment. One woman expressed having developed such a fear after seeing the harassment of others, both friends and strangers. Seeing these acts lobbed at other women online has instilled in her a concern for even participating on Twitter at all in any capacity. She wrote,

    I have witnessed hideous, blatant harassment of cis- and Trans- women who are peers and friends, to the point where they were personally subjected to rape threats, personal threats including knowledge of their whereabouts and threats of rape and harm to their children, and many of these women have made the understandable personal choice to leave Twitter. It is vile and it makes me angry. It also makes me scared to express my opinion, and wonder if I should leave Twitter. It makes me angry that police and others suggest that women “just stay offline” as if that is a viable option in the 21st Century, and as if they accept harassment of women as normal and tolerated. What's even worse is that these women's experiences are mocked and dismissed as if they should just ignore it or tolerate it. It is infuriating.

Similarly, another woman wrote about carefully crafting the kind of persona she projects on social media in avoidance of harassment. Such a persona avoids talking about “risky” topics, or ones that tend to garner the attention of harassers. She said that she forgoes “tweeting things that are too militant, too vocal, too personal,” and as a result, her online presence is fairly sparse. She wrote, “I restrict my presence on this platform to protect myself from what I have seen happen or have read first-hand account descriptions.” She listed some of the “risky” topics she avoids as a result: “feminism, gender equality, vaccination and anti-vaxxers, equal pay for equal work are the ones that come to mind most easily.” That these progressive issues have become marked as dangerous for women to talk about in public indicates a serious turn for our social media: harassment is inflicted by those in opposition to progressive issues in order to silence conversations about such issues and prevent the circulation of their information. Other women indicated that while they themselves had not experienced harassment in the first-hand sense, their knowledge of it as a gendered problem has instilled feelings of fear and prevents them from fully exercising their voices. One described her questioning of how warranted her fear actually is. She wrote,

    My experience? Fear of being targeted, regardless of how real or likely that threat will be, prevents me from exercising my voice. I self-censor to give myself the greater illusion that I am safe, that I am anonymous, that I will be left alone to enjoy the tool as a way to share information, to share news, to share links, to share cat videos, safely.

Another woman noted that experiencing harassment in the past has informed how she chooses to use social media, often resulting in her choice to avoid particular conversations. She wrote,

    While I have not experienced a lot of harassment, it's probably because my FEAR of harassment has me censoring myself quite frequently. I avoid using popular hashtags and sometimes resist talking about certain controversial topics altogether. The few times I've had (white cis men) attack something I've said, it's caused me great anxiety that had me afraid of opening my computer. It has, in the past, also kept me up at night.

Instances of self-censorship stemming from fear should not be ignored, as they indicate a larger, systemic problem for women online: speaking as a woman, especially about progressive issues, is both risky and dangerous. These experiences, concerns, and fears bring up questions of privacy, silence, and erasure from online spaces as they pertain to women. The two stories presented in the remainder of this chapter touch on these issues in a variety of ways. Like Tracy and Kate, these two women have also altered their use of the internet in the face of witnessing and experiencing harassment. Their stories highlight how their fear and avoidance of harassment have had impacts on their professional lives in addition to their personal ones.

Olivia's Story: Self-Censoring & “the wrong kind of attention”

Olivia, since the beginning of her use of the internet, has always valued her privacy and security. In many ways, this concern has governed where she goes online, how she creates accounts, and with whom she interacts. For her, relative anonymity online is important, and her reasoning as to why has evolved over time. Apart from compulsory accounts created for her in school or at jobs, she never uses her real name in email addresses or social media handles. In our interview, she explains, “I may not have been completely diligent about guarding against personal information slipping in here and there, photos posted by my accounts on different services which could all connect back to each other, leading back to me if someone were persistent enough, diligent enough, or had the right tools and enough time to follow the threads.” But in general, she's cautious about using her name. At first, she says, she chose “other names to be other versions of myself,” but later, “it seemed like a good idea to use names other than my own, in order to have a semblance of privacy and security on the internet.” For Olivia, this privacy and security was, at first, also connected to “self-censorship,” in her own words, on social media, in the sense that she didn't think she had anything to say that anyone would actually want to read. For example, when she first joined Twitter, “it was an experiment” because while she didn't necessarily have a desire to join, all of her friends used the platform and convinced Olivia it was a great outlet for self-expression. As a blogger and someone who is personally and professionally invested in the blogging community, Olivia saw how Twitter might help her keep track of a variety of bloggers' activities more conveniently than having to check many disparate websites for daily updates. Quickly, she saw both the personal and professional value of Twitter and created two accounts, both using pseudonyms—one used

mainly to follow her friends and one used in a more professional capacity “as a way to share resources, links, and articles” with friends and colleagues. Before using Twitter, Olivia circulated these resources, links, and articles via email, but she second-guessed this method of distribution after hearing from a few people that “they might be better able to keep track of what I had sent if I tweeted the links instead of emailing them.” Largely, her foray into more social forms of communication media was based on convenience. Olivia saw the potential that Twitter held for connecting with her profession's community—something that is clearly very important to her.57 She explains,

    When I started this account, I entertained the notion that, in addition to sharing information with ten to fifteen people, I could help fit myself into my professional community and create a professional network on this medium. Suddenly, I had way more followers than my ten to fifteen people and I never really understood why. I continued to tweet links to sources with hashtags and only rarely posted anything straight from me. It was rare for me to tweet anything about myself or my thoughts. It didn't seem to be the place for it and, after all, who would care?

Olivia began to think of this professional account as being disconnected from her personal identity. Instead, she worked hard to keep it devoid of anything that would personalize the account, “not out of fear of harassment,” but out of “self-censorship” stemming from a pervasive belief that she, as an individual, had nothing to add to conversations that people would care about. Harassment has come to Olivia's mind “now and again since [joining] the internet,” but her awareness of how serious harassment could be increased in the wake of GamerGate. This was a turning point for Olivia and her internet literacy. In the era of a post-GamerGate internet, she's “consistently concerned about it.” The specific features of GamerGate opened her eyes to how easily extreme harassment tactics, like doxxing, could happen to the average internet user for simply expressing one's opinion. She explains,

    Reading about what was happening to journalists, bloggers, and experts, as well as random people who were expressing their opinions against sexism in gaming or for greater representation and diversity in gaming, really affected me. The doxxing and harassment that occurred to well known and unknown people suddenly took this from a case of people misbehaving online and brought it into the physical realm in a way that felt dangerous and threatening.

What struck me here is Olivia's mention of harassment tactics that go beyond what she calls “misbehaving” and actually infiltrate the “physical realm.” The way she phrased this part of her answer had me thinking about the ways in which people categorize online harassment, which then influences its perceived seriousness. For Olivia, casual misbehaving is, perhaps, just part of

57 Olivia was vague with me about what her exact profession is, and I sensed she had anonymity concerns with her participation in this work. She told me she's a writer and that most of her writing can be found online, and while I gathered that she mainly writes about technology, I didn't press her for further details about what kind of writing she does or where her work is published.

a normal internet experience. But doxxing, as she mentions, poses real danger and real “physical” threats. Her dichotomizing of the physical with something else that she doesn't name reflects a common attitude among the wider discourses about online harassment: that threats to mental or emotional well-being, ones that aren't explicitly connected to the physical body, are less menacing, and therefore less important, than physical threats. Such attitudes are dangerous and misguided. Given how closely intertwined our emotional and physical health are,58 it's difficult to say for certain where the line is between the two. Olivia makes an important point about GamerGate's widespread use of the internet to inflict physical harm via tactics such as doxxing: this cultural moment marked a shift in how many internet users came to understand that when we're online, we're not just “online,” but we're somehow both online and offline at the same time, always. So much of our offline lives now exist online and vice versa. Again, where is the line between these two realms that were for so long seen as very much separate? It's getting harder and harder to tell. Olivia identifies GamerGate as the cultural moment that caused her to confront online harassment on a personal level. It's not that she wasn't aware of online harassment before; it was just that instances of it were abstract for her. She gives the example of Brian Krebs, an American journalist specializing in cybersecurity who fell victim to doxxing, and later swatting, after reporting on and exposing groups or entities engaged in illegal cyber activities (Jackman, 2013). Olivia says that she was a longtime reader of Krebs' website “Krebs on Security,” and had learned about the doxxing and harassment he experienced. She explains, “while I had felt for him and his family and was concerned that things like this were possible, it wasn't until GamerGate that it felt as though it could happen to me.” High-profile instances of online harassment like Brian Krebs' experience made her aware of the problem, but GamerGate forced her to confront what online harassment might look like for her personally as a woman who uses the internet both casually and professionally. Doxxing suddenly became a real concern for Olivia as she began to see it happen to all kinds of women for “simply expressing opinions.” Her expectation for online discourses is that if someone disagrees with an opinion, they will leave “rude” or “impolite” replies. During GamerGate, however, the large-scale targeted harassment of women and the exposure of their and their families' addresses scared her. And that fear confirmed her ongoing decision to maintain relative anonymity online. Olivia is not alone in reacting this way, as demonstrated by the survey responses discussed earlier in this chapter. The idea that fear is the driving force behind so many women's behaviors and choices in how they do or do not engage online is profoundly upsetting and an indication that we lack online platforms and communities that provide sanctuary for women who are fearful of harassment, especially the types of harassment that go beyond being rude or impolite. Olivia explains, “Facing threats of violence is much more easily managed when one

58 There is a whole host of medical research that supports this claim—too much to list here, but two good starting places are Surtees et al. (2008) and Ohrnberger et al. (2017).

has the sense of anonymity and the safety privacy provides.” For her, she goes on to say, the threat of violence actually being carried out enters the realm of possibility when harassers know any bit of personal information, even just one's name. The idea that someone might find out her name and then later her phone number or address makes her “feel very very unsafe.” Olivia is not wrong to instinctively want to protect her information—this is a mark of a savvy internet user who understands the everyday privacy concerns that warrant our use of passwords, two-step verifications, and security questions. But her feelings of unsafety, danger, and surveillance go beyond a general want to protect personal information. Olivia, like many other women, takes precautions to extremes because she knows how devastating harassment can be. Olivia's privacy and safety concerns started influencing how she interacts with Twitter in ways that go beyond her use of a pseudonym. “Prior to GamerGate,” she says, “I would occasionally receive replies to my tweets from people I didn't know soliciting a response.” In these instances, she would spend some time analyzing the replies to decipher whether the sender was genuine or looking to start a fight. Most of the time, people were “conversational and amicable in tone,” or “trying to start up a chat on a topic.” In these instances, Olivia would usually respond. Other times, replies to her tweets came from marketing bots and she would use the block or mute functions. But other times, “it felt less safe,” she tells me. One representative experience involved “someone telling me to be quiet about things ‘I didn't and couldn't understand’ and to stop sharing lies.” Olivia, being very “wary of anything resembling trolling,” would “avoid engaging at all,” even to the extent that she wouldn't block or mute the accounts. She doesn't explicitly say this, but I inferred it might be because of the same things I've heard other women tell me, both in my research and casually: that blocking or muting a harasser runs the risk of making things worse, as it can trigger them to encourage their followers to dogpile, or to create a new account with the sole purpose of harassing the person who has blocked them, like I've experienced and like was discussed with Kate's case in the previous chapter. Olivia also discussed one of the key ways that social media has been a positive presence in her life: she has met some of her closest friends online, “through blogging platforms or online groups.” Initially, these relationships were formed while they were “shielded, more or less, by some degree of anonymity,” but the friends eventually grew to know each other better and “now know real names rather than pseudonyms or avatars.” In many cases, Olivia has met her friends in person. Now, however, she doesn't see how that could happen given how pervasive online harassment has become. As someone who has been using the internet since the days of dial-up connections and AOL, Olivia feels there has definitely been a cultural shift towards a more hostile and dangerous internet. Olivia laments, “Perhaps I'm still naive about the menace that I might face as a result of trying to balance privacy and being open and friendly online.” After all, she says, “I haven't deleted accounts because of harassment. I haven't quit frequenting online social spaces or

changed phone numbers or closed email accounts.” But these are “all things that I know have happened to strangers and some close friends,” and they loom large in her mind, informing her decisions to stay guarded and reserved online. She opened up about these experiences she has witnessed people go through. Many of these stories involve Olivia's friends suppressing their identities as women or erasing themselves from online spaces altogether in avoidance of harassment. One such friend maintains a popular blog with a large readership. Olivia noticed that in pictures posted to the blog, her friend never shows her face or crops her face out of the image. Olivia asked her about it once, and her friend explained that people would harass her through the blog by leaving menacing comments on the posts with her picture and that, at one time, she had an online stalker who used information from the blog to stalk her. Olivia goes on to tell me that her friend had to eventually abandon her blog and start a new one, building her readership back up from scratch. In this new space, Olivia says, her friend adopted a strict policy of “not showing her face (or anyone else's) and not using real names for anyone, in order to help keep herself and others more safe.” Olivia was featured on the blog herself, where she was given a pseudonym, and while a photo of her was used, she was facing away from the camera to maintain relative anonymity. Another friend of Olivia's, someone she became close to after meeting through an online knitting club, “suddenly disappeared from the platform. A few days later,” Olivia says, “I received a message from a user I didn't know. It was the friend who had disappeared.” The friend explained she was being harassed and had to abandon her original account in order to avoid the harasser. When she created a new account, she changed personal details and adopted a new anonymous avatar so as not to be found by her harasser again. Olivia says, “I don't know what kind of harassment my friend experienced, but it was enough for her to be willing to go to this length to avoid it.” Witnessing her close friends go through these experiences has affected Olivia greatly in terms of how cautious she now is and her fear that she'll one day experience something similar. Overwhelmingly, Olivia is frustrated at being pigeonholed into using the internet in certain ways because dangerous harassment is so prevalent. But she's also frustrated that the internet doesn't seem to be what it used to be for her. She has certain wants for her use of the internet and social platforms, but those can't be achieved simply because she's fearful that she, like so many others she's seen or knows personally, will experience online harassment. She says,

    On the whole, I want to open and exchange information freely. I want to not feel like I need to censor everything I say on any platform… not feel like I need to regularly check permissions settings to see what outsiders can see of me and my information and what they can do. I remember when one used to Google oneself to see what popped up, and the first things that would pop up would be news from schools attended or news items from childhood. Now, it is scary and worrisome. I worry what personal information I will find

    there, what sorts of things anyone could be privy to, and what could be done with that information.

So while she has had to curb how much of her true self she shares or circulates online, she says, “I don't want to leave the internet entirely. There are too many fun things and pictures of cats, but it is a less carefree place of joy.” After becoming more and more aware and fearful of experiencing harassment on a regular basis, Olivia, who still uses Twitter to distribute links to articles or materials relevant to her profession, has altered her use of the platform in avoidance of harassment. She notes, “As time went on, I switched from using Twitter's desktop web-based platform for tweeting and started scheduling tweets with Tweetdeck.” Tweetdeck is a Twitter management tool. It affords users more features and functionality than the official Twitter app does. For example, users are able to follow more than one timeline at once, as opposed to Twitter's website and app, which only allow users to see a single timeline composed of every account they follow. On Tweetdeck, users can create different timelines that appear next to each other. One might be a feed of all of the people you follow, another might be the feed of just a single user, and one might be a feed of a hashtag. The additional features that Tweetdeck offers caused Olivia to switch. She says, “it was more convenient for me and easier for me to manage the amount of time I spent on Twitter versus the other aspects of my work.” Tweetdeck, like other third-party applications, also allows users to compose tweets and set a time to schedule exactly when they'll be tweeted. Once Olivia started using Tweetdeck, she says, amidst the height of GamerGate, “I started shifting my tweets on more sensitive subjects to the hours when I figured they might get some circulation, probably less, but also would receive less attention from people who might respond negatively at best or aggressively at worst.” Because she lives in the United Kingdom, it was relatively easy for Olivia to schedule tweets to come out when professionals local to her own country could see them, while avoiding time zones in America, where most of the initial GamerGate activities occurred.59 For Olivia, lessening the probability that her tweets might be seen by more people was desirable solely because of her fear of harassment. She didn't disclose how she defines “more sensitive subjects,” but did, elsewhere in our interview, make reference to subjects that deal with politics or women's issues. In another clear example of volatile visibility, Olivia displays signs of a learned tendency to avoid being too visible online for fear of harassment, much like many of the other women who participated in my research. Through strategically scheduling tweets, she devised a system for avoiding circulation in circles that may be hostile to her while still reaching people who might be interested. Detailing her meticulous process, Olivia explains,

    I started saving tweets on topics relating to gender for weekdays—never weekends—from six [o'clock in the morning] to eight [o'clock in the morning]

59 While GamerGate campaigns have occurred and continue to spread worldwide, GamerGate was largely an American-grown movement, as the initiators and their targets were American.

    Greenwich Mean Time. I figured that, in this way, I could still tweet on subjects of interest and share the information, but most of the Twitter denizens based in the States who might react most vehemently would be asleep and would miss my tweets completely.

Olivia acknowledges that this strategy has some problems. For starters, while it has cut down on the number of hostile strangers responding to her tweets, it certainly doesn't wholly protect her from the people who see her tweets in their feeds later in the day, from people who search based on keywords, like some of Kate's harassers from the previous chapter, or from people who are simply awake at the odd hour. Further, this scheduled-tweet strategy also means she limits her ability to reach people who are legitimately interested in or could benefit from what she tweets. Again, her behavior points to a major effect online harassment has on women and our online environments: women are willing to be seen less if it means they won't have to come in contact with harassment, even if this means their work or participation online will have less of an impact than it could have otherwise had. Despite these problems, Olivia thinks the benefits outweigh the drawbacks. “Sure, some interested people might miss the tweets,” she says, “but I try to remind myself that the account was originally intended to share information with only ten to fifteen people”—her colleagues and other acquaintances from her profession—“not the 400 some odd people who were suddenly following me,” the majority of whom were strangers to Olivia. She justifies her retreat from public view with the logic that shifting the timing of when her tweets were likely to be seen helps her “avoid any threats or harassment.” This justification is certainly valid, as her tactic helps protect her in ways she deems necessary for her own well-being, but I can't help but notice that Olivia's phrasing seems to downplay the massive impacts avoidance of harassment has on her and her work. Essentially, taking herself out of peak circulation not only puts her work in front of fewer people, but it also prevents her from exploring new possibilities, setting loftier goals, and participating online in more robust ways. When I ask Olivia whether or not (and how) online harassment affects her offline life, she says it is “unavoidable that the worries and anxieties” about online harassment “spill over” into her offline life. Part of that, she says, is because our online and offline lives are so closely intertwined, especially for someone like Olivia who does most of her work online and is married to someone who works in information security. Because of these things, “it is impossible to avoid the fact that I know many of the ways in which information can be obtained,” which contributes to her nervousness surrounding privacy. It's at this point that she dives deep into the theoretical concept of privacy and security. While she, like many others, believes ideologically that privacy is a right, it's something we have to fight for. “Privacy is no easy thing to obtain,” she says, “let alone maintain.” She offers an example of being contacted at her work by a business regarding a piece of property she owns. For this business, simply searching the internet for her name revealed her place of employment, where it was located, and her work phone number. “When they couldn't reach me directly, they called the employer's main line and

gave some excuse or explanation about needing to reach me.” Social engineering coupled with digital tools like the internet makes privacy, as she puts it, difficult to both obtain and maintain. “I don't like this and find that it can be sometimes more difficult for me to accept than others,” she tells me, revealing her disappointment that so many people blindly turn their information over to corporate entities through their insecure or naive uses of the internet. When I also ask Olivia if harassment (experiencing it, witnessing it, or simply knowing about it) has influenced her behavior online, she says, “Absolutely.” Notably, she tells me that it has a great effect on what she says and when she says it, as evidenced by the story she told me earlier in our interview about scheduling tweets to come out at certain times in avoidance of harassment. Additionally, “I self-censor even more than I might have in the past,” she says. By this, she means that she specifically avoids talking about topics “that might be too controversial or might garner the wrong kind of attention.” This pretty much cuts her off from talking about anything political. She follows this up by saying, “And I am not unaware of the language I used there, ‘the wrong kind of attention.’ That's a phrase that popped up in a lot of school and college trainings and warnings about avoiding rape.” Directives from these trainings, like being told “not to dress too provocatively,” have been cemented in her mind as appropriate strategies for avoiding “that wrong kind of attention,” strategies which put the cultural onus on women to avoid being raped rather than on men, the most common perpetrators of rape, to not rape. Sadly, for Olivia, this logic extends into her approach to participating and interacting online. “I try to not speak too provocatively,” she says, “to avoid the online equivalent.” Finally, I ask Olivia if online harassment, again, either experiencing it, witnessing it, or simply knowing about it, affects her emotionally. She says, “Absolutely,” and goes on to explain,

    It plays on my anxieties, it plays on my fears, it makes me doubt whether anyone can be safe online and, while before, I might have expressed opinions or spoken in solidarity on matters of importance, I balance all of this against a desire to not draw attention to myself and not become harassed myself. I despise that change. I feel I am being weak and without conviction and wish to be otherwise. So, it's an ongoing battle about to which degree I avoid anything that could draw attention and possibly conflict and to which degree I throw caution to the wind and say or do what I want and take things as they come. I admire those who achieve the balance. Sadly, so few do.

Ella's Story: Self-Erasure in Avoidance of Online Harassment

Ella, like Olivia, has seen what sexist online harassment does to women through friends' experiences as well as her own. As a result, Ella has learned to leave as few digital tracks as possible. Her job requires her to frequently appear in the media—on television, in magazines, in newspapers, and on the radio. She's extremely cognizant of her public persona, and for that reason, she is careful about what she adds to the internet as her “real” self. But her skills regarding extreme privacy were honed as a result of harassment experiences. At the beginning of

our interview, she explains, “I'm actually not on a lot of social media and I have a number of reasons for that. One of which is avoidance of harassment. The other is that, because I'm a public figure in my work, I don't want to be identified personally with things that I am saying online.” She goes on to say that most of her reasons are about privacy, but she also says that the public aspect of her job means she might be more likely to be “a target for harassment.” So Ella takes measures to mitigate that risk. “The first thing is that I'm not on a lot of social media,” she tells me. “I don't have a Facebook account. I don't have a LinkedIn account.” But she does have two Twitter accounts, one locked, started in 2009, and one unlocked, which she created in 2013 when she was toying with the idea of self-publishing short stories she writes in her spare time. As of today, her public account remains unused. I ask her to elaborate on the choice to have the two accounts, and she tells me that her locked account has always been locked, and even though she does have an unlocked account, both accounts are anonymized. “For both of them,” she says, “I do not have my name associated with them and the avatars are not my picture or even a picture of a human. Specifically,” Ella elaborates, “the parts that are viewable to the public are not gendered in any way. And that's deliberate.” Ella tells me that her choice to suppress her identity as a woman by using avatars that are non-human and genderless is partly about privacy but also “because I don't want to attract the sort of harassment I see happening when people are known to be gendered on Twitter as female.” Echoing many of the sentiments expressed on the survey and in Olivia's story, Ella tells me that she knows people, both offline and through Twitter, who “have had to either leave Twitter, take extended breaks from Twitter or locked their account specifically because they've received harassment.” Her choice to remain locked isn't explicitly tied to a personal harassment experience on Twitter; rather, she has witnessed harassment time and time again, making her justifiably sensitive to the prospect that it could happen to her. I ask Ella to describe the people she has seen get harassed on Twitter and she tells me they're exclusively women, but women of all different backgrounds. “Queer women, straight women, women of color, white women…” she trails off. “This is actually a fairly common thing among the people that I'm connected to on Twitter, women at least temporarily going locked. And of the men I'm connected to on Twitter, I don't recall that ever happening, any of them choosing to do that.” She pauses before reiterating that leaving Twitter, permanently or temporarily, or “going locked” as she puts it, “is a common occurrence among the women I'm connected to on Twitter.” Our conversation shifts as she tells me a story about an inciting incident from her college years in the early 2000s that changed the way she thinks about privacy and the internet. In her own words, Ella said,

    So, in college, unbeknownst to me, a long-term boyfriend I had at the time who was helping me with my computer basically granted himself superuser access to my personal computer so that he could, without my knowledge or consent, log in remotely and

    basically see anything that I had on my computer. We were together for a couple of years and he had done it early on in the relationship. I didn't learn he did that until after we broke up. After we did break up and had been broken up for some time, he contacted me to say he needed me to change my password. He revealed it to me when he pleaded with me to change my password because he couldn't stop himself from logging onto my computer, and it was bothering him. He needed me to prevent him from doing that. He revealed to me he had installed this stuff on my computer and told me how to change it because he was still able to learn about me even though we weren't connected in any way anymore, and he wasn't able to stop himself from doing it.

This experience is one that looks different from the other stories of harassment participants shared with me throughout my research. Up to this point, Ella was the first woman who (officially) talked to me about being stalked by a boyfriend using digital means. This moment in her past was pivotal for Ella in terms of her digital literacies. After pausing following the conclusion of this story, she solemnly tells me, “Every computer… every account I've ever opened up online, every new computer I purchase, I think about that experience.” This is something that has traumatized her well into adulthood. “I still live with the…” she pauses again before going on, “I want to say paranoia but it isn't because the funny thing about that experience was having your paranoia confirmed that it was actually happening.” When Ella says this, I'm immediately reminded of my own thankfully short-lived experience with sexist online harassment and the paranoia I felt for days afterwards, reverberations of which I still feel today. “During the course of our breakup, he would say things to me and I was like, how could you actually know that? And then later to have it confirmed that he was spying on me,” Ella continued, audibly exasperated. She circles back to her overarching point that this experience of digital stalking largely shaped how she interacts with hardware and the internet. “So now, pretty much every technological move I make, whether it's opening a new account or changing my passwords or buying a new computer or when I leave a relationship, or even while I'm in a relationship, the kinds of things that I do about securing my technology... that has informed my experience ever since. That was over fifteen years ago.” At this point in our conversation, I'm thinking a lot about the nature of definitions. Again, Ella's story about a long-term boyfriend installing backdoor surveillance mechanisms on her computer isn't necessarily one that I would typically see in my research about sexist online harassment because so much of the conversation is dominated by experiences of strangers, often anonymous, lobbing insults and threats at women via social media or email. In my view, Ella's story absolutely meets the criteria of sexist online harassment in that her perpetrator surveilled her whereabouts, physical and digital, without her knowledge or consent, and in doing so reified cultural patterns of men's subjugation of women and the treatment of women like property (see chapter one). I reveal to Ella that I purposely don't define harassment for my participants precisely for moments like this one, because people usually have a very specific idea of what harassment is and what it encompasses, shaped by popular discourse or cultural connotations.

Typically, I tell her, “when people think of harassment, they think about sexual harassment or workplace harassment or cussing at someone. This particular instance, though, is an example of a silent form of harassment.” Digital technologies, I say, have allowed people to execute harassment in ways that we might not immediately think of as “harassment,” changing the nature of its definition and scope. Ella responds, “This [instance] is kind of... it's stalking. But like, secret stalking, so that I couldn't know that it was happening. So, yeah, that would be the analog. Stalking is harassment, right?” I “mmhmm” in agreement as she moves on:

    If I were to tell the parallel story of like, this same individual also hung around outside the one exit to my dorm knowing I would have to leave my dorm and confront him. He planted himself there. That would be understood as a harassing thing to do. But if someone were to just ask me like you did at the beginning, ‘tell me about your experiences about harassment online,’ the thing that I think of are comments that people leave on posts that people make or doxxing or stuff like that. I wouldn't have necessarily, on my own, thought of this example that has been actually really instrumental in pretty much all of my behavior around technology. And not just technology but relationships generally since then.

It's important to dwell on the point Ella implicitly makes here because it encapsulates why work on sexist online harassment is so necessary: she notes that the scope of conversations surrounding online harassment does not include her experience, and therefore, she hasn't necessarily thought of it as harassment. In other words, our cultural narratives about online harassment shape victims' perceptions about who it affects as well as its severity, validity, and extent. As Ella indicates, and as represented in Olivia's story, there seems to be a popular dichotomy made between offline and online harassing actions; Ella was comfortable categorizing her harasser waiting around at her dorm uninvited as stalking, and while she knew that his digital stalking was wrong, she wasn't quick to deem it in such severe terms, perhaps because she had never seen those kinds of acts described in that language. In short, and to evoke the discussion in chapter one regarding the importance of our terminology: words matter. Ella goes on to reflect on additional ways her experience of online and offline stalking has shaped her digital habits and says this experience is another reason she avoids having social media accounts. “I know that if I had a Facebook account, this person in particular, but really anybody, could take whatever it is that's publicly viewable and use it,” she says. “I'm aware of the things that I can't control about myself online. I am not constantly but fairly regularly monitoring my Google search result of my name, and the image results of my name. Having had this experience, like I said it was a long time ago, but I still worry and think about this particular maladjusted individual and what they might do.” Ella also recognizes that there are things that get published to the internet about her that are beyond her control, largely because of her profession. “It's hard,” she tells me, “because I'm in the news because of my job and there are pictures of me

that are published because of my job, and I can't control that. But what I can control is that I don't myself put anything out on the internet that is publicly viewable.” Ella's point about control speaks to the evolution of cultural norms surrounding the internet and our relationship to it. It's expected, to a large degree, that information about us will appear online without our contributing it. As a result, we've culturally shifted to a model where our control of personal information is exercised through opt-out methods, when and where available, and through our choices of what we share online. For example, it's not uncommon for job seekers to be given advice that, in order to control the top Google hits when someone searches your name, you should contribute by creating a website, a Twitter account, or a blog and craft those spaces in ways that shape a positive image of you (Ambron, 2018). Ella, in contrast, withholds information by limiting her digital output, whether that be a website, social media accounts, or even reviews left on Amazon, Yelp, Google, or other public platforms. As if this traumatizing experience Ella had with her college boyfriend weren't enough, she also shares with me the time, also in college, that another former boyfriend doxxed her. Setting the stage, she tells me that although she was in college “before social media was a big thing” (Facebook debuted in her junior year), the culture at her school was fairly digitally social. Lots of people shared what they were up to through blogging platforms like LiveJournal or Xanga in much the same way someone might do on Facebook or Twitter today. Her doxxer had his own blog “where he talked about his life, among other things, and it was pretty popular and well-read among people on campus.” One of the things his blog became known for was how he would write, as Ella describes it, “in plain view” but also “put personal stories in the comments, commented out in the actual coding of the website,” hidden from the front view of the page. “People who read his website and were a little tech savvy could go read these things that wouldn't necessarily appear in text on the website but were embedded in the code of the website.” Ella and this person broke up after having casually dated for less than three months, and, upset by the breakup, he started spreading vicious and bizarre rumors about her “both in plain view and in code on the website.” His harassment of her, then, was embedded in the actual structural fibers of the blog. This experience didn't necessarily stay with her as deeply as the first she described to me, but both definitely revealed to Ella the ways in which men, especially tech-savvy men, can use technology and the internet against women. Such revelations have had profound influences on her relationship to the internet, technology, and, frankly, men. We return to talking about Ella's current use of social media, which again is wholly done through a locked and anonymous Twitter account. I ask her how online harassment, either experiencing it or simply knowing it's a problem, has impacted or changed how she uses social media or interacts with others on Twitter. “For example,” I offer, “does it influence what you say or when you say it?” “Yes and yes,” Ella replies. Online harassment “means I don't say very much on social media at all. And what I do say, I put behind a lock.” She tells me about a personal rule she has developed: she only approves followers whom she has met in person,

We return to talking about Ella’s current use of social media, which again is wholly done through a locked and anonymous Twitter account. I ask her how online harassment, either experiencing it or simply knowing it’s a problem, has impacted or changed how she uses social media or interacts with others on Twitter. “For example,” I offer, “does it influence what you say or when you say it?” “Yes and yes,” Ella replies. Online harassment “means I don’t say very much on social media at all. And what I do say, I put behind a lock.” She tells me about a personal rule she has developed: she only approves followers whom she has met in person, explaining, “I’m not just gonna follow a friend of a friend. I want to have my information on lockdown.” Strictly following this rule, she says, mitigates the risk that her social network on Twitter expands such that suddenly she’s sharing personal information with strangers, some of whom might have “nefarious intent,” echoing similar sentiments shared with me by Olivia. It’s also important to Ella that she talks with her friends about her personal choice to use Twitter in the way that she does. If she has a relationship with someone offline, “then I can explain to them in person what I’m comfortable with.” She gives me an example of the kind of thing she wants her followers to understand about her social media use:

Everyone who follows me on Twitter knows that I’m gonna talk about my work on Twitter but that doesn’t mean that it’s okay for you to repeat what I say about my job elsewhere. That’s the kind of thing, that sort of etiquette isn’t baked into Twitter. Having that sort of rule allows me to run over that with people before I allow them to see what I have to say.

Making known how she uses her social media and establishing a certain amount of trust with someone before she approves them as a follower is, in a sense, an anti-harassment strategy. As discussed in chapter three, Tracy and Kate and many of the survey respondents also have a multitude of anti-harassment strategies, but typically, they’re deployed after harassment has already happened. For Ella, and much like Olivia, she takes preemptive measures. Yet her strategies are predicated on removing herself from visibility to a broader network of people and discourses, and, in some ways, they don’t even succeed in shielding her from the harm of online harassment. “Even just having that strategy means,” she says, “I am, every day, online operating from a place of fear.” Ella sees what friends or public figures who have experienced severe sexist, homophobic, or racist online harassment have gone through, and it scares her. She says,

For me, the witnessing part or witnessing even women that I don’t know personally but are Twitter famous, like Zoë Quinn or various women who are nerds or in the science fiction community or in the tech community and seeing what happens to them, seeing them get harassed off of Twitter or off the internet or having to move. Seeing these things, the emotional fallout from that is one of fear.

Much like Tracy, Kate, and Olivia, Ella’s awareness of sexist online harassment makes her fearful to participate or interact with anyone online. In Ella’s case, a lot of this fear has been facilitated by simply seeing other women get harassed. Her own personal experiences with being stalked and doxxed in college, she says, instill fear and paranoia, causing her to be extremely cautious with her technology use. For her, these emotional resonances translate into “doing as little of public interacting on the internet as possible precisely because of those things.” Remaining silent or hidden from public view is what keeps sexist online harassment and all of the emotional fallout that comes with it at bay. Reminded of my discussion with Tracy about the advice that has incensed her (and me), that “women should just log off” or “don’t feed the trolls,” I ask Ella what she thinks about the

way women are told to deal with sexist online harassment by removing themselves from public space and discourse. After all, “it’s damn near impossible to not be online in this day and age.” Ella, being someone who has gone to great lengths to minimize her online presence and actions, has a lot to say about this advice. “I’ve come as close to that I think someone my age can, who is a professional, in that I’m not on Facebook. I’m not on LinkedIn. Everything I do online is pseudonymous.” And while this is a personal choice, it has professional consequences. “Just the other day, I revisited this argument with myself about whether or not I need to have a LinkedIn account,” she says. I ask her if it’s typical for people in her industry to have a LinkedIn profile. “It’s not required, but a lot of people do,” she explains. “I was just at a meeting where we talked about the ways to use LinkedIn to advance or bolster the organization I work for. So, I had this internal struggle of like, I’m not doing my bit because I don’t have a LinkedIn account where I’d basically be promoting my organization.” It’s a concern for her because she doesn’t want to be viewed by her coworkers as someone who isn’t part of the team. She also notes that LinkedIn can be instrumental in recruiting “good people” to work for her organization, but not being on LinkedIn makes it difficult to network in those ways. “That’s one avenue of professional development that I cannot participate in because my strategy to protect myself is to just opt out.” Ella points out that advice to women to stay offline, “even if it were efficacious, which it’s not, also creates its own set of problems. You’re basically saying, if you’re a woman or a person of color or queer, you’re more likely to have these problems from being on social media so you just shouldn’t be on social media.” This line of thought creates, in Ella’s words, “a snowball effect,” which she contextualizes with her LinkedIn example. If these groups that are more susceptible to online harassment stay off of social media as a strategy, and organizations like Ella’s are trying to recruit job candidates, that means that those groups—women, people of color, and LGBTQ people—are less likely to be recruited for jobs. “This is an example of how the structural things in place that keep people marginalized in meatspace60 replicate onto online spaces,” she says. “And that’s what we’re telling people, ‘you should just opt out of online space.’ Well that’s just a way of perpetuating the marginalization of women, people of color, and queer people. Now there’s one less channel for us to not work that you have.” Indeed, Ella reveals the ways that our advice to victims of sexist and racist online harassment keeps systems of inequality that underpin so much of culture, both online and off, firmly in place. I also ask Ella if she has seen any harassment among the small group of people she does include in her network on Twitter, and while she doesn’t immediately consider it harassment, she mentions that she sees a lot of subtweeting, a cultural practice on Twitter of tweeting about someone without tagging them or directly naming who is being talked about. Subtweeting is akin to gossiping but doing so in plain view knowing full well that the person being gossiped about will see it. Ella admits, for her, subtweeting probably falls more in line with general antagonism

60 “Meatspace” is a term coined by science-fiction author William Gibson to mean the offline world. It’s used to signify the antithesis of the internet, or physical space offline.

than it does with harassment, but she wavers on this point. She works out her thinking out loud: “Okay, somebody at work… if I were in a cube and my coworkers were in the cube next to me talking to a third person about me purposefully loud enough so that I could hear them saying nasty things, I would consider that harassment,” she says. “But it’s your instinct to not consider that harassment when it’s in a digital space?” I ask. “Yeah,” she replies. This moment in our interview was one of a few like it, where we went back and forth about what can be considered “harassment.” Ella was quick to want to comparatively analyze offline analogs to some of the examples of online harassment we talked about. In these instances, comparing the two seemed to complicate or change her defining criteria for what constitutes online harassment. She said she was hesitant in some respects to put her three personal experiences, the digital stalking, the doxxing, and the subtweeting, all in the “harassment basket,” and I ask her why. She responds,

I think there’s two reasons. One of them is that, at least in the first example, it’s clear to me that those are fucked up things that shouldn’t happen. But they seem distinct fucked up things that shouldn’t [be classified as] harassment necessarily. Not saying that they don’t rise to the level of harassment, but maybe harassment isn’t the right word to describe what they are, and I don’t know what that word is. The other reason... I think it’s the same reason I’m resistant to describing things that have happened to me as rape, because I compare what has happened to me to what has happened to other people, and there’s this internal monologue of, ‘what happened to me was not comparable to what has happened to other people.’ It would feel like I’m stealing something from those people who were wronged more than I was, to call what happened to me harassment.

I sympathize with Ella’s feelings here because I too find myself hedging away from the language of sexist online harassment when describing some of the things I’ve experienced. Though sometimes I wonder if ranking our harassment experiences, labeling one worse than another, does more harm than good. Certainly there are levels of severity influenced by hierarchies of gender, race, and sexuality, as I’ve discussed elsewhere in this dissertation. But I also wonder if the general attitude that sexist online harassment isn’t a serious issue causes those who experience it infrequently or in small doses to believe that their experience doesn’t matter or isn’t somehow representative of larger cultural issues. I was surprised, frankly, to hear Ella say that her experiences with stalking and doxxing are not on par with others’ experiences or are somehow less severe. As we talked more, it became clear that Ella is sensitive to the problem of those who co-opt the oppression of others for personal gain. “I don’t want to cheapen the victimization of other folks by lumping my experiences in with theirs,” she says. “And yet you feel like you don’t have the language to describe your experience?” I ask, gesturing towards her comment that “maybe harassment isn’t the right word to describe [my experiences], and I don’t know what that word is.” “Yeah,” she replies. I talk to her about my research and how, at the time, I was (and, at times, still am) greatly struggling with what to call these acts. In interviews, I was using the

phrase “online harassment” sans qualifiers like sexist or racist because I wanted women to define online harassment in their own terms through their own experiences. Tracy, for example, was quick to attach the notion of violence to online harassment because of the barrage of physical threats she faces and the physical effects she felt as a result of sexist online harassment, like depression and anxiety, while Kate didn’t explicitly categorize her experiences as violent despite being threatened with violence. I bring up the idea of violence to Ella and tell her that I find it interesting how quick people are to negate violence as a descriptor for something that happens online despite there being violence contained in the language of a lot of online harassment and a physicality to a lot of the effects of online harassment. To this point, Ella circles back to the first story she shared with me about the ex-boyfriend who had installed surveillance software on her computer. His hanging around outside of her dorm waiting for her to come out was “physically menacing to me,” she says, “even though he didn’t touch me.” But, she points out, “there’s something about the word violence that means ‘physical,’” which makes her hesitant to attach “violence” to her experiences. “I would be less hesitant to describing those experiences as emotionally violent or psychologically violent or abusive. Abusive fits for me for all of those examples,” Ella says. Ella still has reservations about describing her experiences as harassment at all, and she hypothesizes that might be because of where these experiences took place (online) and how many times (a few). But as she talks through her feelings on this, she realizes that our propensity to take physical space more seriously than online space, in many facets, may be an influence on her thinking:

When I think about the word harassment and what it means to me, one way that word ‘harassment’ comes up is street harassment. That could be just a one time thing, and I have no problem calling that harassment. One dude catcalls me and I’m like, that guy just harassed me. But, also there’s an aspect of like… catcalling is ubiquitous. Even if this one guy only does it one time. When I think about sexual harassment, I think primarily about work environments. There are other kinds of sexual harassment, obviously, but that’s the one that’s most salient in my brain. And again, there’s something there around either it happening habitually or there’s something about the environment at work, that it’s in the air and it’s part of the culture that it’s a ubiquitous thing. That feels distinct in some way from some of those examples that we talked about because it’s like a single action that happens but has had consequences for me ever after. As I’m thinking through that, I’m like, well that’s also ubiquitous because I just got finished describing to you that my entire behavior in the internet has been conscientiously focused and changed because of those things in the same way that when I walk home from work, I have a particular route that is designed to avoid catcalling. Cognitively, there’s something about the fact that it’s an online space and it feels, because it’s relegated to online space and we privilege physical space over that, that like… it feels like it’s a different category. But honestly, as I’m talking through this I realize that it’s not.

The weight of Ella’s story, for me, comes in two forms. Chiefly, the instances in which she was stalked and doxxed via technology and the internet have profoundly changed the way she interacts with technology and the internet. Arguably, this has made her a more reflective user when it comes to privacy and surveillance, but it has also shaped her in ways that remove her from participating in public discourse online. Secondly, Ella’s examination of these experiences has always operated within a framework that positions offline spaces as holding more importance than online ones and excludes certain acts from the spectrum of what’s popularly considered sexist online harassment. In the final section of this chapter, I’ll unpack what we can learn from Olivia and Ella’s stories about online harassment’s short- and long-term effects.

The Complex Legacies of Online Harassment

Observations made by Olivia and Ella point to several incredibly important takeaways that add to our understandings of online harassment:

1.) Women don’t have to have first-hand experience with harassment in order to feel its effects, as witnessing or knowing about online harassment can be traumatizing, fear-inducing, and silencing. In both Olivia and Ella’s cases, having seen other women experience online harassment, both personal friends and more high-profile instances, profoundly shaped how they navigate technology, online spaces, identities, and discourses. In short, women like Olivia and Ella, those who are personally and professionally impacted by experiencing and witnessing the epidemic levels of sexist online harassment, are frequenting the public areas of online platforms less and less through locking, silencing, and avoidance.

2.) Online and offline harassment are deeply connected, and it’s difficult to know where online ends and offline begins. All four of the women I’ve discussed in chapters three and four have suffered “offline” as much as on. Tracy has battled bouts of depression. Kate has struggled with anxiety and insomnia. Olivia has deeply feared being doxxed. Ella laments the workplace ramifications of her lack of online presence. These examples merely scratch the surface of the ways in which concerns about online harassment seep into women’s offline lives. And yet, popular discourses dichotomizing online from off drive perceptions that, for example, offline forms of harassment are more serious and consequential than online ones. Perceptions such as these influence how women think about and share experiences of online harassment.

3.) Online harassment alters women’s behaviors in reactive ways, often erasing them from public view. For Olivia and Ella, consciously taking preventative measures against online harassment has significantly decreased their circulation and participation in online spaces. The removal of women from these spaces only helps to maintain digital environments where women aren’t welcome and therefore have limited influence in these realms. It’s also important to note that while Olivia and Ella described to me their strategies of avoiding certain platforms, communities, or topics, neither described a space

that they’ve found as a replacement where they feel like they can freely participate and circulate. The fact remains, our culture has a shortage of environments, both online and off, where women feel safe and are empowered to be influential.

4.) Our habit of inaccurately describing online harassment or only sharing stories of what’s considered to be the “most severe” shapes cultural connotations of what “counts” as harassment. The creation of a hierarchy of harassment experiences, then, may lead to some women understanding their harassment experiences as trivial or not understanding them to be harassment at all.

These are enormously complicated problems, but it’s evident that by talking to women about issues of online harassment, we continue to learn more about the intricate nature of the issue and its effects. In the next chapter, I’ll discuss what is (and is not) currently being done about these problems on platforms through policy enforcement and functionality design. I’ll also discuss how teachers can introduce topics of online harassment in the classroom with the goal of fostering reflective thinking in students about online environments, discourses, and behavior.

Chapter Five

Avenues for Change: Policies and Pedagogies of Online Harassment

Online harassment, as seen in the previous chapters, can have devastating effects, yet we are still working to understand all of its manifestations and how mechanisms might be designed and placed both online and off to curb the issue. Online abuse is mediated through social media, making such platforms interested parties in harassment. Platforms, then, play a significant role in the proliferation of abuse and abuse cultures, much of which develops through how social media are designed and how they are governed. Many platforms have implemented anti-harassment structures to varying degrees of success, but we have yet to see social media spaces that are built with anti-harassment in mind. Further, as user agreements and terms of service statements continue to evolve along with the change and growth of individual platforms, we must look to see which policies are working and how they are (or aren’t) being enforced. In this chapter, I’ll begin by discussing proposed or implemented “solutions” to harassment and anti-harassment mechanisms that have, thus far, largely failed to make a meaningful difference. I’ll then present perspectives offered by survey respondents about what can be done to help slow or stop online harassment. Finally, I’ll discuss an interview with a digital writing teacher who has had experience with online harassment in her course, as an activity gave way to several women students being harassed in real time during an in-class presentation. I’ll conclude this chapter by offering steps we can take in our digital writing classrooms to move our students towards more critical and responsible uses of social media, and thus, more equitable and inclusive digital spheres. I’ll also offer suggestions for what we can do to intervene in online harassment beyond the classroom in other professional and personal capacities.

“A flaw in the system is now a feature”: Failures to Implement Working Solutions

As I’ve argued throughout this dissertation, platforms, in their design, governance, and cultures, re/create social inequalities along axes of identity, influencing the inherent “ownership” of these platforms. This sense of ownership emerges through a variety of economic and cultural practices including platform design and governance. For instance, while the economic dimensions of platforms such as Pinterest attribute their ownership to an overarching company or financially invested parties, cultural practices fortified by the platform’s design and governance structures privilege certain users, giving the network a greater sense of who has cultural “ownership” of the space (see chapter one). I argue that the emphasis on designing platforms to meet the needs of those who have “ownership” of these spaces, cultural, financial, or otherwise, is part of what leads to a failure to implement working solutions to the harassment epidemic. Twitter has incorporated a number of mechanisms aimed at fighting online harassment. For example, they’ve developed features such as the “mute” function, which allows users to

“mute” another user without blocking them or to prevent certain words or phrases from showing up in their feeds. They’ve also made strides to revise their abuse and hateful conduct policies, as discussed in chapter three. Despite these features being developed as anti-harassment mechanisms, they’ve had, as we know, little to no impact on severe, large-scale, and targeted abuse. Recently, a former Twitter executive suggested that their failure to build effective anti-harassment structures is a result of Twitter being built on Ruby on Rails, a relatively simple web application framework that made it difficult for the platform to scale and address emerging issues. Speaking anonymously to journalist Maya Oskoff, this former executive likened Twitter’s original framework to “a Fisher-Price infrastructure” and criticized the company for adopting “non-scalable, low-tech solutions” (Oskoff, 2018). However, Twitter has, over time, progressively phased out Ruby on Rails as it upgrades its systems and features using other frameworks. One of the most prominent mechanisms they’ve developed in response to harassment is the reporting system, created in 2013 after the mass targeted abuse of Caroline Criado-Perez, a journalist and feminist activist who campaigned to put Jane Austen on British currency. Criado-Perez received upwards of 50 rape threats per hour after the Bank of England announced intentions to adopt her plan (Battersby, 2013). In its current iteration, the Twitter report enables users to file a report on violations having to do with impersonation, trademarks, counterfeit goods, harassment, privacy, private information, spam, self-harm, advertisements, and Twitter Moments. A report of harassment can be filed when a user believes another is “engaging in abusive or harassing behavior.” The form requires the user to fill out a number of multiple-choice and open-ended questions, beginning with noting what kind of abuse they’re reporting, defining who the actions are directed against, and identifying the offending user, as seen in figure 5.1:


Figure 5.1: The first three questions on the Twitter report form for harassment, asking users to define what they’re reporting, who the abuse is targeting, and who the abuser is.

The form goes on to ask users to include URLs to specific tweets in question before providing an open-ended text box for users to further describe the problem. Finally, users enter their email and Twitter handle before electronically signing the form. From there, the process becomes hazy as to who assesses the report and what their criteria for doing so are. Charlie Warzel, who has made a career out of reporting on Twitter’s harassment problem alone, points out that the sheer size of the network and the number of tweets posted on a daily basis make it impossible for the platform to “review each and every suspicious tweet with a human eye” (Warzel, 2017, n.p.). But Twitter’s methods are not made public. Clearly, as Warzel notes, not all reports are given time and attention from human content moderators, and even if some are, many reports filed are met with inaction from Twitter. Warzel’s investigation into this issue found that “a concerning number of reports of clear-cut harassment still seem to slip through the cracks” and in many cases, users received a stock email from Twitter notifying them that Twitter’s investigation found no violation of its terms of service (Warzel, 2017, n.p.). In doing preliminary research about violation inaction, Warzel found numerous examples of Twitter doing nothing in the face of clear rule violations reported by victims. Some of these

violations include doxxing, violent threats, and “extensive, targeted harassment.” When asked for a comment, Twitter issued a statement which reads,

Twitter has undertaken a number of updates, through both our technology and human review, to reduce abusive content and give people tools to have more control over their experience on Twitter. We’ve also been working hard to communicate with our users more transparently about safety. We are firmly committed to continuing to improve our tools and processes, and regularly share updates at @TwitterSafety. We urge anyone who is experiencing or witnessing abuse on Twitter to report potential violations through our tools so we can evaluate as quickly as possible and remove any content that violates Twitter user rules. (qtd. in Warzel, 2017, n.p.)

These vague public declarations from both faceless spokespeople and founder/CEO Jack Dorsey himself (see chapter three) understandably frustrate users who have gone through the proper channels to help make Twitter a safer and more inclusive space. In Warzel’s survey of Twitter users who have experienced harassment,61 findings suggest that Twitter’s most common response to reports of harassment is to take no action at all. That inaction breaks down in various ways. As Warzel reports, 46% of respondents said Twitter took no action on their request, 29% said they never heard back from Twitter at all, and 18% were told that an assessment of the abusive tweet(s) found that they did not violate Twitter’s rules. Warzel also reached out to Twitter for a comment after providing them with a summary of the survey’s findings, and he received a statement back from Kristin Binns, Twitter’s Head of Corporate Communications. In keeping with Twitter’s typical style of vague responses, she said,

Safety is our top priority—we’re building better tools and processes every day. We can’t comment on a third-party survey, and its anonymous nature makes it impossible to verify data or corroborate response. While we know there’s still much to be done, we’re making progress toward giving people more control over their Twitter experience and to better combat abuse. (qtd. in Warzel, 2016a, n.p.)

Warzel argues that Twitter’s development of mechanisms that are meant to decrease the prominence of harassment on the platform “are a largely cosmetic solution to a systemic problem” (Warzel, 2017, n.p.). Further, users’ experiences of having their reports met with apathy and inaction tell us that Twitter’s methods for assessing and following up reports with meaningful action that upholds stated terms of service are deeply flawed. Inconsistencies in how these reports are handled suggest that building effective methods for dealing with harassment is simply not a top priority for the platform, despite their public declarations to the contrary.

61 The study surveyed 2,702 Twitter users, which, the researchers point out, is a fraction of the Twitter population. However, this is one of the largest and most diverse sample sets of any published survey of Twitter harassment to date. Of the users who provided demographic data, 26.3% identified as a racial or ethnic minority, 28.8% identified as a member of the LGBTQ community, and a significant number of respondents identified as female. Warzel reports the gender breakdown of respondents as 1,817 female, 720 male, 58 gender fluid, 26 transgender, 21 agender, and 27 not listed.

While the platform has taken marginal steps to provide precise language about what constitutes abuse (as discussed in chapter three), these policies are rendered useless if not enforced consistently and in ways that are congruent with the language that defines the policy. Users have noted that despite the changes to policy language, there still seems to be a lack of enforcement in many cases. This is evidenced by Donald Trump, who in November 2017 retweeted three anti-Muslim propaganda videos originally tweeted by Jayda Fransen, the deputy leader of a racist and fascist political group in Britain. Holding Trump’s activities to the standards set forth by Twitter’s newly revised policies, many felt he and Fransen were in clear violation on the grounds of promoting hateful conduct and threatening a group of people based on religious affiliation. Amidst the public outcry that Twitter hold Trump accountable for his violation, the company directed people to their Help Center (Larson, 2017), specifically the portion of their enforcement policy which says context matters in determining whether or not a use of the platform is in violation of the terms and that sometimes action is not taken on the grounds that the content in question “may be a topic of legitimate public interest” (“Our approach to policy…,” 2017). However, in an apparent reversal of their outward-facing decision about leaving the videos on the platform for further circulation, Twitter Safety tweeted a series of tweets which read,

Earlier this week Tweets were sent that contained graphic and violent videos. We pointed people to our Help Center to explain why this remained up, and this caused some confusion. To clarify: these videos are not being kept up because they are newsworthy or for public interest. Rather, these videos are permitted on Twitter based on our current media policy. We will continue to re-evaluate and examine our policies as the world around us evolves. We appreciate the feedback and will continue to listen.62

Ultimately, Twitter referred to their media policy as a justification on the grounds that the videos aren’t graphic or violent enough to warrant removal. And yet, the following month on December 18, 2017, Twitter suspended Fransen along with several other high-profile white supremacist individuals and organizations, including the official account for the American Nazi party (Titcomb, 2017). Fransen’s suspension meant all of her tweets were removed from the platform, including the anti-Muslim propaganda that Trump retweeted. Donald Trump remained unreprimanded, despite calls to suspend him as well. It’s important to note, however, that retaining Trump as a user is financially valuable to the company. James Cakmak, a financial analyst for an equity research firm, for example, estimates Trump is worth upwards of two billion dollars to the platform based on a number of factors including the immense amount of free advertising Twitter receives as a result of Trump’s frequent and controversial use of the platform (Wittenstein, 2017). As of March 2018, Twitter, a publicly

62 Twitter Safety [@TwitterSafety]. (2017, December 1). Retrieved from https://twitter.com/TwitterSafety/status/936669071243862017

traded company, is worth 23.3 billion dollars. Given the subjective nature of policy enforcement and the economic interests of the company, it’s clear some users have more influence, especially those that hold monetary and cultural value for the platform, and therefore can, at times, circumvent or avoid repercussions for violating policies. A platform that’s noteworthy for its relative success in stopping harassment from infiltrating and dictating the tenor of its community is the dating app Bumble. Founded by Whitney Wolfe Herd in 2014, Bumble is marked by its unique premise and its intervention in the gendered power dynamics of heterosexual dating, as only women are allowed to initiate conversations on the platform.63 Herd co-founded another popular dating app, Tinder, but left the company after suing them for sexual discrimination and harassment, a suit Tinder settled. In the wake of the high-profile lawsuit settlement, Herd faced an enormous amount of online harassment. She wrote about her experience for Harper’s Bazaar and shares,

For months, it was hard for me not to feel like all that ugliness was stamped across my forehead. I sank into a deep depression. I became an insomniac and drank too much in a weak attempt to numb the pain and fear I was experiencing. At my lowest point, I wanted to die. I was only 24, and already I felt like I was finished. That’s the poisonous power of online harassment and abuse—especially when it lands on your phone every morning and follows you everywhere you go. (Herd, 2017, n.p.)

It’s difficult for me to read her reflections on being harassed online and not see traces of the women I’ve interviewed in my work. Despite her wealth, support mechanisms, and position of power, Herd too suffered serious and long-lasting effects of online harassment. She discusses the pervasiveness of online harassment, arguing that, “the internet has democratized misogyny. A flaw in the system is now a feature.” Herd wanted to speak to this flaw in her development of Bumble. She says, “I channeled the pain of my own experience into trying to engineer a better way. And in building a remedy to the problems I’d faced, I took my power back.” Given the emphasis on women’s safety, comfort, and empowerment from the start, Bumble is well-known for its clear set of conduct guidelines, fearlessness in imposing content restrictions, and zero tolerance for abuse. Much of the language in their policies reflects standards across many social platforms (e.g., users can’t post content that is pornographic or depicts illegal activity), but Bumble is set apart in how strict it is in enforcing said policies. Of the platform’s firm zero-tolerance stance on abuse, Herd says, “There are no second chances. Harass someone on Bumble and you’re banned for life. Harsh? Maybe. But I feel strongly that we won’t end misogyny until we start holding each other to higher standards, and that starts with setting clear boundaries and enforcing them” (Herd, 2017, n.p.). Obviously, every platform is different, and therefore a cross-platform one-size-fits-all solution to fighting harassment is an impossibility, but Herd’s approach and vision, which seem to differ so wildly from those of the

63 In 2018, Bumble expanded the scope of the app to include same-sex dating. In these cases, anyone can initiate a conversation, regardless of gender identity.

CEOs and COOs of other social platforms, are a welcome divergence from the typical vague promises to make harassment intervention a primary concern. Given that Herd remains one of the only women CEOs of a mass social media platform, I’m reminded here of my discussion in chapter three about WAM’s recommendation to Twitter about diversifying leadership, an important first step to truly making inroads towards solutions. Herd made headlines again in 2018. In the wake of the Parkland shooting, which left seventeen people dead, a youth-led movement grew large and unyielding to hold elected officials accountable for their financial ties to the National Rifle Association and to enact sensible gun laws. Herd, in response to calls for a shift in American gun and violence culture, announced that Bumble would remove all pictures of people holding firearms from the platform, with the exception of military and law enforcement in uniform. Her decision is a controversial one. Sticking to her original principles of cultivating a safe community, Herd said, “We just want to create a community where people feel at ease, where they do not feel threatened, and we just don’t see guns fitting into that equation.” Further, she differentiated her approach to policy from that of other social platforms in saying, “compared to what’s going on with Facebook and Twitter, we take a very proactive approach,” and “if I could police every other social platform in the world, I would” (Hsu, 2018, n.p.). As seen in Charlie Warzel’s work on Twitter harassment policy definitions and enforcement, how social platforms are “policed,” in Herd’s words, is often not made transparent. Yet we are able to ascertain some of these practices through, for example, interviews with content moderators as well as leaked documents, helping to clue everyday users in to how platforms are governing communities in ways that aren’t necessarily articulated in their public-facing communications. Sarah T. Roberts’ (2016) work on commercial content moderation exposes the “dirty work” of the job, as moderators are paid low wages to review offensive content (oftentimes of a violent and disturbing nature) and make judgment calls about what should be allowed to stay and what should be taken down. This requires that moderators harbor a keen sense of the norms and values of a particular community, norms and values which may differ from their own moral codes. Roberts interviewed content moderators who work for a large social media company about the nature of their work, and their reflections reveal an unsettling picture of what kind of offensive content platforms are indifferent to. After one moderator described raising concerns at work, to no avail, about a policy that allowed blackface on the platform, Roberts writes,

Platforms make active decisions about what kinds of racist, sexist, and hateful imagery and content they will host and to what extent they will host it. These decisions may revolve around issues of “free speech” and “free expression” for the user base, but on commercial social media sites and platforms, these principles are always counterbalanced by a profit motive; if a platform were to become notorious for being too restrictive in the

eyes of the majority of its users, it would run the risk of losing participants to offer to its advertisers. (p. 152)

Roberts’ work not only gives us insight into the actual process content moderators go through when assessing content, but it also serves as a reminder that many times, these judgment calls are informed by shifting policies that are crafted with profitability, not safety, in mind. Ultimately, while many think of platforms as arenas to speak freely and openly, content moderation reminds us that these are curated spaces as well as money-making enterprises that profit off of the content users provide free of charge. The presence of racist, homophobic, misogynistic, and other offensive content on platforms is often there for a reason: because the platform decided it should be there, “predicated on a series of factors, including the palatability of that content to some imagined audience and the potential for its marketability and virality, on the one hand, and the likelihood of it causing offense and brand damage, on the other” (Roberts, 2016, p. 157). Profit, as in so many facets of our culture, takes precedence over ethics. Facebook, one of the largest social networks in the world, if not the largest,64 is a platform whose ethics have come into question many times since its founding in 2004. Interviews with their moderators paint a troubling picture about what it’s like to review content for the site. One former moderator described it as a grueling and emotionally taxing job: “you’d go into work at 9am every morning, turn on your computer and watch someone have their head cut off. Every day, every minute, that’s what you see. Heads being cut off” (Solon, 2017, n.p.). Recently leaked documents, including training manuals, spreadsheets, and flowcharts, reveal the platform’s disturbing criteria for what should and should not be allowed on the site. The documents outline situations that are permissible and ones that demand content be removed by Facebook’s already overworked and underpaid moderators,65 who are instructed to only remove content that’s been flagged and deemed as violating the guidelines. This means that if a moderator sees offending content, it should remain on the platform until a user has reported it. Once something is flagged for review, moderators are to use Facebook’s internal moderation manual to determine what should be left up and what should be taken down. The manual stipulates, for example, that graphic imagery of child abuse should only be removed if its sharing is done “with sadism and celebration” (“Facebook’s Internal Manual,” 2017, n.p.). The animal abuse policy follows similar guidelines. The manual reads, “generally, imagery of animal abuse can be shared on the site,” but “sadism and celebration restrictions apply” (“Facebook Rules On…,” 2017). Facebook’s “Credible Violence Abuse Standards” define what the platform considers to be a credible threat, which requires removal. The standards read,

64 Precise data regarding user numbers of various social media platforms is difficult to come by because of many influencing factors, including that these numbers are reported by the platforms themselves and because it’s difficult to estimate how many users on these networks are bots, duplicate accounts, or inactive. Mark Zuckerberg, co-founder and CEO of Facebook, recently said the platform is closely approaching 2 billion active users (Balakrishnan, 2017).

65 Some reports say Facebook moderators have as little as 10 seconds to make their assessment (Hopkins, 2017) and make as little as $15 per hour to review the gruesome and upsetting content (Solon, 2017).

We aim to allow as much speech as possible but draw the line at content that could credibly cause real world harm. People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways. We aim to disrupt potential real world harm caused by people inciting or coordinating harm to other people or property by requiring certain details to be present in order to consider the threat credible. In our experience, it’s this detail that helps establish that a threat is more likely to occur. (“Facebook’s Manual…,” 2017, n.p.)

These details, Facebook explains, reveal when content moves from a “generic threat” to a credible one. They give comparative examples of violating and non-violating threats; for instance, “I’m going to kill you John, I have the perfect knife to do it!” is considered to be a violation while “I’m going to kill you John!” is not. It’s the detail of the knife that makes the threat credible, in Facebook’s view. Timing is another clue that the manual says content moderators should use when determining whether or not to remove the content. A threat that contains achievable timing (i.e. “tomorrow, in 3 hours, next time I see you, when it rains”) is a violation, while non-achievable timing (i.e. “when pigs fly”) is not. The guide also says the threat “someone shoot Trump” should not be allowed (political figures are a protected category), but “to snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat” is okay (“Facebook’s Manual…,” 2017, n.p.).
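To make the leaked manual’s logic concrete, the sketch below renders its “detail” and “timing” rules as a toy classifier. This is purely illustrative: Facebook’s actual moderation tooling is not public, and the keyword lists are invented stand-ins for the cues the manual names.

    # Illustrative sketch only: Facebook's real tooling is not public.
    # The keyword lists are invented stand-ins for the manual's cues.
    METHOD_DETAILS = ("knife", "gun", "rope")  # a stated means makes a threat "credible"
    ACHIEVABLE_TIMING = ("tomorrow", "in 3 hours", "next time i see you", "when it rains")
    UNACHIEVABLE_TIMING = ("when pigs fly",)   # facetious timing stays a "generic threat"

    def is_credible_threat(text: str) -> bool:
        """Apply the manual's rule of thumb: a generic threat becomes
        credible when it names a means or an achievable time frame."""
        lowered = text.lower()
        if any(t in lowered for t in UNACHIEVABLE_TIMING):
            return False
        has_detail = any(d in lowered for d in METHOD_DETAILS)
        has_timing = any(t in lowered for t in ACHIEVABLE_TIMING)
        return has_detail or has_timing

    # The manual's own comparative examples sort accordingly:
    print(is_credible_threat("I'm going to kill you John, I have the perfect knife to do it!"))  # True
    print(is_credible_threat("I'm going to kill you John!"))  # False

Even in this toy form, the brittleness of such a rule is apparent: a threat rephrased to omit a means or a time frame passes, which helps explain why so much of the judgment ultimately falls to the overworked human moderators described above.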

66 While it’s beyond the scope of this dissertation to fully examine constitutional law as it pertains to free speech, it should be noted that there is a longstanding robust debate about the interpretation of the First Amendment and how those interpretations should interface with federal internet regulation. Lawyer and information privacy expert Danielle Citron (2014) argues that a more regulatory agenda as it pertains to online harassment, “would not undermine our commitment to free speech. Instead, it would secure the necessary preconditions for free expression while safeguarding the equality of opportunity in the digital age” (p. 190). As she notes, the U.S. has made significant strides to balance civil rights with the values of free speech, but we have yet to apply that thinking to the internet, and after all, “civil rights, criminal, and tort laws have not destroyed workplaces, homes, and social venues. Requiring free speech concerns to accommodate civil rights will not ruin networked spaces” (Citron, 2014, p. 192).

Free speech advocates admonish certain policies, both from entities like Facebook as well as the U.S. government,66 as harmful to our First Amendment rights, arguing that over-regulation of speech would stifle the free exchange of ideas. But these arguments lose sight of the fact that platforms like Facebook are private entities that are allowed, within legal reason, to make decisions about how content is shared, regulated, and censored on the site. Regardless of how the First Amendment is interpreted, Facebook doesn’t have to allow, for example, discriminatory language or other content that might, in some cases, be protected by the First Amendment in other arenas. It comes down to a question of not just the legal obligations that platforms have when it comes to regulating content, but ethical ones as well, particularly in light of how easily misinformation is spread via the internet and the effects that then has on public knowledge and opinion (Daniels, 2009; Meyer, 2018). PragerU is a right-wing website devoted to persuading viewers, through the use of short videos, of “the values that have made America and the West the source of so much liberty and wealth.” Their mission statement argues, “the greatest threat to America is that most Americans don’t know what makes America great. PragerU’s mission is to explain and spread what we call ‘Americanism’ through the power of the Internet” (“What We Do,” 2017). They distribute their videos via a YouTube channel, which as of March 2018 has 1,323,624 subscribers. In July of 2016, over forty of PragerU’s videos were set to restricted mode by YouTube, meaning they became demonetized and unavailable to users with certain parental settings. This was particularly troubling to PragerU, as they very clearly target young people. According to PragerU, 60% of their viewers are under the age of thirty-five. The videos are short (never longer than five minutes), animated, and use simple, accessible language. They also offer a student ambassador program for high school and college students that helps them “find like-minded students and allows them to share PragerU videos with their friends in campus clubs and events. They have access to special PragerU programs and resources, including staffers who help them navigate some of the challenges of being a conservative college student on today’s campuses” (“Frequently Asked Questions,” 2017). PragerU, in response to the restrictions placed by YouTube, created an archive of the videos in question on their website along with a petition for YouTube to lift the restriction setting on the videos. They write,

Conservative ideas are under attack. YouTube does not want young people to hear conservative ideas as they currently list over 40 PragerU videos—over 10 percent of our entire collection—under “restricted mode” making it difficult for many young people to access our videos. Many families enable restricted mode in order to keep inappropriate and objectionable adult and sexual content away from their children—not to prevent them from watching animated, age-appropriate, educational videos. (“YouTube continues to restrict…”)

Some of the titles of the videos that YouTube marked as restricted are “Is Fascism Right or Left,” “Gender Identity: Why All the Confusion,” and “Why Isn’t Communism as Hated as Nazism?” While these videos present themselves as legitimate inquiries into these questions and topics, a close examination of them reveals they are meant to intentionally mislead the viewer and propagate false information. For example, one of the restricted videos, titled “Are 1 in 5 Women Raped at College,” disputes statistical facts about sexual violence on college campuses and claims there’s no evidence that sexual violence is a cultural norm. The video also criticizes affirmative consent laws before claiming that campus judicial panels are “guided by rape culture theory,” where “due process is an afterthought,” which ultimately works to discriminate against young men who are accused of sexual assault (“Are 1 in 5…,” 2016). A screenshot from the video can be seen in figure 5.2:


Figure 5.2: A screenshot of PragerU’s video titled “Are 1 in 5 Women Raped at College?” depicting a woman on a campus judicial panel yelling “guilty because accused!” at three college men who have been accused of sexual assault.

In October of 2017, PragerU filed a lawsuit against YouTube and YouTube’s parent company Google, alleging unlawful censorship and free speech discrimination. The filed complaint builds an argument that “Google/YouTube operate the largest forum for the general public to participate in video based expression and exchange of speech in California, the United States, and the world,” and that “Google/YouTube have represented that their platforms and services are intended to effectuate the exercise of free speech among the public,” but, they say, “as applied to PragerU, Google/YouTube use their restricted mode filtering not to protect younger or sensitive viewers from ‘inappropriate’ video content, but as a political gag mechanism to silence PragerU” (Prager University vs. Google Inc., 2017, pp. 2-3). Topics of the First Amendment and free speech play a central role in the prepared complaint. The crux of their complaint is that,

This is free speech discrimination: censorship based not on the content of the speech but the perceived identity and viewpoint of the speaker. The law categorically prohibits this type of identity and viewpoint based discrimination and censorship. And the fact that this discrimination emanates from a company that holds itself out to the public as a committed defender and protector of free speech makes Google/YouTube’s conduct that much more unacceptable and dangerous. (p. 7)

Google responded to the lawsuit by pointing out that the restrictions only apply to users who have the restricted mode feature turned on—it’s a part of the platform that’s meant to give users the option to filter out “sensitive or mature content,” and they argue, “giving viewers the choice

to opt in to a more restricted experience is not censorship.” Their response goes on to invoke Congress in saying the restricted mode is the type of feature that the government “has encouraged online services to provide for parents and others interested in a more family-friendly experience online” (Gardner, 2017). Google’s response doesn’t directly address why the PragerU videos were put in restricted mode beyond the implication that the videos are “sensitive” and “mature,” though it might have to do with YouTube’s expressed initiative to curb the amount of misinformation that is spread on the platform. At the South By Southwest festival in 2018, YouTube’s CEO Susan Wojcicki participated in an interview with Wired’s Editor-in-Chief Nicholas Thompson in which she explained that she has started to understand YouTube as a library, a repository of information where people can come to learn and access information, and that recent events surrounding the spread of misinformation and its influence on public opinion and democracy have taught her “how important it is for us to be able to get that right—to be able to deliver the right information to people at the right time” (Thompson, 2018, n.p.). She goes on to explain that YouTube takes a clear free speech stance, but they do have a sense of responsibility to point users to accurate information, especially when it comes to “an important news event.” As such, she announced that pages housing conspiracy theory videos on YouTube will also point users to information from Wikipedia in order to “show alternative sources for you as a user to be able to look at and to be able to research other areas as well” (Thompson, 2018, n.p.). This is certainly a step in the right direction, but it’s not enough to combat the harm that the circulation of misinformation does. Though one has to wonder: does YouTube actually want to combat this problem, seeing as conspiracy theory videos and other sources of misinformation generate profit? As John Naughton (2018) points out, “our current crisis of disinformation and computational propaganda will not be resolved by just finding and publishing ‘the facts’” (n.p.). Further, does YouTube’s proposed strategy run the risk of driving traffic that has a vested interest in circulating misinformation to Wikipedia, given the audience for conspiracy theory videos? There are many mechanisms at work on YouTube that control how and when content is circulated, viewed, and shared, as evidenced by the restricted function at the center of PragerU’s complaint. Recommendations, for example, help drive users to particular videos based on a set of parameters that isn’t wholly advertised to users, as YouTube doesn’t publicly reveal what goes into the algorithms that structure what is suggested to us by the platform. Misinformation, like that spread by PragerU and other sources online, contributes to a culture in which actions are taken, public opinion is swayed, and policy is written based on falsehoods. It’s a serious problem, mediated by platforms, whose impacts we’ve yet to fully understand. While the PragerU lawsuit and misinformation on YouTube might seem only superficially connected to the topic of online harassment, the circulation of falsehoods and online abuse are very much entangled in more ways than are immediately apparent. Misinformation and the ability to easily share it online can be leveraged to spread falsehoods about an event,

individual, or groups of people, fueling hate and abuse campaigns. Further, claims of free speech discrimination are often leveled in an attempt to discredit the very real pain that is caused to victims of hate speech and violent threats, and can be used as a shield for the bigotry of hate groups (Daniels, 2009, p. 61) or sexual predators (Tufekci, 2017, p. 167). While free speech is an essential component of any functioning democracy and democratic civic space, it should not be used as an excuse to harm, intimidate, or silence others. Danielle Citron (2014) addresses concerns that any regulation of speech is a slippery slope and analogizes these stances to those of critics who, in the 1980s, claimed the creation of protections for women in the workplace from hostile sexual environments would “suffocate workplace expression.” She argues that today, we can say with confidence that

accommodating equality and speech interests of all workers (including sexually harassed employees) did not ruin the workplace. Although antidiscrimination law chills some persistent sexually harassing expression that would create a hostile work environment, it protects other workers’ ability to interact, speak, and work on equal terms. As we now recognize, civil rights did not destroy expression in the workplace but reinforced it on more equal terms. (pp. 192-193)

Women speaking out about online harassment and asking platforms to take greater action to ban users or remove threats are often accused of draconian attempts to stifle free speech (Jane, 2014b), despite the reality being just the opposite. Online harassment itself is a mechanism that silences and removes women’s abilities to speak freely. As Stephanie Brail (1996) says, “online harassment is, to some extent, already killing free speech on the internet, in particular the free speech of women” (p. 148). There are clearly many complexities surrounding harassment’s relationship to platform design, policy, and governance, including anti-harassment tools, internal review processes for claims of harassment, content moderation, federal law, and claims of free speech infringement. As I’ve maintained throughout the entirety of this dissertation: there are no easy answers; however, failures by platforms and our broader culture to implement working solutions continue to enable online harassment to dictate and dominate our digital spheres. In the next section, I’ll discuss what survey respondents think Twitter specifically can do to curb the problem of harassment.

Proposed Action: Perspectives from Women Online

Platform design and governance policies meant to intervene in and prevent online harassment often fail to make a meaningful impact while putting the onus to deal with the problem on the victims. When these designs and policies fail them, women are often told “don’t feed the trolls” or “just log off” (Phillips, 2013; Poland, 2016). This common advice makes a dangerous implication that there’s nothing to do about harassment beyond ignoring it. Bailey Poland (2016) writes, “too often, invoking the call not to feed the trolls is really meant to tell women, specifically, that we should stop acknowledging the trolls’ existence—and our own experience

with them—at all.” Ultimately, she says, “‘don’t feed the trolls’ is more often a way of saying, ‘stop making everyone uncomfortable by pointing out the abuse’” (p. 62). “Don’t feed the trolls” and other advice we give to women about online harassment risk essentializing harassment as invective we are expected to acquiesce to or dismiss as “no big deal,” while minimizing the damage it does to feminist opportunities for self-expression, action, and the sheer presence of women online. Amanda Hess (2014) argues we’ve been thinking about internet harassment all wrong in that the conversation often puts the responsibility on the victims by asking them to simply ignore it. Such deflections pass the responsibility from the harassers onto the harassed, deeply deemphasizing the effects harassment can have on a person’s well-being and her ability to “live freely online” (Hess, 2014, n.p.). This complicated issue gives rise to many macro-level questions. For instance, by ignoring the harassment, do we become complicit in its production? Are we, in the same turn of ignoring it, reinforcing it? Further, as seen in interviews with both Tracy and Ella, telling women to ignore the abuse or choose between putting up with it and logging off has serious implications. It is unrealistic and ineffective advice.

Ann Bartow (2009) argues many attempts to decrease harassment through prevention and intervention strategies have not only failed but, perhaps unsurprisingly, “provoked even greater amounts of abuse and harassment with a gendered aspect” (p. 391). Online communities of women, too, often disagree about how harassment should be dealt with. Susan Herring et al.’s (2002) case study of severe harassment in a feminist forum, for example, describes how intervention strategies varied greatly due to differences in the forum members’ ideological stances on how to handle harassing behaviors. Some promoted the strategy of ignoring the harasser, while others took to shunning the aggressor, using the same rhetorical tactics the aggressor used in their abuse, such as profane insults and attacks on character (p. 379).

Barak (2005) identifies three forms of offline sexual harassment prevention strategies and discusses how they may or may not work when implemented in online contexts. They are “legislation and law enforcement, changing of the organizational-social culture, and education and training of potential victims as well as of potential harassers” (p. 85). Legislation and law enforcement pose logistical problems for online spaces because interactions online can happen asynchronously and originate from many different locations across the world, making it difficult to hold everyone accountable to uniform laws. It’s also easy for harassers to evade law enforcement because of the anonymity the internet can afford its occupants (p. 85). Changing the culture is no small feat either, and “it is practically impossible to change the culture of the Internet because of its limitless space and multicultural users” (p. 86). Prevention through education seems like the most feasible strategy to translate from offline to online settings, yet it is another huge undertaking given the magnitude of the problem and the size of the internet. But how practical is it to understand online harassment prevention methods as ones that can simply follow what does or does not work offline?
How can we expect success in transposing offline tactics to online ones when gendered attacks in offline spaces still struggle to receive the prevention and intervention they need?

Telling women “don’t feed the trolls” and “just log off” is neither effective nor helpful, and many proposed intervention strategies create more labor for victims. But what is the victim’s role in prevention and intervention? What can platforms do? What solutions are available and waiting to be implemented? Women who responded to my survey were asked for their input. Of the 79 survey respondents, 64 provided a response to the open-ended question, what do you think can and/or should be done to curb the problem of harassment on Twitter? And while the question points directly at Twitter, there’s much we can glean from the responses that can apply to other contexts as well.

The most consistent response was that Twitter should have clearer policies and do more to strictly enforce them. Within these kinds of responses, participants noted specific actions Twitter could take, such as improving existing anti-harassment mechanisms and creating new ones, taking swifter action to ban offending users, doing more to moderate content, and being more transparent about what action is being taken regarding harassment. One woman, directly addressing multiple failures on Twitter’s part, responded, “There has to be more efficient ways to report harassment, it has to be taken seriously when it is reported, and they need to come up with algorithms to flag it that takes the onus off of the person who is harassed.” Also addressing the apparent lack of attention Twitter pays to harassment, another woman responded, “Twitter administrators should take harassment and identity-based threats against users more seriously—perhaps banning offenders would be a good start. Right now, it seems like harassment complaints aren't taken seriously and that there isn't much recourse for victims.” Stricter and faster banning of abusive users was a popular idea among respondents, mentioned in 26% of responses. Another woman addressed the complexities of trying to find a solution to online harassment in writing,

Well, I understand that policing these activities can be difficult because they're often very context-specific, but I think it's horrible that direct action generally only can be taken by the person being harassed (e.g., muting, blocking, etc.). It's maybe an oversimplification to say that Twitter needs to do a better job of regulating content, but there is definitely more that they COULD do: I'm thinking of what happened recently to Leslie Jones—that was harassment on such a massive scale that Twitter could have intervened beyond an “oh, we're sorry, that sucks.”

At the very least, as she points out, Twitter could be more transparent about what they’re doing to protect users like Leslie Jones who are on the receiving end of a high-volume harassment campaign.

Many women suggested small improvements that Twitter could make to the platform in order to decrease the presence of harassment. For example, one woman suggested that Twitter “provide more robust, flexible, and fine-grained tools to allow users to control their interactions with strangers. Ideas include superior user-controlled filtering of incoming interactions, per-tweet access controls, control over tweet reply threads, effective reporting, and active TOS

enforcement.” Another suggested that Twitter make changes to how blocked user status influences suspensions: “the more blocks you've received or reports should result in permanent suspension.” Another woman suggested they harness the power of existing algorithmic capabilities in order to better surveil abuse on the platform. She wrote, “Twitter and other social media do a somewhat decent job at mining content for targeted advertisements. It seems that the same tools can be used to monitor harassing language.” This suggested solution, which I sketch briefly at the end of this section, would help shift at least some of the obligation to report harassment from the harassed onto the platform.

Of course, some of the proposed solutions would require structural changes to Twitter’s workforce, and a few women acknowledged that staffing plays an important role in the success of anti-harassment structures. One woman responded that “broadening the definition of abuse or abusive online behavior would be a good start,” but Twitter should also take steps to increase “the number of customer service reps who respond to those complaints.” Such an increase requires a monetary investment, as pointed out by a respondent who said that Twitter should “offer clear standards on harassment & hate speech and then actually apply those rules. The problem is not whether something can be done, it’s whether Twitter is willing to spend the resources necessary to do so...which requires both spending money on staffing and losing a bunch of active but harassing users.” She went on to say that while there are plenty of viable solutions, “Twitter’s Wall Street problems are in the way of progress.”

In addition to calls for clearer policies and stricter enforcement of them, many women mentioned not just the size of Twitter’s workforce but the makeup of it, suggesting that when working to address harassment, Twitter should both listen to and hire people who are affected by this problem. For example, after suggesting that Twitter do more to enforce their policies and make it more difficult to set up multiple accounts (a strategy used by harassers to circumvent suspension), one woman said, “Twitter needs to listen to women and POC who are consistently harassed—we all know who they are—and have them give input for how the system is designed.” This call to action reflects a common sentiment among women and people of color on the platform who see abuse daily: that the people experiencing abuse know better than those implementing “solutions” on the platform, evidenced by Twitter’s repeated failures to make any headway, as discussed in the beginning of this chapter. Ultimately, as another woman notes, Twitter “ignores victims” and, in many ways, “they absolutely refuse to act.” She suggests, like many others, that Twitter adopt stricter guidelines that would help both users and moderators who work for the platform understand abuse and the kind of culture Twitter wants to establish. She also suggests, “Hiring those with experience with moderating ‘safer space’ communities would greatly help.”

One woman said that Twitter should rewrite their terms of service “to provide a stronger metric for harassment.” This, in turn, would “give teeth to the reporting mechanism.” Currently, she says, too many instances of harassment that are clearly in violation of the rules are being dismissed with no action taken. In her words,

An active interest in and solicitation of user-stakeholder

input on anti-harassment and privacy feature implementation and design by Twitter would be great—and probably prevent misguided reliance on features like “real name” and “verified identity” policies that only serve to increase the high stakes of targeted harassment. Something to fix the limitation being locked places on an account—it's a great protection from harassment but prevents interaction with public accounts and persons who can't be expected to follow everyone who wants to interact with them.

She makes many important points in her response. The first is that there’s a ripple effect to small decisions Twitter could make regarding their policies. By providing clearer language that is transparent about what does and does not constitute harassment, they would also be working to streamline their reporting process, which in turn could have an impact on the amount of time moderators have to thoroughly investigate a claim of harassment. As I’ve discussed elsewhere in this dissertation, there are still questions as to whether or not Twitter even wants to “fix” the harassment problem, but suggestions like this participant’s demonstrate that small but meaningful changes are well within the realm of possibility. Further, she notes that Twitter should do more to listen to those with actual experience of harassment. This strategy would help the company apply changes that would decrease, rather than increase, abuse. She mentions misguided implementations of features that make abuse easier, and I’m reminded of examples I gave in chapter three of Twitter making changes to the platform (changes to “lists” and altering the default profile photo) that demonstrate ignorance of how harassment proliferates and circulates. Also, she brings up what is, in my view, one of the biggest drawbacks to having a locked account: it prevents users from interacting with public accounts. Being locked has proven to be an important strategy for women who are avoiding harassment and users who want some modicum of privacy on the platform. But addressing the ways in which being locked cuts users off from a significant part of the network might help Twitter establish new functionality that enables users to protect themselves while also interacting with public accounts.

Another participant wrote, “Twitter doesn't seem to be interested in enforcing their Terms of Service in a timely or meaningful way. That in itself would be a solution,” before saying that perhaps an entirely new platform might be easier and more feasible to create than retroactively fixing Twitter’s harassment problem. She suggests that an alternative platform be created “with a team of women (/of color) and other marginalized people.” This would have the desired effect of creating a space where women’s unique concerns are at the fore, not considered late in the process. She also mentions tools like Block Together67 as a “good workaround in the meantime.”
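One respondent quoted above imagined Twitter repurposing the text mining it already uses for ad targeting to flag harassing language before anyone files a report. To make that idea concrete, here is a deliberately minimal sketch, assuming a toy phrase-matching filter; it is not any platform's actual moderation system, which would involve trained classifiers and human review rather than a fixed word list.

    # A toy sketch of platform-side flagging of harassing language.
    # The phrase list and the triage logic are illustrative assumptions,
    # not a real moderation pipeline.

    FLAG_PHRASES = {"kill yourself", "you deserve to die", "i know where you live"}

    def flag_score(text):
        """Count how many flagged phrases appear in a post (case-insensitive)."""
        lowered = text.lower()
        return sum(phrase in lowered for phrase in FLAG_PHRASES)

    def triage(posts):
        """Surface matching posts for human review instead of waiting for a report."""
        return [post for post in posts if flag_score(post) > 0]

    # Example: only the abusive post is routed to review.
    sample = ["Great thread, thanks for sharing!", "I know where you live."]
    print(triage(sample))  # ['I know where you live.']

Even a trivial filter like this shifts some of the reporting labor from the harassed onto the platform, which is exactly the redistribution the respondent called for; the hard parts (context, sarcasm, coded language) are why real systems require far more than keyword matching.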

67 Block Together is a web app created in response to Twitter’s harassment epidemic and is designed to help victims of harassment block potential harassers en masse. Using Twitter’s application programming interface (API), Block Together allows users to automatically block accounts that have mentioned them and were created less than seven days ago and/or have fewer than fifteen followers. Block lists are also easily shareable, and users can use Block Together to automatically block accounts that their friends or others have also blocked. For more, see blocktogether.org.
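To make the footnote's description of Block Together concrete, here is a minimal sketch of the blocking rule it describes. The client object and its method names are hypothetical stand-ins for a Twitter API wrapper, not Block Together's actual source code.

    from datetime import datetime, timedelta, timezone

    # Thresholds taken from the footnote's description of Block Together.
    NEW_ACCOUNT_WINDOW = timedelta(days=7)   # created less than seven days ago
    MIN_FOLLOWERS = 15                       # fewer than fifteen followers

    def matches_block_rule(account):
        """An account is block-eligible if it is very new and/or barely followed."""
        age = datetime.now(timezone.utc) - account.created_at
        return age < NEW_ACCOUNT_WINDOW or account.followers_count < MIN_FOLLOWERS

    def auto_block_mentions(client, user_id):
        # Apply the rule to every account that has mentioned the user.
        for mention in client.recent_mentions(user_id):      # hypothetical call
            author = client.get_account(mention.author_id)   # hypothetical call
            if matches_block_rule(author):
                client.block(user_id, author.id)             # hypothetical call

The design choice worth noticing is that account age and follower count serve as proxies for the throwaway accounts harassers create to evade suspension and blocking.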

Of course, none of the proposed solutions to online harassment, whether enacted through policy, design, or otherwise, will have great effect without systemic cultural change to our attitudes about women, people of color, LGBTQ people, and other marginalized identities that are markedly affected by harassment. Cultural change, while a lofty goal, was noted by many. For instance, one woman responded that we need a “global change” in what is considered acceptable forms of discourse. Another suggested that “a full-scale cultural change in how we communicate with each other” is necessary, and part of this involves teaching people “how to identify and self-correct when we are communicating in a way that is insensitive.” Another woman noted that Twitter has much responsibility to “be more responsive” once a report is filed, but “there also needs to be more preventative measures to curb harassers from harassing, like civil discourse education.” Another woman responded, “I think it's a conversation for real life,” because Twitter is a reflection of the offline culture. She suggested a first step is to “dismantle gender roles” and “teach feminism to people and children.”

Among these responses are both small and large suggestions of what can be done. And while this particular survey question was pointed specifically at Twitter, the responses show that women are concerned that this issue is not taken seriously by platforms or by the larger culture. Of course, as I’ve discussed and as many women pointed to in their responses, economic concerns for platforms seemingly get in the way of staunch commitments to systemic change. In short, we cannot count on platforms to lead the way in putting an end to online harassment. In the next section, I offer pedagogical suggestions for how educators can begin to address this issue with their students.

Pedagogies of Online Harassment: Anna’s Story

“Anna” is a scholar in composition and rhetoric whose research area is in digital literacies and social media, and she frequently teaches digital literacies courses to undergraduates. She understands that harassment is part of the ecology of social media; she often sees instances of it online and has had experiences with offline harassment in her daily life. Having worked in bars, she describes to me how harassment of women workers in these environments is daily and almost expected, a sentiment I can relate to, as I worked as a cocktail waitress and bartender from the time I was nineteen until I started my graduate program at twenty-six. Harassment at work, in my experience, was definite and constant. As Anna puts it, “if you work in that environment it’s just so sad how much harassment you’re expected to put up with as part of the job.” Even outside of work in bars, Anna recalls the normalcy, throughout her life, of being harassed. She says, “There’s just so much of being a woman in the world, even just walking to school, because I couldn’t walk to class without people honking at me and it was just… I don’t know. It’s everything, everything about being a woman in the world.”

As a teacher-scholar of social media, Anna has encountered stories of online harassment with frequency. Her knowledge of online harassment, coupled with her personal experiences with offline harassment, has led her to maintain a “really neutral online identity” in avoidance of

potential abuse. She doesn’t post anything too personal, and she avoids participating in certain conversations. She tells me, “I am very frustrated with myself about that sometimes because I read these conversations on Twitter and I want to get involved, but I know that my being involved in that will probably lead to harassment.” As someone who is committed to helping students navigate social media spaces, the irony is not lost on her that her own social media presence is limited.

Anna shares with me a story about something that happened during a digital literacies course she was teaching. The course required students to do ongoing work throughout the semester with Twitter—engaging in public conversations there, live-tweeting class events, and other activities of that nature. One of the major projects in the course asked students to do a “current event presentation,” in which they research a recent event and analyze the ways in which it has manifested through social media, for instance, how the event has been reported on and how the public has responded. As students presented their findings, the rest of the class was to have a conversation about the topic via Twitter. “Because they were using names of people who were in the news a lot,” she explains, “the Twitter trolls” found them and “attacked my students, and only the female students.” Two women in particular were targeted, and Anna explains, “I just felt so awful because by virtue of my asking them to do this they were now being harassed.”

Anna was alerted to what was happening when she checked the feed during a presentation and saw the harassment. As a result, she felt it important to stop the presentation. She told her class, “‘ok, I see that some of you are receiving unsolicited comments. You don’t have to engage with them. In fact, please don’t engage with them. Just ignore what they’re saying.’” In that moment, she says, she felt the need to segue into a larger conversation about harassment and online antagonism. “Unfortunately,” she says, “this is a huge part of our engagement on the web.” She did not anticipate this kind of thing happening, and while the moment turned into a productive conversation for the class, Anna still felt a tension between wanting to require students to engage in these spaces critically and wanting to protect them from harmful discourses like harassment. She says,

I feel such responsibility, as a teacher, that if I’m asking students to produce work on this public platform, how do I adequately prepare them? Can I adequately prepare them to engage with people on the internet who might harass them by virtue of conversations that we’re having in this really public setting? To be honest, I still feel so much guilt over that experience that I’m not sure how to use Twitter in the future, because there’s nothing you can do to shield your students from it.

And she’s right. As digital writing and literacies teachers, we want our students to understand the nuanced and complicated aspects of online discourses and communities. But we also don’t want to expose them to danger. It’s a tricky situation, and one that the field has not yet adequately addressed.

The semester after this happened, the semester in which Anna and I had our conversation, she opted not to require students to use social media platforms. I ask her, “would you

say that this is a direct result of having had that experience?” Her answer is complicated. On one hand, it’s a different course. Granted, it’s still a course that’s very much rooted in social online discourses, but nonetheless it’s not explicitly about digital literacies. On the other, “I think the experience has led me to be a lot more aware of the potential repercussions that I could be asking for if I compel them just to maintain Twitter accounts.” She did, she says, become motivated to talk with students about harassment, but having seen harassment directed at students play out in real time caused her to rethink her positioning of social media in the classroom. She tells me, “If I ever require students to use Twitter accounts again, I think I’m going to be a lot more direct at the beginning of the semester and to try to have some sort of policy in place if they’re targeted by a troll.” She would do this by perhaps giving students suggestions for ways they can handle that kind of situation, but Anna struggles to come up with what those suggestions might look like beyond “don’t engage the troll.”

Our conversation shifts to scholarship from the field, and I ask Anna if she’s come across anything that she feels has helped prepare her for addressing topics like online harassment with students. I’m wondering what those conversations would look like, and what readings she might assign to students. She tells me that while she would love to draw from the field in talking about this issue, she senses a gap in the scholarship and feels under-supported in navigating these potentially difficult conversations with students. She says she’s excited about my project in particular because she’d like more academic references to draw from in order “to have an explicit conversation somehow about harassment and how students are going to use this public voice to, in some way, confront the harassment that they might face. But also to know when to not engage with that—when it won’t be a productive conversation and what it can lead to.”

Anna surprises me when she says, “maybe it would be good to share with you my most horrifying moment in using Twitter in a classroom space, because I think it was sort of an instance where I felt kind of harassed by one of my students.” She primes me by saying, “It’s like the huge, most terrifying horror story of using Twitter in a classroom space that I think has probably ever existed.” Anna tells me that her students, again in a section of the digital literacies course she teaches, were doing an in-class activity where they were to walk around campus and, using a class hashtag, live-tweet ethnographic reflections about what they observed regarding technology, such as how many times they saw someone walking and texting or where they saw the most people using their laptops. After a set amount of time, students were to meet back in the classroom to talk about their observations. The way Anna structured it, half of the students would stay in the classroom to work on another piece of the activity independently while the other half ventured out into campus to observe and tweet. The students who were still in the classroom were on computers working quietly while Anna projected the live feed populated by the other half of the class, awaiting their return.

As she monitored the hashtag, something happened: “One of my students posted a picture that… well,

so he, in the live feed with the hashtag, posted a dick pic.68 Like a very graphic dick pic.” Absolutely unprepared for her to say this, I shrieked, “what?!” “Yep,” she replies, “in the live feed.” She tells me that before sending students out to live-tweet, she made sure to tell them that if they wanted to take pictures of what they were seeing around campus, they should make sure “you don’t have anyone’s faces in it,” but this was obviously not what she had in mind. She tells me that the caption the student tweeted pertained to the activity at hand, but the picture that accompanied it, obviously, did not. Reflecting back on it now, she still doesn’t know if the student meant to direct it at her specifically, if it was supposed to be a joke, or if it was an accident, but in the moment, there was no time to think about intention. “I immediately took the feed off the screen, pulled it up on my phone, and immediately reported the post and blocked him,” while the rest of the students in the classroom continued to work quietly at their computers. She couldn’t tell whether anyone else saw the picture or not. By chance, Anna had a mentor observing her that day, and she asked him after class if he saw what had happened. He hadn’t. Anna still wonders if her reporting the tweet caused Twitter to take it down immediately or if the student deleted the post. Either way, it seemed to have disappeared pretty quickly.

This particular student was someone that Anna “felt sort of uncomfortable with throughout the semester.” He would, for example, “linger in my office uncomfortably.” Anna wasn’t sure what to do beyond talking to her mentor about it, though he didn’t offer any suggestions for how she might handle the situation. The student, she says, “never mentioned it to me,” and she didn’t bring it up to him either. “None of the other students ever brought it up,” she says. “So, either the other students knew how awkward it was and didn’t report it to me, or no one else saw it. None of them came back and were giggling or anything. There’s a chance that I was the only one who saw it. I, to this day, have no idea, but again, the student never mentioned it to me. It was the most uncomfortable I have ever felt.”

Because Anna was so horrified by the situation and didn’t know what to do, she says her immediate instinct was to just block and report him to make it go away. Had it been apparent that students in the class saw the tweet, Anna might have done more, but she’s still not sure what she should have done or how. “How would I have handled that?” she asks. “Would I have needed to have a larger conversation about being careful about what you tweet? Do I need to have a conversation about Twitter and what it does and doesn’t allow? But I think it was really telling that my immediate response was, I have to get rid of this now and I will never bring it up unless I have to respond to it.”

This story that Anna shares with me exemplifies an important aspect of online harassment that my project hasn’t touched on explicitly, given that my main focus hasn’t been on classroom dynamics: harassment perpetrated by students towards teachers. Contrapower, or “harassment of those with more organizational power by those with less,” is influenced by power

68 “Dick pic” is slang for a picture of a man’s penis, often one that he himself has taken and sent to someone else via text message or direct message on social media.

structures maintained by societal norms (Buchanan & Bruce, 2005, n.p.). As NiCole T. Buchanan and Tamara A. Bruce (2005) explain, “while a female professor may have more formal power than a male student, because society still conveys more power and authority to men, the male student has more informal power due to his gender. Parallel situations can occur when discussing differences in race/ethnicity, age, sexual orientation, and ” (n.p.).

Contrapower is not a topic that composition and rhetoric has taken up richly in our scholarship, with a few notable exceptions. In 1995, for example, Julie Jung, Tilly Warnock, and Julia Ferganchick-Neufang surveyed women writing teachers across the United States about student-to-teacher harassment. Sixty percent of respondents said they had experienced gender-specific student-to-teacher harassment, and 62% said their awareness of these kinds of issues affects their teaching (Ferganchick-Neufang, 1997). In 1999, Susan Hunter and Ray Wallace edited a special issue of Dialogue: A Journal for Writing Specialists themed around contrapower, with articles by Julia Ferganchick-Neufang, Hyoejin Yoon, Jennifer Bay, and Eileen Schell. But research in higher education in general has done more to examine contrapower and its effects on teachers, particularly women teachers like Anna. For example, Eros DeSouza and A. Gigi Fansler’s 2003 survey of both college students and teachers found that male students were more likely than female students to sexually harass a professor, and over 50% of professors reported having been sexually harassed by a student at least once. Their study also found that men and women professors experienced similar rates of harassment, but the effects were far worse for women than they were for men. Similarly, Lampman et al.’s 2009 survey of professors about contrapower found that women suffered greater negative impacts than men on their health and professional lives as a result of being harassed by a student. Contrapower harassment is clearly an important, prevalent, and gendered issue, and one that I hope to explore further in future research.

I don’t mean to suggest that what happened to Anna, both the harassment towards the women in her class as well as the explicit picture tweeted by her student, will happen to every digital writing teacher, nor that we should expect it to happen. However, I do argue that as the pervasiveness of online harassment grows without proper intervention and prevention strategies, these kinds of instances will become more frequent and potentially more severe. As educators are oftentimes students’ entry point into critically examining the rhetorical contexts of online environments and interactions, we should provide students with, if not intervention and prevention techniques, a better critical understanding of the ways that individual actors contribute to larger networks online.

In her work on public writing pedagogy and online harassment, Leigh Gruwell (2017) argues that it is “well worth questioning how online harassment limits access to internet publics” for many reasons, one of which being that “we increasingly ask our students to enter public spaces online.” It’s our duty, then, to “consider what barriers exist to full, equal participation and recognize online harassment as one such barrier” (n.p.). Working with students to understand harassment and its impacts on our digital environments helps them not only become more critical

users of social media, but it also helps them to understand these environments as “networked publics.” As danah boyd explains, networked publics are different from other kinds of publics because “the ways in which technology structures them introduces distinct affordances that shape how people engage with these environments” (boyd, 2010, p. 39). These publics are “a complex assemblage comprising not only human relationships but a whole range of logical and physical resources” (Tierney, 2013, p. 52). Gruwell (2017) writes, “rhetorical agency is never stable or complete; it is instead the ever-evolving product of affinities between multiple, sometimes unknown, actors and texts” (n.p.). Unquestionably, it’s important for students to poke at the boundaries of these spaces, trying to grasp just how many actors, human and non-human, textual and otherwise, converge in the networked environments that they frequent day-to-day, hour-to-hour, minute-to-minute. So what can we do in the digital and public writing classrooms to help students explore where those boundaries begin and end, especially as they relate to students’ own lives as learners, writers, professionals, amateurs, citizens, activists, and all of the other identities they might simultaneously embody?

For the teacher who is already overwhelmed by condensed semesters and jam-packed syllabi, it may seem daunting to take on yet another curricular responsibility. However, concerns of online harassment and its rhetorical dimensions pair well with topics we already take on in the writing classroom—topics like agency, power, and context. Discussing harassment with students can easily be prompted by looking at how terms of service statements function for social media. Such activities build time and room into the semester for students (and, selfishly, myself) to actually read these policy documents that structure so much of what happens where we interact with friends and family, get our news, publish our writing, and more.

I begin this activity by asking students to select the social media they’re most familiar with or use most often. I have, in the past, given students a list of platforms to choose from, including highly social ones like Facebook, Twitter, Instagram, Pinterest, Snapchat, Tinder, and YouTube, as well as more consumer-based spaces like Venmo, Poshmark, and Lyft. I stress how important it is to read the terms for an app or website that they use most often—it gives them a chance to become more literate users. However, I also see value in students assessing the terms for a space they’re completely unfamiliar with, because their outsider status might make it easier for them to see something a regular user might not. I ask students to keep track of their reactions as they read through the document, noting what they learn about how the rules and regulations shape the kinds of information that is allowed to circulate in the ecology of this particular space. What can users access, discuss, post, etc.? Do any of the terms seem strict, arbitrary, vague, or in the best interest of any particular entity? I also ask students to note if anything shocks or scares them.

Typically, students come up with a whole host of reflections about what they learned and were shocked by. Less often, though still common, students also note items listed in the terms of service that didn’t shock them at all. These kinds of reactions help frame a conversation about why we’re not surprised about some

terms but are about others. What do we expect these documents to say and mandate? What have we become enculturated to as regular users of the internet and social media? What are we willing to accept about the regulation of social media and what they do with our data? This activity is also a moment to talk about how policies dictate rhetorical activity in these writing spaces—how policies embolden some cultures or activities and stifle others. Who is left vulnerable by these policies, and what happens when a platform chooses not to enforce the rules? Answers to these questions enable us to think about how they’re applicable to many writing contexts, even those outside of social media: how do the conditions of a writing situation, space, or tool structure what a writer or a rhetor can and cannot do?

While I don’t always present this activity as being explicitly connected to concerns of online harassment, it never fails that students bring this issue up organically, both because harassment and abuse are typically directly addressed in social platforms’ policies and because students often see this kind of activity in their own feeds. For example, in the fall of 2017, after having read the policies for Twitter, many students wanted to talk about how it was possible that Donald Trump was still allowed to use the platform in the ways that he does, as he has clearly violated many stated rules. Learning more specifics about the governing principles of platforms allows students to think more critically about the social media cultures they encounter, because at its core, this activity is largely about culture: who creates and shapes it, how cultures are enabled or constrained by platforms and their terms of service, and how policies are or aren’t enforced.

This activity also presents the opportunity to talk about technical communication, style, design, and presentation of writing. The terms of service for Pinterest, for instance, are usually a favorite of students’ because they are aesthetically pleasing, short, and written in accessible language. Each clause of the policy includes a “more simply put” section where Pinterest distills the policy into brief and easy-to-understand basics, as seen in figure 5.3:

Figure 5.3: Section 2a of Pinterest’s terms of service notes rules for posting content to the platform.


Although the individual policies themselves are relatively short, Pinterest, through this “more simply put” section, provides the reader with a plain language version, bolded and in a different color, for easier scannability. Pinterest also injects light humor into the writing, which, as I talk about with students, has an effect on the reader. We talk about what that effect might be, whether it’s the one desired by Pinterest, and how the general ethos of Pinterest makes humor and an informal tone more readily appropriate for this platform than for others. An example of their humorous style can be seen in figure 5.4:

Figure 5.4: Section 11 of Pinterest’s terms of service outlines information regarding governing law and jurisdiction.

Again, this activity isn’t explicitly connected to online harassment, but it’s a thread that can (and should) come up in the discussion. Further, when students familiarize themselves with the language and policies associated with platform governance, they become more aware, critical social media users. Asking them to reflect on their own uses of these platforms can lead to surprising insights about how they fit into the larger network.

Another activity or larger project that leads to critical reflection asks students to conduct an autoethnography of their own social media use. This activity invites them to think about how they themselves contribute to online cultures and how their digital writing practices are influenced by their preferred communities. In this project, students track their daily use of and participation in social media, employing ethnographic methods to “notice” by taking field notes about these spaces—their cultures, uses, designs, governances, and policies. Beyond taking notes that track their own use, they also compose analytic memos and gather artifacts such as screenshots and terms of service statements. After reflecting on all of the data they’ve gathered as a whole, students are asked to arrive at conclusions about their personal relationship to social media and digital cultures. These kinds of activities give way to critical discussions and

reflections on what kinds of internet users students want to be as they begin to see themselves not just as passive consumers of internet content but as creators of it—users who actively shape the cultures on these platforms through their participation.

Perhaps what might have also been useful to Anna and her students is a central resource they could visit for information about online harassment: its effects, how to respond to it, and safety guides. HeartMob, for example, is a project of Hollaback, a nonprofit aimed at ending harassment in public spaces. HeartMob’s goal “is to reduce trauma for people being harassed online by giving them the immediate support they need—and in doing that work, create an army of good so powerful that it can disrupt and ultimately transform the hearts and minds of those perpetuating online harassment” (“About HeartMob,” 2017, n.p.). They provide a wealth of resources such as technical and social media safety guides, legal information, and self-care guides. Their social media safety guides, for example, detail how to use reporting and privacy tools for Twitter, Facebook, Tumblr, Reddit, and YouTube, and provide information about what kinds of content each platform does and does not allow. Another area on HeartMob’s site, titled “Know Your Rights,” educates victims of harassment about what they can do about their harassment within the parameters of the law. They provide information on federal and state laws pertaining to online harassment, how to speak to police officers about online harassment, legal definitions of online harassment, differences between criminal cases and civil lawsuits, and how to prepare a legal case; they also provide a directory of lawyers for people experiencing online stalking and harassment. Beyond being a practical and informational site to visit, HeartMob also doubles as a fantastic case study to use in technical or digital writing classes when talking about writing and assembling guidebooks, FAQs, or infographics.

As a field, we assume interest and responsibility in researching and defining what it means to be a writer and rhetor, especially in light of the new environments digital platforms have created. Ultimately, Gruwell (2017) says, “Although we may not be able to stop online harassment entirely, we are still obligated to consider its effects on our theories, pedagogies, and, above all, our students” (n.p.). These pedagogical ideas just scratch the surface, but they provide us with a starting place to begin addressing issues related to online harassment in our work with students as we prepare them for personal, professional, and civic interventions made through writing. In the final section, I’ll offer suggestions for what else we can do, beyond the writing classroom, to intervene in digital culture norms and online harassment.

Our Digital Futures: Further Reflections on Action

Throughout this dissertation, I’ve demonstrated the scope and severity of online harassment, how it matters to composition and rhetoric, the ways in which harassment impacts women’s daily lives and our digital environments, and the role that platforms play in mediating these exchanges. Although much of my discussion has been contained primarily to a single platform, Twitter, there are broader implications for how we think about the ways social inequalities are upheld through digital platforms. Attention to the nuanced ways that use, policy, design, and

economics all interact with one another helps us to make sense of our own uses of social media. I turn now to some suggestions for how we might make interventions into online harassment as scholars, rhetoricians, and citizens.

As teachers and researchers…

As teachers and researchers with commitments to advocacy and inclusivity, we should seek out places on our campuses and in our profession where we can make meaningful interventions in online harassment and the threats it poses to our students and our research practices. There are three areas I recommend as starting points.

The first is our campus mental health and counseling centers, which serve so many of our undergraduate and graduate students struggling with a variety of health issues. Taking into account how likely it is that online harassment causes stress, anxiety, depression, and trauma, our mental health centers should absolutely be prepared to counsel on these issues. Working with these important campus entities to, for example, offer workshops on how to handle instances of online harassment will help raise campus awareness about online harassment and its effects while simultaneously offering a necessary resource for students, many of whom have likely experienced harassment themselves.

The second area of campus that I see as an untapped resource for helping advocate for and protect victims of online harassment is the campus legal team. As educators face an increased amount of surveillance from fringe groups, and as watch lists are created and circulated online (Knott, 2016; Mele, 2016), online harassment is a danger many of us face, particularly those whose work is aimed at exposing online harassment and other forms of hate. Recently, while giving a talk about online harassment at a professional conference, an attendee asked how my institution supported me in doing my work and whether or not I had considered how I may need to take legal action against potential harassers. The short answer I gave was that, echoing much of my discussion from chapter two, there aren’t many ways in which the institution provides formal support mechanisms for researchers experiencing online harassment. There are several reasons for this: first, as discussed in chapter two, many review boards screening research plans are, rightfully, looking for potential harms and risks against participants in research. This is an important function of an institutional review board, and I’m not sure they should also be saddled with assessing protocols for potential researcher harms. Second, online harassment, while a regular occurrence, isn’t something that academic researchers are prepared for or readily think of as a possibility when publishing or presenting on their work. Working with our campus legal teams to investigate how the institution might support a researcher facing severe online harassment is a good first step to readying ourselves for doing this kind of work and advising our undergraduate and graduate students who want to do this work. Developing heuristics or trainings that researchers on our campuses could use when faced with online harassment would be an invaluable resource for anyone who wants to take on scholarly work that is likely to draw the attention of online harassers.

Finally, as teachers and researchers who are part of a larger network of higher education institutions and sites for our professional gatherings, we must work with conference organizers to design best practices for professional social media use at conferences. With conference live-tweeting and hashtags becoming more normalized, it’s important that conference attendees understand how social media can be used for archiving, connecting, networking, and engagement, but also how that use is complicated by the presence of online harassment in these networks. Some years back, I attended a panel on digital laboring in which one of the talks mentioned GamerGate. A well-intentioned audience member who was live-tweeting the session used the phrase “GamerGate” in a tweet in which the conference and the speaker were also tagged, and immediately the conference tag was infiltrated by harassers and trolls, jeopardizing the safety of the audience member, the presenter, and other conference attendees, including those who weren’t even attending that specific panel. This audience member surely didn’t intend to invite vitriol into the conference tag, but had they known about certain trigger phrases or other social media etiquette as it pertains to tagging individuals, this incident might not have happened. If we, as a field, are committed to leveraging the power of social media to enhance our professional conferences, we must also be committed to helping attendees understand what social media use that’s sensitive to online harassment looks like.

As rhetoricians and designers...

As rhetoricians, we possess specialized knowledge about how rhetoric, design, and language can work together to powerful ends. We also know, then, how these things can work together to suppress and stifle voices. Scholars in our field have described the necessity of rhetoricians understanding coding languages, particularly as they are situated in social and historical contexts (Brooks & Lindgren, 2015; Vee, 2017), so that we can both contribute to software and internet design and teach our students about computational literacies. In learning how to code and how our social media are built from the backend out, we are better positioned to triangulate these skills with our specialized knowledges about rhetoric, language, and design. With a vested interest in anti-harassment designs and policies, we can put this triangulation to work either by building tools, communities, or platforms that are harassment-sensitive or by serving in an advisory role to groups that already do this work.

Further, as rhetoricians and designers, we can lend our skill sets to nonprofits or community groups that work to protect and empower women online. The first step in doing so is to recognize where this work is already being done and find out where you can best serve. There are two organizations that come to mind that could use our support. The first is the aforementioned HeartMob. In addition to providing a wealth of resources related to online harassment, they also work to build an online network of people who are willing to intervene when another member is being harassed, providing real-time support. People experiencing harassment can document the abuse via the site and opt to make the report “public” to the rest of

the HeartMob community, which allows them to define how they want bystanders to support them. To join the network of bystanders, you first have to apply (all applicants are vetted by HeartMob to ensure a safe and supportive community). Once approved, bystanders will receive public reports along with details on how to help the harassed, whether through direct intervention methods or emotional support. They also offer online harassment bystander intervention training, which is a good place to start for those who want to learn more about ways to support victims of harassment safely and effectively.

The second organization in need of volunteers is Girls Who Code, whose mission is to end the gender gap in the technology industry. They have developed many programs aimed at helping girls develop coding skills and foster an interest in computer science, including a Summer Immersion Program as well as local clubs. One of the ways we can use our time and skills is by volunteering to facilitate a club. This involves delivering a “project-based curriculum that teaches girls to use computer science to impact their community and help provide a supportive sisterhood of peers and role models within the Club to sustain their interest in computer science” (“Volunteer,” 2018, n.p.). Facilitators often have no technical skills or experience in computer science. Instead, they’re there to learn alongside the girls. Girls Who Code supports facilitators by providing the space, equipment, curriculum, and troubleshooting guides. Facilitators also have access to how-to guides, training webinars, and technical consultants. It’s an amazing organization that has increased girls’ interest in computer science and secured commitments from leading technology companies to hire Girls Who Code alumni.

Lastly, volunteermatch.org is an easy-to-use tool to search for volunteer opportunities by cause in your immediate area. I recommend learning more about your local context because gender-justice work isn’t just being done by large, national organizations, but also by smaller, local ones that need our support. Are there writing tasks that you can help with? Can you help to communicate the goals and mission to the greater public? Can you help with promotion and recruitment of more volunteers with specialized skills? Can you help tutor girls and women in areas such as coding, inclusive design, or writing? Can you lead or help facilitate workshops or trainings? Can you lend your time to lobby lawmakers about online harassment? What training opportunities are available to you that might help prepare you to work with girls and women in your area? Gender justice is community work, not just academic work, and it’s crucial we seek out or create places where anti-harassment work and advocacy for girls and women can reach more than just those inside the oftentimes impenetrable walls of the academy.

As citizens...

There is much to keep up with as it pertains to our rights and the laws that define them. But, as citizens, we should do more to understand what our rights are. Educating ourselves about the online harassment laws that do exist will help us identify where legal protections are still missing. Equally important is our responsibility to do more to understand how our rights as individuals, both legal and cultural, don’t always extend to others. We should get angry when we learn about the

inequalities that thrive in the cracks of our laws and discourses. It’s our responsibility as citizens to learn who our elected representatives are and which lawmakers we can voice our concerns about online harassment to, with the goal of making progress towards legal protection of victims and equality.

Beyond legal remedies, as many people I’ve discussed in the pages of this dissertation point out, systemic change to inequality is a lofty but necessary step towards ending online harassment. As such, we should all be actively working to dismantle systems of oppression that govern so much of our daily lives. Crucial to this work is the reminder that the responsibility of dismantling does not fall on one set of shoulders alone, but rather on the collective shoulders of people committed to ending oppressions that are based on race, gender, sexuality, ability, age, and class. There are various levels of dismantling, but it’s incumbent on us all to figure out where we can have the most impact. When we work together, we can accomplish so much.

On perhaps a smaller yet still meaningful scale is our understanding of the cultures of the social media spaces we inhabit. Similar to my suggestion for educating conference-goers about social media etiquette, we would all do well to listen in these spaces and learn more about how our uses may be harming others, even if it’s just something as simple as keeping a friend tagged in the replies to their harasser or using keywords in our dialogues with women online that harassers are likely to search for. It’s my hope that in reading this work, readers will walk away with a better sense of what they themselves can do to help reduce the amount of harassment that is taking over our digital communities. And it’s my intention that the lines between the three identities I discuss here (teachers/researchers, rhetoricians/designers, and citizens) be blurry, as many of the suggestions and opportunities are applicable to all facets of our being. My main hope, however, is that readers walk away knowing that online harassment creates overpowering boundaries for the identities, ideas, and behaviors that diverge from the ideologies that define the groups readily enfranchised in our culture. These boundaries present an enormous obstruction to us and our right to equitable and inclusive digital spheres. And to the women who know all too well what the inside of those boundaries looks like, women like those who responded to my survey, Tracy, Kate, Olivia, Ella, and Anna,

I’m sorry. I hear you. I see you.

There is much work to be done.

References

“2012 Cyberstalking Statistics.” (2012). Working to Halt Online Abuse. Retrieved from http://www.haltabuse.org/resources/stats/2012Statistics.pdf

Abbate, J. (2012). Recoding gender: Women’s changing participation in computing. Cambridge, MA: MIT Press.

“About HeartMob.” (2017). HeartMob. Retrieved from https://iheartmob.org/about

“About Us.” (2016). Women’s Media Center. Retrieved from http://www.womensmediacenter.com/about/learn-more-about-wmc#wmc-research-and-reports

Adee, S. (2016, August 3). Troll hunters: the Twitterbots that fight against online abuse. New Scientist. Retrieved from https://www.newscientist.com/article/mg23130851-300-troll-hunters-the-twitterbots-that-fight-against-online-abuse/

Ahktar, A. (2016, August 9). Is Pokémon Go racist? How the app may be redlining communities of color. USA Today. Retrieved from https://www.usatoday.com/story/tech/news/2016/08/09/pokemon-go-racist-app-redlining-communities-color-racist-pokestops-gyms/87732734/

Ahmed, S. (2017). Living a feminist life. Durham, NC: Duke University Press.

Ahmed, S. & Marco, T. (2014, October 15). Anita Sarkeesian forced to cancel Utah State speech after mass shooting threat. CNN. Retrieved from http://www.cnn.com/2014/10/15/tech/utah-anita-sarkeesian-threat/

Alcindor, Y. & Welch, W. M. (2014, May 25). Parents read shooting suspect's manifesto too late. USA Today. Retrieved from https://www.usatoday.com/story/news/nation/2014/05/25/santa-barbara-shootings-and-stabbings/9565513/

Alderman, J. (2017, March 15). Trump supporters on 8chan launch harassment campaign against reporter. Media Matters for America. Retrieved from https://www.mediamatters.org/blog/2017/03/15/trump-supporters-8chan-launch-harassment-campaign-against-reporter/215690

Almjeld, J. (2014). A rhetorician's guide to love: Online dating profiles as remediated commonplace books. Computers and Composition, 32, 71-83.

Almjeld, J., & Blair, K. L. (2012). Multimodal methods for multimodal literacies: Establishing a technofeminist research identity. In K. L. Arola & A. F. Wysocki (Eds.), Composing (media) = Composing (embodiment) (97-109). Boulder, CO: Utah State University Press.

Ambron, P. (2018). How Google can keep you from getting a job (and how to change that). The Muse. Retrieved from https://www.themuse.com/advice/how-google-can-keep-you-from-getting-a-job-and-how-to-change-that

144 “Are 1 in 5 Women Raped in College?” (2016, April 11). PragerU . Retrieved from https://www.prageru.com/videos/are­1­5­women­raped­college Arola, K. (2010). The design of Web 2.0: The rise of the template, the fall of design. Computers and Composition, 27 (1), 4­14. Bailey, M. (2014, April 27). More on the origins of misogynoir. Moyazb . Retrieved from http://moyazb.tumblr.com/post/84048113369/more­on­the­origin­of­misogynoir Balakrishnan, A. (2017). 2 billion people now use Facebook each month, CEO Mark Zuckerberg says. CNBC . Retrieved from https://www.cnbc.com/2017/06/27/how­many­users­does­facebook­have­2­billion­a­mon th­ceo­mark­zuckerberg­says.html Banks, W. & Eble, M. (2007). Digital spaces, online environments, and human participant research: Interfacing with IRBs. In H. A. McKee & D. N. DeVoss (Eds.), Digital writing research: Technologies, methodologies, and ethical issues (27­47). Cresskill, NJ: Hampton Press, Inc. Banks, A. J. (2006). Race, rhetoric, and technology: Searching for higher ground . Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Barak, A. (2005). Sexual harassment on the internet. Social Science Computer Review, 23 (1), 77–92. Barrios, B. (2004). Of flags: Online queer identities, writing classrooms, and action horizons. Computers and Composition, 21 , 341–361. Bartow, A. (2009). Internet as profit center: The monetization of online harassment. Harvard Journal of Law and Gender, 32 , 383­429. Battersby, L. (2013, July 29). Twitter criticised for failing to respond to Caroline Criado­Perez rape threats. The Age . Retrieved from https://www.theage.com.au/technology/twitter­criticised­for­failing­to­respond­to­carolin e­criadoperez­rape­threats­20130729­2qu8d.html Baym, N. (2010). Personal connections in the digital age . Cambridge, UK: Polity Press. Beard, M. (2014). The public voice of women. London Review of Books, 36 (6), 11­16. Beaulieu, A. (2004). Mediating ethnography: Objectivity and the making of ethnographies of the internet. Social Epistemology, 18 (2­3), 139–163. Beck, E. (2015). The invisible digital identity: Assemblages in digital networks. Computers and Composition, 35 , 125­140. Beck, E., Crow, A., McKee, H. A., Reilly, C. A., Vie, S., Gonzales, L., & DeVoss, D. N. (2016). Writing in an age of surveillance, privacy, and net neutrality. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 20 (2), n.p. Retrieved from http://kairos.technorhetoric.net/20.2/topoi/beck­et­al/index.html Becker, J., Goldman, A., & Apuzzo, A. (2017, July 11). “Russian dirt on Clinton? ‘I love it,’ Donald Trump Jr. said.” . Retrieved from https://www.nytimes.com/2017/07/11/us/politics/trump­russia­email­clinton.html

145 Beekman, D. (2014, May 26). Elliot Rodger wrote manifesto on his hate for women and his vindictive scheme prior to deadly rampage. . Retrieved from http://www.nydailynews.com/news/national/maniac­writes­manifesto­prior­deadly­rampa ge­article­1.1805474 Berger, M. T., & Guidroz, K. (2009). The intersectional approach: Transforming the academy through race, class, and gender . Chapel Hill, NC: University of North Carolina Press. Benner, K. (2017, July 3). A backlash builds against sexual harassment in Silicon Valley . The New York Times . Retrieved from https://www.nytimes.com/2017/07/03/technology/silicon­valley­sexual­harassment.html “Ben Roethlisberger, Quarterback, Twice Accused of Sexual Assault.” (2015, December 8). Broadly . Retrieved from https://broadly.vice.com/en_us/article/bmwe8w/ben­roethlisberger­quarterback­twice­acc used­of­sexual­assault Blair, K. (1998). Literacy, dialogue, and difference in the ‘electronic contact zone.’ Computers and Composition, 15 (3), 317­329. Blair, K. L. (2012). A complicated geometry: Triangulating feminism, activism, and technological literacy. In L. Nickoson, M. P. Sheridan, & G. E. Kirsch (Eds.), Writing studies research in practice: Methods and methodologies (63–72). Carbondale, IL: Southern University Press. Bomberger, A. M. (2004). Ranting about race: Crushed eggshells in computer­mediated communication. Computers and Composition, 21 , 197­216. Borchers, C. (2016, June 7). The Bernie Bros are out in full force harassing female reporters. The Washington Post . Retrieved from https://www.washingtonpost.com/news/the­fix/wp/2016/06/07/the­bernie­bros­are­out­in ­full­force­harassing­female­reporters/?utm_term=.892dce9dee05 Borsook, P. (1996). The memoirs of a token: An aging Berkeley feminist examines Wired. In L. Cherny, E. R. Weise (Eds.), Wired women: Gender and new realities in cyberspace (24­41). Berkeley, CA: Seal Press. Bowden, M. (2014). Tweeting an ethos: Emergency messaging, social media, and teaching technical communication. Technical Communication Quarterly, 23 (1), 35­54. boyd, d. (2014). It’s complicated: The social lives of networked teens . New Haven, CT: Yale University Press. boyd, d. (2010). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), Networked self: Identity, community, and culture on social network sites (39­58). New York, NY: Routledge. Brail, S. (1996). The price of admission: Harassment and free speech in the wild, wild west. In L. Cherny, E. R. Weise (Eds.), Wired women: Gender and new realities in cyberspace (141­157). Berkeley, CA: Seal Press.

146 Brock, A. (2012). From the blackhand side: Twitter as a cultural conversation. Journal of Broadcasting & Electronic Media, 56 (4), 529­549. Brooks, K., & Lindgren,C. (2015). Responding to the coding crisis: From code year to computational literacy. In L. E. Lewis (Ed.), Strategic discourse: The politics of (new) literacy crises , (n.p.). Computers and Composition Digital Press. Buchanan, N. T., & Bruce, T. A. (2005). Contrapower harassment and the professorial archetype: Gender, race, and authority in the classroom. On Campus with Women, 34 (1­2), n.p. Retrieved from http://archive.aacu.org/ocww/volume34_1/feature.cfm?section=2 Buck, A. (2012). Examining digital literacy practices on social network sites. Research in the Teaching of English, 47 (1), 9­38. Buckels, E. E., Trapnell, P. D., & Paulhus, D. L. (2014). Trolls just want to have fun. Personality and Individual Differences , 67 , 97–102. Campbell, K. K. (1989). Man cannot speak for her vol. 1: A critical study of early . Westport, CT: Greenwood Press, Inc. Carmon, I. (2016, January 20). Sanders dismisses major women’s group as ‘establishment.’ MSNBC . Retrieved from http://www.msnbc.com/msnbc/sanders­dismisses­major­womens­group­establishment Cassell, H. (2007). Study spotlights sexual harassment of women who defy gender . The Bay Area Reporter . Retrieved from http://www.ebar.com/news/article.php?sec=news&article=1875 Castleberry­Singleton, C. (2018, March 2). Growing together at Twitter. Twitter . Retrieved from https://blog.twitter.com/official/en_us/topics/company/2018/growingtogetherattwitter.html “CBC Announces End…” (2016, March 17). CBC announces end to anonymous online comments. CBC News . Retrieved from http://www.cbc.ca/news/canada/new­brunswick/cbc­comments­policy­anonymous­1.3496 350 Charmez, K. (2008). Grounded theory as an emergent method. In S. N. Hesse­Biber & P. Leavy (Eds.), Handbook of emergent methods (155­172). New York, NY: The Guilford Press. Chemaly, S. (2014, September 9). There’s no comparing male and female harassment online. Time . Retrieved from http://time.com/3305466/male­female­harassment­online/ Chun, W. (2016). Updating to remain the same: Habitual new media. Cambridge, MA: MIT Press. Citron, D. (2014). Hate crimes in cyberspace . Cambridge, MA: Harvard University Press. Citron, D. (2014, October 23). Defining online harassment. Forbes . Retrieved from https://www.forbes.com/sites/daniellecitron/2014/10/23/defining­online­harassment/#27e4 7bbd28de Clark, J. E. (2010). The digital imperative: Making the case for a 21st­century pedagogy. Computers and Composition, 27 (1), 27­35.

147 Coleman, G. (2014). Hacker, hoaxer, , spy: The many faces of anonymous . New York, NY: Verso. Collier, A. (2012). A “living internet:” Some context for the cyberbullying discussion. In J. W. Patchin & S. Hinduja (Eds.), Cyberbullying prevention and response: Expert perspectives (1­12). New York, NY: Routledge. “Cornell International Survey on Street Harassment.” (2015). Hollaback! Retrieved from https://www.ihollaback.org/cornell­international­survey­on­street­harassment/ “Cover Exclusive: Jennifer Lawrence Calls Photo Hacking a ‘Sex Crime.’” (2014). Vanity Fair . Retrieved from https://www.vanityfair.com/hollywood/2014/10/jennifer­lawrence­cover Cowan, L. (2012, April 20). Hedy Lamarr: Movie star, inventor of WiFi. CBS News . Retrieved from https://www.cbsnews.com/news/hedy­lamarr­movie­star­inventor­of­wifi/ Crash Override Network . (2015). Retrieved from http://www.crashoverridenetwork.com/ Crenshaw, K. W. (1989). Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory, and antiracist politics. Legal Forum, 1986 (1), 139­167. Crenshaw, K. W. (1991). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43 (6), 1241­1299. Creswell, J. W. (2009). Research design: Qualitative, quantitative and mixed methods approaches . , CA: SAGE Publications.. Cushman, E. (1996). The rhetorician as an agent of social change. College Composition and Communication, 47 (1), 7–28. Dale, M. (2017, March 27). Prosecutors fight Cosby bid to grill up to 2,000 potential jurors for trial. USA Today . Retrieved from https://www.usatoday.com/story/life/tv/2017/03/27/prosecutors­fight­cosby­bid­to­query­ 2000­potential­jurors/99702568/ Daly, M. (1985). Beyond God the father: Toward a philosophy of women's liberation . Boston, MA: Beacon Press. Daniels, J. (2009a). Cyber racism: White supremacy online and the new attack on civil rights . New York, NY: Rowman & Littlefield Publishers. Daniels, J. (2009b). Rethinking (s): Race, gender, and embodiment. Women’s Studies Quarterly, 37 (1), 101–124. Daniels, J. (2016). Trouble with white feminism: Whiteness, digital feminism and the intersectional internet. In S. U. Noble & B. M. Tynes (Eds.), The intersectional internet: Race, sex, class, and culture online (41­60). New York, NY: Peter Lang Publishing, Inc. Davies, M. (2015, September 25). Amelia Bonow explains how #ShoutYourAbortion 'just kicked the patriarchy in the dick.’ . Retrieved from https://jezebel.com/amelia­bonow­explains­how­shoutyourabortion­just­kicke­17323791 55

148 Day, K. (2001). Constructing masculinity and women’s fear in public space in Irvine, California. Gender, Place & Culture, 8 (2), 109­127. DiAngelo, R. (2011). White fragility. The International Journal of Critical Pedagogy, 3 (3), 54­70. Dibbell, J. (1994). A rape in cyberspace: Or, how an evil clown, a Haitian trickster spirit, two wizards, and a cast of dozens turned a database into a society. In M. Dery (Ed.), Flame wars: The discourse of cyberculture (237–261). Durham, NC: Duke University Press. DeSouza, E., & Fansler, A. G. (2003). Contrapower sexual harassment: A survey of students and faculty members. Sex Roles: A Journal of Research, 48 (11­12), 529­542. Detrow, S. (2016, November 2). KKK paper endorses Trump; Campaign calls outlet ‘repulsive.’ NPR . Retrieved from https://www.npr.org/2016/11/02/500352353/kkk­paper­endorses­trump­campaign­calls­o utlet­repulsive DeWitt, S. L. (1997). Out there on the web: Pedagogy and identity in face of opposition. Computers and Composition, 14 , 229­243. Dietel­McLaughlin, E. (2009). Remediating democracy: Irreverent composition and the vernacular rhetorics of web 2.0. Computers and Composition Online , n.p. Retrieved from http://cconlinejournal.org/Dietel/ Dixon, K. (2014). Feminist online identity: Analyzing the presence of hashtag feminism. Journal of Arts and Humanities, 3 (7), 34–40. Dorpat, T. L. (1994). On the double whammy and gaslighti ng. Psychoanalysis & Psychotherapy, 11 (1), 91­96. Douglas, D. (2016, August 25). The Leslie Jones hack proves the internet still can’t accept successful dark­skinned women. Vice . Retrieved from https://www.vice.com/en_us/article/3b4dn8/the­leslie­jones­hack­proves­the­internet­still ­cant­accept­successful­dark­skinned­women Duggan, M. (2014). Online harassment. Pew Research Center . Retrieved from http://www.pewinternet.org/2014/10/22/online­harassment/ Duggan, M. (2017). Online harassment 2017. Pew Research Center . Retrieved from http://www.pewinternet.org/2017/07/11/online­harassment­2017/ Ehrenkranz, M. (2017, February 9). Trolls keep outsmarting anti­harassment tools. Will Twitter’s new system actually work? . Retrieved from https://mic.com/articles/168041/trolls­keep­outsmarting­anti­harassment­tools­will­twitte rs­new­system­actually­work Elise, A. (2014, October 13). What is the GamerGate scandal? Female game developer flees home amid online threats. The International Business Times . Retrieved from http://www.ibtimes.com/what­gamergate­scandal­female­game­developer­flees­home­am id­online­threats­1704046

149 Elm, M. S. (2008). How do various notions of privacy influence decisions in qualitative internet research? In A. N. Markham, & N. K. Baym (Eds.), Internet inquiry: Conversations about method (69­88). Thousand Oaks, CA: SAGE Publications. Ennis, D. (2017, April 29). Twitter suspends Milo—again—just in time. LGBTQ Nation. Retrieved from https://www.lgbtqnation.com/2017/04/twitter­suspends­milo/ Eveleth, R. (2014). A new harassment policy for Twitter. . Retrieved from https://www.theatlantic.com/technology/archive/2014/12/new­harassment­policy­for­twit ter/383344/ “Facebook’s Internal Manual on Non­Sexual Child Abuse Content.” (2017, May 21). . Retrieved from https://www.theguardian.com/news/gallery/2017/may/21/facebooks­internal­manual­on­n on­sexual­child­abuse­content “Facebook’s Manual on Credible Threats of Violence.” (2017, May 21). The Guardian . Retrieved from https://www.theguardian.com/news/gallery/2017/may/21/facebooks­manual­on­credible­t hreats­of­violence “Facebook Rules on Showing Animal Abuse.” (2017, May 21). The Guardian . Retrieved from https://www.theguardian.com/news/gallery/2017/may/21/facebook­rules­on­showing­cru elty­to­animals Ferenstein, G. (2014, July 23). Twitter diversity report reveals company leadership is 79% male, 72% white. VentureBeat . Retrieved from https://venturebeat.com/2014/07/23/twitter­diversity­report­reveals­company­leadership­i s­79­male­72­white/ Ferganchick­Neufang, J. (1997). Harassment on­line: Consideration for women and webbed pedagogy. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 2 (2), n.p. Retrieved from http://english.ttu.edu/kairos/2.2/binder2.html?coverweb/julia/honline.html Fessenden, M. (2014, October 22). What happened to all the women in computer science? Smithsonian Magazine . Retrieved from https://www.smithsonianmag.com/smart­news/what­happened­all­women­computer­scie nce­1­180953111/ Fichman, P., & Sanfilippo, M. R. (2014). The bad boys and girls of cyberspace: How gender and context impact perception of and reaction to trolling. Social Science Computer Review , 33 (2). 163­180. Fife, J. M. (2010). Using Facebook to teach rhetorical analysis. Pedagogy 10 (3), 555–562. Filipovic, J. (2007). Blogging while female: How internet misogyny parallels real­world harassment. Yale Journal of Law and Feminism 19 , 295­303. Fleckenstein, K. S. (2005). Faceless students, virtual places: Emergence and communal accountability in online classrooms. Computers and Composition, 22 (2), 149­176.

150 French, D. (2016, October 21). The price I’ve paid for opposing Donald Trump. The . Retrieved for https://www.nationalreview.com/2016/10/donald­trump­alt­right­internet­abuse­never­tru mp­movement/ “Frequently Asked Questions.” (2017). PragerU . Retrieved from https://www.prageru.com/frequently­asked­questions Frost, E. (2011). Why teachers must learn: Student innovation as a driving factor in the future of the web. Computers and Composition, 28 (4), 269­275. Foran, C. (2016, September 22). Donald Trump and the rise of anti­Muslim violence. The Atlantic . Retrieved from https://www.theatlantic.com/politics/archive/2016/09/trump­muslims­islamophobia­hate­ crime/500840/ Garber, M. (2017, February 8). 'Nevertheless, she persisted' and the age of weaponized meme. The Atlantic . Retrieved from https://www.theatlantic.com/entertainment/archive/2017/02/nevertheless­she­persisted­an d­the­age­of­the­weaponized­meme/516012/ Garcia, F. (2016, September 3). White nationalist movement growing much faster than Isis on Twitter, study finds. Independent . Retrieved from http://www.independent.co.uk/news/world/americas/white­nationalist­movement­twitter­ faster­growth­isis­islamic­state­study­a7223671.html Gardner, E. (2017, October 28). Google responds to lawsuit accusing YouTube of censoring conservatives. The Hollywood Reporter . Retrieved from https://www.hollywoodreporter.com/thr­esq/heres­googles­response­lawsuit­accusing­yo utube­censoring­conservatives­1052745 Gibson, M. (2011, November 8). #Mencallmethings: Twitter trend highlights sexist abuse online. Time . Retrieved from http://newsfeed.time.com/2011/11/08/mencallmethings­twitter­trend­highlights­sexist­ab use­online/ Gillespie, T. (2018). Regulation of and by platforms. In J. Burgess, A. Marwick, & T. Poell, The SAGE handbook of social media (254­278). Thousand Oaks, CA: SAGE Publications. Glaser, B. G. & A. L. Strauss. (2017). Discovery of Grounded Theory: Strategies for Qualitative Research . New York, NY: Routledge. Glenn, C. (2004). Unspoken: A rhetoric of silence. Carbondale, IL: Southern Illinois University Press. Goldenberg, D. (2013, July 12). We trolled the ESPN comments one last time before they’re overhauled. . Retrieved from https://deadspin.com/we­trolled­the­espn­comments­one­last­time­before­they­756054733

151 Goldman, D. (2016, March 21). 10 years later, Twitter still isn't close to making money. CNN Money . Retrieved from: http://money.cnn.com/2016/03/21/technology/twitter­10th­anniversary/ Goldwag, A. (2012, March 1). Leader’s suicide brings attention to men’s rights movement. Southern Poverty Law Center . Retrieved from https://www.splcenter.org/fighting­hate/intelligence­report/2012/leader%E2%80%99s­sui cide­brings­attention­men%E2%80%99s­rights­movement Greenwood, S., Perrin, A., & Duggan, M. (2016, November 11). Social media update 2016. Pew Research Center . Retrieved from http://www.pewinternet.org/2016/11/11/social­media­update­2016/ Griffith, E. (2018, February 21). This startup’s test shows how harassment targets women online. Wired . Retrieved from https://www.wired.com/story/this­startups­test­shows­how­harassment­targets­women­onl ine/ Grinberg, E. (2014, May 27). Why #YesAllWomen took off on Twitter. CNN . Retrieved from http://www.cnn.com/2014/05/27/living/california­killer­hashtag­/ Gries, L. (2015). Still life with rhetoric: A new materialist approach for visual rhetorics . Boulder, CO: University Press of Colorado. Gruwell, L. (2017). Writing against harassment. Public writing pedagogy and online hate. Composition Forum, 36 , n.p. Retrieved from http://compositionforum.com/issue/36/against­harassment.php Gruwell, L. (2015). Wikipedia's politics of exclusion: Gender, epistemology, and feminist rhetorical (in)action. Computers and Composition, 37 , 117­131. Guilbeault, D., & Woolley, S. (2016, November 1). How Twitter bots are shaping the election. The Atlantic . Retrieved from https://www.theatlantic.com/technology/archive/2016/11/election­bots/506072/ Haas, A. (2009). Wired wombs: a rhetorical analysis of online infertility support communities. In K. Blair, R. Gajjala, & C. Tulley (Eds.), Webbing cyberfeminist practice: Communities, pedagogies and social action (61­84). New York, NY: Hampton Press. “Hateful Conduct Policy.” (2018). Twitter . Retrieved from https://help.twitter.com/en/rules­and­policies/hateful­conduct­policy Hayes, T. (2017). #MyNYPD: Transforming Twitter into a public place for protest. Computers and Composition, 43 , 118­134. Henn, S. (2014, October 21). When women stopped coding. NPR . Retrieved from https://www.npr.org/sections/money/2014/10/21/357629765/when­women­stopped­coding Hemmings, C. (2011). Why stories matter: The political grammar of feminist theory. Durham, NC: Duke University Press.

152 Herbst, C. (2009). Masters of the house: Literacy and the claiming of space on the internet. In K. Blair, R. Gajjala, & C. Tulley (Eds.). Webbing cyberfeminist practice: Communities, pedagogies and social action (135­152). New York, NY: Hampton Press. Herd, W. W. (2017, November 30). You reported sexual harassment, now what? Bumble’s Whitney Wolfe Herd offers advice. Harper’s Bazaar . Retrieved from https://www.harpersbazaar.com/culture/features/a13395335/reporting­sexual­harassment­ advice­whitney­wolfe­bumble/ Herring, S., Job­Sluder, K., Scheckler, R., & Barab, S. (2002). Searching for safety online: Managing ‘trolling’ in a feminist forum. The Information Society, 18 (5), 371–384. Hess, A. (2014). Why women aren’t welcome on the internet. Pacific Standard Magazine . Retrieved from http://www.psmag.com/health­and­behavior/women­arent­welcome­internet­72170 Higgin, T. (2013). /b/lack up: What trolls can teach us about race. The Fibreculture Journal, 13 , n.p. Retrieved from http://twentytwo.fibreculturejournal.org/fcj­159­black­up­what­trolls­can­teach­us­about­ race/ Hine, C. (2000). Virtual ethnography . London: SAGE Publications. hooks, b. (1981). Ain't I a woman: Black women and feminism. Cambridge, MA: South End Press. hooks, b. (1984). Feminist theory: From margin to center . Cambridge, MA: South End Press. Hopkins, N. (2017, May 21). Revealed: Facebook's internal rulebook on sex, terrorism and violence. The Guardian . Retrieved from https://www.theguardian.com/news/2017/may/21/revealed­facebook­internal­rulebook­se x­terrorism­violence Houston, M. (1992). The politics of difference: Race, class, and women’s communication. In L. F. Rakow (Ed.) Women making meaning: New feminist directions in communication (45­59). New York, NY: Routledge. Hsu, T. (2018, March 5). Bumble dating app bans gun images after mass shootings. The New York Times . Retrieved from https://www.nytimes.com/2018/03/05/business/bumble­dating­app­gun­images.html Hunter, S., & Wallace, R. (1999). Contrapower harassment [Special issue]. Dialogue: A Journal for Writing Specialists, 5 (1). Jackman, T. (2013, March 18). ‘SWATing,’ the seamy ‘underweb,’ and award­winning Fairfax journalist Brian Krebs. The Washington Post . Retrieved from http://wapo.st/Z8A15k?tid=ss_mail&utm_term=.1d396b1165c1 Jane, E. A. (2014a). ‘Back to the kitchen, cunt’: Speaking the unspeakable about online misogyny. Journal of Media and Cultural Studies, 28 (4), 558­570. Jane, E. A. (2014b). ‘Your a ugly, whorish, Slut’: Understanding e­bile. Feminist Media Studies, 14 (4), 531­546.

153 Jangelo, J. (1991). Technopower and technoppression: Some abuses of power and control in computer­assisted writing environments. Computers and Composition, 9 (1), 47­64. Jenkins, H. (2007, December 4). Reconsidering digital immigrants. Confessions of an Aca­Fan . Retrieved from http://henryjenkins.org/blog/2007/12/reconsidering_digital_immigran.html Jones, J. M. (2016, November 15). 'Shameless' star Emmy Rossum harassed by Trump supporters online. USA Today . Retrieved from https://www.jsonline.com/story/life/people/2016/11/15/emmy­rossum­trump­supporters­ online­threats/93902888/ Jones, S. (2017, June 9). White men account for 72% of corporate leadership at 16 of the Fortune 500 companies. Fortune . Retrieved from http://fortune.com/2017/06/09/white­men­senior­executives­fortune­500­companies­diver sity­data/ Johnson, N. (2002). Gender and rhetorical space in American life, 1866­1910. Carbondale, IL: Southern Illinois University Press. Kentish, B. (2016, December 12). Donald Trump has lost popular vote by greater margin than any US President. Independent . Retreived from http://www.independent.co.uk/news/world/americas/us­elections/donald­trump­lost­popu lar­vote­hillary­clinton­us­election­president­history­a7470116.html Kissling, E. A., & Kramarae, C. (1991). Stranger compliments: The interpretation of street remarks. Women’s Studies in Communication, 14 (1), 75­93. Knott, K. (2016, November 23). What it’s like to be named to a watch list of ‘anti­America’ professors. The Chronicle of Higher Education . Retrieved from https://www.chronicle.com/article/What­It­s­Like­to­Be­Named/238486 Koerber, A. (2000). Toward a feminist rhetoric of technology. Journal of Business and Technical Communication, 14 (1), 58­73. Kolko, B. (2000). Erasing@ race: Going white in the (inter)face. In B. Kolko, L. Nakamura, & G. B. Rodman (Eds.), Race in cyberspace (213­232) . New York, NY: Routledge. Kosoff, M. (2018, February 19). “Just an ass­backward tech company”: How Twitter lost the internet war. Vanity Fair . Retrieved from https://www.vanityfair.com/news/2018/02/how­twitter­lost­the­internet­war Kozinets, R. V. (2010). Netnography: Doing ethnographic research online . London: SAGE Publications. Krogstad, J. M. (2015, February 3). Social media preferences vary by race and ethnicity. Pew Research Center . Retrieved from http://www.pewresearch.org/fact­tank/2015/02/03/social­media­preferences­vary­by­race ­and­ethnicity/ Labarre, S. (2013, September 24). Why we're shutting off our comments. Popular Science . Retreived from https://www.popsci.com/science/article/2013­09/why­were­shutting­our­comments

154 Laflen, A., & Fiorenza, B. (2012). “Okay, my rant is over”: The language of emotion in computer­mediated communication. Computers and Composition, 29 (4), 296­308. Lampman, C., Phelps, A., Bancroft, S., & Beneke, M. (2009). Contrapower harassment in academia: A survey of faculty experience with student , , and sexual attraction. Sex Roles: A Journal of Research, 60 (5­6), 331­346. Lanier, J. (2010). You are not a gadget: A manifesto . New York, NY: Vintage Books. Larson, S. (2017, December 1). Twitter has a new reason for why it didn't delete Trump's anti­Muslim retweets. CNNTech . Retrieved from http://money.cnn.com/2017/12/01/technology/twitter­reason­trump­delete­anti­muslim­tw eets/index.html Lather, P. A. (1988). Feminist perspectives on empowering research methodologies. Women’s Studies International Forum, 11 , 569–81. Lather, P. A., & Smithies, C. S. (1997). Troubling the angels: Women living with HIV/AIDS . Boulder, CO: Westview Press. Lecher, C. (2017, December 14). Read the dissenting statements of the Democratic FCC commissioners slamming net neutrality repeal. The Verge . Retrieved from https://www.theverge.com/2017/12/14/16776712/fcc­commissioners­democrat­statements ­net­neutrality Leight, E. (2016, October 14). Danny Elfman scores creepy 'Trump stalks Hillary' clip. . Retrieved from https://www.rollingstone.com/politics/news/danny­elfman­scores­creepy­trump­stalks­hill ary­clip­w445037 Lewin, S. (2015, October 14). In celebration of Ada Lovelace, the first computer programmer. . Retrieved from https://www.scientificamerican.com/article/in­celebration­of­ada­lovelace­the­first­compu ter­programmer/ Levintova, H. (2016). This congresswoman has plans to stop online harassment. Mother Jones . Retrieved from http://www.motherjones.com/politics/2016/09/katherine­clark­fight­against­internet­trolls­ gamergate Medina, J. (2014, May 26). Campus killings set off anguished conversation about the treatment of women. The New York Times . Retrieved from https://www.nytimes.com/2014/05/27/us/campus­killings­set­off­anguished­conversation ­about­the­treatment­of­women.html Mele, C. (2016). Professor Watchlist is seen as threat to . The New York Times . Retrieved from https://www.nytimes.com/2016/11/28/us/professor­watchlist­is­seen­as­threat­to­academi c­freedom.html

155 Meyer, R. (2018, March 8). The grim conclusion of the largest­ever study of . The Atlantic . Retrieved from https://www.theatlantic.com/technology/archive/2018/03/largest­study­ever­fake­news­m it­twitter/555104/ Levy, K. (2017, September 2). Game developers are finally stepping up to change their hate­filled industry. Business Insider. Retrieved from http://www.businessinsider.com/fed­up­game­developers­sign­open­letter­2014­9 Levy, S. (2017, April 26). Jack Dorsey on Donald Trump. Wired . Retrieved from https://www.wired.com/2017/04/jack­dorsey­on­donald­trump/ Lewis, R. C. (2016, August 25). The Leslie Jones hack is proof black women are targets for violence. Teen Vogue . Retrieved from https://www.teenvogue.com/story/leslie­jones­website­hack­black­women­violence­racis m­misogyny Lorde, A. (2007). Age, race, class, and sex: Women redefining difference. Sister outsider (114­124). New York, NY: Ten Speed Press. Lovett, I., & Nagourney, A. (2014, May 24). Video rant, then deadly rampage in California town. The New York Times . Retrieved from https://www.nytimes.com/2014/05/25/us/california­drive­by­shooting.html?_r=0 Loza, S. (2014). Hashtag feminism, #SolidarityIsForWhiteWomen, and the other #FemFuture. Ada: A Journal of Gender, New Media and Technology, 5 , n.p. Retrieved from http://adanewmedia.org/2014/07/sloza/1499/ Lush, T. (2016, September 27). For many women, watching Trump interrupt Clinton 51 times was unnerving but familiar. PBS . Retrieved from https://www.pbs.org/newshour/politics/for­many­women­watching­trump­interrupt­clinto n­51­times­was­unnerving­but­familiar Lussos, R. G. (2018). Twitter bots as digital writing assignments. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 22 (2), n.p. Retrieved from http://praxis.technorhetoric.net/tiki­index.php?page=PraxisWiki:_:twitterbots#Why_Twit ter_Bots_ Lyon, A. (2004). Confucian silence and remonstration: A basis for deliberation? In C. S. Lipson & R. A. Binkley (Eds.), Rhetoric before and beyond the Greeks (131­136). Albany, NY: State University of New York Press. Madden, S. (2014). Obsolescence in/of digital writing studies. Computers and Composition, 33 , 29­ 39. Mantilla, K. (2015). Gendertrolling: How misogyny went viral . Santa Barbara, CA: ACB­CLIO, LLC. Mantilla, K. (2013). Gendertrolling: Misogyny adapts to new media. Feminist Studies , 39 (2), 563–570.

156 MacKinnon, C. A. (1987). : Discourses on life and law . Cambridge, MA: Harvard University Press. Maranto, G., & Barton, M. (2010). Paradox and promise: MySpace, Facebook, and the sociopolitics of social networking in the writing classroom. Computers and Composition, 27 (1), 36­47. Marcotte, A. (2016, June 13). Overcompensation nation: It’s time to admit that toxic masculinity drives gun violence. Salon . Retrieved from https://www.salon.com/2016/06/13/overcompensation_nation_its_time_to_admit_that_tox ic_masculinity_drives_gun_violence/ Markham, A. N. (2013). Fieldwork in social media: What would Malinowski do? Qualitative Communication Research, 2 (4), 434–446. Markham, A. N., & Baym, N. K. (2008). Internet inquiry: Conversations about method. Thousand Oaks, CA: SAGE Publications. Markoff, J. (2016, November 17). Automated pro­Trump bots overwhelmed pro­Clinton messages, researchers say. The New York Times . Retrieved from https://www.nytimes.com/2016/11/18/technology/automated­pro­trump­bots­overwhelme d­pro­clinton­messages­researchers­say.html Martin, C. E., & Valenti, V. (2013). #FemFuture: Online revolution. Barnard Center for Research on Women . Retrieved from http://bcrw.barnard.edu/wp­content/nfs/reports/NFS8­FemFuture­Online­Revolution­Repo rt­April­15­2013.pdf Matias, J. N., Johnson, A., Boesel, W. E., Keegan, B., Friedman, J., & DeTar, C. (2015, May 13). Reporting, reviewing, and responding to harassment on Twitter. Women, Action, and the Media . Retrieved from http://womenactionmedia.org/twitter­report Matsakis, L. (2017, March 31). Twitter just killed one of its iconic features. Mashable . Retrieved from http://mashable.com/2017/03/31/twitter­kills­eggs­profile­photo/ May C. (2017, July 5). Silicon Valley’s sexism problem: Another tech executive resigns over sexual harassment charges. Salon . Retrieved from https://www.salon.com/2017/07/05/silicon­valleys­sexism­problem­another­tech­executi ve­resigns­over­sexual­harassment­charges/ McArdle, M. (2015, April 21). Twitter’s harassment problem is just business. Bloomberg. Retreived from https://www.bloomberg.com/view/articles/2015­04­21/twitter­s­harassment­problem­is­j ust­business McKee, H. A., & Porter, J. E. (2009). Ethics of internet research: A rhetorical, case­based process. New York, NY: Peter Lang Publishing Inc. McKee, H. (2002). ‘YOUR VIEWS SHOWED TRUE IGNORANCE!!!’: (Mis)communication in an online interracial discussion forum. Computers and Composition, 19 (4), 411­434.

157 McKinney, K. (2014, May 15). Here's why women have turned the "not all men" objection into a meme. . Retrieved from https://www.vox.com/2014/5/15/5720332/heres­why­women­have­turned­the­not­all­me n­objection­into­a­meme McMillan, R. (2015, October 13). Her code got humans on the moon—and invented software itself. Wired . Retrieved from https://www.wired.com/2015/10/margaret­hamilton­nasa­apollo/ McMorris­Santoro, E. (2016, January 29). The Bernie bros are a problem and the Sanders campaign is trying to stop them. BuzzFeed . Retrieved from https://www.buzzfeed.com/evanmcsan/the­bernie­bros?utm_term=.nhzMP7zAv#.jub5O KdY4 Meisner, J. (2017, January 24). Chicagoan gets prison for 'Celebgate' nude­photo hacking that judge calls 'abhorrent.' Chicago Tribune . Retrieved from http://www.chicagotribune.com/news/local/breaking/ct­celebgate­hacking­scandal­senten cing­met­20170123­story.html Megarry, J. (2014). Online incivility or sexual harassment? Conceptualizing women’s experiences in the digital age. Women’s Studies International Forum, 47 , 46­55. Miller, G., Nakashima, E., & Entous, A. (2017). Obama’s secret struggle to punish Russia for Putin’s election assault. The Washington Post . Retrieved from https://www.washingtonpost.com/graphics/2017/world/national­security/obama­putin­ele ction­hacking/ Miller, C. C. (2015, July 9). When algorithms discriminate. The New York Times . Retrieved from https://www.nytimes.com/2015/07/10/upshot/when­algorithms­discriminate.html?_r=0 Millhiser, I. (2016, February 7). Bernie Sanders tells Berniebros to knock it off—‘we don’t want that crap.’ ThinkProgress . Retrieved from https://thinkprogress.org/bernie­sanders­tells­berniebros­to­knock­it­off­we­dont­want­th at­crap­dac49275602f/ Molina, B. (2017). Twitter rolls back safety feature tied to lists. USA Today . Retrieved from http://www.usatoday.com/story/tech/talkingtech/2017/02/14/twitter­rolls­back­safety­feat ure­tied­lists/97889958/ Moore, J. L., Rosinski, P., Peeples, T., Pigg, S., Rife, M. C., Brunk­Chavez, B., Lackey, D., Rumsey, S. K., Tasaka, R., Curran, P., & Grabill, J. T. (2016). Revisualizing composition: How first­year writers use composing technologies. Computers and Composition, 39 , 1­13. Mountford, R. (2003). The gendered pulpit: Preaching in American Protestant space s. Carbondale, IL: Southern Illinois University Press. Munger, K. (2016, November 17). This researcher programmed bots to fight racism on Twitter. It worked. The Washington Post . Retrieved from

158 https://www.washingtonpost.com/news/monkey­cage/wp/2016/11/17/this­researcher­pro grammed­bots­to­fight­racism­on­twitter­it­worked/?utm_term=.a9c3ed44a0fa Nakamura, L. (2008). Digitizing race: Visual cultures of the internet . Minneapolis, MN: University of Minnesota Press. Nakamura, L. (2012). ‘It’s a nigger in here! Kill the nigger!’: User­generated media campaigns against racism, sexism, and homophobia in digital games. In A. N. Valdivia & K. Gates (Eds.), The international encyclopedia of media studies (2­15). Malden, MA: Blackwell Publishing Ltd. Naughton, J. (2018, March 18). Extremism pays. That’s why Silicon Valley isn’t shutting it down. The Guardian . Retrieved from https://www.theguardian.com/commentisfree/2018/mar/18/extremism­pays­why­silicon­ valley­not­shutting­it­down­ “New Ways to Control Your Experience on Twitter.” (2016, August 18). Twitter . Retrieved from https://blog.twitter.com/official/en_us/a/2016/new­ways­to­control­your­experience­on­t witter.html Noble, S. (2016). Safiya Noble: Challenging the algorithms of oppression. Personal Democracy Forum . Retrieved from https://www.youtube.com/watch?v=iRVZozEEWlE Noble, S. U. (2013). Google search: Hyper­visibility as a means of rendering black women and girls invisible. InVisible Culture, 19 , n.p. Retrieved from http://ivc.lib.rochester.edu/google­search­hyper­visibility­as­a­means­of­rendering­black­ women­and­girls­invisible/ Ohmann, R. (1985). Literacy, technology, and monopoly capital. College English, 47 (7), 675­689. Ohrnberger, J., Fichera, E., & Sutton, M. (2017). The relationship between physical and mental health: A mediation analysis. Social Science & Medicine, 195 , 42­49. Oluo, I. (2015, November 19). Why we don’t have comments. The Establishment. Retrieved from https://theestablishment.co/why­we­dont­have­a­comments­section­4b491cc4fab Ortega, M. (2006). Being lovingly, knowingly ignorant: White feminism and women of color. Hypatia, 21 (3), 56­74. “Our approach to policy…” (2017). Twitter . Retrieved from https://help.twitter.com/en/rules­and­policies/enforcement­philosophy Padmaja, S. (2016). Empowering women: Access to public spaces. Journal of Governance & Public Policy, 6 (2), 98­104. Papacharissi, Z. (2014). Affective publics: Sentiment, technology, and politics . New York: Oxford University Press. Patrick, C. (2013). Perelman, Foucault, and social networking: How Facebook and audience perception can spark critical thinking in the composition classrooms. Computers and Composition Online . Retrieved from from http://cconlinejournal.org/spring2013_special_issue/Patrick/

159 Penney, J., & Dadas, C. (2014). (Re)Tweeting in the service of protest: Digital composition and circulation in the movement. New Media & Society, 16 (1), 74­ 90. Penny, L. (2013). Cybersexism: Sex, gender and power on the internet . New York. NY: Bloomsbury Publishing. Perez, S. (2017, February 14). Twitter quickly kills a poorly thought out anti­abuse measure. Tech Crunch . Retrieved from https://techcrunch.com/2017/02/14/twitter­quickly­kills­a­poorly­thought­out­anti­abuse­ measure/ Phillips, W. (2013, June 10). Don’t feed the trolls? It’s not that simple. The Daily Dot . Retrieved from https://www.dailydot.com/via/phillips­dont­feed­trolls­antisocial­web/ Phillips, W. (2015a, May 10). Let’s call trolling what it really is. The Kernel . Retrieved from http://kernelmag.dailydot.com/issue­sections/staff­editorials/12898/trolling­stem­tech­sex ism/ Phillips, W. (2015b). This is why we can't have nice things: Mapping the relationship between online trolling and mainstream culture . Cambridge, MA: MIT Press. Piggott, S. (2016, November 9). White nationalists and the so­called "alt­right" celebrate Trump's victory. Southern Poverty Law Center . Retrieved from https://www.splcenter.org/hatewatch/2016/11/09/white­nationalists­and­so­called­alt­righ t­celebrate­trumps­victory Piner, C. (2016, July 28). Feminist writer Jessica Valenti takes a break from social media after threat against her daughter. Slate . Retrieved from http://www.slate.com/blogs/xx_factor/2016/07/28/feminist_writer_jessica_valenti_takes_ a_break_from_social_media_after_threat.html Poland, B. (2015). Haters: Harassment, abuse, and violence online . Lincoln, NE. University of Nebraska Press. Postill, J., & Pink, S. (2012). Social media ethnography: The digital researcher in a messy web. Media International Australia, 145 (1), 123­134. Prager University vs. Google Inc. (2017). Browne George Ross LLP. Retrieved from http://www.bgrfirm.com/wp­content/uploads/2017/10/PRAGER_U­_v_GOOGLE­YOU TUBE_complaint_10­23­2017_FILED.pdf Puwar, N. (2004). Space invaders: Race, gender and bodies out of place . Oxford: Berg Publishers. Quodling, A. (2015, April 21). Doxxing, swatting and the new trends in online harassment. The Conversation . Retrieved from http://theconversation.com/doxxing­swatting­and­the­new­trends­in­online­harassment­4 0234 Rappeport, A. (2016, October 10). What story did debate night body language tell? The New York Times . Retrieved from https://www.nytimes.com/2016/10/11/us/politics/body­language­debate.html

160 Ratcliffe, K. (2005). Rhetorical listening: Identification, gender, and whiteness . Carbondale, IL: Southern Illinois University Press. Rawlinson, K., & Peachey, P. (2012, April 12). Hackers step up war on security services. Independent. Retrieved from http://www.independent.co.uk/news/uk/crime/hackers­step­up­war­on­security­services­7 640780.html Redden, M., & Pengelly, M. (2017, June 17). Prosecutors vow to retry Bill Cosby after sexual assault case ends in mistrial. The Guardian . Retrieved from https://www.theguardian.com/world/2017/jun/17/bill­cosby­sexual­assault­case­ends­mis trial­hung­jury Reid, Jean. (2011). “We don’t Twitter, we Facebook”: An alternative pedagogical space that enables critical practices in relation to writing. English Teaching: Practice and Critique, 10 (1), 58–80. “Rethinking our default profile photo.” (2017, March 31). Twitter . Retrieved from https://blog.twitter.com/en_us/topics/product/2017/rethinking­our­default­profile­photo.h tml Rhodes, J. (2005). Radical feminism, Writing, and critical agency: From manifesto to modem. Albany: SUNY Press. Rich, A. (1995). Toward a woman­centered university. On Lies, Secrets, and Silence: Selected Prose (126­156). New York: Norton. Ritchie, J., & Ronald, K. (2001). Introduction. In J. Ritchie, & Ronald, K. (Eds.), Available means: An anthology of women's rhetoric(s) , (xv­xxxi). , PA: University of Pittsburgh Press. Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. In S. U. Noble & B. M. Tynes (Eds.), The intersectional internet: Race, sex, class, and culture online (147­160). New York, NY: Peter Lang Publishing, Inc. Robinson, J. (2016, December 29). The troll who helped torment Leslie Jones off Twitter just landed a massive book deal. Vanity Fair . Retrieved from http://www.vanityfair.com/style/2016/12/milo­yiannopoulos­leslie­jones­book­deal Rogers, K. (2017, June 13). Kamala Harris is (again) interrupted while pressing a senate witness. The New York Times . Retrieved from https://www.nytimes.com/2017/06/13/us/politics/kamala­harris­interrupted­jeff­sessions.h tml Ross, L. (2011). Understanding reproductive justice. Trust Black Women . Retrieved from https://www.trustblackwomen.org/our­work/what­is­reproductive­justice/9­what­is­repro ductive­justice Royster, J. J., & Kirsch, G. E. (2012). Feminist rhetorical practices: New horizons for rhetoric, composition, and literacy studies. Carbondale, IL: Southern Illinois University Press.

161 Santana, A. D. (2014). Virtuous or vitriolic: The effect of anonymity on civility in online newspaper reader comment boards. Journalism Practice, 8 (1), 18­33. Selfe, C. L., & Hawisher, G. E. (2012). Exceeding the bounds of the interview: Feminism, mediation, narrative, and conversations about digital literacy. In L. Nickoson, & M. P. Sheridan, (Eds.), Writing studies research in practice: Methods and methodologies (36­50). Carbondale, IL: Southern Illinois University Press. Selfe, C. L., & Selfe, R. J. (1994). The politics of the interface: Power and its exercise in electronic contact zones. College Composition and Communication, 45 (4), 480­504. Selfe, C. L., & Meyer, P. R. (1991). Testing claims for on­line conferences. Written Communication, 2 , 163­192. Selfe, C. L. (1999). Technology and literacy: A story about the perils of not paying attention. College Composition and Communication, 50 (3), 411­436. Shachaf, P., & Hara, N. (2010). Beyond vandalism: Wikipedia trolls. Journal of Information Science, 36 (3), 357­370. Shapiro, R. (2017, February 8). Democrat defies GOP, reads part of Coretta Scott King’s letter on senate floor. Huffington Post . Retrieved from https://www.huffingtonpost.com/entry/jeff­merkley­reads­part­of­coretta­scott­kings­lette r­on­senate­floor_us_589adbc4e4b09bd304bedc35 Shepherd, R. (2015). FB in FYC: Facebook use among first­year composition students. Computers and Composition, 35 , 86–107. Shepherd, R. (2016). Men, women, and Web 2.0 writing: Gender difference in Facebook composing. Computers and Composition, 39 , 14­26. Sim, S. (2007). Manifesto for silence: Confronting the politics and cultures of noise . Edinburgh, Scotland: Edinburgh University Press. Siminoff, J. (2017, January 19). Building a more inclusive Twitter in 2016. Twitter . Retrieved from https://blog.twitter.com/en_us/topics/company/2017/building­a­more­inclusive­twitter­in­ 2016.html Smith, M. A., Raine, L., Shneiderman, B, & Himelboin, I. (2014, February 20). Mapping twitter topic networks: From polarized crowds to community clusters. Pew Research Center . Retrieved from http://www.pewinternet.org/2014/02/20/mapping­twitter­topic­networks­from­polarized­ crowds­to­community­clusters/ Smith, M. D. (2013). #SolidarityIsForWhiteWomen, #BlackPowerIsForBlackMen, but many are still brave. Feministing . Retrieved from http://feministing.com/2013/08/14/solidarityisforwhitewomen­blackpowerisforblackmen­ but­many­are­still­brave/ Solon, O. (2017, May 25). Underpaid and overburdened: The life of a Facebook moderator. The Guardian. Retrieved from

162 https://www.theguardian.com/news/2017/may/25/facebook­moderator­underpaid­overbur dened­extreme­content Soni, J. (2013, August 26). The reason HuffPost is ending anonymous accounts. The Huffington Post . Retrieved from https://www.huffingtonpost.com/jimmy­soni/why­is­huffpost­ending­an_b_3817979.html Suler, John. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7 (3), 321–326. Surtees, P., Wainwright, N. M., Luben, R. N., Wareham, N. J., Bingham, S. A., & Khaw, K. T. (2008). Psychological distress, major depressive disorder, and risk of stroke. Neurology, 70 (10), 788­794. Syfret, W. (2015, September 23). We spoke to a founder of # about rejecting shame. Vice . Retrieved from https://www.vice.com/en_uk/article/kwxg9x/we­spoke­to­a­founder­of­shoutyourabortio n­about­rejecting­shame Takayoshi, P. (1994). Building new networks from the old: Women’s experiences with electronic communications. Computers and Composition, 1 1 , 21­35. Tedlock, B. (2003). Ethnography and ethnographic representation. In N. K. Denzin, & Y. S. Lincoln (Eds.), Strategies of qualitative inquiry (65–213). Thousand Oaks, CA: SAGE Publications. Thompson, D. M. (1993). "The Woman in the Street:" the Public Space from Sexual Harassment. Yale Journal of Law and Feminism, 6 (2), 313­348. Thompson, N. (2018, March 15). Susan Wojcicki on YouTube’s fight against misinformation. Wired . Retrieved from https://www.wired.com/story/susan­wojcicki­on­youtubes­fight­against­misinformation/ “The Twitter Rules.” (2018). Twitter . Retrieved from https://help.twitter.com/en/rules­and­policies/twitter­rules Tierney, T. F. (2013). The public space of social media: Connected cultures of the network society . New York, NY: Routledge. Tiku, N., & Newton, C. (2015, February 4). Twitter CEO: 'We suck at dealing with abuse.' The Verge . Retrieved from http://www.theverge.com/2015/2/4/7982099/twitter­ceo­sent­memo­taking­personal­resp onsibility­for­the Titcomb, J. (2017, December 18). Twitter bans leaders after anti­Muslim videos shared by Donald Trump. The Telegraph . Retrieved from https://www.telegraph.co.uk/technology/2017/12/18/twitter­bans­britain­first­account­wh ose­anti­muslim­videos/ Tobin, A. (2013, August 14). Q&A with #SolidarityIsForWhiteWomen creator Mikki Kendall. Bustle . Retrieved from

163 http://www.bustle.com/articles/3612­qa­with­solidarityisforwhitewomen­creator­mikki­k endall Totilo, S. (2014, October 11). Another woman in gaming flees home following death threats. Kotaku. Retrieved from https://kotaku.com/another­woman­in­gaming­flees­home­following­death­thre­1645280 338 Tran, M. (2015). Combatting gender privilege and recognizing a women’s in public spaces: Arguments to criminalize catcalling and creepshots. Hastings Women’s Law Journal, 26 (2), 185­206. Trice, M., & Potts, L. (2018). Building dark patterns into platforms: How GamerGate perturbed Twitter’s user experience. Present Tense: A Journal of Rhetoric in Society, 6 (3), n.p. Retrieved from http://www.presenttensejournal.org/volume­6/building­dark­patterns­into­platforms­how­ gamergate­perturbed­­user­experience/ Update: 1,094 bias­related incidents in the month following the election. (2016, December 16). Southern Poverty Law Center . Retrieved from https://www.splcenter.org/hatewatch/2016/12/16/update­1094­bias­related­incidents­mont h­following­election Uhrmacher, K., & Gamio, L. (2016, October 10). What two body language experts saw at the second presidential debate. The Washington Post . Retrieved from https://www.washingtonpost.com/graphics/politics/2016­election/second­debate­body­lan guage/ Vance, C. S. (1984). More pleasure, more danger: A decade after the Barnard Sexuality Conference. In C. S. Vance (Ed.), Pleasure and danger: Exploring female sexuality (xvi­xxxix). London: Pandora Press. Varol, O., Ferrara, E., Clayton, D. A., Filippo, M., & Flammini, A. (2017, March 27). Online human­bot interactions: Detection, estimation, and characterization. Retrieved from https://arxiv.org/pdf/1703.03107.pdf Vee, A. (2010). Carving up the commons: How software patents are impacting our digital composition environments. Computers and Composition, 27 (3), 179­192. Vee, A. (2017). Coding literacy: How computer programming is changing writing . Cambridge, MA: The MIT Press. Vera­Gray, F. (2016). Men's stranger intrusions: Rethinking street harassment. Women's Studies International Forum, 58 , 9­17. Victor, D. (2017, February 8). ‘Nevertheless, she persisted’: How senate’s silencing of Warren became a meme. The New York Times . Retrieved from https://www.nytimes.com/2017/02/08/us/politics/elizabeth­warren­republicans­facebook­ twitter.html

164 Vie, S. (2008). Digital divide 2.0: “Generation M” and online social networking sites in the composition classroom. Computers and Composition, 25 , 9–23. Vie, S. (2015). What's going on?: Challenges and opportunities for social media use in the writing classroom. The Journal of Faculty Development, 29 (2), 33­44. “Volunteer.” (2018). Girls Who Code . Retrieved from https://girlswhocode.com/volunteer/ Wagner, K., & Molla, R. (2018, March 2). Twitter claims it was more diverse in 2017, but that’s not what the data shows. Recode . Retrieved from https://www.recode.net/2018/3/2/17069188/twitter­diversity­report­workforce­women­mi norities­percent­white­asian­data Wang, C. (2016, July 26). Jack Dorsey said online harassment 'has no place on Twitter.’ CNBC. Retrieved from https://www.cnbc.com/2016/07/26/jack­dorsey­said­online­harassment­has­no­place­on­t witter.html Warnick, B., & Heineman, D. S. (2012). Rhetoric online: The politics of new media. New York, NY: Peter Lang Publishing, Inc. Warshauer, S. C. (1995). Rethinking teacher authority to counteract homophobic in the networked classroom: A model of teacher response and overview of classroom methods. Computers and Composition, 12 , 97­111. Warzel, C. (2016a, September 22). 90% of the people who took BuzzFeed News’ survey say Twitter didn’t do anything when they reported abuse. BuzzFeed News . Retrieved from https://www.buzzfeed.com/charliewarzel/90­of­the­people­who­took­buzzfeed­news­surv ey­say­twitter­d?utm_term=.vjQDOpXQj#.lhjmQpoGe Warzel, C. (2016b, August 11). “A Honeypot For Assholes": Inside Twitter’s 10­year failure to stop harassment. BuzzFeed News . Retrieved from https://www.buzzfeed.com/charliewarzel/a­honeypot­for­assholes­inside­twitters­10­year ­failure­to­s?utm_term=.xrr4z1ONQ#.qaMr6K3qL Warzel, C. (2017, July 18). Twitter is still dismissing harassment reports and frustrating victims. BuzzFeed News . Retrieved from https://www.buzzfeed.com/charliewarzel/twitter­is­still­dismissing­harassment­reports­an d?utm_term=.jyxlMjQa8#.kv8EojdQW “What We Do.” PragerU . Retrieved from https://www.prageru.com/what­we­do Williams, C. (2011, December 28). Anonymous 'Robin Hood' hacking attack hits major firms. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/news/8980453/Anonymous­Robin­Hood­hackin g­attack­hits­major­firms.html Wittenstein, J. (17 August, 2017). What is Trump worth to Twitter? One analyst estimates $2 billion. Bloomberg Technology . Retrieved from https://www.bloomberg.com/news/articles/2017­08­17/what­is­trump­worth­to­twitter­on e­analyst­estimates­2­billion

165 Wu, B. (2014, October 20). Rape and death threats are terrorizing female gamers. Why haven't men in tech spoken out? The Washington Post . Retrieved from https://www.washingtonpost.com/posteverything/wp/2014/10/20/rape­and­death­threats­ are­terrorizing­female­gamers­why­havent­men­in­tech­spoken­out/?utm_term=.4281efe 8a390 Wysocki, A. F., & Jasken, J. I. (2004). What should be an unforgettable face…. Computers and Composition, 21 (1), 29­48. Young, H. (2014). Race in online fantasy fandom: Whiteness on Westeros.org. Continuum, 28 (5), 737­747.

Appendix A: Survey Questions

● How do you describe your gender?
● What are your racial and/or ethnic identifications?
● How do you describe your sexuality?
● About how often do you tweet?
  ○ Never
  ○ Monthly
  ○ Every few weeks
  ○ Weekly
  ○ Once daily
  ○ Multiple times per day
  ○ Other (write in response)
● What is your account privacy setting?
  ○ Locked
  ○ Unlocked
  ○ Not sure
● What other social media do you use on a regular basis besides Twitter? Check all that apply.
  ○ Facebook
  ○ Tumblr
  ○ Instagram
  ○ Pinterest
  ○ Snapchat
  ○ YouTube
  ○ Other (write in response)
● Have you ever experienced harassment while using Twitter?
  ○ Yes
  ○ No
  ○ Not sure
● How many times have you experienced harassment while using Twitter?
  ○ Never
  ○ 1-2 isolated times
  ○ About once a month
  ○ About once a week
  ○ Daily
  ○ Other (write in response)
● To what degree would you rate the severity of this harassment? (Rate 1-5, with 1 being "not severe" and 5 being "extremely severe")
● Does harassment alter how you use Twitter?
  ○ Yes
  ○ No
  ○ Not sure
  ○ Other (write in response)
● What strategies have you used to deal with harassment on Twitter? Check all that apply.
  ○ Ignore the user
  ○ Mute the user
  ○ Block the user
  ○ Report the user
  ○ Unfollow the user
  ○ Reply to the user
  ○ Have friends reply to the user
  ○ Tweet a screenshot of the harassment
  ○ Subtweet the user
  ○ Take a break from Twitter
  ○ Ask friends to flush out your mentions
  ○ Self-censor
  ○ Other (write in response)
● In your experience, what identities, actions, or discussions provoke harassment?
● Have you experienced harassment on Twitter that you would consider to be based, at least in part, on your gender?
  ○ Yes
  ○ No
  ○ Not sure
  ○ Other (write in response)
● What do you think can and/or should be done to curb the problem of harassment on Twitter?
● Do you consider yourself a feminist?
  ○ Yes
  ○ No
  ○ Not sure
  ○ I don't identify with the label "feminist," but I believe women face discrimination
  ○ Other (write in response)
● If you do identify with feminism, do you make this identity known on Twitter?
  ○ All the time
  ○ Sometimes
  ○ No
  ○ Not sure
  ○ Other (write in response)
● If you identify as a feminist, in what ways, if any, has Twitter influenced your feminism?
● If you identify as a feminist, in what ways, if any, has your feminism influenced your use of Twitter?
