Detecting Nastiness in Social Media

Niloofar Safi Samghabadi♠, Suraj Maharjan♠, Alan Sprague♣, Raquel Diaz-Sprague♣, Thamar Solorio♠
♠Department of Computer Science, University of Houston
♣Department of Computer & Information Sciences, University of Alabama at Birmingham
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract

Although social media has made it easy for people to connect on a virtually unlimited basis, it has also opened doors to people who misuse it to undermine, harass, humiliate, threaten and bully others. There is a lack of adequate resources to detect and hinder its occurrence. In this paper, we present our initial NLP approach to detecting invective posts as a first step toward eventually detecting and deterring cyberbullying. We crawl data containing profanities and then determine whether or not it contains invective. Annotations on this data are improved iteratively by in-lab annotation and crowdsourcing. We pursue different NLP approaches, combining various typical and some newer techniques, to distinguish the use of swear words in a neutral way from instances in which they are used in an insulting way. We also show that this model not only works for our data set, but can also be successfully applied to different data sets.

1 Introduction

As the internet has become the preferred means of communication worldwide¹, it has introduced new benefits as well as new dangers. One of the most unfortunate effects of online interconnectedness is cyberbullying, defined as the deliberate use of information/communication technologies (ICTs) to cause harm to people through a loss of both self-esteem and the esteem of their friendship circles (Patchin and Hinduja, 2010). The groups most affected by this phenomenon are teens and pre-teens (Livingstone et al., 2010).

¹The New Era of Communication Among Americans: http://www.gallup.com/poll/179288/new-era-communication-americans.aspx

According to a High School Youth Risk Behavior Survey, 14.8% of students surveyed nationwide in the United States (US) reported being bullied electronically (nobullying.com, 2015). Other research, conducted by the Cyberbullying Research Center from 2007 to 2015 (Patchin, 2015), shows that on average 26.3% of middle and high school students from across the United States have been victims of cyberbullying at some point in their lives. Also, on average, about 16% of students have admitted to cyberbullying others at some point in their lives. Studies have shown that cyberbullying victims face social, emotional, physiological and psychological disorders that lead them to harm themselves (Xu et al., 2012).

In this research we perform the initial step towards detecting invective in online posts from social media sites used by teens, as we believe such posts can be the starting point of cyberbullying events. We first create a data set that includes highly negative posts from ask.fm, a semi-anonymous social network where anyone can post a question to any other user, and may choose to do so anonymously. Given that people tend to engage in cyberbullying behavior under the cover of anonymity (Sticca and Perren, 2013), the anonymity option in ask.fm, as in other social media platforms, gives attackers the power to freely harass users by flooding their pages with profanity-laden questions and comments. Seeing many vile messages on one's profile page often disturbs the user, and several teen suicides have been attributed to cyberbullying on ask.fm (Healy, 2014; Shute, 2013). This phenomenon motivated us to crawl a number of ask.fm accounts and analyze them manually to ascertain how cyberbullying is carried out on this particular site. We learned that victims have their profile pages flooded with abusive posts. From identifying victims of cyberbullying, we then switched to looking for word patterns that make a post abusive. Since abusive posts are rare compared to the rest of online posts, we decided to focus exclusively on posts that contain profanity in order to ensure that we would obtain enough invective posts. This is analogous to the data collection method of Xu et al. (2012), who limited their Twitter data to tweets containing the words bully, bullied, and bullying.
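As a concrete illustration of this collection strategy, the sketch below keeps only question-answer pairs in which either side contains a term from a profanity lexicon. It is a minimal sketch, not the authors' crawler: the lexicon entries here are mild stand-ins, and the pair format is an assumption.

```python
import re

# Placeholder lexicon with mild stand-ins for real profanity; the
# authors' actual word list is not part of this excerpt.
BAD_WORDS = {"idiot", "loser", "stupid"}

# Match lexicon entries as whole words, case-insensitively.
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in sorted(BAD_WORDS)) + r")\b",
    re.IGNORECASE,
)

def contains_bad_word(text: str) -> bool:
    """True if the text contains at least one lexicon term."""
    return PATTERN.search(text) is not None

def keep_pair(question: str, answer: str) -> bool:
    """Keep a question-answer pair if either side contains profanity."""
    return contains_bad_word(question) or contains_bad_word(answer)

pairs = [("how was your day?", "pretty good!"),
         ("why are you such a loser?", "leave me alone")]
print([p for p in pairs if keep_pair(*p)])  # keeps only the second pair
```

Keyword filtering like this trades coverage for density: it misses profanity-free hostility, but it concentrates the candidate pool enough that a rare class becomes practical to annotate.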
The main contributions of this paper are as follows. We create a new resource for investigating negative posts on a social media platform used predominantly by teens, and make our data set public. The most noticeable difference between our data set and previous similar corpora is that it provides a generalized view of invective posts, not biased towards a specific topic or target group. In our data, each post is judged by three different annotators. We then perform experiments with both typical features (e.g. linguistic, sentiment and domain-related) and newer features (e.g. embeddings and topic modeling), and combinations of these features, to automatically identify potential invective posts. We also show the robustness of our model by evaluating it on different data sets (the Wikipedia Abusive Language data set and Kaggle). Finally, we analyze the distribution of bad words in our data, which, among other things, reflects a sexualized teen culture.

2 Related Research

Since our research goal is to detect nastiness in social media as an initial step toward detecting cyberbullying, we analyze previous work focusing on cyberbullying detection. Researchers (Macbeth et al., 2013; Lieberman et al., 2011) have reported that cyberbullying posts are contextual, personalized and creative, which makes them harder to detect than spam. Even without using bad words, a post can be hostile to the receiver. On the other hand, the use of negative words does not necessarily have a cyberbullying effect (al-Khateeb and Epiphaniou, 2016). Researchers have used different approaches to find traces of cyberbullying.

Dinakar et al. (2012) concentrate on sexuality, race and culture, and intelligence as the primary categories related to cyberbullying. They construct a common sense knowledge base, BullySpace, with knowledge about bullying situations and a wide range of everyday life topics. The overall success of this experiment is 66.7% accuracy for detecting cyberbullying in YouTube comments. Xu et al. (2012) identify several key problems in using social media data sources to study bullying and formulate them as NLP tasks. In one of their approaches, they use latent topic modeling to analyze the topics people commonly talk about in bullying comments; however, they find that most topics are hard to interpret. Van Hee et al. (2015) study Dutch ask.fm posts and develop a new scheme for cyberbullying annotation based on the presence and severity of cyberbullying, the role of the post's author, and a number of fine-grained categories associated with cyberbullying. They use the same two-class classification task as the previous studies to automatically detect cyberbullying posts, and achieve an F-score of 55.39%. Kansara and Shekokar (2015) combine text and image analysis techniques and propose a framework for detecting potential cyberbullying threats that analyzes text and images using bag-of-words and bag-of-visual-words models, respectively.

There is also some research in the field of online harassment and hate speech detection. Yin et al. (2009) apply a supervised machine learning approach to the automatic detection of online harassment; they combine content, sentiment, and contextual features and achieve an F-score of 44%. Nobata et al. (2016) use data gathered from Yahoo! Finance and News and present a hate speech detection framework using n-gram, linguistic, syntactic and distributional semantic features, obtaining an F-score of 81% for the combination of all features.

In this study, we present a data set containing question-answer pairs from ask.fm, labeled as positive (neutral) or negative (invective). Our data is conversational data from teenagers. We also have metadata with information about the users, which can eventually help us focus on users who are being bullied with frequent profanity, and help in analyzing the patterns used by attackers. Moreover, compared to previous work, we apply a wider range of typical and newer NLP features, and their combinations, to improve classification performance. Following this approach, we reach an F-score of 59% for identifying invective posts in our own data set. Applying our classification model to the Kaggle and Wikipedia data (introduced later) shows that our method is robust and applicable to other data. We also analyze the bad word distribution in our data set, which shows that most of these bad words are often used in a casual way, so detecting cases of potential invective requires careful feature engineering.
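To make the feature-combination idea concrete, the sketch below joins two "typical" lexical views, word and character n-grams, into a single classifier with scikit-learn. The feature and classifier choices are illustrative assumptions, not the paper's exact configuration; the full feature set (sentiment, domain-related, embedding and topic-model features) is described in later sections.

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Two lexical views of each post: word n-grams and character n-grams.
features = FeatureUnion([
    ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char_ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
])

model = Pipeline([
    ("features", features),
    # class_weight="balanced" because invective posts are the minority class
    ("svm", LinearSVC(class_weight="balanced")),
])

# Toy stand-ins for labeled question-answer pairs.
texts = ["you are so sweet, thanks!", "nobody likes you, just leave",
         "lol that was a crazy game", "you are a worthless idiot"]
labels = ["positive", "negative", "positive", "negative"]

model.fit(texts, labels)
print(model.predict(["you are an idiot and everyone hates you"]))
```

Character n-grams are a common complement to word n-grams in abusive-language work because they remain informative under the creative spellings and obfuscations that make these posts hard to detect.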
3 Data Collection and Annotation

Since most of the abusive posts we observed in our small-scale study contained profanities, we decided to analyze the occurrence of bad words in a random collection of social media data.

3.1 Crowdsourcing Annotations

With our small gold-annotated data, we started a crowdsourcing task to annotate around 600 question-answer pairs on CrowdFlower. We provided a simple annotation guideline for contributors, with some positive and negative examples to ease their task. Each question-answer pair was annotated by three different contributors. Figure 1 shows the interface we designed for the task.
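With three judgments per question-answer pair, a single gold label must be derived from the individual votes. This excerpt does not spell out the aggregation rule; a simple majority vote, sketched below with hypothetical item IDs and the paper's positive/negative labels, is one common choice.

```python
from collections import Counter

def majority_label(judgments):
    """Collapse multiple annotator judgments into one gold label.

    With three annotators per item and two classes, a strict
    majority always exists.
    """
    return Counter(judgments).most_common(1)[0][0]

# Hypothetical layout: item id -> the three CrowdFlower judgments.
annotations = {
    "qa_001": ["negative", "negative", "positive"],
    "qa_002": ["positive", "positive", "positive"],
}

gold = {item: majority_label(votes) for item, votes in annotations.items()}
print(gold)  # {'qa_001': 'negative', 'qa_002': 'positive'}
```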
