HanSEL: Italian Hate Speech detection through Ensemble Learning and Deep Neural Networks

Marco Polignano
University of Bari Aldo Moro
Dept. Computer Science
via E. Orabona 4, 70125 Bari, Italy
[email protected]

Pierpaolo Basile
University of Bari Aldo Moro
Dept. Computer Science
via E. Orabona 4, 70125 Bari, Italy
[email protected]

Abstract

The detection of hate speech over social media and online forums is a relevant task for the research area of natural language processing. This interest is motivated by the complexity of the task and by the social impact of its use in real scenarios. The solution proposed in this work is based on an ensemble of three classification strategies, mediated by a majority vote algorithm: Support Vector Machine with RBF kernel (SVM) (Hearst et al., 1998), Random Forest (Breiman, 2001), and Deep Multilayer Perceptron (MLP) (Kolmogorov, 1992). Each classifier has been tuned using a greedy strategy of hyper-parameter optimization over the F1 score calculated on a 5-fold random subdivision of the training set. Each sentence has been pre-processed to transform it into word embeddings and a TF-IDF bag of words.
The results obtained on the cross-validation over the training sets have shown an F1 value of 0.8034 for Facebook sentences and 0.7102 for Twitter. The code of the proposed system can be downloaded from GitHub: https://github.com/marcopoli/haspeede_hate_detect

1 Introduction and background

In the current digital era, characterized by the large use of the Internet, it is common to interact with others through chats, forums, and social networks, and just as common to express opinions on public pages and online squares. These places of discussion are frequently transformed into "fight clubs" where people use insults and strong words in order to support their ideas. The unknown identity of the writer is used as an excuse to feel free of the consequences of attacking people merely for their gender, race, or sexual inclination. A general absence of automatic moderation of contents can cause the diffusion of this phenomenon. In particular, the consequences for the final user could be psychological problems such as depression, relational disorders and, in the most critical situations, suicidal tendencies. A recent survey of state-of-the-art approaches for hate speech detection is provided by Schmidt and Wiegand (2017). The most common hate speech detection systems are based on text classification algorithms that use a representation of contents based on "surface features", such as those available in a bag of words (BOW) (Chen et al., 2012; Xu et al., 2012; Warner and Hirschberg, 2012; Sood et al., 2012). A solution based on BOW is efficient and accurate, especially when n-grams are extended with semantic aspects derived from the analysis of the text. Chen et al. (2012) describe an increase in classification performance when features such as the number of URLs, punctuation marks and non-English words are added to the vectorial representation of the sentence. Van Hee et al. (2015) proposed, instead, to add as a feature the number of positive, negative and neutral words found in the sentence. This idea demonstrated that the polarity of sentences positively supports the classification task. These approaches suffer from the lack of generalization of the words contained in the bag of words, especially when it is created from a limited training set. In particular, terms found in the test sentences are often missing from the bag. More recent works have proposed word embeddings (Le and Mikolov, 2014) as a possible distributional representation able to overcome this problem. This representation has the advantage of transforming semantically similar words into similar numerical vectors. Word embeddings are consequently used by classification strategies such as Support Vector Machines and, more recently, by deep learning approaches such as deep recurrent neural networks (Mehdad and Tetreault, 2016). The solution proposed in this work reuses the findings of Chen et al. (2012) and Mehdad and Tetreault (2016) to create an ensemble of classifiers, including a deep neural network, which works with a combined representation of word embeddings and a bag of words.

2 Task and datasets description

The hate speech detection strategy proposed in HanSEL has been developed for the HaSpeeDe (Hate Speech Detection) task organized within Evalita 2018 (Caselli et al., 2018), which is going to be held in Turin, Italy, on December 12th-13th, 2018 (Bosco et al., 2018). HaSpeeDe consists of the annotation of messages from social networks (Twitter and Facebook) with a boolean label (0;1) that indicates the presence or absence of hate speech. The task is organized into three sub-tasks, based on the dataset used for training and testing the participants' systems:

• Task 1: HaSpeeDe-FB, where only the Facebook dataset can be used to classify the Facebook test set

• Task 2: HaSpeeDe-TW, where only the Twitter dataset can be used to classify the Twitter test set

• Task 3: Cross-HaSpeeDe, which can be further subdivided into two sub-tasks:

  1. Task 3.1: Cross-HaSpeeDe FB, where only the Facebook dataset can be used to classify the Twitter test set
  2. Task 3.2: Cross-HaSpeeDe TW, where only the Twitter dataset can be used to classify the Facebook test set

The Facebook and Twitter datasets released for the task consist of a total amount of 4,000 comments/tweets, randomly split into development and test sets of 3,000 and 1,000 messages respectively. Data are encoded in UTF-8 with three tab-separated columns, each one representing the sentence id, the text and the class (Table 1).

id    text                               hs
8     Io votero NO NO E NO               0
36    Matteo serve un colpo di stato.    1

Table 1: Examples of annotated sentences.

3 Description of the system

The system proposed in this work is HanSEL: a system of Hate Speech detection through Ensemble Learning and Deep Neural Networks. We decided to approach the problem using a classic natural language processing pipeline with a final task of sentence classification into two exclusive classes: hate speech and not hate speech. The data provided by the task organizers were obtained by crawling social networks, in particular Facebook and Twitter. The analysis of the two data sources revealed many possible difficulties to face when using approaches based on Italian lexicons of hate speech.
In particular, we identified the following issues:

• Repeated characters: many words include characters repeated many times to emphasize the semantic meaning of the word. For example, the words "nooooo", "Grandeeeee" and "ccanaleeeeeeeeeeeeeeee" are found in the training set of Facebook messages.

• Emoji: sentences are often characterized by emoji, such as hearts and smiley faces, that are often missing from external lexicons.

• Presence of links, hashtags and mentions: these elements are typical of the social network language and can introduce noise into the data processing task.

• Length of the sentences: many sentences are composed of only one word or are in general very short. Consequently, they are not expressive of any semantic meaning.

The complexity of the writing style used in hate speech sentences led us to avoid an approach based on standard lexicons and to prefer supervised learning strategies on the dataset provided by the task organizers. We follow the same pre-processing steps applied by Tripodi and Li Pira (2017) to transform the sentences of the task datasets into word embeddings. In particular, we applied the following natural language processing pipeline:

• Reduction of repeated characters: we scan each sentence of the datasets (both training and test). For each sentence, we obtain words by merely splitting it by spaces. Each word is analyzed, and characters repeated three times or more are reduced to only two symbols, trying to keep intact words that naturally include doubles.

• Data cleansing: we transformed the words to lowercase and then removed links, hashtags, entities and emoji from each sentence.

The normalized sentences are consequently tokenized using the TweetTokenizer of the NLTK library. For each sentence, we averaged the word2vec vectors corresponding to each token, removing during each sum the centroid of the whole distributional space.
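As an illustration, this averaging step can be sketched as follows. This is a minimal example over a toy three-dimensional vocabulary; the function name and the sample data are ours, not part of the released system, and plain whitespace splitting stands in for NLTK's TweetTokenizer to keep the sketch dependency-free:

```python
import numpy as np

def sentence_embedding(sentence, w2v, centroid):
    """Average the word2vec vectors of the sentence tokens, subtracting
    the centroid of the whole distributional space from each vector
    before the sum."""
    tokens = sentence.lower().split()
    vectors = [w2v[t] - centroid for t in tokens if t in w2v]
    if not vectors:  # very short sentences may contain no known token
        return np.zeros_like(centroid)
    return np.mean(vectors, axis=0)

# Toy 3-dimensional vocabulary standing in for a real 500-dimensional model
w2v = {"odio": np.array([1.0, 0.0, 0.0]),
       "non": np.array([0.0, 1.0, 0.0]),
       "ti": np.array([0.0, 0.0, 1.0])}
centroid = np.mean(list(w2v.values()), axis=0)
vec = sentence_embedding("non ti odio", w2v, centroid)
```

Note that a sentence covering the whole vocabulary averages exactly to the zero vector, which is the intended effect of centering on the centroid.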
This technique is used to mitigate the loss of information caused by the operation of averaging the semantic vectors.

Sentence processing. We decided to represent each sentence as the concatenation of a 500-feature word embedding vector and a 7,349-size bag of words for Facebook messages or a 24,866-size bag of words for Twitter messages. The two bags of words (Facebook and Twitter) are, instead, created directly on the sentences without any pre-processing, even though during the tuning of the architecture we tried some con-
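As a sketch, the combined representation and the majority-vote ensemble described in the abstract could be assembled with scikit-learn along these lines; the hyper-parameters shown are library defaults and placeholders, not the tuned values of the system, and the embedding matrix is left as zeros for brevity:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Tiny illustrative training set taken from Table 1 (1 = hate speech)
sentences = ["Matteo serve un colpo di stato.", "Io votero NO NO E NO"]
labels = [1, 0]

# TF-IDF bag of words built directly on the raw sentences
tfidf = TfidfVectorizer()
bow = tfidf.fit_transform(sentences).toarray()

# Placeholder for the 500-dimensional averaged word2vec sentence vectors
embeddings = np.zeros((len(sentences), 500))

# Each sentence = concatenation of its embedding vector and bag of words
X = np.hstack([embeddings, bow])

# Majority vote over SVM (RBF kernel), Random Forest and MLP
ensemble = VotingClassifier(
    estimators=[("svm", SVC(kernel="rbf")),
                ("rf", RandomForestClassifier()),
                ("mlp", MLPClassifier(max_iter=200))],
    voting="hard")
ensemble.fit(X, labels)
pred = ensemble.predict(X)
```

Hard voting returns, for each sentence, the class predicted by at least two of the three classifiers.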
