Jigsaw Multilingual Toxic Comment Classification Kaggle Competition

Gabriela Ožegović, Graz University of Technology, [email protected]
Sven Celin, Graz University of Technology, [email protected]

ABSTRACT
The great expansion of the internet has brought a wide range of people online. All these different cultures differ in many things, from political views, religion and economics to their favourite singer or actress, and because of these differences people start to act irrationally and fight online. This happens daily, in the form of internet bullying, online harassment or personal attacks. The idea of this project was to limit toxic comments written by users and flag them as inappropriate. In this work, we show how we flagged toxic comments in the large dataset provided by Kaggle's competition on this topic.

Author Keywords
Text classification, Text mining, Toxic text classification, Word embeddings, word2vec, Feature extraction

INTRODUCTION
Every platform that serves many people will, at some point of its existence, face disagreement, abuse and harassment from groups of people who disagree with its ideas. A single comment is enough to derail an online discussion. To counter this flow of non-constructive comments, this competition was created to yield the best algorithm for flagging toxic comments. As platforms struggle to effectively enable conversations in their comment sections, many limit or completely shut down user comments. This project focuses machine learning models on identifying toxicity in online conversations and flagging comments that are rude, disrespectful or likely to make someone leave the discussion. If such comments could be identified, that would lead to safer and more collaborative threads.

Related work
In 2018, a competition called the "Toxic Comment Classification Challenge" was held on Kaggle [3]. In that competition, competitors were asked to build models that not only recognize toxicity, but also detect several types of it: severe toxicity, obscene, threat, insult and identity hate. The goal was to enable users to select specific types of toxicity and focus on them, since some sites might be fine with one type of toxicity (e.g. severe toxicity) but not with others.

In 2019, another competition on a similar topic was held. In the "Unintended Bias in Toxicity Classification Challenge" [4], the focus was on detecting toxicity while minimizing unintended model bias. Unintended model bias arises when detecting the toxicity of comments that contain the names of identities which are frequently attacked or used in offensive ways. Training a model on this kind of data ends with a text being classified as toxic just because it contains a certain identity, even though it is not toxic itself.

KAGGLE
Kaggle is one of the biggest data science and machine learning communities, where users publish datasets and kernels for everyone to see [11]. This web-based system allows data scientists and machine learning engineers to enter competitions in which they try to solve data science challenges. In its newest instance, Kaggle allows challengers to use TPU and GPU cores on its cloud servers. This sped up many machine learning algorithms and brought data science closer to people who have no GPU or TPU cores in their own computers.

Competitions
The sole purpose of Kaggle competitions is to improve data science and machine learning in a particular field or on a particular topic from industry [1]. Every competition starts with an overview, where the competition maker describes the work and the data, the evaluation, the timeline, the prizes and the code requirements. Second, and most important, is the dataset provided by the competition maker; in this part of the competition, contestants can access all the data the competition requires for building their models and assumptions. For contestants, the notebooks section is the go-to place for submitting results. There, a contestant uploads (or creates in the online editor) a Python notebook and runs it on Kaggle servers with all the provided data. After the notebook executes on the cloud servers, the user is placed on the leaderboard according to their score. Competitions often have more than a few thousand teams, with a maximum of 5 contestants per team, so we could say that these competitions attract a large number of people from all over the world.

JIGSAW MULTILINGUAL TOXIC COMMENT CLASSIFICATION
In this year's competition, the main challenge was to build multilingual models for toxicity classification using English-only training data [5].

Jigsaw developed the Perspective API, which uses machine learning models to determine the impact a comment could have on a conversation [8]. It is served in multiple languages (English, Spanish, French, German, Portuguese and Italian). Recently there have been impressive multilingual capabilities, including few- and zero-shot learning, and the goal is to see whether these results are applicable to toxicity classification.

The host team shared two notebooks to help competitors start running BERT. BERT (Bidirectional Encoder Representations from Transformers) is a method of pre-training language representations developed by Google, which presented state-of-the-art results in a wide variety of NLP tasks; a sketch of using such a model is shown below.
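The starter notebooks themselves are not reproduced here; the following is a minimal sketch of how a pre-trained multilingual BERT could be set up for binary toxicity classification. The Hugging Face transformers library and the bert-base-multilingual-cased checkpoint are our illustrative assumptions, not necessarily what the host notebooks used.

# Minimal sketch: a pre-trained multilingual BERT prepared for binary
# toxicity classification. Library and checkpoint are illustrative choices.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Tokenize a small batch of comments, padding/truncating to a fixed length.
comments = ["What a thoughtful reply, thank you!", "Nobody asked for your stupid opinion."]
batch = tokenizer(comments, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")

# Softmax over the two logits yields P(non-toxic) and P(toxic) per comment.
with torch.no_grad():
    logits = model(**batch).logits
toxicity = torch.softmax(logits, dim=-1)[:, 1]

Note that the classification head is randomly initialized until the model is fine-tuned on the labelled comments, so these probabilities only become meaningful after training.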
The competitors had a week to work on their models and to try to get the best solution. The expected result is the probability that a given comment is toxic, a value between 0 and 1.

Figure 1. Competition

Dataset
The provided training data is English-only, and it is the same data that was provided for the last two competitions (mentioned in the Related work section). Both training datasets are given in an unprocessed and a processed format. The unprocessed format is simply the data as it was given before, while the processed format is ready to be used as input to BERT. The data consists of columns such as id, comment text, toxic (whether the comment is toxic or not, or the probability of the comment being toxic), and the types of toxicity, each in a separate column.

The test data consists of comments from Wikipedia talk pages in several different languages (Italian, Spanish, Turkish, Portuguese, Russian and French). It has a few columns: id, comment content and language of the content. Besides that, competitors were provided validation data, which, just like the test data, is only in non-English languages. It consists of the columns id, comment text, language and toxic.

Problem approach
The first step was to inspect the data. Because of the class imbalance and the resulting possibility of false negatives after training, we decided to balance the dataset and downsample it. That seemed like the best solution, since an algorithm that uses an unbalanced training set will most likely not learn to discriminate between the features. The ratio we opted for is 1:2, so for each toxic comment there are two non-toxic ones; a sketch of this step follows below.
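A minimal sketch of this downsampling step, assuming the training data sits in a pandas DataFrame with a binary toxic column; the CSV file name is hypothetical.

# Minimal sketch: downsample the non-toxic majority class to a 1:2
# toxic/non-toxic ratio. The file name is hypothetical; the `toxic` column
# is assumed to be binary (threshold it first if it holds probabilities).
import pandas as pd

train = pd.read_csv("train.csv")

toxic = train[train["toxic"] == 1]
non_toxic = train[train["toxic"] == 0]

# Randomly keep two non-toxic comments for every toxic one.
non_toxic_sample = non_toxic.sample(n=2 * len(toxic), random_state=42)

# Recombine and shuffle so the classes are interleaved.
balanced = pd.concat([toxic, non_toxic_sample]).sample(frac=1, random_state=42)
print(balanced["toxic"].value_counts())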
After that, we applied some feature extraction processes, which are explained in the next chapter. In the end, we used a classifier of choice to train the model and predict the toxicity probabilities of the test data.

FEATURE EXTRACTION
Feature extraction is the process of distilling characteristics from the raw dataset and reducing them to manageable groups for processing [2]. A characteristic of datasets is that the features they provide are not always appropriate for the task the dataset is used for. In the case of the datasets provided for this competition, there are no available features that could help determine the toxicity of a comment. The only provided features are the comment itself, its language and whether or not the comment is toxic. For this reason, it is important to find possible characteristics that are closely related to the toxicity level and extract them as separate features. We did some brainstorming and thought about how a user would type a comment if they were, e.g., angry. The extracted features and their significance are explained in the following sections.

Comment length
When thinking about toxic comments and toxic behaviour in writing, we considered the length of each comment. We assumed that people who are willing to be toxic are angrier and have fewer nerves than the ones who did not plan on being toxic. We can assume that they put less thought into their comments and often, possibly in the heat of an argument, rush into them.

For this reason we checked the length of each comment and found that there is a difference between the average lengths of toxic and non-toxic comments. Toxic comments indeed ended up being shorter than the non-toxic ones. The correlation is negative: the greater the comment length, the lower the toxicity. A sketch of this computation is given below.

Figure 2. Point-biserial correlation with toxicity and comment length
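A minimal sketch of the length feature and its point-biserial correlation with the binary toxicity label, assuming the balanced DataFrame from the sketch above and SciPy; the column name comment_text is an assumption taken from the dataset description.

# Minimal sketch: comment-length feature and its point-biserial correlation
# with the binary `toxic` label; reuses the `balanced` DataFrame from above.
from scipy.stats import pointbiserialr

balanced["comment_length"] = balanced["comment_text"].str.len()

# Point-biserial correlation relates a binary variable to a continuous one;
# a negative r means longer comments tend to be less toxic.
r, p = pointbiserialr(balanced["toxic"], balanced["comment_length"])
print(f"r = {r:.3f}, p = {p:.3g}")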
Punctuation count
Similar to the previous idea about the length of comments, our idea here was that people who are toxic are more prone to using multiple punctuation signs, like "???" or "!!!". For this reason we counted the punctuation marks in each comment and used the count as an additional feature, sketched below.
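A minimal sketch of this feature, again under the same DataFrame and column-name assumptions as above.

# Minimal sketch: number of punctuation characters per comment as an extra
# feature; assumes the `balanced` DataFrame and column names used above.
import string

PUNCTUATION = set(string.punctuation)

def punctuation_count(text: str) -> int:
    # Count every character that is an ASCII punctuation mark.
    return sum(1 for ch in text if ch in PUNCTUATION)

balanced["punctuation_count"] = balanced["comment_text"].apply(punctuation_count)
print(balanced[["comment_text", "punctuation_count"]].head())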