An Assessment of Deep Learning Models and Word Embeddings for Toxicity Detection within Online Textual Comments

Danilo Dessì 1,2,*,†, Diego Reforgiato Recupero 3,† and Harald Sack 1,2,†

1 FIZ Karlsruhe–Leibniz Institute for Information Infrastructure, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany; harald.sack@fiz-karlsruhe.de
2 Karlsruhe Institute of Technology, Institute AIFB, Kaiserstraße 89, 76133 Karlsruhe, Germany
3 Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy; [email protected]
* Correspondence: danilo.dessi@fiz-karlsruhe.de
† The authors equally contributed to the research performed in this paper.

Abstract: Today, an increasing number of people interact online, and a large volume of textual comments is produced as a result of the explosion of online communication. However, a paramount inconvenience within online environments is that comments shared on digital platforms can hide hazards such as fake news, insults, harassment and, more generally, comments that may hurt someone's feelings. In this scenario, the detection of this kind of toxicity plays an important role in moderating online communication. Deep learning technologies have recently delivered impressive performance within Natural Language Processing applications, encompassing Sentiment Analysis and emotion detection across numerous datasets. Such models do not need any pre-defined hand-picked features; instead, they learn sophisticated features from the input datasets by themselves. In this domain, word embeddings have been widely used as a way of representing words in Sentiment Analysis tasks, proving to be very effective.