Mawdoo3 AI at MADAR Shared Task: Arabic Tweet Dialect Identification
Bashar Talafha, Wael Farhan, Ahmed Altakrouri, Hussein T. Al-Natsheh
Mawdoo3 Ltd, Amman, Jordan
{bashar.talafha, wael.farhan, ahmed.altakrouri, [email protected]

Abstract

Arabic dialect identification is an inherently complex problem, as the Arabic dialect taxonomy is convoluted and aims to dissect a continuous space rather than a discrete one. In this work, we present machine and deep learning approaches to predict 21 fine-grained dialects from a set of given tweets per user. We adopted numerous feature extraction methods, most of which showed improvement in the final model, such as word embeddings, Tf-idf, and other tweet features. Our results show that a simple LinearSVC can outperform any complex deep learning model given a set of curated features. With a relatively complex user voting mechanism, we were able to achieve a Macro-Averaged F1-score of 71.84% on MADAR shared subtask-2. Our best submitted model ranked second out of all participating teams.
1 Introduction

In recent years, an extensive increase in the usage of social media platforms, such as Facebook and Twitter, has led to exponential growth in user-generated content. The nature of this data is diverse: it comprises different expressions, languages, and dialects, which has attracted researchers seeking to understand and harness language semantics such as sentiment, emotion, dialect identification, and many other Natural Language Processing (NLP) tasks. Arabic is one of the most spoken languages in the world; its use by many nations spread across multiple geographical locations has led to the generation of language variations, i.e., dialects (Zaidan and Callison-Burch, 2014).

In this paper, we tackle the problem of predicting a user's dialect from a set of given tweets. We describe our work exploring different machine and deep learning methods in our attempt to build a classifier for user dialect identification as part of MADAR (Multi-Arabic Dialect Applications and Resources) shared subtask-2 (Bouamor et al., 2018; Bouamor et al., 2019). The task of user dialect identification can be seen as a text classification problem, where we predict the probability of a dialect given a sequence of words and other features provided by the task organizers. Besides reporting the results from different models, we show that the provided dataset for the task is not straightforward and requires additional analysis, feature engineering, and post-processing techniques.

In the next sections, we describe the methods followed to achieve our best model. Section 2 lists previous work, Section 3 analyses the dataset, and Section 4 describes our models and different approaches. Section 5 compares and discusses empirical results, and we conclude in Section 6.

2 Related Work

Recent work in the Arabic language tackles the task of dialect identification. Fine-grained dialect identification models were proposed by Salameh et al. (2018) to classify 26 specific Arabic dialects, with an emphasis on feature extraction. They trained multiple models using Multinomial Naive Bayes (MNB) to achieve a Macro-Averaged F1-score of 67.9% for their best model.

In addition to traditional models, deep learning methods tackle the same problem. The research proposed by Elaraby and Abdul-Mageed (2018) shows an enhancement in accuracy when compared to machine learning methods. Huang (2015) used weakly supervised data, or distant supervision techniques, crawling data from Facebook posts and combining it with a labeled dataset to increase the accuracy of dialect identification by 0.5%.
In this paper, we build on top of the methods of Salameh et al. (2018) and Elaraby and Abdul-Mageed (2018) by exploring machine and deep learning models to tackle the problem of fine-grained Arabic dialect identification.

Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 239–243, Florence, Italy, August 1, 2019. © 2019 Association for Computational Linguistics

3 Dataset

The dataset used in this work represents information about tweets posted from the Arabic region, where each tweet is associated with its dialect label (Bouamor et al., 2018; Bouamor et al., 2019). This dataset is collected from 21 countries: Algeria, Bahrain, Djibouti, Egypt, Iraq, Jordan, Kuwait, Lebanon, Libya, Mauritania, Morocco, Oman, Palestine, Qatar, Saudi Arabia, Somalia, Sudan, Syria, Tunisia, United Arab Emirates, and Yemen.

As shown in Figure 1, the distribution of tweets among countries is unbalanced: around 35% of the tweets belong to Saudi Arabia, while only 0.08% belong to Djibouti.

Figure 1: The distribution of 21 Arabic dialects in the MADAR Twitter corpus.

              Train    Dev     Test
Available     195227   26750   43918
Unavailable   22365    3119    6044
Total         217592   29869   49962
Retweet       135388   17612   29868

Table 1: Distribution of the Train, Dev and Test sets used in our experiments.

Table 1 shows the distribution of available and unavailable data across the different sets. It is also worth mentioning that around 10% of the data is missing; some tweets are not accessible because they were deleted by the author or the owner account was deactivated.
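Class-imbalance figures like the ones above can be reproduced with a simple tally over the dialect labels. A minimal sketch (the labels below are toy values, not the actual corpus counts):

```python
from collections import Counter

def label_distribution(tweet_labels):
    """Return each dialect's share of the corpus as a fraction of the total."""
    counts = Counter(tweet_labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Toy corpus standing in for the real per-tweet dialect labels.
labels = ["Saudi_Arabia"] * 7 + ["Egypt"] * 2 + ["Djibouti"]
dist = label_distribution(labels)
print(dist["Saudi_Arabia"])  # -> 0.7
```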
The dataset contains 6 features for each user: the username of the Twitter user, the tweet ID, the language of the tweet as automatically detected by Twitter, probability scores for 25 city dialects and MSA (Modern Standard Arabic) for each tweet, obtained by running the best model described in (Salameh et al., 2018), and, most importantly, the tweet text.

Each user has at most 100 tweets, all labeled with the same dialect, and exists in only one set. For example, if a user is listed in the training set, that user will not appear in the development or test sets. Moreover, the maximum length of each tweet is 280 characters, including spaces, URLs, hashtags, and mentions, which makes it challenging to identify the dialect automatically (Twitter).

Another challenge of the dataset is that around 61% of the tweets are retweets, as shown in Table 1. This means that the majority of the tweets are re-posts of other Twitter users. For example, the tweet "RT @Bedoon120: … https://t.co/sIKqXCUSAn" for the user @abushooooq8 is an Egyptian tweet, but it carries the label Kuwait because the user who retweeted it (@abushooooq8) is Kuwaiti, while the original author (@Bedoon120) is Egyptian.

4 Models

In this section, we explain our feature extraction methodology and then go over the various approaches we experimented with.

4.1 Feature Extraction

As a preprocessing step, normalization of Arabic letters is common when dealing with Arabic text. We adopted the same preprocessing methodology used in (Soliman et al., 2017).

AraVec: pre-trained word embedding models proposed by (Soliman et al., 2017) for the Arabic language, trained on three different datasets: Twitter tweets, World Wide Web pages, and Wikipedia Arabic articles. The models were trained using the Word2Vec skip-gram and CBOW architectures (Mikolov et al., 2013). In our experiments, we used the 300-dimensional Twitter skip-gram AraVec model.

fastText: an extension of the Word2Vec model proposed by (Bojanowski et al., 2017), which feeds the model input based on sub-words rather than entire words.
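The letter-normalization step mentioned above can be sketched as follows. This is one common scheme (alef unification, diacritic and tatweel stripping); the exact rules used by Soliman et al. (2017) may differ in detail:

```python
import re

# One common Arabic normalization scheme; an assumption, not necessarily
# the exact mapping used in the paper's preprocessing.
ALEF_VARIANTS = re.compile("[أإآٱ]")
TASHKEEL = re.compile("[\u064B-\u0652]")  # short-vowel diacritics
TATWEEL = "\u0640"                        # elongation character

def normalize_arabic(text):
    text = ALEF_VARIANTS.sub("ا", text)   # unify alef forms
    text = text.replace("ى", "ي")         # alef maqsura -> yaa
    text = text.replace("ة", "ه")         # taa marbuta -> haa
    text = TASHKEEL.sub("", text)         # strip diacritics
    return text.replace(TATWEEL, "")      # strip tatweel

print(normalize_arabic("أهلاً"))  # -> اهلا
```

Normalizing in this way collapses spelling variants so that, e.g., different alef forms map to a single token before embedding lookup or Tf-idf counting.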
In our experiments, a fastText model with 300 dimensions was trained on a combination of 5 different datasets: AutoTweet (Almerekhi and Elsayed, 2015), ArapTweet (Zaghouani and Charfi, 2018), DART (Alsarsour et al., 2018), PADIC (Parallel Arabic DIalectal Corpus) (Harrat et al., 2014), and the MADAR shared tasks (Bouamor et al., 2018; Bouamor et al., 2019).

Tf-idf: Tf-idf has been proven efficient at encoding textual information into real-valued vectors that represent the importance of a word to a document (Salameh et al., 2018). One drawback of the Tf-idf vectorized representation of text is that it loses word-order (i.e., syntactic) information. Considering n-grams, at both the word and character levels, reduces the effect of this drawback. Accordingly, unigram and bigram word-level Tf-idf vectors were extracted, in addition to character-level Tf-idf vectors with n-gram lengths ranging from 1 to 5.

Features specific to tweets: some features are unique to Twitter, such as user mentions (e.g., @abushooooq8) and emojis.

…features and lm is the language model probabilities.

4.2.2 Deep Learning Approaches

fastText Classification: the word embeddings of the words in an input sentence are fed into a weighted-average layer, which is then fed into a linear classifier with a softmax output layer (Joulin et al., 2017).

SepCNN: stands for Separable Convolutional Neural Network (Denk, 2017), and is composed of two consecutive convolutional layers: the first operates on the spatial dimension and is applied to each channel separately, while the second convolves over the channel dimension only. The word embeddings of the sentence (i.e., the tweet) are looked up from AraVec (Soliman et al., 2017), and the embedding of each word is passed into a number of SepCNN blocks followed by a max-pooling layer.

Word-level LSTM: a traditional deep learning classification method. The word sequence is passed into an AraVec layer to look up word embeddings and then fed into a number of LSTM layers; the final state is used as input to a softmax layer to predict the dialect (Liu et al., 2016).
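The fastText classification architecture above (embedding average followed by a linear softmax layer) can be illustrated with a minimal forward-pass sketch. The vocabulary, embedding values, and weights below are toy stand-ins, not trained parameters:

```python
import math
import random

random.seed(0)

EMB_DIM, N_CLASSES = 4, 3  # toy sizes; the paper uses 300-d embeddings and 21 dialects

# Toy embedding table standing in for AraVec; real vectors are pre-trained.
vocab = {"shlonak": 0, "izayak": 1, "ya": 2}
embeddings = [[random.uniform(-1, 1) for _ in range(EMB_DIM)] for _ in vocab]
weights = [[random.uniform(-1, 1) for _ in range(EMB_DIM)] for _ in range(N_CLASSES)]

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def fasttext_forward(tokens):
    """Average the token embeddings, then apply a linear layer + softmax.

    Assumes at least one token is in the vocabulary.
    """
    vecs = [embeddings[vocab[t]] for t in tokens if t in vocab]
    avg = [sum(col) / len(vecs) for col in zip(*vecs)]
    logits = [sum(w * x for w, x in zip(row, avg)) for row in weights]
    return softmax(logits)

probs = fasttext_forward(["shlonak", "ya"])
```

In the real classifier the embedding table and linear weights are learned jointly from the labeled tweets; the sketch shows only the inference path.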
Among the tweet-specific features, it has been found that using the username as a feature can help the model understand the user dialect; for instance, it can easily be seen that the users @7abeeb_ksa, @a_ksa2030, and @alanzisaudi have a Saudi Arabia dialect. Character-level unigram Tf-idf vectors were extracted from each of the mentioned features.
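Finally, the per-tweet predictions of any of these models must be mapped to a single user-level label. The "relatively complex" user voting mechanism from the abstract is not specified in this section; a minimal majority-vote sketch, with a hypothetical confidence-based tie-break, would look like:

```python
from collections import Counter

def vote_user_dialect(tweet_predictions):
    """Majority vote over one user's per-tweet dialect predictions.

    tweet_predictions: list of (dialect_label, confidence) pairs. The
    confidence-based tie-break is an assumption for illustration; the
    paper's actual voting scheme is described only as relatively complex.
    """
    votes = Counter(label for label, _ in tweet_predictions)
    best_count = max(votes.values())
    tied = {label for label, c in votes.items() if c == best_count}
    # Break ties by the highest single-tweet confidence among tied labels.
    return max((conf, label) for label, conf in tweet_predictions
               if label in tied)[1]

preds = [("Kuwait", 0.6), ("Egypt", 0.9), ("Kuwait", 0.7)]
print(vote_user_dialect(preds))  # -> Kuwait
```

Because all of a user's tweets share one gold label, aggregating per-tweet outputs this way can correct individual misclassifications, such as the retweet example in Section 3.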