
Multilingual Code-switching Identification via LSTM Recurrent Neural Networks

Younes Samih, Dept. of Computational Linguistics, Heinrich Heine University, Düsseldorf, Germany ([email protected])
Suraj Maharjan, Dept. of Computer Science, University of Houston, Houston, TX, 77004 ([email protected])
Mohammed Attia, Google Inc., New York City, NY, 10011 ([email protected])
Laura Kallmeyer, Dept. of Computational Linguistics, Heinrich Heine University, Düsseldorf, Germany ([email protected])
Thamar Solorio, Dept. of Computer Science, University of Houston, Houston, TX, 77004 ([email protected])

Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 50–59, Austin, TX, November 1, 2016. © 2016 Association for Computational Linguistics

Abstract

This paper describes the HHU-UH-G system submitted to the EMNLP 2016 Second Workshop on Computational Approaches to Code Switching. Our system ranked first place for Arabic (MSA-Egyptian) with an F1-score of 0.83 and second place for Spanish-English with an F1-score of 0.90. The HHU-UH-G system introduces a novel unified neural network architecture for language identification in code-switched tweets for both Spanish-English and the MSA-Egyptian dialect. The system makes use of word- and character-level representations to identify code-switching. For the MSA-Egyptian dialect, the system does not rely on any kind of language-specific knowledge or linguistic resources such as Part-of-Speech (POS) taggers, morphological analyzers, gazetteers or word lists to obtain state-of-the-art performance.

1 Introduction

Code-switching can be defined as the act of alternating between elements of two or more languages or language varieties within the same utterance. The main language is sometimes referred to as the ‘host language’, and the embedded language as the ‘guest language’ (Yeh et al., 2013). Code-switching is a widespread linguistic phenomenon in modern informal user-generated data, whether spoken or written. With the advent of social media, such as Facebook posts, Twitter tweets, SMS messages, user comments on articles, blogs, etc., this phenomenon is becoming more pervasive. Code-switching does not only occur across sentences (inter-sentential) but also within the same sentence (intra-sentential), adding a substantial complexity dimension to the automatic processing of natural languages (Das and Gambäck, 2014). This phenomenon is particularly dominant in multilingual societies (Milroy and Muysken, 1995), migrant communities (Papalexakis et al., 2014), and in other environments due to social changes through education and globalization (Milroy and Muysken, 1995). There are also social, pragmatic and linguistic motivations for code-switching, such as the intent to express group solidarity, establish authority (Chang and Lin, 2014), lend credibility, or make up for lexical gaps.

Code-switching does not necessarily occur only between two different languages, like Spanish-English (Solorio and Liu, 2008), Mandarin-Taiwanese (Yu et al., ) and Turkish-German (Özlem Çetinoglu, 2016); it can also happen between three languages, e.g. Bengali, English and Hindi (Barman et al., 2014), and in some extreme cases between six languages: English, French, German, Italian, Romansh and Swiss German (Volk and Clematide, 2014). Moreover, this phenomenon can occur between two different dialects of the same language, as between Modern Standard Arabic (MSA) and Egyptian Dialect (Elfardy and Diab, 2012), or MSA and Moroccan Arabic (Samih and Maier, 2016a; Samih and Maier, 2016b). The current shared task is limited to two scenarios: a) code-switching between two distinct languages, Spanish-English, and b) between two language varieties, MSA-Egyptian Dialect.

With the massive increase in code-switched writing in user-generated content, it has become imperative to develop tools and methods to handle and process this type of data. Identifying the languages used in a sentence is the first step in doing any kind of text analysis. For example, most data found in social media produced by bilingual people is a mixture of two languages. In order to process or translate this data into some other language, the first step is to detect text chunks and identify which language each chunk belongs to. The other categories, like named entities, mixed, ambiguous and other, are also important for further language processing.

2 Related Works

Code-switching has attracted considerable attention in theoretical linguistics and sociolinguistics over several decades. However, until recently there has not been much work on the computational processing of code-switched data. The first computational treatment of this linguistic phenomenon can be found in (Joshi, 1982), which introduces a grammar-based system for parsing and generating code-switched data. More recently, the detection of code-switching has gained traction, starting with the work of Solorio and Liu (2008) and culminating in the first shared task at the “First Workshop on Computational Approaches to Code Switching” (Solorio et al., 2014). Moreover, there have been efforts in creating and annotating code-switching resources (Özlem Çetinoglu, 2016; Elfardy and Diab, 2012; Maharjan et al., 2015; Lignos and Marcus, 2013). Maharjan et al. (2015) used a user-centric approach to collect code-switched tweets for the Nepali-English and Spanish-English language pairs. They used two methods, namely a dictionary-based approach and CRF GE, and obtained F1 scores of 86% and 87% for Spanish-English and Nepali-English, respectively, at the word-level language identification task. Lignos and Marcus (2013) collected a large number of monolingual Spanish and English tweets and used a ratio list method to tag each token by its dominant language. Their system obtained an accuracy of 96.9% at the word-level language identification task.

The task of detecting code-switching points is generally cast as a sequence labeling problem. Its difficulty depends largely on the language pair being processed. Several projects have treated code-switching between MSA and Egyptian Arabic. For example, Elfardy et al. (2013) present a system for the detection of code-switching between MSA and Egyptian Arabic which selects a tag based on the sequence with the maximum marginal probability, considering 5-grams. A later version of the system, named AIDA2 (Al-Badrashiny et al., 2015), is a more complex hybrid system that incorporates different classifiers and components such as language models, a named entity recognizer, and a morphological analyzer. The classification strategy is built as a cascaded voting system, whereby a Conditional Random Field (CRF) classifier tags each word based on the decisions from four other underlying classifiers.

The participants of the “First Workshop on Computational Approaches to Code Switching” applied a wide range of machine learning and sequence learning algorithms, some using additional online resources like an English dictionary, a Hindi-Nepali wiki, DBpedia, online dumps, LexNorm, etc., to tackle the problem of language detection in code-switched tweets on Nepali-English, Spanish-English, Mandarin-English and MSA-Dialects (Solorio et al., 2014). For MSA-Dialects, two CRF-based systems, a system using language-independent extended Markov models, and a system using a CRF autoencoder were presented; the latter proved to be the most successful.

The majority of the systems dealing with word-level language identification in code-switching rely on linguistic resources (such as named entity gazetteers and word lists) and linguistic information (such as POS tags and morphological analysis), and they use machine learning methods that have typically been used for sequence labeling problems, such as support vector machines (SVM), conditional random fields (CRF) and n-gram language models. Very few, however, have recently turned to recurrent neural networks (RNN) and word embeddings, with remarkable success. Chang and Lin (2014) used an RNN architecture combined with pre-trained word2vec skip-gram word embeddings, a log-bilinear model that allows words with similar contexts to have similar embeddings. The word2vec embeddings were trained on a large Twitter corpus of random samples without filtering by language, assuming that different languages tend to appear in different contexts, allowing the embeddings to provide good separation between languages. They showed that their system outperforms the best SVM-based systems reported in the EMNLP’14 Code-Switching Workshop.

Vu and Schultz (2014) proposed to adapt recurrent neural network language models to different code-switching behaviors and even to use them to generate artificial code-switching text data. Adel et al. (2013) investigated the application of RNN language models and factored language models to the task of identifying code-switching in speech, and reported a significant improvement over traditional n-gram language models.

Our work is similar to that of Chang and Lin (2014) in that we use RNNs and word embeddings. The difference is that we use long short-term memory (LSTM) networks, with the added advantage of memory cells that efficiently capture long-distance dependencies.

Figure 1: System Architecture.
We also combine word-level with character-level representations to obtain morphology-like information on words.

3 Model

In this section, we provide a brief description of the LSTM and introduce the different components of our code-switching detection model. The architecture of our system, shown in Figure 1, bears resemblance to the models introduced by Huang et al.

A plain recurrent neural network (RNN) computes its hidden states as

h_t = f(W_xh x_t + W_hh h_{t-1} + b_h)

where h_t is the hidden state vector, W denotes a weight matrix, b denotes a bias vector and f is the activation function of the hidden layer. Theoretically, RNNs can learn long-distance dependencies, yet in practice they fail to do so due to the vanishing/exploding gradient problem (Bengio et al., 1994). To solve this problem, Hochreiter and Schmidhuber (1997) introduced the long short-term memory RNN (LSTM).
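The recurrence above can be sketched in a few lines of numpy. This is a minimal illustration of the plain RNN hidden-state update only, not the paper's LSTM system; the toy dimensions, the tanh activation, and the random inputs are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: 4-dimensional inputs (e.g. tiny word
# embeddings) and a 3-dimensional hidden state.
d_in, d_h = 4, 3
W_xh = 0.1 * rng.standard_normal((d_h, d_in))  # input-to-hidden weights
W_hh = 0.1 * rng.standard_normal((d_h, d_h))   # hidden-to-hidden weights
b_h = np.zeros(d_h)                            # hidden-layer bias

def rnn_forward(xs):
    """Apply h_t = f(W_xh x_t + W_hh h_{t-1} + b_h) with f = tanh."""
    h = np.zeros(d_h)  # h_0: initial hidden state
    hs = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        hs.append(h)
    return np.stack(hs)

# A toy input sequence of 5 vectors, one per token.
xs = rng.standard_normal((5, d_in))
hs = rnn_forward(xs)
print(hs.shape)  # (5, 3): one hidden state per time step
```

An LSTM keeps this same sequential structure but replaces the single update with gated memory-cell updates, which is what lets it carry information across long spans without the gradient vanishing.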