
Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots

Samson Tan*† Shafiq Joty*‡
*Salesforce Research Asia †National University of Singapore ‡Nanyang Technological University
{samson.tan,sjoty}@salesforce.com

Abstract

Multilingual models have demonstrated impressive cross-lingual transfer performance. However, test sets like XNLI are monolingual at the example level. In multilingual communities, it is common for polyglots to code-mix when conversing with each other. Inspired by this phenomenon, we present two strong black-box adversarial attacks (one word-level, one phrase-level) for multilingual models that push their ability to handle code-mixed sentences to the limit. The former uses bilingual dictionaries to propose perturbations and translations of the clean example for sense disambiguation. The latter directly aligns the clean example with its translations before extracting phrases as perturbations. Our phrase-level attack has a success rate of 89.75% against XLM-R-large, bringing its average accuracy of 79.85 down to 8.18 on XNLI. Finally, we propose an efficient adversarial training scheme that trains in the same number of steps as the original model and show that it improves model accuracy.¹

[Figure 1: BUMBLEBEE's three key stages of adversary generation: (a) Align words in the matrix (English) and embedded sentences (top: Indonesian, bottom: Chinese); (b) Extract candidate perturbations from embedded sentences; (c) Construct final adversary by maximizing the target model's loss.]
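Stages (a) and (b) of Figure 1 can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the position-wise "aligner" stands in for a real word-alignment model, and the example sentences and function names are our own.

```python
# Illustrative sketch of Figure 1's stages (a)-(b): align each word of a
# matrix (English) sentence with its translations, then pool the aligned
# words as candidate perturbations. The position-wise "aligner" below is
# a toy stand-in for a real word-alignment model.

def align(matrix_tokens, embedded_tokens):
    # Toy aligner: pair tokens by position.
    return list(zip(matrix_tokens, embedded_tokens))

def extract_candidates(matrix_sent, translations):
    # Collect, for every matrix-sentence word, the words aligned to it
    # in each translation (the "embedded" sentences).
    candidates = {tok: set() for tok in matrix_sent.split()}
    for translation in translations:
        for src, tgt in align(matrix_sent.split(), translation.split()):
            candidates[src].add(tgt)
    return candidates

cands = extract_candidates("the cat sleeps",
                           ["el gato duerme",   # Spanish
                            "le chat dort"])    # French
# cands["cat"] == {"gato", "chat"}
```

Each matrix-sentence word now maps to substitution candidates drawn from every embedded language, which stage (c) searches over.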
1 Introduction

The past year has seen incredible breakthroughs in cross-lingual generalization with the advent of massive multilingual models that aim to learn universal language representations (Pires et al., 2019; Wu and Dredze, 2019; Conneau et al., 2020b). These models have demonstrated impressive cross-lingual transfer abilities: simply fine-tuning them on task data from a high-resource language such as English after pretraining on monolingual corpora was sufficient to manifest such abilities. This was observed even for languages with different scripts and no vocabulary overlap (K et al., 2020).

However, transferring from one language to another is insufficient for NLP systems to understand multilingual speakers in an increasingly multilingual world (Aronin and Singleton, 2008). In many multilingual societies (e.g., Singapore, Papua New Guinea, etc.), it is common for multilingual interlocutors to produce sentences by mixing words, phrases, and even grammatical structures from the languages in their repertoires (Matras and Sakel, 2007). This is known as code-mixing (Poplack et al., 1988), a phenomenon common in casual conversational environments such as social media and text messages.² Hence, it is crucial for NLP systems serving multilingual communities to be robust to code-mixing if they are to understand and establish rapport with their users (Tay, 1989; Bawa et al., 2020) or defend against adversarial polyglots.

Although gold standard data (Bali et al., 2014; Patwa et al., 2020) is important for definitively evaluating code-mixed text processing ability, such datasets are expensive to collect and annotate. The dizzying range of potential language combinations further compounds the immensity of such an effort.

We posit that performance on appropriately crafted adversaries could act as a lower bound of a model's ability to generalize to the distribution simulated by said adversaries, an idea akin to worst-case analysis (Divekar, 1984). For example, Tan et al. (2020b) showed that an NLP system that was robust to morphological adversaries was less perplexed by dialectal text exhibiting morphological variation. Likewise, if a system is robust to code-mixed adversaries constructed from some set of languages, it is reasonable to expect it to also perform better on real code-mixed text in those languages. While such adversaries may not fully model the intricacies of real code-mixing (Sridhar and Sridhar, 1980), we believe that they can be useful in the absence of appropriate evaluation data. Hence, we:

• Propose two strong black-box adversarial attacks targeting the cross-lingual generalization ability of massive multilingual representations (Fig. 1), demonstrating their effectiveness on state-of-the-art models for natural language inference and question answering. To our knowledge, these are the first two multilingual adversarial attacks.

• Propose an efficient adversarial training scheme that takes the same number of steps as standard supervised training and show that it creates more language-invariant representations, improving accuracy in the absence of lexical overlap.

¹Code: github.com/salesforce/adversarial-polyglots
²Examples of real code-mixing in Appendix A.

2 Related Work

Multilingual classifiers. Low-resource languages often lack support due to the high cost of annotating data for supervised learning. One approach to this challenge is to build cross-lingual representations that only need to be trained on task data from a high-resource language to perform well on another, under-resourced language (Klementiev et al., 2012). Artetxe and Schwenk (2019) presented the first general-purpose multilingual representation using a BiLSTM encoder. Following the success of Transformer models (Vaswani et al., 2017), recent multilingual models like mBERT (Devlin et al., 2019), Unicoder (Huang et al., 2019), and XLM-R (Conneau et al., 2020a) take the pretraining→fine-tuning paradigm into the multilingual realm by pretraining Transformer encoders on unlabeled monolingual corpora with various language modeling objectives before fine-tuning them on task data from a high-resource language such as English. This is known as cross-lingual transfer.

Code-mixed text processing. Previous research on code-mixed text processing focused on constructing formal grammars (Joshi, 1982) and token-level language identification (Bali et al., 2014; Solorio et al., 2014; Barman et al., 2014), before progressing to named entity recognition and part-of-speech tagging (Ball and Garrette, 2018; AlGhamdi and Diab, 2019; Aguilar and Solorio, 2020). Recent work explores code-mixing in higher-level tasks such as question answering and task-oriented dialogue (Chandu et al., 2019; Ahn et al., 2020). Muller et al. (2020) demonstrate mBERT's ability to transfer to an unseen dialect by exploiting its speakers' tendency to code-mix.

A key challenge in developing models that are robust to code-mixing is the availability of code-mixed datasets. Hence, Winata et al. (2019) use a pointer-generator network to generate synthetically code-mixed sentences, while Pratapa et al. (2018) explore the use of parse trees for the same purpose. Yang et al. (2020) propose to improve machine translation with "code-switching pretraining", replacing words with their translations in a similar manner to masked language modeling (Devlin et al., 2019); these word pairs are constructed from monolingual corpora using cosine similarity. Sitaram et al. (2019) provide a comprehensive survey of code-mixed language processing.

Word-level adversaries. Modified inputs aimed at disrupting a model's predictions are known as adversarial examples (Szegedy et al., 2014). In NLP, perturbations can be applied at the character, subword, word, phrase, or sentence levels. Early word-level adversarial attacks (Ebrahimi et al., 2018; Blohm et al., 2018) made use of the target model's gradients to flip individual words and trick the model into making the wrong prediction. However, while the perturbations were adversarial for the target model, the perturbed word's original semantics was often not preserved. This could result in the expected prediction changing, making the model appear more brittle than it actually is. Later research addressed this by searching for adversarial rules (Ribeiro et al., 2018) or by constraining the candidate perturbations to the k nearest neighbors in the embedding space (Alzantot et al., 2018; Michel et al., 2019; Ren et al., 2019; Zhang et al., 2019; Li et al., 2019; Jin et al., 2020). Zang et al. (2020) take another approach, making use of annotated sememes to disambiguate polysemous words, while Tan et al. (2020a) perturb only the words' morphology and encourage semantic preservation via a part-of-speech constraint. Other approaches make use of language models to generate candidate perturbations (Garg and Ramakrishnan, 2020; Han et al., 2020). Wallace et al. (2019) find phrases that act as universally adversarial perturbations when prepended to clean inputs.

Original   P: The girl that can help me is all the way across town.
           H: There is no one who can help me.
Adversary  P: olan girl that can help me is all the way across town.
           H: dw§ ¯ one who can help me.
Prediction Before: Contradiction  After: Entailment

Original   P: We didn't know where they were going.
           H: We didn't know where the people were traveling to.
Adversary  P: We didn't know where they were going.
           H: We didn't know where les gens allaient.
Prediction Before: Entailment  After: Neutral

Original   P: Well it got to where there's two or three aircraft arrive in a week and I didn't know where they're flying to.
           H: There are never any aircraft arriving.
Adversary  P: общем, дошло до mahali there's two or three aircraft arrive in a week and I didn't know where they're flying to.
           H: Îe¡有 aircraft arriving.
Prediction Before: Contradiction  After: Entailment

Table 1: BUMBLEBEE adversaries found for XLM-R on XNLI (P: Premise; H: Hypothesis).

Code-mixing is the mixing of words, phrases, and grammatical rules from different languages in a single sentence. This is distinguished from code-switching, which occurs at the inter-sentential level (Kachru, 1978).

Extreme code-mixing. Inspired by the proliferation of real-life code-mixing and polyglots, we propose POLYGLOSS and BUMBLEBEE, two multilingual adversarial attacks that adopt the persona of an adversarial code-mixer. We focus on the lexical component of code-mixing, where some words in a
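A black-box, loss-maximizing substitution search of the kind these attacks perform can be sketched as follows. This is a hedged toy, not the released code: `model_loss`, the candidate dictionary, and the toy loss function are hypothetical stand-ins for querying the real target model with real translation candidates.

```python
# Hedged sketch of a greedy black-box search in the spirit of stage (c)
# of Figure 1: substitute candidate translations one position at a time
# and keep a swap only if it increases the target model's loss.
# `model_loss` stands in for querying the actual black-box model.

def greedy_adversary(tokens, candidates, model_loss):
    best = list(tokens)
    best_loss = model_loss(best)
    for i, tok in enumerate(tokens):
        for cand in candidates.get(tok, ()):
            trial = best[:i] + [cand] + best[i + 1:]
            loss = model_loss(trial)
            if loss > best_loss:  # keep only loss-increasing swaps
                best, best_loss = trial, loss
    return best, best_loss

# Toy loss pretending the model is most confused by "gato", then "dort".
toy_loss = lambda toks: toks.count("gato") + 0.1 * toks.count("dort")
adv, loss = greedy_adversary(
    ["the", "cat", "sleeps"],
    {"cat": {"gato", "chat"}, "sleeps": {"duerme", "dort"}},
    toy_loss,
)
# adv == ["the", "gato", "dort"]
```

The greedy loop queries the model once per candidate, so its cost is linear in the number of candidate perturbations rather than exponential in sentence length.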