
Zero-Shot Cross-lingual Semantic Parsing

Tom Sherborne and Mirella Lapata
School of Informatics, University of Edinburgh
[email protected], [email protected]

arXiv:2104.07554v1 [cs.CL] 15 Apr 2021

Abstract

Recent work in cross-lingual semantic parsing has successfully applied machine translation to localize accurate parsing to new languages. However, these advances assume access to high-quality machine translation systems, and tools such as word aligners, for all test languages. We remove these assumptions and study cross-lingual semantic parsing as a zero-shot problem, without parallel data for 7 test languages (DE, ZH, FR, ES, PT, HI, TR). We propose a multi-task encoder-decoder model to transfer parsing knowledge to additional languages using only English-logical form paired data and unlabeled, monolingual utterances in each test language. We train an encoder to generate language-agnostic representations jointly optimized for generating logical forms or utterance reconstruction, and against language discriminability. Our system frames zero-shot parsing as a latent-space alignment problem and finds that pre-trained models can be improved to generate logical forms with a minimal cross-lingual transfer penalty. Experimental results on Overnight and a new executable version of MultiATIS++ show that our zero-shot approach performs above back-translation baselines and, in some cases, approaches the supervised upper bound.

1 Introduction

Executable semantic parsing translates a natural language utterance into a logical form for execution against some knowledge base to return a denotation. The parsing task realizes an utterance as a semantically identical, but machine-interpretable, expression grounded in a denotation. The transduction between natural and formal languages has allowed semantic parsers to become critical infrastructure in the pipeline of human-computer interfaces for question answering from structured data (Berant et al., 2013; Liang, 2016; Kollar et al., 2018).

Sequence-to-sequence approaches have proven capable of producing high-quality parsers (Jia and Liang, 2016; Dong and Lapata, 2016), with further modeling advances in multi-stage decoding (Dong and Lapata, 2018; Guo et al., 2019), schema linking (Shaw et al., 2019; Wang et al., 2019) and grammar-based decoding (Yin and Neubig, 2017; Lin et al., 2019). In addition to modeling developments, recent work has also expanded to multilingual parsing. However, this has primarily required that parallel training data is either already available (Jie and Lu, 2014) or generated through professional translation (Susanto and Lu, 2017a). This creates an entry barrier to localizing a semantic parser which may not be necessary. Sherborne et al. (2020) and Moradshahi et al. (2020) explore the utility of machine translation, as a cheap alternative to human translation, for generating training data, but found translation artifacts to be a performance-limiting factor. Zhu et al. (2020) and Li et al. (2021) both examine zero-shot spoken language understanding and observed a significant performance penalty from cross-lingual transfer from English to lower-resource languages.

Cross-lingual generative semantic parsing, as opposed to the slot-filling format, has been under-explored in the zero-shot case. This challenging task combines the outstanding difficulty of structured prediction, for accurate parsing, with a latent-space alignment requirement, wherein multiple languages should encode to an overlapping semantic representation. Prior work has identified a penalty from cross-lingual transfer (Artetxe et al., 2020; Zhu et al., 2020; Li et al., 2021) that is insufficiently overcome with pre-trained models alone. While there has been some success in machine-translation based approaches, we argue that inducing a shared multilingual space without parallel data is superior because (a) it nullifies the introduction of translation or word alignment errors and (b) it scales to low-resource languages without reliable machine translation.

In this work, we propose a method of zero-shot executable semantic parsing using only monolingual data for cross-lingual transfer of parsing knowledge. Our approach uses paired English-logical form data for the parsing task and adapts to additional languages using auxiliary tasks trained on unlabeled monolingual corpora. Our motivation is to accurately parse languages for which paired training data is unseen, to examine whether any translation is required for accurate parsing. The objective of this work is to parse utterances in some language, l, without observing paired training data, (x_l, y), or suitable machine translation, word alignment or bilingual dictionaries between l and English. Using a multi-task objective, our system adapts pre-trained language models to generate logical forms from multiple languages with a minimized penalty for cross-lingual transfer from English to German (DE), Chinese (ZH), French (FR), Spanish (ES), Portuguese (PT), Hindi (HI) and Turkish (TR).

We aim to induce a shared representation space to minimize the penalty of cross-lingual transfer and improve parsing of languages without training data. To generate this latent space, we posit that only unpaired monolingual data in the target language, and some pre-trained encoder, are required. We remove the assumption that machine translation is suitable and study the inverse case wherein only paired data in English and monolingual data in target languages are available. This frames the cross-lingual parsing task as one of latent representation alignment only, to explore a possible upper bound of parsing accuracy without errors from external dependencies.

2 Related Work

Cross-lingual Modeling This area has recently gained increasing interest across a range of natural language understanding settings, with benchmarks such as XGLUE (Liang et al., 2020) and XTREME (Hu et al., 2020) providing classification and generation tasks across a range of languages (Zhao et al., 2020). There has been significant interest in cross-lingual approaches to dependency parsing (Tiedemann et al., 2014; Schuster et al., 2019), sentence simplification (Mallinson et al., 2020) and spoken language understanding (SLU; He et al., 2013; Upadhyay et al., 2018). Within this, pre-training has been shown to be widely beneficial for cross-lingual transfer using models such as multilingual BERT (Devlin et al., 2019) or XLM-RoBERTa (Conneau et al., 2020a).

Broadly, pre-trained models trained on massive corpora purportedly learn an overlapping cross-lingual latent space (Conneau et al., 2020b), but have also been identified as under-trained for some tasks (Li et al., 2021). A subset of cross-lingual modeling has focused on engineering the alignment of multilingual word representations (Conneau et al., 2017; Artetxe and Schwenk, 2018; Hu et al., 2021) for tasks such as dependency parsing (Schuster et al., 2019; Xu and Koehn, 2021) and word alignment (Jalili Sabet et al., 2020; Dou and Neubig, 2021).

Semantic Parsing For parsing, there has been recent investigation into multilingual parsing with multiple-language ensembles using hybrid generative trees (Lu, 2014; Susanto and Lu, 2017b) and LSTM-based sequence-to-sequence models (Susanto and Lu, 2017a). This work has largely affirmed the benefit of "high-resource" multi-language ensemble training (Jie and Lu, 2014). Code-switching in multilingual parsing has also been explored through mixed-language training datasets (Duong et al., 2017; Einolghozati et al., 2021). For adapting a parser to a new language, machine translation has been explored as a mostly reasonable proxy for in-language data (Sherborne et al., 2020; Moradshahi et al., 2020). However, machine translation, in either direction, can introduce limiting artifacts (Artetxe et al., 2020), and generalisation is limited due to a "parallax" error between gold test utterances and "translationese" training data (Riley et al., 2020).

Zero-shot parsing has primarily focused on 'cross-domain' challenges to improve parsing across varying query structures and lexicons (Herzig and Berant, 2018; Givoli and Reichart, 2019) or different databases (Zhong et al., 2020; Suhr et al., 2020). The Spider dataset (Yu et al., 2018) formalises this challenge with unseen tables at test time for zero-shot generalisation. Combining zero-shot parsing with cross-lingual modeling has been examined for the UCCA formalism (Hershcovich et al., 2019) and for task-oriented parsing (see below). Generally, we find that cross-lingual executable parsing has been under-explored in the zero-shot case.

Executable parsing contrasts with more abstract meaning expressions (AMR, λ-calculus) or hybrid logical forms, such as decoupled TOP (Li et al., 2021), which cannot be executed to retrieve denotations. Datasets such as ATIS (Dahl et al., 1994) have both an executable parsing and a spoken language understanding format. We focus on only the former, as a generation task, and note that results for the latter classification task are not comparable.

Dialogue Modeling Usage of MT has also been extended to the task-oriented setting using the MTOP dataset (Li et al., 2021), employing a pointer-generator network (See et al., 2017) to generate dialogue-act style logical forms. This has been combined with adjacent SLU tasks, on MTOP and MultiATIS++ (Xu et al., 2020), to combine cross-lingual task-oriented parsing and SLU classification. Generally, these approaches additionally require word alignment to project annotations between languages. Prior zero-shot cross-lingual work in SLU (Li et al., 2021; Zhu et al., 2020; Krishnan et al., 2021) similarly identifies a penalty for cross-lingual transfer and finds that pre-trained models and machine translation can only partially mitigate this error.

Compared with similar explorations of cross-lingual parsing such as Xu et al. (2020) and Li et al. (2021), the zero-shot case is our primary focus. Our study assumes no paired data in the test languages, and our proposed approach is more similar to Mallinson et al. (2020) and Xu and Koehn (2021) in that we objectify the convergence of pre-trained representations for a downstream task.
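The zero-shot setup and multi-task training signal described above can be stated compactly. The notation below is ours, added for illustration: the exact form of each loss term, and any weighting between them, is not specified in this excerpt and should be treated as an assumption rather than the authors' precise formulation.

```latex
% Available supervision: English utterances paired with logical forms,
% plus unlabeled monolingual utterances in each target language l.
\mathcal{D}_{\mathrm{EN}} = \{(x^{\mathrm{EN}}_i,\, y_i)\}_{i=1}^{N},
\qquad
\mathcal{D}_{l} = \{x^{l}_j\}_{j=1}^{M}
\quad \text{with no } (x^{l}, y) \text{ pairs observed in training.}

% Illustrative multi-task objective over a shared encoder E:
% logical-form generation from English, utterance reconstruction from
% monolingual text, and an adversarial term (weight lambda, our notation)
% penalizing language discriminability of the encoded representations.
\mathcal{L} =
  \underbrace{\mathcal{L}_{\mathrm{parse}}\bigl(y \mid E(x^{\mathrm{EN}})\bigr)}_{\text{logical form generation}}
  + \underbrace{\mathcal{L}_{\mathrm{recon}}\bigl(x^{l} \mid E(x^{l})\bigr)}_{\text{utterance reconstruction}}
  - \lambda\, \underbrace{\mathcal{L}_{\mathrm{disc}}\bigl(l \mid E(x)\bigr)}_{\text{language discriminability}}
```

At test time the parser receives only x^l and must produce y directly. The encoder E is shared across tasks, so that if the reconstruction and adversarial terms succeed, E(x^l) and E(x^EN) occupy an overlapping latent space and the English-trained decoder transfers without parallel data.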