
Efficient Bilingual Generalization from Neural Transduction Grammar Induction

Yuchen YAN, Dekai WU, Serkan KUMYOL
Department of Computer Science and Engineering, Human Language Technology Center,
The Hong Kong University of Science and Technology
{yyanaa|dekai|skumyol}@cs.ust.hk

Abstract

We introduce (1) a novel neural network structure for bilingual modeling of sentence pairs that allows efficient capturing of bilingual relationships via biconstituent composition, (2) the concept of neural network biparsing, which applies not only to machine translation (MT) but also to a variety of other bilingual research areas, and (3) the concept of a biparsing-backpropagation training loop, which we hypothesize can efficiently learn complex biparse tree patterns. Our work is distinguished from the sequential attention-based models more traditionally found in neural machine translation (NMT) in three respects. First, our model enforces compositional constraints. Second, our model has a smaller search space in terms of discovering bilingual relationships from bilingual sentence pairs. Third, our model produces explicit biparse trees, which enable transparent error analysis during evaluation and external tree constraints during training.

1. Introduction

In this paper, we introduce a neural network structure for modeling bilingual sentence pairs that features efficient capturing of bilingual relationships by learning explicit compositional biconstituent structures, as opposed to conventional attention-based NMT models, which learn flat token-to-token bilingual relationships and therefore require numerous parallel corpora. The token-to-token formalism for bilingual relationships is inefficient in two respects. First, it lacks compositional structure, which is the key to generalizing biphrases from bitokens and generalizing bisentences from biphrases. Second, it places no constraints on the space of all possible token alignments, so the attention layer inefficiently explores strategies over a huge search space. Our model skips directly to the compositional structure, representing bilingual relationships by recursively composing biconstituents with one extra degree of ordering flexibility.

We propose a new training strategy based on what we call a biparsing-backpropagation training loop, inspired by our hypothesis that good biparse trees lead to better models, and better models compute better biparse trees, forming a fast feedback loop that efficiently captures bilingual relationships in low-resource scenarios. When biparsing a corpus, desirable tree patterns tend to show up repeatedly because they explain more bilingual phenomena, and these patterns are learned during backpropagation. In the next epoch, the learned patterns help compute more accurate biparse trees, revealing more complex and desirable tree patterns.

The paper is divided into two main parts. In the first part we lay out the basic formalism of soft transduction grammars and soft biparse trees, which are the neural network analogues of symbolic transduction grammars and symbolic biparse trees. We then introduce several competing neural network designs and explain how the design decisions we make are suitable in terms of the generalizability of the neural network structure and the expressiveness of the transduction grammar. In the second part, we explain how our neural network model implements the formalism of a soft transduction grammar, together with a pipeline for the biparsing-backpropagation training loop. Afterwards, a small experiment demonstrates how this feedback loop discovers biparse tree patterns by showing how biparse trees evolve over time.
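As a concrete illustration, the biparsing-backpropagation training loop can be summarized in a short sketch. This is a minimal, hypothetical rendering assuming a PyTorch-style optimizer; `biparse` and `tree_loss` are placeholder names, not the authors' actual implementation.

```python
# Hypothetical sketch of the biparsing-backpropagation training loop.
# `model.biparse` and `model.tree_loss` are assumed interfaces; the
# optimizer follows a PyTorch-style API.
def train(model, optimizer, corpus, num_epochs):
    for epoch in range(num_epochs):
        for out_sentence, in_sentence in corpus:
            # Biparsing step: use the current model to find the best
            # soft biparse tree for this sentence pair.
            tree = model.biparse(out_sentence, in_sentence)
            # Backpropagation step: reinforce the tree patterns found
            # during biparsing, so that the next epoch's biparses
            # become more accurate and reveal more complex patterns.
            loss = model.tree_loss(tree)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```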
2. Related works

To our knowledge there is no published research in NMT that works with biparse trees besides the basic TRAAM model (Addanki and Wu, 2014), which is an incomplete model that can perform neither MT nor biparsing on its own. However, there have been many attempts to incorporate monolingual trees (mostly syntactic trees) into MT systems.

Most of the related work started from the seq2seq architecture, linearizing a syntactic parse tree into a flat sequence in depth-first search order, which we hypothesize is a non-optimal representation, since linearizing inevitably separates related sibling tree nodes, resulting in unnecessary long-distance dependencies. Vinyals et al. (2015) trained a seq2seq model translating from monolingual sentences to their linearized parse trees (without tokens), effectively building a neural network syntactic parser. Aharoni and Goldberg (2017) proposed linearizing the target parse tree with tokens, resulting in a model that can translate from a source sentence to a target parse tree (with tokens). Furthermore, Ma et al. (2018) proposed a way to linearize an entire weighted parse forest into a sequence. Another variation was proposed by Wang et al. (2018), who introduced additional connections to the LSTM so that when generating a tree node, an LSTM unit has direct access to the output of its parent node.

Another approach is to use a recursive unit that naturally follows the tree structure, so that linearization is no longer required on the encoder side. This is an improvement, but it still requires syntactic parse trees as additional input. Eriguchi et al. (2017) proposed using Tree-LSTM (Tai et al., 2015) to encode source sentences along the topology of a syntactic parse tree.

3. Soft transduction grammars

We propose a new concept called a soft transduction grammar, which uses soft biparse trees to explain sentence pairs, in contrast to traditional transduction grammars, which use symbolic biparse trees. Our new soft transduction grammar has the advantage of not having to keep track of a combinatorially exploding number of nonterminal categories and rules, thus significantly reducing computational complexity while retaining its expressiveness over bilingual relationships.

Formally, a soft transduction grammar consists of the following components (see the interface sketch after this list):
• An output language vocabulary and an input language vocabulary.
• A function bi_lexicon_embed that takes a biconstituent as input and returns a biconstituent embedding.
• A function bi_lexicon_readout that takes a biconstituent embedding as input and returns a bilexicon.
• A function bi_lexicon_evaluate that takes a biconstituent as input and returns a degree of goodness based on whether the given biconstituent is a valid bilexicon.
• A function bi_compose that takes (1) a list of biconstituent embeddings in output language order and (2) the list of the same biconstituent embeddings in input language order, and returns a composed biconstituent embedding.
• A function bi_decompose that takes a biconstituent embedding as input, and returns (1) a list of biconstituent embeddings in output language order and (2) a list of the same biconstituent embeddings in input language order.
• A function bi_compose_evaluate that takes (1) a list of biconstituent embeddings in output language order and (2) a list of the same biconstituent embeddings in input language order, and returns a degree of goodness based on whether the given biconstituents "compose nicely."
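The component functions above can be read as an abstract interface. The sketch below fixes one possible set of signatures; the type aliases and the choice of plain Python containers are our assumptions for illustration only, not part of the formalism.

```python
# Illustrative interface for a soft transduction grammar, mirroring the
# component functions listed above. Type aliases are simplifying
# assumptions made only for this sketch.
from abc import ABC, abstractmethod
from typing import List, Tuple

BiConstituent = Tuple[str, str]  # (output-language side, input-language side)
Embedding = List[float]          # biconstituent embedding vector


class SoftTransductionGrammar(ABC):
    @abstractmethod
    def bi_lexicon_embed(self, bilexicon: BiConstituent) -> Embedding:
        """Embed a biconstituent as a biconstituent embedding."""

    @abstractmethod
    def bi_lexicon_readout(self, embedding: Embedding) -> BiConstituent:
        """Read a bilexicon back out of a biconstituent embedding."""

    @abstractmethod
    def bi_lexicon_evaluate(self, bilexicon: BiConstituent) -> float:
        """Degree of goodness: is this biconstituent a valid bilexicon?"""

    @abstractmethod
    def bi_compose(self, out_order: List[Embedding],
                   in_order: List[Embedding]) -> Embedding:
        """Compose children (given in both language orders) into a parent."""

    @abstractmethod
    def bi_decompose(self, embedding: Embedding
                     ) -> Tuple[List[Embedding], List[Embedding]]:
        """Recover children in output-language and input-language order."""

    @abstractmethod
    def bi_compose_evaluate(self, out_order: List[Embedding],
                            in_order: List[Embedding]) -> float:
        """Degree of goodness: do these children compose nicely?"""
```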
A soft transduction grammar is capable of performing a variety of bilingual tasks using algorithms similar to those of a symbolic transduction grammar. These bilingual tasks include:
• Parallel sentence embedding: takes a bilingual sentence pair as input and returns a biconstituent embedding.
• Parallel sentence generation: takes a biconstituent embedding as input and generates a bilingual sentence pair. Note that recovering the exact original sentence is unlikely for a long sentence if biconstituent composition is lossy. However, we hypothesize that a good soft transduction grammar will try to preserve the syntactic/semantic structure of the original sentence pair.
• Tree recognition: takes a biparse tree as input and calculates a degree of goodness.
• Biparsing: takes a bilingual sentence pair as input and finds the best biparse tree.
• Transduction: takes an input language sentence as input and returns a sentence in the output language (or vice versa).

3.1. Inversion transduction grammar

We work with a special type of transduction grammar called an inversion transduction grammar, or ITG (Wu, 1997), which has an empirically appropriate degree of ordering flexibility: small enough to retain efficient computation, yet general enough to explain almost every alignment in natural language transductions. When an ITG composes child biconstituents into a parent biconstituent, the input language constituents may be read either in the same order as the output language constituents or in the reverse order. When working with soft transduction grammars, all biparse trees have their nonterminal and preterminal categories replaced with biconstituent embeddings.

the fact that for a natural language parse tree, shallower terminals (often representing main sentence structures) should be prioritized over deeper ones (often representing supplementary modifiers or even nested clauses). To solve this problem, our new decaying unfolding loss applies different weightings to leaves at different depths: at each level deeper into the tree, the reconstruction loss is scaled by a decaying factor γ. In our model, we choose γ = 0.5.

Along with the new loss metric, we propose a pair of new composer and decomposer designs, solving the
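To make the decaying unfolding loss described above concrete, here is a minimal hypothetical sketch in which each leaf's reconstruction loss is weighted by γ^depth with γ = 0.5; the tree node structure and the per-leaf loss function are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the decaying unfolding loss: each leaf's
# reconstruction loss is scaled by gamma ** depth, so shallower leaves
# (main sentence structure) outweigh deeper ones (nested modifiers).
def decaying_unfolding_loss(node, leaf_reconstruction_loss, gamma=0.5, depth=0):
    if node.is_leaf:
        return (gamma ** depth) * leaf_reconstruction_loss(node)
    return sum(
        decaying_unfolding_loss(child, leaf_reconstruction_loss, gamma, depth + 1)
        for child in node.children
    )
```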