
Head-modifier Relation based Non-lexical Reordering Model for Phrase-Based Translation

Shui Liu1, Sheng Li1, Tiejun Zhao1, Min Zhang2, Pengyuan Liu3
1School of Computer Science and Technology, Harbin Institute of Technology
{liushui,lisheng,tjzhao}@mtlab.hit.edu.cn
2Institute for Infocomm Research
[email protected]
3Institute of Computational Linguistics, Peking University
[email protected]

Abstract

Phrase-based statistical MT (SMT) is a milestone in MT. However, the translation model in phrase-based SMT is structure-free, which greatly limits its reordering capacity. To address this issue, we propose a non-lexical head-modifier based reordering model on the word level that utilizes the constituent-based parse tree of the source side. Our experimental results on the NIST Chinese-English benchmarking data show that, with a very small model, our method significantly outperforms the baseline by 1.48% BLEU.

1 Introduction

Syntax has been successfully applied to SMT to improve translation performance. Research in applying syntax information to SMT has been carried out in two directions. On the one hand, syntax knowledge is employed by directly integrating syntactic structure into the translation rules, i.e., syntactic translation rules. From this perspective, the word order of the target translation is modeled explicitly by the syntax structure. Chiang (2005), Wu (1997) and Xiong (2006) learn syntax rules using formal grammars, while more research has been conducted to learn syntax rules with the help of linguistic analysis (Yamada and Knight, 2001; Graehl and Knight, 2004). However, these models face some challenges. Firstly, the linguistic analysis is far from perfect. Most of these methods require an off-the-shelf parser to generate the syntactic structure, which makes the translation results sensitive to parsing errors to some extent. To tackle this problem, n-best parse trees and parse forests (Mi and Huang, 2008; Zhang, 2009) have been proposed to relieve the error propagation brought by the linguistic analysis. Secondly, some phrases which violate the boundaries of the linguistic analysis are also useful in these models (DeNeefe et al., 2007; Cowan et al., 2006). Thus, a tradeoff needs to be found between linguistic sense and formal sense.

On the other hand, instead of using syntactic translation rules, some previous work attempts to learn syntax knowledge separately and then integrate that knowledge as constraints into the original model. Marton and Resnik (2008) utilize linguistic analysis derived from the parse tree to constrain the translation in a soft way. By doing so, this approach addresses the challenges brought by linguistic analysis through the log-linear model in a soft manner.

Starting from the state-of-the-art phrase-based model Moses (Koehn et al., 2007), we propose a head-modifier relation based reordering model and use it as a soft syntax constraint in the phrase-based translation framework. Compared with most previous soft-constraint models, we study how to utilize the constituent-based parse tree structure by mapping the parse tree to sets of head-modifier relations for phrase reordering. In this way, we build a word-level reordering model instead of a phrasal/constituent-level one. In our model, with the help of the alignment and the head-modifier dependency relationships on the source side, the reordering type of each aligned target word is identified as one of several pre-defined reordering types. With these reordering types, the reordering of phrases in translation is estimated on the word level.

Fig. 1. A Constituent-based Parse Tree

2 Baseline

Moses, a state-of-the-art phrase-based SMT system, is used as our baseline. In Moses, given the source language f and target language e, the decoder finds:

ebest = argmax_e p(e|f) pLM(e) ω^length(e)    (1)

where p(e|f) is computed by the phrase translation model, the distortion model and the lexical reordering model, pLM(e) is computed by the language model, and ω^length(e) is the word penalty model. Among these models, there are three reordering-related components: the language model, the lexical reordering model and the distortion model. The language model reorders the local target words within a fixed window in an implicit way. The lexical reordering model and the distortion model tackle the reordering problem between adjacent phrases on the lexical level and the alignment level, respectively. Besides these reordering models, the decoder imposes distortion pruning constraints to encourage translating the leftmost uncovered source word first and to limit reordering to a certain range.
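To make Eq. (1) concrete, the following is a minimal sketch of how one hypothesis could be scored under this objective in log space; the toy phrase table, probabilities and function names are illustrative assumptions, not Moses internals.

```python
import math

# Minimal sketch of the decoder objective in Eq. (1), scored in log space:
#   e_best = argmax_e p(e|f) * p_LM(e) * omega^length(e)
# Moses itself combines such features with tuned log-linear weights;
# uniform weights are assumed here for simplicity.

def score_hypothesis(phrase_pairs, lm_logprob, omega=0.9):
    """phrase_pairs: list of (source_phrase, target_phrase, log p(t|s))
    used to translate f into e; lm_logprob: log p_LM(e) of the target side."""
    translation = sum(logp for _, _, logp in phrase_pairs)        # log p(e|f)
    target_len = sum(len(t.split()) for _, t, _ in phrase_pairs)  # length(e)
    word_penalty = target_len * math.log(omega)                   # log omega^length(e)
    return translation + lm_logprob + word_penalty

# A two-phrase toy hypothesis with made-up probabilities:
pairs = [("ta", "he", math.log(0.6)), ("kanjian le", "saw", math.log(0.4))]
print(score_hypothesis(pairs, lm_logprob=math.log(0.05)))
```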
3 Model

In this paper, we utilize the constituent parse tree of the source language to enhance the reordering capacity of the translation model. Instead of directly employing parse tree fragments (Bod, 1992; Johnson, 1998) in reordering rules (Huang and Knight, 2006; Liu, 2006; Zhang and Jiang, 2008), we map the trees to sets of head-modifier dependency relations (Collins, 1996), which can be obtained from the constituent-based parse tree with the help of head rules (Bikel, 2004).

3.1 Head-modifier Relation

According to Klein and Manning (2003) and Collins (1999), an n-ary Treebank grammar has two shortcomings. Firstly, the grammar is too coarse for parsing: the same rule in different contexts often has different distributions. Secondly, the rules learned from the training corpus cannot cover the rules in the testing set.

Currently, state-of-the-art parsing algorithms (Klein and Manning, 2003; Collins, 1999) decompose the n-ary Treebank grammar into sets of head-modifier relationships, constructing the parsing rules in the form of finer-grained binary head-modifier dependency relationships. Fig. 2 presents an example of a head-modifier based dependency tree mapped from the constituent parse tree in Fig. 1.

Fig. 2. Head-modifier Relationships with Aligned Translation

Moreover, there are several reasons why we adopt the head-modifier structured tree as the main frame of our reordering model. Firstly, the dependency relationships can reflect underlying binary long-distance dependency relations on the source side, so the binary dependency structure suffers less from the long-distance reordering constraint. Secondly, with head-modifier relations we can utilize not only the context of the dependency relation in the reordering model, but also some well-known and proven helpful context (Johnson, 1998) of the constituent-based parse tree. Finally, the head-modifier relationship is a mature and widely adopted device in full parsing.
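As a rough illustration of the tree-to-dependency mapping described above, the sketch below percolates heads through a toy constituent tree; the tiny head-rule table and the toy sentence are assumptions standing in for a full Bikel (2004)-style head table.

```python
# Minimal sketch: mapping a constituent tree to head-modifier dependencies
# by percolating heads upward with simplified head rules. HEAD_RULES is an
# illustrative stand-in for a full Bikel (2004)-style head table.

HEAD_RULES = {            # label -> child labels preferred as head, in order
    "IP": ["VP", "NP"],
    "VP": ["VV", "VP"],
    "NP": ["NN", "NP"],
}

def find_head(label, children):
    """Pick the head child of a constituent via the head-rule table,
    falling back to the rightmost child if no rule matches."""
    for preferred in HEAD_RULES.get(label, []):
        for child in children:
            if child[0] == preferred:
                return child
    return children[-1]

def extract_deps(tree, deps):
    """tree: (label, [children]) for internal nodes, (POS, word) for leaves.
    Returns the lexical head word; appends (modifier, head) pairs to deps."""
    label, rest = tree
    if isinstance(rest, str):          # leaf node: (POS, word)
        return rest
    head_child = find_head(label, rest)
    head_word = extract_deps(head_child, deps)
    for child in rest:
        if child is not head_child:
            deps.append((extract_deps(child, deps), head_word))
    return head_word

# Toy tree for "jingcha zhuabu xiaotou" (police arrest thief):
tree = ("IP", [("NP", [("NN", "jingcha")]),
               ("VP", [("VV", "zhuabu"), ("NN", "xiaotou")])])
deps = []
extract_deps(tree, deps)
print(deps)   # [('xiaotou', 'zhuabu'), ('jingcha', 'zhuabu')]
```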
3.2 Head-modifier Relation Based Reordering Model

Before elaborating on the model, we define some notation for ease of understanding. S = <f1, f2, …, fn> is the source sentence; T = <e1, e2, …, em> is the target sentence. AS = {aS(i) | 1 ≤ aS(i) ≤ m}, where aS(i) means that the ith word in the source sentence is aligned to the aS(i)th word in the target sentence; AT = {aT(i) | 1 ≤ aT(i) ≤ n}, where aT(i) means that the ith word in the target sentence is aligned to the aT(i)th word in the source sentence. D = {(d(i), r(i)) | 0 ≤ d(i) ≤ n} is the head-modifier relation set of the words in S, where d(i) means that the ith word in the source sentence is the modifier of the d(i)th word in the source sentence under relationship r(i). O = <o1, o2, …, om> is the sequence of reordering types of the words in the target language. The reordering model probability is P(O | S, T, D, A).

Relationship: in this paper, we not only use the constituent label as in Collins (1996), but also use some context well known in parsing to define the head-modifier relationship r(.): the POS of the modifier m, the POS of the head h, the dependency direction d, the parent label of the dependency l, the grandparent label of the dependency p, and the POS of the adjacent siblings of the modifier s. Thus, the head-modifier relationship can be represented as a 6-tuple <m, h, d, l, p, s>.

r(.)	relationship
r(1)	<VV, -, -, -, -, ->
r(2)	<NN, NN, right, NP, IP, ->
r(3)	<NN, VV, right, IP, CP, ->
r(4)	<VV, DEC, right, CP, NP, ->
r(5)	<NN, VV, left, VP, CP, ->
r(6)	<DEC, NP, right, NP, VP, ->
r(7)	<NN, VV, left, VP, TOP, ->

Table 1. Relations Extracted from Fig. 2.

Table 1 lists the 7 relationships extracted from the source head-modifier based dependency tree shown in Fig. 2. Note that, in this paper, each source word has a corresponding relation.

Reordering type: our model has 4 reordering types for target words with a linked word on the source side: R = {rm1, rm2, rm3, rm4}. The reordering type of the target word aS(i) is defined as follows:

rm1: if the position of the ith word's head is less than i (d(i) < i) in the source language, while the position of the word aligned to i is less than aS(d(i)) (aS(i) < aS(d(i))) in the target language;

rm2: if the position of the ith word's head is less than i (d(i) < i) in the source language, while the position of the word aligned to i is larger than aS(d(i)) (aS(i) > aS(d(i))) in the target language;

[…] we classify the reordering type into rm2; otherwise, we classify the reordering type into rm4.

Probability estimation: we adopt maximum likelihood (ML) based estimation in this paper. In ML estimation, in order to avoid the data sparseness problem brought by lexicalization, we discard the lexical information in the source and target languages:
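To make the classification above concrete, here is a minimal sketch that assigns a reordering type to each aligned target word from the source-side dependencies and the alignment. Since only rm1 and rm2 are fully defined above, the sketch fills in rm3 and rm4 by symmetry (head to the right of its modifier), which is an assumption of this sketch rather than the paper's exact definition.

```python
# Minimal sketch: assigning a reordering type to an aligned target word
# from the source-side head-modifier links and the word alignment.
# rm1/rm2 follow the definitions above; rm3/rm4 are filled in by symmetry
# (head positioned after its modifier in the source), an assumption here.

def reordering_type(i, d, a_s):
    """i: 1-based source position; d[i]: position of i's head (0 = root);
    a_s[i]: 1-based target position aligned to source word i."""
    head = d[i]
    if head == 0 or head not in a_s or i not in a_s:
        return None                      # root or unaligned word: no type
    if head < i:                         # head precedes modifier in source
        return "rm1" if a_s[i] < a_s[head] else "rm2"
    else:                                # head follows modifier (assumed)
        return "rm3" if a_s[i] < a_s[head] else "rm4"

# Toy example: source positions 1..3, words 1 and 3 both modify word 2.
d = {1: 2, 2: 0, 3: 2}                   # head-modifier links, 0 = root
a_s = {1: 3, 2: 1, 3: 2}                 # source position -> target position
for i in (1, 2, 3):
    print(i, reordering_type(i, d, a_s))
```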
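Continuing the illustration, the sketch below performs non-lexical ML (relative-frequency) estimation over such reordering events. Factorizing the model into per-word probabilities p(o | r), conditioned only on the non-lexical relation tuple r, is an assumption of this sketch, as are the toy events.

```python
from collections import Counter

# Minimal sketch: maximum-likelihood (relative-frequency) estimation of a
# non-lexical reordering model. Each training event pairs a head-modifier
# relation tuple r = (m, h, d, l, p, s) with an observed reordering type o.
# Lexical information is deliberately excluded from r, matching the
# non-lexical design above; the p(o | r) factorization is assumed here.

def estimate(events):
    """events: list of (relation_tuple, reordering_type) pairs.
    Returns a dict mapping (relation_tuple, reordering_type) -> p(o | r)."""
    joint = Counter(events)                    # counts of (r, o)
    marginal = Counter(r for r, _ in events)   # counts of r
    return {(r, o): c / marginal[r] for (r, o), c in joint.items()}

# Toy training events using relation tuples in the style of Table 1:
r2 = ("NN", "NN", "right", "NP", "IP", "-")
r5 = ("NN", "VV", "left", "VP", "CP", "-")
events = [(r2, "rm1"), (r2, "rm1"), (r2, "rm2"), (r5, "rm4")]
model = estimate(events)
print(model[(r2, "rm1")])   # 2/3
```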