Simultaneous Word-Morpheme Alignment for Statistical Machine Translation

Elif Eyigöz, Computer Science, University of Rochester, Rochester, NY 14627
Daniel Gildea, Computer Science, University of Rochester, Rochester, NY 14627
Kemal Oflazer, Computer Science, Carnegie Mellon University, PO Box 24866, Doha, Qatar

Abstract

Current word alignment models for statistical machine translation do not address morphology beyond merely splitting words. We present a two-level alignment model that distinguishes between words and morphemes, in which we embed an IBM Model 1 inside an HMM based word alignment model. The model jointly induces word and morpheme alignments using an EM algorithm. We evaluated our model on Turkish-English parallel data. We obtained significant improvement of BLEU scores over IBM Model 4. Our results indicate that utilizing information from morphology improves the quality of word alignments.

1 Introduction

All current state-of-the-art approaches to SMT rely on an automatically word-aligned corpus. However, current alignment models do not take into account the morpheme, the smallest unit of syntax, beyond merely splitting words. Since morphology has not been addressed explicitly in word alignment models, researchers have resorted to tweaking SMT systems by manipulating the content and the form of what should be the so-called "word".

Since the word is the smallest unit of translation from the standpoint of word alignment models, the central focus of research on translating morphologically rich languages has been the decomposition of morphologically complex words into tokens of the right granularity and representation for machine translation. Chung and Gildea (2009) and Naradowsky and Toutanova (2011) use unsupervised methods to find word segmentations that create a one-to-one mapping of words in both languages. Al-Onaizan et al. (1999), Čmejrek et al. (2003), and Goldwater and McClosky (2005) manipulate morphologically rich languages by selective lemmatization. Lee (2004) attempts to learn the probability of deleting or merging Arabic morphemes for Arabic to English translation. Niessen and Ney (2000) split German compound nouns, and merge German phrases that correspond to a single English word. Alternatively, Yeniterzi and Oflazer (2010) manipulate words of the morphologically poor side of a language pair to mimic having a morphological structure similar to the richer side by exploiting syntactic structure, in order to improve the similarity of words on both sides of the translation.

We present an alignment model that assumes internal structure for words, so that we can legitimately talk about words and their morphemes in line with the linguistic conception of these terms.
Our model avoids the problem of collapsing words and morphemes into one single category. We adopt a two-level representation of alignment: the first level involves word alignment, and the second level involves morpheme alignment in the scope of a given word alignment. The model jointly induces word and morpheme alignments using an EM algorithm.

We develop our model in two stages. Our initial model is analogous to IBM Model 1: the first level is a bag of words in a pair of sentences, and the second level is a bag of morphemes. In this manner, we embed one IBM Model 1 in the scope of another IBM Model 1. At the second stage, by introducing distortion probabilities at the word level, we develop an HMM extension of the initial model.

We evaluated the performance of our model on the Turkish-English pair, both on hand-aligned data and by running end-to-end machine translation experiments. To evaluate our results, we created gold word alignments for 75 Turkish-English sentences. We obtain significant improvement of AER and BLEU scores over IBM Model 4. Section 2.1 introduces the concept of morpheme alignment in terms of its relation to word alignment. Section 2.2 presents the derivation of the EM algorithm, and Section 3 presents the results of our experiments.

Proceedings of NAACL-HLT 2013, pages 32–40, Atlanta, Georgia, 9–14 June 2013. © 2013 Association for Computational Linguistics

2 Two-level Alignment Model (TAM)

2.1 Morpheme Alignment

Following the standard alignment models of Brown et al. (1993), we assume one-to-many alignment for both words and morphemes. A word alignment a_w (or simply a) is a function mapping a set of word positions in a source language sentence to a set of word positions in a target language sentence. A morpheme alignment a_m is a function mapping a set of morpheme positions in a source language sentence to a set of morpheme positions in a target language sentence. A morpheme position is a pair of integers (j, k), which defines a word position j and a relative morpheme position k in the word at position j. The alignments below are depicted in Figures 1 and 2:

    a_w(1) = 1    a_m(2, 1) = (1, 1)    a_w(2) = 1

Figure 1 shows a word alignment between two sentences. Figure 2 shows the morpheme alignment between the same sentences. We assume that all unaligned morphemes in a sentence map to a special null morpheme.

A morpheme alignment a_m and a word alignment a_w are compatible if and only if they satisfy the following conditions: If the morpheme alignment a_m maps a morpheme of e to a morpheme of f, then the word alignment a_w maps e to f. If the word alignment a_w maps e to f, then the morpheme alignment a_m maps at least one morpheme of e to a morpheme of f. If the word alignment a_w maps e to null, then all of its morphemes are mapped to null. In sum, a morpheme alignment a_m and a word alignment a_w are compatible if and only if:

    ∀ j, k, m, n ∈ N⁺, ∃ s, t ∈ N⁺:
        [a_m(j, k) = (m, n) ⇒ a_w(j) = m] ∧
        [a_w(j) = m ⇒ a_m(j, s) = (m, t)] ∧
        [a_w(j) = null ⇒ a_m(j, k) = null]                    (1)

Please note that, according to this definition of compatibility, 'a_m(j, k) = null' does not necessarily imply 'a_w(j) = null'. A word alignment induces a set of compatible morpheme alignments. However, a morpheme alignment induces a unique word alignment. Therefore, if a morpheme alignment a_m and a word alignment a_w are compatible, then the word alignment a_w is recoverable from the morpheme alignment a_m.

The two-level alignment model (TAM), like IBM Model 1, defines an alignment between the words of a sentence pair. In addition, it defines a morpheme alignment between the morphemes of a sentence pair. The problem domain of IBM Model 1 is defined over alignments between words, which is depicted as the gray box in Figure 1. In Figure 2, the smaller boxes embedded inside the main box depict the new problem domain of TAM. Given the word alignments in Figure 1, we are presented with a new alignment problem defined over their morphemes. The new alignment problem is constrained by the given word alignment. We, like IBM Model 1, adopt a bag-of-morphemes approach to this new problem. We thus embed one IBM Model 1 into the scope of another IBM Model 1, and formulate a second-order interpretation of IBM Model 1.

TAM, like IBM Model 1, assumes that words and morphemes are translated independently of their context. The units of translation are both words and morphemes. Both the word alignment a_w and the morpheme alignment a_m are hidden variables that need to be learned from the data using the EM algorithm.

In IBM Model 1, p(e|f), the probability of translating the sentence f into e with any alignment, is computed by summing over all possible word alignments:

    p(e|f) = Σ_a p(a, e|f)

Figure 1: Word alignment.    Figure 2: Morpheme alignment.
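The compatibility conditions of Eqn. 1 can be checked mechanically. The sketch below is not from the paper; it assumes a word alignment represented as a dict from target word position j to a source position (or None for null), and a morpheme alignment as a dict from (j, k) to (m, n) (or None for the null morpheme):

```python
from typing import Dict, Optional, Tuple

WordAlign = Dict[int, Optional[int]]                            # j -> m, or None (null)
MorphAlign = Dict[Tuple[int, int], Optional[Tuple[int, int]]]   # (j, k) -> (m, n), or None

def compatible(a_w: WordAlign, a_m: MorphAlign) -> bool:
    """Check the three conjuncts of Eqn. 1."""
    for (j, _k), tgt in a_m.items():
        # a_m(j, k) = (m, n)  =>  a_w(j) = m
        if tgt is not None and a_w.get(j) != tgt[0]:
            return False
        # a_w(j) = null  =>  a_m(j, k) = null
        if a_w.get(j) is None and tgt is not None:
            return False
    for j, m in a_w.items():
        # a_w(j) = m  =>  some morpheme (j, s) of word j aligns to a morpheme of word m
        if m is not None and not any(
            jk[0] == j and tgt is not None and tgt[0] == m
            for jk, tgt in a_m.items()
        ):
            return False
    return True
```

Note that, as in the definition, a null-aligned morpheme inside a word-aligned word is permitted, while a word-aligned word with no aligned morphemes is not.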
In TAM, the probability of translating the sentence f into e with any alignment is computed by summing over all possible word alignments and all possible morpheme alignments that are compatible with a given word alignment a_w:

    p(e|f) = Σ_{a_w} Σ_{a_m} p(a_w, e|f) p(a_m, e|a_w, f)        (2)

where a_m stands for a morpheme alignment. Since the morpheme alignment a_m is in the scope of a given word alignment a_w, a_m is constrained by a_w.

In IBM Model 1, we compute the probability of translating the sentence f into e by summing over all possible word alignments between the words of f and e:

    p(e|f) = R(e, f) ∏_{j=1}^{|e|} Σ_{i=0}^{|f|} t(e_j|f_i)        (3)

In TAM, the corresponding probability factors into a word part (the left part) and a morpheme part (the right part), where the morpheme part is computed in the scope of the left part:

    p(e|f) = R(e, f) ∏_{j=1}^{|e|} Σ_{i=0}^{|f|} [ t(e_j|f_i) R(e_j, f_i) ∏_{k=1}^{|e_j|} Σ_{n=0}^{|f_i|} t(e_j^k|f_i^n) ]

In the right part, we compute the probability of translating the word f_i into the word e_j by summing over all possible morpheme alignments between the morphemes of e_j and f_i, where e_j^k denotes the k-th morpheme of e_j. R(e_j, f_i) is equivalent to R(e, f) except for the fact that its domain is not the set of sentences but the set of words: the lengths of the words e_j and f_i in R(e_j, f_i) are the numbers of morphemes of e_j and f_i.

The left part, the contribution of word translation probabilities alone, equals Eqn. 3. Therefore, canceling the contribution of morpheme translation probabilities reduces TAM to IBM Model 1. In our experiments, we call this reduced version of TAM 'word-only' (IBM). TAM with the contribution of both word and morpheme translation probabilities, as in the equation above, is called 'word-and-morpheme'. Finally, we also cancel out the contribution of word translation probabilities, which is called 'morpheme-only'.
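The likelihoods above can be illustrated numerically. The sketch below is not the authors' implementation: `t`, `t_word`, and `t_morph` are hypothetical toy translation tables stored as plain dicts, and the length factors R(e, f) and R(e_j, f_i) are set to 1 for readability:

```python
def ibm1_likelihood(e, f, t):
    """Eqn. 3 with R(e, f) = 1: a product over target words of sums over
    source words. e: list of target words; f: list of source words, with
    f[0] the null word; t: dict (e_word, f_word) -> probability."""
    prod = 1.0
    for e_j in e:
        prod *= sum(t.get((e_j, f_i), 0.0) for f_i in f)
    return prod

def tam_likelihood(e, f, t_word, t_morph):
    """Word-and-morpheme likelihood with the R factors set to 1: each word
    term t(e_j|f_i) is multiplied by an inner IBM Model 1 over the morphemes
    of e_j and f_i. Here a word is a list of its morphemes, and f[0] is the
    null word (a one-element list holding the null morpheme)."""
    prod = 1.0
    for e_j in e:
        total = 0.0
        for f_i in f:
            inner = 1.0
            for m_e in e_j:
                inner *= sum(t_morph.get((m_e, m_f), 0.0) for m_f in f_i)
            total += t_word.get((tuple(e_j), tuple(f_i)), 0.0) * inner
        prod *= total
    return prod
```

Dropping the `inner` factor (setting it to 1) recovers `ibm1_likelihood`, which is exactly the 'word-only' reduction described above; dropping the `t_word` factor instead gives the 'morpheme-only' variant.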

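The EM derivation itself appears in Section 2.2, beyond this excerpt, but the word-level estimation follows the familiar IBM Model 1 pattern. A minimal sketch of one E/M iteration, assuming a hypothetical dict-based translation table (not the authors' code):

```python
from collections import defaultdict

def ibm1_em_step(corpus, t):
    """One EM iteration of IBM Model 1 at the word level.
    corpus: list of (e_words, f_words) pairs, where f_words starts with "NULL";
    t: dict (e_word, f_word) -> probability. Returns the re-estimated table."""
    count = defaultdict(float)
    total = defaultdict(float)
    for e, f in corpus:
        for e_j in e:
            z = sum(t.get((e_j, f_i), 1e-12) for f_i in f)   # normalizer over f
            for f_i in f:
                c = t.get((e_j, f_i), 1e-12) / z             # expected count (E-step)
                count[(e_j, f_i)] += c
                total[f_i] += c
    # M-step: renormalize counts per source word
    return {ef: c / total[ef[1]] for ef, c in count.items()}
```

In TAM the same expected-count bookkeeping would additionally be carried out for morpheme pairs within each word pair, weighted by the compatible word alignment posteriors.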