The Best Lexical Metric for Phrase-Based Statistical MT System Optimization

Daniel Cer, Christopher D. Manning and Daniel Jurafsky
Stanford University
Stanford, CA 94305, USA

Abstract

Translation systems are generally trained to optimize BLEU, but many alternative metrics are available. We explore how optimizing toward various automatic evaluation metrics (BLEU, METEOR, NIST, TER) affects the resulting model. We train a state-of-the-art MT system using MERT on many parameterizations of each metric and evaluate the resulting models on the other metrics and also using human judges. In accordance with popular wisdom, we find that it's important to train on the same metric used in testing. However, we also find that training to a newer metric is only useful to the extent that the MT model's structure and features allow it to take advantage of the metric. Contrasting with TER's good correlation with human judgments, we show that people tend to prefer BLEU and NIST trained models to those trained on edit distance based metrics like TER or WER. Human preferences for METEOR trained models vary depending on the source language. Since using BLEU or NIST produces models that are more robust to evaluation by other metrics and perform well in human judgments, we conclude they are still the best choice for training.

1 Introduction

Since their introduction, automated measures of machine translation quality have played a critical role in the development and evolution of SMT systems. While such metrics were initially intended for evaluation, popular training methods such as minimum error rate training (MERT) (Och, 2003) and the margin infused relaxed algorithm (MIRA) (Crammer and Singer, 2003; Watanabe et al., 2007; Chiang et al., 2008) train translation models toward a specific evaluation metric. This makes the quality of the resulting model dependent on how accurately the automatic metric actually reflects human preferences.

The most popular metric for both comparing systems and tuning MT models has been BLEU. While BLEU (Papineni et al., 2002) is relatively simple, scoring translations according to their n-gram overlap with reference translations, it still achieves a reasonable correlation with human judgments of translation quality. It is also robust enough to use for automatic optimization. However, BLEU does have a number of shortcomings. It doesn't penalize n-gram scrambling (Callison-Burch et al., 2006), and since it isn't aware of synonymous words or phrases, it can inappropriately penalize translations that use them.

Recently, there have been efforts to develop better evaluation metrics. Metrics such as Translation Edit Rate (TER) (Snover et al., 2006; Snover et al., 2009) and METEOR [1] (Lavie and Denkowski, 2009) perform a more sophisticated analysis of the translations being evaluated, and the scores they produce tend to achieve a better correlation with human judgments than those produced by BLEU (Snover et al., 2009; Lavie and Denkowski, 2009; Przybocki et al., 2008; Snover et al., 2006).

Their better correlations suggest that we might obtain higher quality translations by making use of these new metrics when training our models. We expect that training on a specific metric will produce the best performing model according to that metric. Doing better on metrics that better reflect human judgments seems to imply the translations produced by the model would be preferred by human judges. However, there are four potential problems. First, some metrics could be susceptible to systematic exploitation by the training algorithm and result in model translations that have a high score according to the evaluation metric but that are of low quality. [2] Second, other metrics may result in objective functions that are harder to optimize. Third, some may result in better generalization performance at test time by not encouraging overfitting of the training data. Finally, as a practical concern, metrics used for training cannot be too slow.

In this paper, we systematically explore these four issues for the most popular metrics available to the MT community. We examine how well models perform both on the metrics on which they were trained and on the other alternative metrics. Multiple models are trained using each metric in order to determine the stability of the resulting models. Select models are scored by human judges in order to determine how performance differences obtained by tuning to different automated metrics relate to actual human preferences.

The next sections introduce the metrics and our training procedure. We follow with two sets of core results: machine evaluation in section 5, and human evaluation in section 6.

[1] METEOR: Metric for Evaluation of Translation with Explicit ORdering.
[2] For example, BLEU computed without the brevity penalty would likely result in models that have a strong preference for generating pathologically short translations.

2 Evaluation Metrics

Designing good automated metrics for evaluating machine translations is challenging due to the variety of acceptable translations for each foreign sentence. Popular metrics produce scores primarily based on matching sequences of words in the system translation to those in one or more reference translations. The metrics primarily differ in how they account for reorderings and synonyms.

2.1 BLEU

BLEU (Papineni et al., 2002) uses the percentage of n-grams found in machine translations that also occur in the reference translations. These n-gram precisions are calculated separately for different n-gram lengths and then combined using a geometric mean. The score is then scaled by a brevity penalty if the candidate translations are shorter than the references, BP = min(1.0, e^{1 − len(R)/len(T)}). Equation 1 gives BLEU using n-grams up to length N for a corpus of candidate translations T and reference translations R. A variant of BLEU called the NIST metric (Doddington, 2002) weights n-gram matches by how informative they are.

BLEU:N = BP · ( ∏_{n=1}^{N} n-grams(T ∩ R) / n-grams(T) )^{1/N}    (1)

While easy to compute, BLEU has a number of shortcomings. Since the order of matching n-grams is ignored, n-grams in a translation can be randomly rearranged around non-matching material or other n-gram breaks without harming the score. BLEU also does not explicitly check whether information is missing from the candidate translations, as it only examines what fraction of candidate translation n-grams are in the references and not what fraction of reference n-grams are in the candidates (i.e., BLEU ignores n-gram recall). Finally, the metric does not account for words and phrases that have similar meanings.
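As a concrete reading of Equation 1, here is a minimal Python sketch of corpus-level BLEU:N for pre-tokenized text. It assumes a single reference per segment, pools clipped n-gram counts over the whole corpus, and omits the multi-reference handling and smoothing found in production implementations; corpus_bleu and ngram_counts are illustrative names, not part of any standard toolkit.

```python
from collections import Counter
from math import exp, log

def ngram_counts(tokens, n):
    """Count all n-grams of order n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(candidates, references, max_n=4):
    """Corpus-level BLEU:N (Equation 1) over tokenized sentences,
    assuming one reference per candidate. Clipped n-gram matches are
    pooled over the corpus before the geometric mean is taken."""
    matches, totals = Counter(), Counter()
    cand_len = ref_len = 0
    for cand, ref in zip(candidates, references):
        cand_len += len(cand)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            c_counts = ngram_counts(cand, n)
            r_counts = ngram_counts(ref, n)
            totals[n] += sum(c_counts.values())
            # Clip each candidate n-gram by its count in the reference.
            matches[n] += sum(min(c, r_counts[g]) for g, c in c_counts.items())
    if any(matches[n] == 0 for n in range(1, max_n + 1)):
        return 0.0  # the unsmoothed geometric mean collapses to zero
    mean_log_prec = sum(log(matches[n] / totals[n]) for n in range(1, max_n + 1)) / max_n
    bp = min(1.0, exp(1 - ref_len / cand_len))  # brevity penalty
    return bp * exp(mean_log_prec)

# All candidate n-grams match, but the brevity penalty still applies:
cand = [["the", "cat", "sat", "on", "the", "mat"]]
ref = [["the", "cat", "sat", "on", "the", "mat", "today"]]
print(round(corpus_bleu(cand, ref), 4))  # ~0.8465 = exp(1 - 7/6)
```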
2.2 METEOR

METEOR (Lavie and Denkowski, 2009) computes a one-to-one alignment between matching words in a candidate translation and a reference. If a word matches multiple other words, preference is given to the alignment that reorders the words the least, with the amount of reordering measured by the number of crossing alignments. Alignments are first generated for exact matches between words. Additional alignments are created by repeatedly running the alignment procedure over unaligned words, first allowing for matches between word stems, and then allowing matches between words listed as synonyms in WordNet. From the final alignment, the candidate translation's unigram precision and recall are calculated, P = matches/length_trans and R = matches/length_ref. These two values are then combined into a weighted harmonic mean (2). To penalize reorderings, this value is then scaled by a fragmentation penalty based on the number of chunks the two sentences would need to be broken into to allow them to be rearranged with no crossing alignments, P_{β,γ} = 1 − γ · (chunks/matches)^β.

F_α = P · R / (α·P + (1 − α)·R)    (2)

METEOR_{α,β,γ} = F_α · P_{β,γ}    (3)

Equation 3 gives the final METEOR score as the product of the unigram harmonic mean, F_α, and the fragmentation penalty, P_{β,γ}. The free parameters α, β, and γ can be used to tune the metric to human judgments on a specific language and variation of the evaluation task (e.g., ranking candidate translations vs. reproducing judgments of translation adequacy and fluency).
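The involved part of METEOR is the staged alignment itself (exact, stem, then WordNet synonym matches); turning the resulting alignment statistics into a score is just Equations 2 and 3. The sketch below covers only that final step, with illustrative rather than officially tuned defaults for α, β, and γ; meteor_score is a hypothetical helper, not the reference implementation.

```python
def meteor_score(matches, chunks, len_trans, len_ref,
                 alpha=0.9, beta=3.0, gamma=0.5):
    """Turn METEOR alignment statistics into a score (Equations 2 and 3).

    matches   -- number of aligned unigrams
    chunks    -- number of contiguous aligned chunks in the alignment
    len_trans -- candidate translation length in words
    len_ref   -- reference translation length in words
    """
    if matches == 0:
        return 0.0
    p = matches / len_trans          # unigram precision
    r = matches / len_ref            # unigram recall
    f_alpha = (p * r) / (alpha * p + (1 - alpha) * r)         # Equation 2
    frag_penalty = 1.0 - gamma * (chunks / matches) ** beta   # P_{beta,gamma}
    return f_alpha * frag_penalty                             # Equation 3

# Example: 6 matched words falling into 2 chunks, 7-word candidate, 8-word reference.
print(round(meteor_score(matches=6, chunks=2, len_trans=7, len_ref=8), 4))  # ~0.7454
```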
2.3 Translation Edit Rate

TER (Snover et al., 2006) searches for the shortest sequence of edit operations needed to turn a candidate translation into one of the reference translations. The allowable edits are the insertion, deletion, and substitution of individual words and swaps of adjacent sequences of words.

3 MERT

MERT is the standard technique for obtaining a machine translation model fit to a specific evaluation metric (Och, 2003). Learning such a model cannot be done using gradient methods, since the value of the objective function only depends on the translation model's argmax for each sentence in the tuning set. Typically, this optimization is performed as a series of line searches that examines the value of the evaluation metric at critical points where a new translation argmax becomes preferred by the model. Since the model score assigned to each candidate translation varies linearly with changes to the model parameters, it is possible to efficiently find the global minimum along any given search direction with only O(n^2) operations when n-best lists are used.
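To make the line search concrete, the following Python sketch works from fixed n-best lists with precomputed feature vectors and a pluggable corpus-level metric. Instead of building the exact upper envelope of model scores as in Och (2003), it enumerates every pairwise crossing of the candidates' (linear) scores along the search direction and probes the metric once per resulting interval, which covers the same critical points at somewhat higher cost. Candidate, dot, and line_search are illustrative names rather than pieces of any particular MERT implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Candidate:
    words: List[str]        # tokenized candidate translation
    features: List[float]   # feature vector h(e, f) from the decoder

def dot(u: Sequence[float], v: Sequence[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

def line_search(nbests: List[List[Candidate]],
                weights: List[float],
                direction: List[float],
                metric: Callable[[List[List[str]]], float]) -> float:
    """Return the step size t that maximizes metric over the translations
    selected by weights + t * direction (negate an error metric like TER).

    Along the direction, each candidate's model score is a line a + b*t
    with a = w.h and b = d.h, so each sentence's argmax is piecewise
    constant in t and can only change where two of these lines cross."""
    crossings = set()
    for nbest in nbests:
        lines = [(dot(weights, c.features), dot(direction, c.features)) for c in nbest]
        for i in range(len(lines)):
            for j in range(i + 1, len(lines)):
                (a1, b1), (a2, b2) = lines[i], lines[j]
                if b1 != b2:
                    crossings.add((a2 - a1) / (b1 - b2))
    ts = sorted(crossings)
    # One probe inside every interval between consecutive critical points,
    # plus a probe beyond each end of the line.
    probes = ([ts[0] - 1.0] + [(ts[k] + ts[k + 1]) / 2 for k in range(len(ts) - 1)]
              + [ts[-1] + 1.0]) if ts else [0.0]

    def translations_at(t: float) -> List[List[str]]:
        w = [wi + t * di for wi, di in zip(weights, direction)]
        return [max(nbest, key=lambda c: dot(w, c.features)).words for nbest in nbests]

    return max(probes, key=lambda t: metric(translations_at(t)))
```

A caller would typically wrap the references into the metric argument, for example metric=lambda hyps: corpus_bleu(hyps, refs) using the BLEU sketch from section 2.1, repeat the search over many directions, and periodically re-decode with the updated weights to grow the n-best lists.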
Using our implementation of MERT that allows for pluggable optimization metrics, we tune models to BLEU:N for N = 1 ... 5, TER, two configurations of TERp, WER, several configurations of METEOR, as well as additive combinations of these metrics. The TERp configurations include the default configuration of TERp and TERpA: the configuration of TERp that was trained to