BLEU: a Method for Automatic Evaluation of Machine Translation

Total pages: 16

File type: PDF, size: 1020 KB

BLEU: a Method for Automatic Evaluation of Machine Translation
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, July 2002, pp. 311-318.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu
IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA
{papineni,roukos,toddward,weijing}@us.ibm.com

Abstract

Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. (So we call our method the bilingual evaluation understudy, BLEU.)

1 Introduction

1.1 Rationale

Human evaluations of machine translation (MT) weigh many aspects of translation, including adequacy, fidelity, and fluency of the translation (Hovy, 1999; White and O’Connell, 1994). A comprehensive catalog of MT evaluation techniques and their rich literature is given by Reeder (2001). For the most part, these various human evaluation approaches are quite expensive (Hovy, 1999). Moreover, they can take weeks or months to finish. This is a big problem because developers of machine translation systems need to monitor the effect of daily changes to their systems in order to weed out bad ideas from good ideas. We believe that MT progress stems from evaluation and that there is a logjam of fruitful research ideas waiting to be released from the evaluation bottleneck. Developers would benefit from an inexpensive automatic evaluation that is quick, language-independent, and correlates highly with human evaluation. We propose such an evaluation method in this paper.

1.2 Viewpoint

How does one measure translation performance? The closer a machine translation is to a professional human translation, the better it is. This is the central idea behind our proposal. To judge the quality of a machine translation, one measures its closeness to one or more reference human translations according to a numerical metric. Thus, our MT evaluation system requires two ingredients:

1. a numerical “translation closeness” metric
2. a corpus of good quality human reference translations

We fashion our closeness metric after the highly successful word error rate metric used by the speech recognition community, appropriately modified for multiple reference translations and allowing for legitimate differences in word choice and word order. The main idea is to use a weighted average of variable length phrase matches against the reference translations. This view gives rise to a family of metrics using various weighting schemes. We have selected a promising baseline metric from this family.

In Section 2, we describe the baseline metric in detail. In Section 3, we evaluate the performance of BLEU. In Section 4, we describe a human evaluation experiment. In Section 5, we compare our baseline metric performance with human evaluations.
2 The Baseline BLEU Metric

Typically, there are many “perfect” translations of a given source sentence. These translations may vary in word choice or in word order even when they use the same words. And yet humans can clearly distinguish a good translation from a bad one. For example, consider these two candidate translations of a Chinese source sentence:

Example 1.
Candidate 1: It is a guide to action which ensures that the military always obeys the commands of the party.
Candidate 2: It is to insure the troops forever hearing the activity guidebook that party direct.

Although they appear to be on the same subject, they differ markedly in quality. For comparison, we provide three reference human translations of the same sentence below.

Reference 1: It is a guide to action that ensures that the military will forever heed Party commands.
Reference 2: It is the guiding principle which guarantees the military forces always being under the command of the Party.
Reference 3: It is the practical guide for the army always to heed the directions of the party.

It is clear that the good translation, Candidate 1, shares many words and phrases with these three reference translations, while Candidate 2 does not. We will shortly quantify this notion of sharing in Section 2.1. But first observe that Candidate 1 shares "It is a guide to action" with Reference 1, "which" with Reference 2, "ensures that the military" with Reference 1, "always" with References 2 and 3, "commands" with Reference 1, and finally "of the party" with Reference 2 (all ignoring capitalization). In contrast, Candidate 2 exhibits far fewer matches, and their extent is less.

It is clear that a program can rank Candidate 1 higher than Candidate 2 simply by comparing n-gram matches between each candidate translation and the reference translations. Experiments over large collections of translations presented in Section 5 show that this ranking ability is a general phenomenon, and not an artifact of a few toy examples.

The primary programming task for a BLEU implementor is to compare n-grams of the candidate with the n-grams of the reference translation and count the number of matches. These matches are position-independent. The more the matches, the better the candidate translation is. For simplicity, we first focus on computing unigram matches.

2.1 Modified n-gram precision

The cornerstone of our metric is the familiar precision measure. To compute precision, one simply counts up the number of candidate translation words (unigrams) which occur in any reference translation and then divides by the total number of words in the candidate translation. Unfortunately, MT systems can overgenerate “reasonable” words, resulting in improbable, but high-precision, translations like that of Example 2 below. Intuitively the problem is clear: a reference word should be considered exhausted after a matching candidate word is identified. We formalize this intuition as the modified unigram precision. To compute this, one first counts the maximum number of times a word occurs in any single reference translation. Next, one clips the total count of each candidate word by its maximum reference count (Count_clip = min(Count, Max_Ref_Count); in other words, one truncates each word’s count, if necessary, to not exceed the largest count observed in any single reference for that word), adds these clipped counts up, and divides by the total (unclipped) number of candidate words.

Example 2.
Candidate: the the the the the the the.
Reference 1: The cat is on the mat.
Reference 2: There is a cat on the mat.
Modified Unigram Precision = 2/7. (As a guide to the eye, the important words for computing modified precision are underlined in the original paper.)

In Example 1, Candidate 1 achieves a modified unigram precision of 17/18, whereas Candidate 2 achieves a modified unigram precision of 8/14. Similarly, the modified unigram precision in Example 2 is 2/7, even though its standard unigram precision is 7/7.
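As a concrete illustration of the clipping just described, here is a minimal Python sketch of modified unigram precision (our own illustrative code, not the authors' implementation), applied to Example 2:

```python
from collections import Counter

def modified_unigram_precision(candidate, references):
    """Clip each candidate word's count by the maximum number of times it
    occurs in any single reference, then divide by the candidate length."""
    cand_counts = Counter(candidate)
    clipped = 0
    for word, count in cand_counts.items():
        max_ref_count = max(ref.count(word) for ref in references)
        clipped += min(count, max_ref_count)
    return clipped / len(candidate)

candidate = "the the the the the the the".split()
references = ["the cat is on the mat".split(),
              "there is a cat on the mat".split()]
print(modified_unigram_precision(candidate, references))  # 2/7 = 0.2857...
```

Here "the" occurs seven times in the candidate but at most twice in any single reference, so its count is clipped to 2, giving 2/7 rather than the standard unigram precision of 7/7.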
Modified n-gram precision is computed similarly for any n: all candidate n-gram counts and their corresponding maximum reference counts are collected. The candidate counts are clipped by their corresponding reference maximum value, summed, and divided by the total number of candidate n-grams. In Example 1, Candidate 1 achieves a modified bigram precision of 10/17, whereas the lower quality Candidate 2 achieves a modified bigram precision of 1/13. In Example 2, the (implausible) candidate achieves a modified bigram precision of 0. This sort of modified n-gram precision scoring captures two aspects of translation: adequacy and fluency. A translation using the same words (1-grams) as in the references tends to satisfy adequacy. The longer n-gram matches account for fluency.

2.1.1 Modified n-gram precision on blocks of text

How do we compute modified n-gram precision on a multi-sentence test set? Although one typically evaluates MT systems on a corpus of entire documents, our basic unit of evaluation is the sentence. A source sentence may translate to many target sentences, in which case we abuse terminology and refer to the corresponding target sentences as a “sentence.” We first compute the n-gram matches sentence by sentence. Next, we add the clipped n-gram counts for all the candidate sentences and divide by the number of candidate n-grams in the test corpus.

2.1.2 Ranking systems using only modified n-gram precision

To verify that modified n-gram precision distinguishes between very good translations and bad translations, we computed the modified precision numbers on the output of a (good) human translator and a standard (poor) machine translation system using 4 reference translations for each of 127 source sentences. The average precision results are shown in Figure 1.

[Figure 1: Distinguishing Human from Machine. Average modified n-gram precision plotted against phrase (n-gram) length, for n = 1 to 4.]

The strong signal differentiating human (high precision) from machine (low precision) is striking. The difference becomes stronger as we go from unigram precision to 4-gram precision. It appears that any single n-gram precision score can distinguish between a good translation and a bad translation. To be useful, however, the metric must also reliably distinguish between translations that do not differ so greatly in quality.
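The corpus-level computation of Section 2.1.1 can be sketched the same way (again illustrative code, not a reference implementation): clipped n-gram counts are accumulated over all candidate sentences before a single division by the total number of candidate n-grams.

```python
from collections import Counter

def ngrams(tokens, n):
    """All n-grams (as tuples) of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_modified_precision(candidates, references_list, n):
    """Corpus-level modified n-gram precision: clipped n-gram counts are
    summed over all candidate sentences, then divided by the total number
    of candidate n-grams in the test corpus."""
    clipped_total, total = 0, 0
    for cand, refs in zip(candidates, references_list):
        cand_counts = Counter(ngrams(cand, n))
        max_ref_counts = Counter()
        for ref in refs:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        clipped_total += sum(min(count, max_ref_counts[gram])
                             for gram, count in cand_counts.items())
        total += sum(cand_counts.values())
    return clipped_total / total if total else 0.0
```

Here candidates is a list of tokenized candidate sentences and references_list holds the corresponding lists of tokenized reference translations for each source sentence.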
Recommended publications
  • BLEU Might Be Guilty but References Are Not Innocent
BLEU might be Guilty but References are not Innocent
Markus Freitag, David Grangier, Isaac Caswell (Google Research)
{freitag,grangier,icaswell}@google.com

Abstract: The quality of automatic metrics for machine translation has been increasingly called into question, especially for high-quality systems. This paper demonstrates that, while choice of metric is important, the nature of the references is also critical. We study different methods to collect references and compare their value in automated evaluation by reporting correlation with human evaluation for a variety of systems and metrics. Motivated by the finding that typical references exhibit poor diversity, concentrating around translationese language, we develop a paraphrasing task for linguists to perform on existing reference translations, which counteracts this bias. Our method yields higher correlation with human judgment not only for the submissions of WMT 2019 English→German, …

… past, especially when comparing rule-based and statistical systems (Bojar et al., 2016b; Koehn and Monz, 2006; Callison-Burch et al., 2006). Automated evaluations are however of crucial importance, especially for system development. Most decisions for architecture selection, hyper-parameter search and data filtering rely on automated evaluation at a pace and scale that would not be sustainable with human evaluations. Automated evaluation (Koehn, 2010; Papineni et al., 2002) typically relies on two crucial ingredients: a metric and a reference translation. Metrics generally measure the quality of a translation by assessing the overlap between the system output and the reference translation. Different overlap metrics have been proposed, aiming to improve correlation between human and automated evaluations.
  • A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output
Vis-Eval Metric Viewer: A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output
David Steele and Lucia Specia, Department of Computer Science, University of Sheffield, Sheffield, UK

Abstract: Machine Translation systems are usually evaluated and compared using automated evaluation metrics such as BLEU and METEOR to score the generated translations against human translations. However, the interaction with the output from the metrics is relatively limited and results are commonly a single score along with a few additional statistics. Whilst this may be enough for system comparison it does not provide much useful feedback or a means for inspecting translations and their respective scores. Vis-Eval Metric Viewer (VEMV) is a tool designed to provide visualisation of multiple evaluation scores so they can be easily interpreted by a user. VEMV takes in the source, reference, and hypothesis files as parameters, and scores the hypotheses using several popular evaluation metrics simultaneously.

… criticised for providing scores that can be non-intuitive and uninformative, especially at the sentence level (Zhang et al., 2004; Song et al., 2013; Babych, 2014). Additionally, scores across different metrics can be inconsistent with each other. This inconsistency can be an indicator of linguistic properties of the translations which should be further analysed. However, multiple metrics are not always used and any discrepancies among them tend to be ignored. Vis-Eval Metric Viewer (VEMV) was developed as a tool bearing in mind the aforementioned issues. It enables rapid evaluation of MT output, currently employing up to eight popular metrics. Results can be easily inspected (using a typical web browser) especially at the segment level, with each sentence (source, reference, and hypothesis) …
  • Why Word Error Rate Is Not a Good Metric for Speech Recognizer Training for the Speech Translation Task?
WHY WORD ERROR RATE IS NOT A GOOD METRIC FOR SPEECH RECOGNIZER TRAINING FOR THE SPEECH TRANSLATION TASK?
Xiaodong He, Li Deng, and Alex Acero, Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA

Abstract: Speech translation (ST) is an enabling technology for cross-lingual oral communication. A ST system consists of two major components: an automatic speech recognizer (ASR) and a machine translator (MT). Nowadays, most ASR systems are trained and tuned by minimizing word error rate (WER). However, WER counts word errors at the surface level. It does not consider the contextual and syntactic roles of a word, which are often critical for MT. In the end-to-end ST scenarios, whether WER is a good metric for the ASR component of the full ST system is an open issue and lacks systematic studies. In this paper, we report our …

… ASR, where the widely used metric is word error rate (WER), the translation accuracy is usually measured by the quantities including BLEU (Bi-Lingual Evaluation Understudy), NIST-score, and Translation Edit Rate (TER) [12][14]. BLEU measures the n-gram matches between the translation hypothesis and the reference(s). In [1][2], it was reported that translation accuracy degrades 8 to 10 BLEU points when the ASR output was used to replace the verbatim ASR transcript (i.e., assuming no ASR error). On the other hand, although WER is widely accepted as the de facto metric for ASR, it only measures word errors at the surface level. It takes no consideration of the contextual and syntactic roles of a word.
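For reference, WER is the minimum number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. The sketch below is our own illustration of that standard definition (not code from the paper), computed with a Levenshtein alignment over words:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a standard word-level Levenshtein alignment."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between ref[:i] and hyp[:j]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    return dist[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat is on the mat",
                      "the cat sat on a mat"))  # 2 errors / 6 words = 0.33
```

The alignment only checks surface identity of words, which is exactly the limitation the paper points out: a substitution that preserves meaning costs as much as one that destroys it.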
  • Machine Translation Evaluation
Machine Translation Evaluation
Sara Stymne (partly based on Philipp Koehn's slides for chapter 8)

Why Evaluation? How good is a given machine translation system? Which one is the best system for our purpose? How much did we improve our system? How can we tune our system to become better? Hard problem, since many different translations are acceptable → semantic equivalence / similarity.

Ten Translations of a Chinese Sentence:
Israeli officials are responsible for airport security.
Israel is in charge of the security at this airport.
The security work for this airport is the responsibility of the Israel government.
Israeli side was in charge of the security of this airport.
Israel is responsible for the airport's security.
Israel is responsible for safety work at this airport.
Israel presides over the security of the airport.
Israel took charge of the airport security.
The safety of this airport is taken charge of by Israel.
This airport's security is the responsibility of the Israeli security officials.
(a typical example from the 2001 NIST evaluation set)

Which translation is best?
Source: Färjetransporterna har minskat med 20,3 procent i år.
Gloss: The-ferry-transports have decreased by 20.3 percent in year.
Ref: Ferry transports are down by 20.3% in 2008.
Sys1: The ferry transports has reduced by 20,3% this year.
Sys2: This year, there has been a reduction of transports by ferry of 20.3 procent.
Sys3: Färjetransporterna are down by 20.3% in 2003.
Sys4: Ferry transports have a reduction of 20.3 percent in year.
Sys5: Transports are down by 20.3% in year.
  • Evaluation of Text Generation: A Survey (arXiv:2006.14799v2 [cs.CL], 18 May 2021)
Evaluation of Text Generation: A Survey
Asli Celikyilmaz (Facebook AI Research), Elizabeth Clark (University of Washington), Jianfeng Gao (Microsoft Research)

Abstract: The paper surveys evaluation methods of natural language generation (NLG) systems that have been developed in the last few years. We group NLG evaluation methods into three categories: (1) human-centric evaluation metrics, (2) automatic metrics that require no training, and (3) machine-learned metrics. For each category, we discuss the progress that has been made and the challenges still being faced, with a focus on the evaluation of recently proposed NLG tasks and neural NLG models. We then present two examples for task-specific NLG evaluations for automatic text summarization and long text generation, and conclude the paper by proposing future research directions.

1. Introduction: Natural language generation (NLG), a sub-field of natural language processing (NLP), deals with building software systems that can produce coherent and readable text (Reiter & Dale, 2000a). NLG is commonly considered a general term which encompasses a wide range of tasks that take a form of input (e.g., a structured input like a dataset or a table, a natural language prompt or even an image) and output a sequence of text that is coherent and understandable by humans. Hence, the field of NLG can be applied to a broad range of NLP tasks, such as generating responses to user questions in a chatbot, translating a sentence or a document from one language into another, offering suggestions to help write a story, or generating summaries of time-intensive data analysis.
  • A Survey of Evaluation Metrics Used for NLG Systems
A Survey of Evaluation Metrics Used for NLG Systems
Ananya B. Sai (Robert-Bosch Centre for Data Science and AI, Indian Institute of Technology Madras, India), Akash Kumar Mohankumar (Indian Institute of Technology Madras, India), Mitesh M. Khapra (Robert-Bosch Centre for Data Science and AI, Indian Institute of Technology Madras, India)

The success of Deep Learning has created a surge in interest in a wide range of Natural Language Generation (NLG) tasks. Deep Learning has not only pushed the state of the art in several existing NLG tasks but has also facilitated researchers to explore various newer NLG tasks such as image captioning. Such rapid progress in NLG has necessitated the development of accurate automatic evaluation metrics that would allow us to track the progress in the field of NLG. However, unlike classification tasks, automatically evaluating NLG systems in itself is a huge challenge. Several works have shown that early heuristic-based metrics such as BLEU, ROUGE are inadequate for capturing the nuances in the different NLG tasks. The expanding number of NLG models and the shortcomings of the current metrics has led to a rapid surge in the number of evaluation metrics proposed since 2014. Moreover, various evaluation metrics have shifted from using pre-determined heuristic-based formulae to trained transformer models. This rapid change in a relatively short time has led to the need for a survey of the existing NLG metrics to help existing and new researchers to quickly come up to speed with the developments that have happened in NLG evaluation in the last few years.
  • English and Arabic Speech Translation
Normalization for Automated Metrics: English and Arabic Speech Translation
Sherri Condon*, Gregory A. Sanders†, Dan Parvaz*, Alan Rubenstein*, Christy Doran*, John Aberdeen*, and Beatrice Oshika*
*The MITRE Corporation, 7525 Colshire Drive, McLean, Virginia 22102; †National Institute of Standards and Technology, 100 Bureau Drive, Stop 8940, Gaithersburg, Maryland 20899–8940
{scondon, dparvaz, Rubenstein, cdoran, aberdeen, bea}@mitre.org

Abstract: The Defense Advanced Research Projects Agency (DARPA) Spoken Language Communication and Translation System for Tactical Use (TRANSTAC) program has experimented with applying automated metrics to speech translation dialogues. For translations into English, BLEU, TER, and METEOR scores correlate well with human judgments, but scores for translation into Arabic correlate with human judgments less strongly. This paper provides evidence to support the hypothesis that automated measures of Arabic are lower due to variation and inflection in Arabic by demonstrating that normalization operations improve correlation …

… several different protocols and offline evaluations in which the systems process audio recordings and transcripts of interactions. Details of the TRANSTAC evaluation methods are described in Weiss et al. (2008), Sanders et al. (2008) and Condon et al. (2008). Because the inputs in the offline evaluation are the same for each system, we can analyze translations using automated metrics. Measures such as BiLingual Evaluation Understudy (BLEU) (Papineni et al., 2002), Translation Edit Rate (TER) (Snover et al., 2006), and Metric for Evaluation of Translation with Explicit word Ordering (METEOR) (Banerjee and Lavie, 2005) have been developed and widely used for translations of text and broadcast material, which have very different properties than dialog.
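This excerpt does not list the normalization operations the paper actually used. The sketch below only illustrates a few Arabic orthographic normalizations that are commonly applied before scoring with automated metrics (diacritic removal, collapsing alef, ya, and ta-marbuta variants), as an assumption about the kind of operation involved:

```python
import re

def normalize_arabic(text):
    """Illustrative Arabic normalization: strip diacritics and collapse
    common orthographic variants before computing BLEU/TER/METEOR."""
    text = re.sub(r"[\u064B-\u0652]", "", text)              # remove tashkeel (diacritics)
    text = text.replace("\u0640", "")                        # remove tatweel (kashida)
    text = re.sub(r"[\u0622\u0623\u0625]", "\u0627", text)   # alef variants -> bare alef
    text = text.replace("\u0649", "\u064A")                  # alef maqsura -> ya
    text = text.replace("\u0629", "\u0647")                  # ta marbuta -> ha
    return text
```

Normalizations of this kind collapse inflectional and orthographic variation so that surface-matching metrics penalize Arabic output less harshly.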
  • Automatic Evaluation Measures for Statistical Machine Translation System Optimization
Automatic Evaluation Measures for Statistical Machine Translation System Optimization
Arne Mauser, Saša Hasan and Hermann Ney
Human Language Technology and Pattern Recognition, Lehrstuhl für Informatik 6, Computer Science Department, RWTH Aachen University, D-52056 Aachen, Germany
{mauser,hasan,ney}@cs.rwth-aachen.de

Abstract: Evaluation of machine translation (MT) output is a challenging task. In most cases, there is no single correct translation. In the extreme case, two translations of the same input can have completely different words and sentence structure while still both being perfectly valid. Large projects and competitions for MT research raised the need for reliable and efficient evaluation of MT systems. For the funding side, the obvious motivation is to measure performance and progress of research. This often results in a specific measure or metric taken as the primary evaluation criterion. Do improvements in one measure really lead to improved MT performance? How does a gain in one evaluation metric affect other measures? This paper is going to answer these questions by a number of experiments.

Using a log-linear model (Och and Ney, 2002), we obtain:

$\Pr(e_1^I \mid f_1^J) = \dfrac{\exp\left(\sum_{m=1}^{M} \lambda_m h_m(e_1^I, f_1^J)\right)}{\sum_{e'^{I'}_1} \exp\left(\sum_{m=1}^{M} \lambda_m h_m(e'^{I'}_1, f_1^J)\right)} \qquad (3)$
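To make the log-linear formulation in equation (3) concrete, here is a small illustrative sketch (our own, not from the paper), with made-up feature values and a toy candidate set standing in for the full space of target sentences:

```python
import math

def log_linear_posterior(feature_scores, lambdas):
    """feature_scores: dict mapping candidate translation -> [h_1, ..., h_M];
    lambdas: weights [lambda_1, ..., lambda_M].
    Returns P(e | f) for each candidate, normalized over the candidate set."""
    unnorm = {e: math.exp(sum(l * h for l, h in zip(lambdas, hs)))
              for e, hs in feature_scores.items()}
    z = sum(unnorm.values())
    return {e: v / z for e, v in unnorm.items()}

# Hypothetical feature values (e.g., language-model and translation-model log-scores).
candidates = {
    "ferry transports are down by 20.3 %": [-4.1, -6.0],
    "the ferry transports has reduced by 20.3 %": [-5.0, -5.2],
}
print(log_linear_posterior(candidates, lambdas=[1.0, 0.5]))
```

In practice the normalization runs over an n-best list of candidates rather than all possible sentences, and the weights λ_m are exactly what evaluation-metric-driven optimization tunes.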
  • Re-Evaluating the Role of BLEU in Machine Translation Research
Re-evaluating the Role of BLEU in Machine Translation Research
Chris Callison-Burch, Miles Osborne, Philipp Koehn (School of Informatics, University of Edinburgh, 2 Buccleuch Place, Edinburgh, EH8 9LW)

Abstract: We argue that the machine translation community is overly reliant on the Bleu machine translation evaluation metric. We show that an improved Bleu score is neither necessary nor sufficient for achieving an actual improvement in translation quality, and give two significant counterexamples to Bleu's correlation with human judgments of quality. This offers new potential for research which was previously deemed unpromising by an inability to improve upon Bleu scores.

1 Introduction: Over the past five years progress in machine translation, and to a lesser extent progress in natural language generation tasks such as summarization, … Bleu to the extent that it currently is. In this paper we give a number of counterexamples for Bleu's correlation with human judgments. We show that under some circumstances an improvement in Bleu is not sufficient to reflect a genuine improvement in translation quality, and in other circumstances that it is not necessary to improve Bleu in order to achieve a noticeable improvement in translation quality. We argue that Bleu is insufficient by showing that Bleu admits a huge amount of variation for identically scored hypotheses. Typically there are millions of variations on a hypothesis translation that receive the same Bleu score. Because not all these variations are equally grammatically or semantically plausible, there are translations which have the same Bleu score but a worse human evaluation.
  • Machine Translation: Training (Introduction to Natural Language Processing, Computer Science 585, Fall 2009, University of Massachusetts Amherst)
Machine Translation: Training
Introduction to Natural Language Processing, Computer Science 585, Fall 2009, University of Massachusetts Amherst
David Smith

Training: Which features of data predict good translations?

Training: Generative/Discriminative
• Generative
  – Maximum likelihood training: max p(data)
  – "Count and normalize"
  – Maximum likelihood with hidden structure: Expectation Maximization (EM)
• Discriminative training
  – Maximum conditional likelihood
  – Minimum error/risk training
  – Other criteria: perceptron and max. margin

"Count and Normalize"
• Language modeling example: assume the probability of a word depends only on the previous 2 words.
• The slide shows 20 corpus snippets of the form "... into the programme ...", "... into the disease ...", "... into the next ...", "... into the Union ...", and so on; "disease" follows "into the" in 3 of the 20, so p(disease | into the) = 3/20 = 0.15.
• "Smoothing" reflects a prior belief that p(breech | into the) > 0 despite these 20 examples.

Phrase Models
• Example sentence pair: "I did not unfortunately receive an answer to this question" / "Auf diese Frage habe ich leider keine Antwort bekommen".
• Assume word alignments are given; some phrase pairs extracted from the alignment are good, some are bad.

"Count and Normalize"
• Usual approach: treat relative frequencies of source phrase s and target phrase t as probabilities.
• This leads to overcounting when not all segmentations are legal due to unaligned words.
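A minimal sketch of the "count and normalize" estimate from the slides (our own illustrative code, with a toy corpus standing in for the 20 snippets):

```python
from collections import Counter, defaultdict

def trigram_mle(corpus_sentences):
    """Maximum likelihood trigram model: count each (w1, w2, w3) and
    normalize by the total count of its (w1, w2) context."""
    context_counts = defaultdict(Counter)
    for sentence in corpus_sentences:
        words = sentence.split()
        for w1, w2, w3 in zip(words, words[1:], words[2:]):
            context_counts[(w1, w2)][w3] += 1
    return {ctx: {w: c / sum(counts.values()) for w, c in counts.items()}
            for ctx, counts in context_counts.items()}

corpus = ["looking into the disease", "research into the programme",
          "insight into the disease", "entry into the Union"]
model = trigram_mle(corpus)
print(model[("into", "the")])  # {'disease': 0.5, 'programme': 0.25, 'Union': 0.25}
```

The unsmoothed estimate assigns probability 0 to any word never seen after "into the", which is exactly what the slide's smoothing remark addresses.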
  • Minimum Error Rate Training in Statistical Machine Translation
Minimum Error Rate Training in Statistical Machine Translation
Franz Josef Och, Information Sciences Institute, University of Southern California, 4676 Admiralty Way, Suite 1001, Marina del Rey, CA 90292

Abstract: Often, the training procedure for statistical machine translation models is based on maximum likelihood or related criteria. A general problem of this approach is that there is only a loose relation to the final translation quality on unseen text. In this paper, we analyze various training criteria which directly optimize translation quality. These training criteria make use of recently proposed automatic evaluation metrics. We describe a new algorithm for efficient training an unsmoothed error count. We show that significantly better results can often be obtained if the final evaluation criterion is taken directly into account as part of the training procedure.

1 Introduction: Many tasks in natural language processing have evaluation criteria that go beyond simply counting the number of wrong decisions the system … statistical approach and the final evaluation criterion used to measure success in a task. Ideally, we would like to train our model parameters such that the end-to-end performance in some application is optimal. In this paper, we investigate methods to efficiently optimize model parameters with respect to machine translation quality as measured by automatic evaluation criteria such as word error rate and BLEU.

2 Statistical Machine Translation with Log-linear Models: Let us assume that we are given a source ('French') sentence $f_1^J = f_1, \ldots, f_j, \ldots, f_J$, which is to be translated into a target ('English') sentence $e_1^I = e_1, \ldots, e_i, \ldots, e_I$. Among all possible target sentences, we will choose the sentence with the highest probability:

$\hat{e}_1^I = \operatorname*{argmax}_{e_1^I} \Pr(e_1^I \mid f_1^J) \qquad (1)$

The argmax operation denotes the search problem, i.e. …
  • LEPOR: An Augmented Machine Translation Evaluation Metric
LEPOR: An Augmented Machine Translation Evaluation Metric
by Lifeng Han, Aaron
A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Software Engineering, Faculty of Science and Technology, University of Macau, 2014. Thesis supervisors: Dr. Lidia S. Chao and Dr. Derek F. Wong.

Abstract: Machine translation (MT) has developed into one of the hottest research topics in the natural language processing (NLP) literature. One important issue in MT is how to evaluate an MT system reasonably and tell whether the translation system makes an improvement or not. The traditional manual judgment methods are expensive, time-consuming, unrepeatable, and sometimes show low agreement.