BLEU
BLEU Might Be Guilty but References Are Not Innocent
A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output
Why Word Error Rate Is Not a Good Metric for Speech Recognizer Training for the Speech Translation Task?
Machine Translation Evaluation
arXiv:2006.14799v2 [cs.CL] 18 May 2021
A Survey of Evaluation Metrics Used for NLG Systems
English and Arabic Speech Translation
Automatic Evaluation Measures for Statistical Machine Translation System Optimization
Re-Evaluating the Role of BLEU in Machine Translation Research
Introduction to Natural Language Processing, Computer Science 585, Fall 2009, University of Massachusetts Amherst
Minimum Error Rate Training in Statistical Machine Translation
LEPOR: An Augmented Machine Translation Evaluation Metric
BERTScore: Evaluating Text Generation with BERT
The Significance of Recall in Automatic Metrics for MT Evaluation
Towards Automatic Error Analysis of Machine Translation Output
BLEU: a Method for Automatic Evaluation of Machine Translation
Word Error Rates: Decomposition Over POS Classes and Applications for Error Analysis
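Several of the titles above concern the BLEU metric (Papineni et al., 2002): the geometric mean of modified n-gram precisions multiplied by a brevity penalty. A minimal single-reference sketch, without the smoothing and multi-reference support that production implementations add:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """BLEU as defined by Papineni et al. (2002), simplified to one
    reference and no smoothing (a sketch, not a full implementation)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # modified precision: clip candidate counts by reference counts
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        precisions.append(overlap / total)
    # brevity penalty: penalize candidates shorter than the reference
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat is on the mat".split()
print(sentence_bleu(ref, ref))  # identical sentences score 1.0
```

Because an exact match at every n-gram order yields all precisions equal to 1 and a brevity penalty of 1, a candidate identical to the reference scores exactly 1.0; libraries such as NLTK and sacreBLEU additionally smooth the zero-count case that this sketch short-circuits.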