Title: Machine Translation Evaluation Versus Quality Estimation
Authors: SPECIA Lucia; RAJ Dhwaj; TURCHI Marco
Citation: Machine Translation vol. 24 no. 1 pp. 39-50
Publisher: Springer
Publication Year: 2010
JRC Publication N°: JRC58892
ISSN: 0922-6567 (Print) 1573-0573 (Online)
URI: http://www.springerlink.com/content/m537r4201r6l3767/
http://publications.jrc.ec.europa.eu/repository/handle/JRC58892
DOI: 10.1007/s10590-010-9077-2
Type: Articles in Journals
Abstract: Most evaluation metrics for machine translation (MT) require reference translations for each sentence in order to produce a score reflecting certain aspects of its quality. The de facto standard metrics, BLEU and NIST, are known to correlate well with human evaluation at the corpus level, but not at the segment level. In an attempt to overcome these two limitations, we address the problem of evaluating MT quality as a prediction task, in which reference-independent features are extracted from the input sentences and their translations, and a quality score is obtained from models learned from training data. We show that this approach yields better correlation with human evaluation than commonly used metrics, even with models trained on different MT systems, language pairs and text domains.
JRC Institute: Institute for the Protection and Security of the Citizen
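
As a rough illustration of the prediction task described in the abstract, the sketch below extracts a few shallow, reference-independent features from a (source, translation) pair and trains a regression model to predict a quality score. The feature set, the toy training data and the choice of a scikit-learn SVM regressor are illustrative assumptions, not the configuration used in the paper.

# Minimal quality-estimation sketch: reference-free features + SVM regression.
# Features, data and hyper-parameters are illustrative placeholders only.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def features(source, translation):
    """Shallow features computed without any reference translation."""
    src, tgt = source.split(), translation.split()
    return [
        len(src),                               # source length in tokens
        len(tgt),                               # translation length in tokens
        len(tgt) / max(len(src), 1),            # length ratio
        sum(map(len, tgt)) / max(len(tgt), 1),  # average target token length
        len(set(tgt)) / max(len(tgt), 1),       # target type/token ratio
    ]

# Hypothetical training data: (source, MT output) pairs with human quality scores.
train_pairs = [
    ("the cat sat on the mat", "le chat s'est assis sur le tapis"),
    ("good morning everyone", "bonjour tout le monde"),
    ("good morning", "bonjour le le matin"),
]
train_scores = [4.5, 4.0, 2.5]  # e.g. 1-5 human adequacy judgements

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit([features(s, t) for s, t in train_pairs], train_scores)

# Score an unseen segment with no reference translation available.
print(model.predict([features("see you tomorrow", "à demain")]))

Because no reference is needed at prediction time, such a model can in principle score output from MT systems, language pairs and domains other than those seen in training, which is the generalisation the abstract reports.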
