|Title:||Machine Translation for Multilingual Summary Content Evaluation|
|Authors:||Josef Steinberger; Marco Turchi|
|Publisher:||Association for Computational Linguistics (ACL)|
|Type:||Articles in periodicals and books|
|Abstract:||The multilingual summarization pilot task at TAC’11 raised many of the problems we face when evaluating summary quality across languages. The additional language dimension greatly increases annotation costs. For the TAC pilot task, English articles were first translated into six other languages, model summaries were written, and the submitted system summaries were evaluated. We begin by discussing whether ROUGE can produce system rankings similar to those obtained from manual summary scoring, measuring the correlation between the two. We then study three ways of projecting summaries into a different language: projection through sentence alignment in the case of parallel corpora, simple summary translation, and summarizing machine-translated articles. Building such summaries gives the opportunity to run additional experiments and reinforce the evaluation. Finally, we investigate whether machine-translated models can perform close to original models.|
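The abstract's first question, whether ROUGE rankings agree with manual scoring, comes down to a rank-correlation computation. A minimal sketch in Python, using a hand-rolled Kendall's tau and entirely hypothetical system scores (neither the helper `kendall_tau` nor the numbers come from the paper):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation between two equally long score lists
    (no tie correction): (concordant - discordant) / total pairs."""
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical scores for five summarization systems:
rouge_scores = [0.42, 0.38, 0.35, 0.33, 0.30]   # automatic metric
manual_scores = [4.1, 4.3, 3.2, 3.0, 2.8]       # human quality ratings

print(kendall_tau(rouge_scores, manual_scores))  # → 0.8
```

A tau close to 1 would indicate that the automatic metric ranks systems much as the annotators do; in practice, library implementations such as `scipy.stats.kendalltau` also handle tied scores.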
|JRC Directorate:||Space, Security and Migration|