NIST 2009 Open Machine Translation Evaluation (MT09)
Informal System Combination Results

Date of release: Tue Oct 27 15:48:58 2009
Version: mt09_public_v1

Introduction

The NIST 2009 Open Machine Translation Evaluation (MT09) is part of an ongoing series of evaluations of human language translation technology. NIST conducts these evaluations in order to support machine translation (MT) research and help advance the state of the art in machine translation technology. These evaluations provide an important contribution to the direction of research efforts and the calibration of technical capabilities. The evaluation was administered as outlined in the official MT09 evaluation plan.

Informal System Combination was an informal, diagnostic MT09 task offered after the official evaluation period. Output from several MT09 systems on the Arabic-to-English and Urdu-to-English Current tests was anonymized and provided for system combination purposes. Participants in this category produced new output based on those provided translations.
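
To make the task concrete, the sketch below implements one well-known baseline for system combination, consensus-based hypothesis selection: for each source segment, it picks the candidate translation that agrees most, on average, with the other systems' candidates. This is purely illustrative and is not the method of any MT09 participant; the token-level F1 similarity and the data layout are simplifying assumptions.

    from collections import Counter

    def token_f1(hyp, other):
        # Token-overlap F1 between two tokenized hypotheses: a simple
        # stand-in for the similarity measures used in real combiners.
        overlap = sum((Counter(hyp) & Counter(other)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(hyp)
        recall = overlap / len(other)
        return 2 * precision * recall / (precision + recall)

    def combine_by_consensus(segments):
        # segments: one list per source segment, each holding the tokenized
        # hypotheses produced by the anonymized input systems.
        combined = []
        for hyps in segments:
            # Keep the hypothesis with the highest total similarity to the
            # other systems' hypotheses (a majority-agreement heuristic).
            best = max(hyps, key=lambda h: sum(token_f1(h, o)
                                               for o in hyps if o is not h))
            combined.append(best)
        return combined

    # Example with three anonymized systems and one segment.
    segment = ["the council approved the plan".split(),
               "the council has approved the plan".split(),
               "council approve plan".split()]
    print(" ".join(combine_by_consensus([segment])[0]))

Production combiners typically go further, for example by aligning the hypotheses into a confusion network and decoding a new sentence rather than selecting an existing one.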

Scores reported here are limited to primary Informal System Combination submissions.

Disclaimer

These results are not to be construed or represented as endorsements of any participant's system or commercial product, or as official findings on the part of NIST or the U.S. Government. Note that the results submitted by developers of commercial MT products were generally from research systems, not commercially available products. Since MT09 was an evaluation of research algorithms, the MT09 test design required local implementation by each participant. As such, participants were only required to submit their translation system output to NIST for uniform scoring and analysis. The systems themselves were not independently evaluated by NIST.

Certain commercial equipment, instruments, software, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the equipment, instruments, software, or materials are necessarily the best available for the purpose.

There is ongoing discussion within the MT research community regarding the most informative metrics for machine translation. The design and implementation of these metrics are themselves very much part of the research. At the present time, there is no single metric that has been deemed to be completely indicative of all aspects of system performance.
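
Several of the metrics reported below reduce to well-documented formulas. As one example, the following sketch computes textbook sentence-level BLEU-4 for a single reference: the geometric mean of modified 1- through 4-gram precisions, multiplied by a brevity penalty. It is a simplified illustration, not the mteval-v13a or bleu-1.04 implementation used for the official scores, which differ in tokenization, smoothing, and multiple-reference handling.

    import math
    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def bleu4(hypothesis, reference):
        """Textbook single-sentence, single-reference BLEU-4: the geometric
        mean of modified n-gram precisions (n = 1..4) times a brevity penalty."""
        hyp, ref = hypothesis.split(), reference.split()
        log_precision_sum = 0.0
        for n in range(1, 5):
            hyp_counts = Counter(ngrams(hyp, n))
            ref_counts = Counter(ngrams(ref, n))
            # "Modified" precision clips each hypothesis n-gram count at its
            # count in the reference, so repetition cannot inflate the score.
            clipped = sum((hyp_counts & ref_counts).values())
            if clipped == 0:
                return 0.0  # any zero precision zeroes the geometric mean
            log_precision_sum += math.log(clipped / sum(hyp_counts.values()))
        # The brevity penalty discourages hypotheses shorter than the reference.
        bp = 1.0 if len(hyp) >= len(ref) else math.exp(1.0 - len(ref) / len(hyp))
        return bp * math.exp(log_precision_sum / 4.0)

    print(bleu4("the council approved the plan today",
                "the council approved the plan on monday"))

Document-level BLEU, as reported in the tables below, aggregates the clipped n-gram counts over all segments before taking the geometric mean, rather than averaging per-sentence scores.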

The data, protocols, and metrics employed in this evaluation were chosen to support MT research and should not be construed as indicating how well these systems would perform in applications. Changes in the data domain or in the amount of data used to build a system can greatly influence system performance, and changes in the task protocols could reveal different performance strengths and weaknesses for these same systems.

For these reasons, this evaluation should not be interpreted as a product testing exercise, and the results should not be used to draw conclusions about which commercial products are best for a particular application.

History

mt09_public_v1 (Tue Oct 27 15:48:58 2009): Initial public release.

Evaluation Data

The Informal System Combination track used system output from the Arabic-to-English and Urdu-to-English Current tests. Approximately 30% of the test data was designated as a development set for system combination; the remaining system output was provided as the test set.

Language Pair       Data Genre   Development Set   Evaluation Set
Arabic-to-English   Newswire     17 documents      42 documents
                    Web          16 documents      40 documents
Urdu-to-English     Newswire     20 documents      48 documents
                    Web          48 documents      114 documents

Informal System Combination Results

Arabic-to-English (Table 1)

(In Tables 1 and 2, lower TER scores indicate better performance; for all other metrics, higher scores are better.)

Site ID         System                           BLEU-4 (mteval-v13a)       IBM BLEU (bleu-1.04)       NIST (mteval-v13a)         TER (tercom-0.7.25)        METEOR (meteor-0.7)
                                                 Overall  Newswire Web      Overall  Newswire Web      Overall  Newswire Web      Overall  Newswire Web      Overall  Newswire Web
bbn             BBN_a2e_isc_primary              0.5747   0.6440   0.4940   0.5747   0.6440   0.4938   11.82    11.84    10.41    0.3761   0.3220   0.4298   0.7043   0.7601   0.6469
sri             SRI_a2e_isc_primary              0.5543   0.6292   0.4733   0.5542   0.6291   0.4732   11.68    11.79    10.26    0.3788   0.3244   0.4328   0.6989   0.7474   0.6493
cmu-statxfer    CMU-Stat-Xfer_a2e_isc_primary    0.5530   0.6332   0.4663   0.5529   0.6330   0.4662   11.62    11.80    10.15    0.3854   0.3279   0.4427   0.7033   0.7518   0.6538
rwth            RWTH_a2e_isc_primary             0.5515   0.6412   0.4523   0.5517   0.6411   0.4523   11.56    11.86    9.879    0.3923   0.3229   0.4613   0.6928   0.7568   0.6272
jhu             jhu_a2e_isc_primary              0.5483   0.6294   0.4577   0.5481   0.6291   0.4574   11.55    11.73    10.01    0.3862   0.3272   0.4448   0.6919   0.7494   0.6330
hit-ltrc        HIT-LTRC_a2e_isc_primary         0.5037   0.5997   0.3982   0.5038   0.6000   0.3981   10.65    11.48    8.406    0.4135   0.3472   0.4793   0.6596   0.7249   0.5922
tubitak-uekae   TUBITAK_a2e_isc_primary          0.4603   0.5371   0.3779   0.4603   0.5371   0.3779   10.31    10.75    8.726    0.4525   0.3942   0.5105   0.6263   0.6882   0.5625

Highest individual system score in the ISC test set (system with the highest BLEU-4 score on the Overall data set):
                system08_unconstrained.xml       0.5008   0.5719   0.4245   0.5007   0.5720   0.4243   11.04    11.28    9.598    0.4229   0.3641   0.4813   0.6694   0.7271   0.6104

Urdu-to-English (Table 2)

Site ID         System                           BLEU-4 (mteval-v13a)       IBM BLEU (bleu-1.04)       NIST (mteval-v13a)         TER (tercom-0.7.25)        METEOR (meteor-0.7)
                                                 Overall  Newswire Web      Overall  Newswire Web      Overall  Newswire Web      Overall  Newswire Web      Overall  Newswire Web
rwth            RWTH_u2e_isc_primary(1)          0.3232   0.3768   0.2737   0.3235   0.3767   0.2740   8.822    9.274    7.425    0.5630   0.5383   0.5833   0.5539   0.6105   0.5046
jhu             jhu_u2e_isc_primary              0.3193   0.3796   0.2627   0.3191   0.3792   0.2627   8.736    9.197    7.418    0.5590   0.5317   0.5815   0.5512   0.6073   0.5022
cmu-statxfer    CMU-Stat-Xfer_u2e_isc_primary    0.3188   0.3821   0.2602   0.3188   0.3821   0.2602   8.694    9.154    7.353    0.5741   0.5422   0.6004   0.5560   0.6170   0.5030
hit-ltrc        HIT-LTRC_u2e_isc_primary         0.3103   0.3774   0.2453   0.3104   0.3773   0.2455   8.639    9.195    7.271    0.5820   0.5416   0.6152   0.5519   0.6184   0.4941

Highest individual system score in the ISC test set (system with the highest BLEU-4 score on the Overall data set):
                system09_constrained.xml         0.3104   0.3774   0.2456   0.3104   0.3773   0.2456   8.640    9.196    7.276    0.5816   0.5414   0.6146   0.5522   0.6186   0.4945

(1) rescored