Shared Task: Machine Translation of News

The recurring translation task of the WMT workshops focuses on news text and (mainly) European language pairs. This year the language pairs are Czech-English, German-English, Latvian-English, Finnish-English, Russian-English, Turkish-English, and Chinese-English.

The first three European language pairs are sponsored by the EU Horizon2020 projects QT21 and Cracker, the Finnish-English task is sponsored by the University of Helsinki, funding for the last two language pairs comes from Yandex, and the Chinese-English task is sponsored by several Chinese institutions (see footer). We provide parallel corpora for all languages as training data, and additional resources for download.

GOALS

The goals of the shared translation task are:

We hope that both beginners and established research groups will participate in this task.

IMPORTANT DATES

Release of training data for shared tasks: January 2017
Test data released (except zh-en/en-zh): May 2, 2017
Translation submission deadline: May 8, 2017
Test week for en-zh/zh-en: May 15-22, 2017
Start of manual evaluation: May 15, 2017
End of manual evaluation (provisional): June 4, 2017

TASK DESCRIPTION

We provide training data for seven language pairs, and a common framework. The task is to improve upon current methods. This can be done in many ways; for instance, participants could try to:

Participants will use their systems to translate a test set of unseen sentences in the source language. Translation quality is measured by manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation.
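Among the automatic metrics, BLEU is the most commonly reported: it scores candidate translations by n-gram overlap with reference translations, discounted by a brevity penalty. A minimal corpus-level sketch in Python (simplified: single reference, no smoothing, uniform n-gram weights; not the exact scorer used by the task):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidates, references, max_n=4):
    """Unsmoothed corpus-level BLEU with a single reference per sentence."""
    matches = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # candidate n-gram counts, per order
    cand_len = ref_len = 0
    for cand, ref in zip(candidates, references):
        cand_len += len(cand)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            c_ngrams = ngrams(cand, n)
            r_ngrams = ngrams(ref, n)
            matches[n - 1] += sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
            totals[n - 1] += max(len(cand) - n + 1, 0)
    if 0 in matches:  # unsmoothed BLEU is zero if any precision is zero
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # brevity penalty punishes candidates shorter than the references
    bp = 1.0 if cand_len > ref_len else math.exp(1 - ref_len / cand_len)
    return bp * math.exp(log_prec)

# a perfect match scores 1.0
print(bleu([["the", "cat", "sat", "on", "the", "mat"]],
           [["the", "cat", "sat", "on", "the", "mat"]]))  # → 1.0
```

In practice, submissions are compared with a standardized scorer on detokenized output, so self-reported scores from ad-hoc implementations like this one are only a rough development signal.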

You may participate in any or all of the seven language pairs. For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set.

We also strongly encourage you to participate even if you use your own training corpus, your own sentence alignment, your own language model, or your own decoder.

If you use additional training data or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.

Your submission report should highlight in which ways your own methods and data differ from the standard task. We may break down submitted results into different tracks, based on what resources were used. We are mostly interested in submissions that are constrained to the provided training data, so that the comparison is focused on the methods, not on the data used. You may submit contrastive runs to demonstrate the benefit of additional training data.

TRAINING DATA

The provided data is mainly taken from public data sources such as the Europarl corpus and the UN corpus. Additional training data is taken from the News Commentary corpus, which we re-extract every year for the task.

We have added suitable additional training data to some of the language pairs.

You may also use the following monolingual corpora released by the LDC:

Note that the released data is not tokenized and includes sentences of any length (including empty sentences). All data is in Unicode (UTF-8) format. The following Moses tools allow the processing of the training data into tokenized format:

These tools are available in the Moses git repository.
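Since the released data is untokenized and contains empty lines, a common first step is to tokenize with the Moses scripts (tokenizer.perl) and then drop unusable sentence pairs, as Moses' clean-corpus-n.perl does. As an illustration of the cleaning step only (a sketch, not a substitute for the Moses scripts, and omitting their length-ratio check), in Python:

```python
def clean_parallel(src_lines, tgt_lines, max_len=80):
    """Keep only sentence pairs where both sides are non-empty and
    no longer than max_len tokens, mirroring the basic filtering of
    Moses' clean-corpus-n.perl (minus its length-ratio check)."""
    kept = []
    for src, tgt in zip(src_lines, tgt_lines):
        src_toks, tgt_toks = src.split(), tgt.split()
        if 0 < len(src_toks) <= max_len and 0 < len(tgt_toks) <= max_len:
            kept.append((src, tgt))
    return kept

pairs = clean_parallel(["hello world", "", "a b"],
                       ["hallo welt", "x", "c d"])
# the empty source line and its target are dropped
```

Filtering must be applied to both sides in parallel, as above, so that the sentence alignment between source and target files is preserved.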

DEVELOPMENT DATA

To evaluate your system during development, we suggest using the 2016 test set. The data is provided in raw text format and in the SGML format used by the NIST scoring tool. We also release other dev and test sets from previous years.
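The NIST scorer expects system output wrapped in a simple SGML structure (a test set element containing documents, each containing numbered segments). A minimal Python sketch of such a wrapper; the setid, docid, and sysid values here are placeholders, and the real test sets carry their own document structure that your output must mirror:

```python
from xml.sax.saxutils import escape

def to_sgml(segments, setid="newstest2016", srclang="de", trglang="en",
            sysid="my-system", docid="doc1"):
    """Wrap plain-text output lines in the SGML layout expected by the
    NIST mteval scorer. All attribute values are illustrative placeholders."""
    lines = [f'<tstset setid="{setid}" srclang="{srclang}" trglang="{trglang}">',
             f'<doc sysid="{sysid}" docid="{docid}">']
    for i, seg in enumerate(segments, start=1):
        # escape &, <, > so the markup stays well-formed
        lines.append(f'<seg id="{i}">{escape(seg)}</seg>')
    lines.append('</doc>')
    lines.append('</tstset>')
    return "\n".join(lines)

sgml = to_sgml(["Hello & welcome.", "Second sentence."])
```

Moses also ships a wrap-xml.perl script that produces this format directly from the released source SGML, which is usually the safer route.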

Dev and test sets are available for every year from 2008 to 2016, covering CS-EN, DE-EN, FI-EN, LV-EN, RU-EN, TR-EN, and ZH-EN. The longest-running pairs (such as Czech-English and German-English) are covered for all years, while more recently added pairs have test sets only from recent years.

The 2017 test sets will be created from a sample of online newspapers from August 2016. Each English-X set is created from equal-sized samples of English text and of language X text, with each sample professionally translated into the other language.

We have released development data for the tasks that are new this year, i.e. Chinese-English and Latvian-English. It was created in the same way as the test sets.

The news-test2011 set has three additional Czech translations that you may want to use. You can download them from Charles University.

DOWNLOAD