Shared Task: Machine Translation of News

The recurring translation task of the WMT workshops focuses on news text. For this year the language pairs are:

We provide parallel corpora for all languages as training data, and additional resources for download.


The goals of the shared translation task are:

We hope that both beginners and established research groups will participate in this task.


Release of training data for shared tasks: by February 29, 2020
Test suite source texts must reach us: June 12, 2020 (updated from May 30)
Test data released: June 22, 2020 (updated from June 8)
Translation submission deadline: June 29, 2020 (updated from June 15; Anywhere on Earth)
Translated test suites shipped back to test suite authors: July 6, 2020 (updated from July 2)
Start of manual evaluation: July 2020 (TBD)
End of manual evaluation: July 2020 (TBD)


We provide training data for all language pairs, and a common framework. The task is to improve on current methods. We encourage broad participation -- if you feel that your method is interesting but not state-of-the-art, please participate anyway, both to disseminate it and to help measure progress. Participants will use their systems to translate a test set of unseen sentences in the source language. Translation quality is measured by manual evaluation and by various automatic evaluation metrics. Participants agree to contribute about eight hours of work per system submission to the manual evaluation.

You may participate in any or all of the language pairs. For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set. You are not limited to this training set, and you are not limited to the training set provided for your target language pair. This means that multilingual systems are allowed, and classed as constrained as long as they use only data released for WMT20.

If you use additional training data (not provided by the WMT20 organisers) or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.

Your submission report should highlight in which ways your own methods and data differ from the standard task. You should make it clear which tools you used, and which training sets you used.

Unsupervised learning

We also announce tasks on unsupervised MT, and on very low-resource supervised MT. Details are given on the corresponding task page.

Document-level MT

For this year, the test sets for English to/from German and Czech, and for English to Chinese, will be produced by translating at the paragraph level. This means that the segments will be longer than usual and will contain multiple sentences. Note that online news text typically has short paragraphs (for most sources, the average paragraph is under two sentences).
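To make the paragraph-level claim concrete, here is a small sketch of how one might estimate the average number of sentences per segment. The sentence splitter below is deliberately naive (splitting on sentence-final punctuation); real counts would use a proper sentence splitter, and the sample segments are invented for illustration.

```python
import re

def avg_sentences_per_segment(segments):
    """Naive average sentence count per segment, splitting on
    sentence-final punctuation followed by whitespace. Only an
    illustration of the paragraph-level statistic, not the tool
    used to compute the figures quoted above."""
    counts = [len([s for s in re.split(r'(?<=[.!?])\s+', seg.strip()) if s])
              for seg in segments]
    return sum(counts) / len(counts)

segments = ["Short paragraph.", "Two sentences here. Second one follows!"]
print(avg_sentences_per_segment(segments))  # -> 1.5
```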

More details on evaluation soon ...

Additional Test Suites Linked to News Translation Task

Test Suites follow the established format from previous years.

Please see the details here:



The data released for the WMT20 news translation task can be freely used for research purposes. We ask only that you cite the WMT20 shared task overview paper and respect any additional citation requirements on the individual data sets. For other uses of the data, please consult the original owners of the data sets.


We aim to use publicly available sources of data wherever possible. Our main sources of training data are the Europarl corpus, the UN corpus, the news-commentary corpus and the ParaCrawl corpus. We also release a monolingual News Crawl corpus. Other language-specific corpora will be made available.

We have added suitable additional training data to some of the language pairs.

You may also use the following monolingual corpora released by the LDC:

Note that the released data is not tokenized and includes sentences of any length (including empty sentences). All data is in Unicode (UTF-8) format. The following Moses tools allow the processing of the training data into tokenized format:

These tools are available in the Moses git repository.
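To illustrate what "tokenized format" means here, the sketch below separates punctuation from words so that scoring scripts see consistent tokens. This is only a rough stdlib approximation for illustration; it is NOT the Moses tokenizer and should not replace the Moses scripts when preparing real submissions.

```python
import re

def rough_tokenize(line: str) -> str:
    """Very rough illustration of tokenization: separate punctuation
    from words. Use the real Moses tokenizer script for submissions;
    this sketch handles only a few simple cases."""
    line = line.strip()
    # Put spaces around common punctuation characters.
    line = re.sub(r'([.,!?;:()"\'])', r' \1 ', line)
    # Collapse the repeated whitespace introduced above.
    return re.sub(r'\s+', ' ', line).strip()

print(rough_tokenize('Hello, world! "Quoted" text.'))
# -> Hello , world ! " Quoted " text .
```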


To evaluate your system during development, we suggest using the 2019 test set. Evaluation can be performed with the NIST tool or (recommended) with sacreBLEU, which will automatically download previous WMT test sets for you. We also release other dev and test sets from previous years. For the new language pairs, we release dev sets in February, prepared in the same way as the test sets.


The 2020 test sets will be created from a sample of online newspapers from September-November 2019. The sources of the test sets will be original text from the online news, whereas the targets will be human-produced translations. This is in contrast to the test sets up to and including 2018, which were a 50-50 mixture of test sets produced in this way, and test sets produced in the reverse direction (i.e. with the original text on the target side).

We have released development data for the tasks that are new this year. It is created in the same way as the test set and included in the development tarball.

The news-test2011 set has three additional Czech translations that you may want to use. You can download them from Charles University.

New: Extra references (both translated and paraphrased) for the English to German WMT19 test set have been contributed by Google Research.



Submission this year uses the Ocelot tool.

Each submitted file must be in the SGML format used by standard scoring scripts such as the NIST BLEU scorer or TER.

This format is similar to the one used in the source test set files that were released, except for:

The script wrap-xml.perl makes it easy to convert an output file in one-segment-per-line format into the required SGML file:

Example: wrap-xml.perl en Google < decoder-output > decoder-output.sgm
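The exact SGML that wrap-xml.perl emits is copied from the released source SGML file; as a rough illustration of the target shape, the sketch below wraps one-segment-per-line output into NIST-style SGML. The `setid`, `docid`, and segment strings are hypothetical placeholders, not values from any real test set.

```python
# Minimal sketch of what wrap-xml.perl produces: one-segment-per-line
# decoder output wrapped into NIST-style SGML. The setid/docid values
# are hypothetical; the real script takes them from the source SGML.

def wrap_sgml(segments, trglang="en", sysid="Google",
              setid="newstest2020", docid="doc.1"):
    lines = [f'<tstset trglang="{trglang}" setid="{setid}" srclang="any">',
             f'<doc sysid="{sysid}" docid="{docid}">']
    for i, seg in enumerate(segments, start=1):
        lines.append(f'<seg id="{i}">{seg}</seg>')
    lines += ['</doc>', '</tstset>']
    return '\n'.join(lines)

print(wrap_sgml(["First translated sentence .", "Second translated sentence ."]))
```

Real test sets contain many documents, so the actual output groups segments under one `<doc>` element per source document.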


Evaluation will be done both automatically and by human judgment.


This task would not have been possible without the sponsorship of test sets by Microsoft, Yandex, Tilde, NTT, LinguaCustodia, the University of Tokyo, and the National Research Council of Canada, and without funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 825299 (GoURMET) and 825460 (Elitr). We gratefully acknowledge the Legislative Assembly of Nunavut and Nunatsiaq News for providing the Inuktitut–English dev and test data.