The recurring translation task of the WMT workshops focuses on news text. For this year the language pairs are:
The goals of the shared translation task are:
|Release of training data for shared tasks (by)||29 February 2020|
|Test suite source texts must reach us|
|Test data released|
|Translation submission deadline|
|Translated test suites shipped back to test suites authors|
|Start of manual evaluation||TBD July 2020|
|End of manual evaluation||TBD July 2020|
We provide training data for all language pairs, and a common framework. The task is to improve current methods. We encourage broad participation -- if you feel that your method is interesting but not state-of-the-art, then please participate in order to disseminate it and measure progress. Participants will use their systems to translate a test set of unseen sentences in the source language. The translation quality is measured by a manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation per system submission.
You may participate in any or all of the language pairs. For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set. You are not limited to this training set, and you are not limited to the training set provided for your target language pair. This means that multilingual systems are allowed, and classed as constrained as long as they use only data released for WMT20.
If you use additional training data (not provided by the WMT20 organisers) or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.
Your submission report should highlight in which ways your own methods and data differ from the standard task. You should make it clear which tools you used, and which training sets you used.
For this year, the test sets for English to/from German and Czech, and for English to Chinese, will be produced by translating at the paragraph level. This means that the segments will be longer than usual and will contain multiple sentences. Note that online news text typically has short paragraphs (on average, fewer than two sentences per source).
More details on evaluation soon ...
Test Suites follow the established format from previous years.
Please see the details here: http://tinyurl.com/wmt20-test-suites
The data released for the WMT20 news translation task can be freely used for research purposes; we ask only that you cite the WMT20 shared task overview paper and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult with the original owners of the data sets.
We aim to use publicly available sources of data wherever possible. Our main sources of training data are the Europarl corpus, the UN corpus, the news-commentary corpus and the ParaCrawl corpus. We also release a monolingual News Crawl corpus. Other language-specific corpora will be made available.
We have added suitable additional training data to some of the language pairs. You may also use the following monolingual corpora released by the LDC:
Note that the released data is not tokenized and includes sentences of any length (including empty sentences). All data is in Unicode (UTF-8) format. The following Moses tools allow the processing of the training data into tokenized format:
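Because the release is untokenized and contains sentences of any length (including empty ones), a filtering pass is usually needed before training. A minimal sketch (the token-length cutoff of 250 is illustrative, not part of the task; real tokenization would use the Moses tools above, here a plain whitespace split stands in):

```python
def clean_parallel(src_lines, tgt_lines, max_len=250):
    """Drop empty pairs and pairs where either side exceeds max_len tokens.

    A rough whitespace split stands in for real tokenization here; for
    training you would normally run the Moses tokenizer (tokenizer.perl)
    or a comparable tool first.
    """
    kept = []
    for src, tgt in zip(src_lines, tgt_lines):
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt:
            continue  # skip empty sentences, which the raw release contains
        if len(src.split()) > max_len or len(tgt.split()) > max_len:
            continue  # skip overly long segments
        kept.append((src, tgt))
    return kept
```

The same pass is also a convenient place to verify that both sides of a parallel corpus have the same number of lines before training.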
To evaluate your system during development, we suggest using the 2019 test set. Evaluation can be performed with the NIST tool or (recommended) with sacreBLEU, which will automatically download previous WMT test sets for you. We also release other dev and test sets from previous years. For the new language pairs, we release dev sets in February, prepared in the same way as the test sets.
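For official scores you should rely on sacreBLEU as recommended above; for quick sanity checks during development, the underlying corpus-BLEU computation can be sketched in a few lines (single reference per hypothesis, whitespace tokenization, standard brevity penalty -- all simplifications relative to sacreBLEU's signature-controlled scoring):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of n-grams of the given order in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Plain corpus-level BLEU with one reference per hypothesis.

    Clipped n-gram matches are accumulated over the whole corpus, then
    combined via the geometric mean and the brevity penalty. Use
    sacreBLEU for any reported WMT result.
    """
    matches = [0] * max_n
    totals = [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            # Counter intersection gives the clipped match counts
            matches[n - 1] += sum((ngrams(h, n) & ngrams(r, n)).values())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(matches) == 0:
        return 0.0  # geometric mean is zero if any precision is zero
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)
```

A perfect match scores 100; disjoint output scores 0. Note this toy version returns 0 whenever any n-gram order has no matches, whereas sacreBLEU applies smoothing options.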
The 2020 test sets will be created from a sample of online newspapers from September-November 2019. The sources of the test sets will be original text from the online news, whereas the targets will be human-produced translations. This is in contrast to the test sets up to and including 2018, which were a 50-50 mixture of test sets produced in this way, and test sets produced in the reverse direction (i.e. with the original text on the target side).
We have released development data for the tasks that are new this year. It is created in the same way as the test set and included in the development tarball.
The news-test2011 set has three additional Czech translations that you may want to use. You can download them from Charles University.
New: Extra references (both translated and paraphrased) for the English to German WMT19 test set have been contributed by Google Research.
|Europarl v10||✓||✓||✓||✓||Updated Now with metadata. Text is unchanged.|
|ParaCrawl v5.1||✓||✓||✓||✓||✓||✓||✓||✓||Updated for 2020 There is a new ParaCrawl version (5.1) for en-cs, en-de and en-pl. TMXs (including metadata) are available from the ParaCrawl website. The en-ru and fr-de versions are unchanged. The Japanese version of ParaCrawl (JParaCrawl) was prepared by NTT. The km-en and ps-en corpora were created for the shared task on parallel corpus filtering.|
|✓||✓||✓||✓||Same as last year. The fr-de version is here|
|News Commentary v15||✓||✓||✓||✓||✓||✓||Updated|
|CzEng 2.0||✓||Register and download CzEng 2.0. Updated The new CzEng includes synthetic data, as well as all cs-en data supplied for the task. See the CzEng README for more details.|
|✓||✓||✓||✓||✓||✓||✓||✓||✓||✓||Updated for 2020|
|✓||✓||Register and download|
|✓||✓||✓||Updated Recrawled version of Tilde Rapid corpus. de-en and cs-en contain document information.|
|✓||Register and download Same as CWMT corpus from last year, new host.|
|✓||✓||✓||✓||✓||✓||✓||✓||New We release the official version, with added language identification (from cld2).|
|✓||✓||✓||New Back-translated news. The cs-en data is contained in CzEng. The zh-en and ru-en data was produced for the University of Edinburgh systems in 2017 and 2018.|
|✓||Note: English side is lowercased.|
|✓||from IWSLT 2017 Evaluation Campaign.|
|✓||New A corpus extracted from the Indian Prime Minister's news updates.|
|✓||New A corpus extracted from the Koran.|
|✓||New A crawled Tamil-English corpus, and a machine-readable dictionary, created by the NLP Centre at the University of Moratuwa. Please use tag
|✓||New Two crawled, multilingual, corpora, released by the CVIT group in IIIT-H. The corpora are available here and here.|
|✓||A crawled corpus produced by UFAL, Charles University|
|✓||✓||Mostly from OPUS. For more details, see the shared task on parallel corpus filtering|
Experimentation with transfer learning, pivoting and related techniques is encouraged, and to this end we make available the following corpora:
|News crawl||✓||✓||✓||✓||✓||✓||✓||✓||✓||Updated Large corpora of crawled news, collected since 2007. Versions up to 2018 are as before. For de, cs and en, versions are available with document boundaries and without sentence-splitting.|
|News discussions||✓||✓||Updated Corpora crawled from comment sections of online newspapers. Available in English and French.|
|Europarl v10||✓||✓||✓||✓||✓||Monolingual version of European parliament crawl. Superset of the parallel version.|
|News Commentary||✓||✓||✓||✓||✓||✓||✓||Updated Monolingual text from news-commentary crawl. Superset of parallel version. Use v15.|
|Common Crawl||✓||✓||✓||✓||✓||✓||✓||✓||✓||✓||✓||✓||Deduplicated with development and evaluation sentences removed. English was updated 31 January 2016 to remove bad UTF-8. Downloads can be verified with SHA512 checksums. More English is available for unconstrained participants. Added Pashto and Khmer.|
|Wiki dumps||✓||✓||✓||✓||New Monolingual text from Wikipedia, extracted using WikiExtractor.|
Submission this year uses the Ocelot tool.
Each submitted file must be in the format used by standard scoring scripts such as NIST BLEU or TER.
This format is similar to the one used in the source test set files that were released, except that each file is wrapped in a tag of the form <tstset trglang="en" setid="newstest2019" srclang="any">, with trglang set to the target language code (e.g. en, de, ru). Important: srclang is always "any".
The script wrap-xml.perl makes the conversion of an output file in one-segment-per-line format into the required SGML file very easy:
wrap-xml.perl LANGUAGE SRC_SGML_FILE SYSTEM_NAME < IN > OUT
wrap-xml.perl en newstest2019-src.de.sgm Google < decoder-output > decoder-output.sgm
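If Perl is inconvenient, the wrapping step can also be sketched in Python: copy the source SGML skeleton, swap the srcset tag for a tstset tag with the target language, add a system id to each doc, and substitute each seg body with the corresponding line of decoder output. This is an illustrative approximation of what wrap-xml.perl does, not a drop-in replacement (it assumes one seg per line, in order):

```python
import re

def wrap_sgml(src_sgml_lines, translations, trglang, system_name):
    """Wrap one-segment-per-line decoder output into tstset SGML.

    Walks the source SGML line by line, rewriting the set tags and
    filling each <seg>...</seg> with the next translation. Illustrative
    only -- use wrap-xml.perl for actual submissions.
    """
    out, it = [], iter(translations)
    for line in src_sgml_lines:
        line = line.replace("<srcset", '<tstset trglang="%s"' % trglang)
        line = line.replace("</srcset>", "</tstset>")
        # tag each document with the submitting system's name
        line = re.sub(r"<doc", '<doc sysid="%s"' % system_name, line, count=1)
        m = re.match(r"(\s*<seg[^>]*>)(.*)(</seg>\s*)$", line)
        if m:
            # replace the source segment text with the translation
            line = m.group(1) + next(it).strip() + m.group(3)
        out.append(line)
    return out
```

The seg ids and doc attributes of the source file are preserved, which is what allows the scoring scripts to align hypotheses with references.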
Evaluation will be done both automatically and by human judgement.