NB: All systems, test sets and evaluations are available from this repository
The recurring translation task of the WMT workshops focuses on news text. The following table gives a quick guide to this year's language pairs, in terms of resource level and language similarity:
 | High resource | Medium resource | Low resource |
---|---|---|---|
Closely-related | | bn-hi | xh-zu |
Same family | en-de, en-cs, en-ru | en-is, fr-de | |
Distant | en-zh | en-ja | en-ha |
The goals of the shared translation task are:
Release of training data for shared tasks | by March 2021 |
Test suite source texts must reach us | 27 May 2021 |
Test data released | 10 June 2021 |
Translation submission deadline | 17 June 2021 |
Translated test suites shipped back to test suites authors | 30 June 2021 |
Start of manual evaluation | TBC |
End of manual evaluation | TBC |
We provide training data for all language pairs, and a common framework. The task is to improve current methods. We encourage broad participation -- if you feel that your method is interesting but not state-of-the-art, then please participate in order to disseminate it and measure progress. Participants will use their systems to translate a test set of unseen sentences in the source language. Translation quality is measured by a manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation, per system submission.
You may participate in any or all of the language pairs. For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set. You are not limited to this training set, and you are not limited to the training set provided for your target language pair. This means that multilingual systems are allowed, and classed as constrained as long as they use only data released for WMT21.
If you use additional training data (not provided by the WMT21 organisers) or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.
Your submission report should highlight in which ways your own methods and data differ from the standard task. You should make it clear which tools you used, and which training sets you used.
We are interested in the question of whether MT can be improved by using context beyond the sentence, and to what extent state-of-the-art MT systems can produce translations that are correct "in-context". All of our development and test data contains full documents, and all of our human evaluation will be in-context; in other words, the evaluators will see each sentence together with its surrounding context when evaluating.
Our training data retains context and document boundaries wherever possible; in particular, the following corpora retain the context intact:
Test Suites follow the established format from previous years.
Please see the details here: http://tinyurl.com/wmt21-test-suites.
The data released for the WMT21 news translation task can be freely used for research purposes; we just ask that you cite the WMT21 shared task overview paper, and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult with the original owners of the data sets.
We aim to use publicly available sources of data wherever possible. Our main sources of training data are the Europarl corpus, the UN corpus, the news-commentary corpus and the ParaCrawl corpus. We also release a monolingual News Crawl corpus. Other language-specific corpora will be made available.
We have added suitable additional training data to some of the language pairs.
You may also use the following monolingual corpora released by the LDC:

Note that the released data is not tokenized and includes sentences of any length (including empty sentences). All data is in Unicode (UTF-8) format. The following Moses tools allow the processing of the training data into tokenized format:
tokenizer.perl
detokenizer.perl
lowercase.perl
wrap-xml.perl
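If you prefer to drive these scripts from your own code, the sketch below shows one possible way to call tokenizer.perl from Python; the path to a local mosesdecoder checkout and the file names are assumptions to adapt to your own setup.

```python
# Minimal sketch of preprocessing with the Moses tokenizer via subprocess.
# Assumptions: a local checkout of mosesdecoder and illustrative file names.
import subprocess
from pathlib import Path

MOSES_SCRIPTS = Path("mosesdecoder/scripts")  # assumed location of the Moses scripts


def moses_tokenize(in_path: str, out_path: str, lang: str) -> None:
    """Run tokenizer.perl over in_path and write the tokenized text to out_path."""
    with open(in_path, "rb") as src, open(out_path, "wb") as dst:
        subprocess.run(
            ["perl", str(MOSES_SCRIPTS / "tokenizer" / "tokenizer.perl"), "-l", lang],
            stdin=src, stdout=dst, check=True,
        )


if __name__ == "__main__":
    # Illustrative file names only.
    moses_tokenize("corpus.raw.en", "corpus.tok.en", lang="en")
```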
Download here (Updated: 2021-04-07 with extra ha-en data, 2021-04-14 with Hindi, Bengali, Zulu, Xhosa dev sets)
To evaluate your system during development, we suggest using previous test sets. Evaluation can be performed with the NIST tool or (recommended) with sacreBLEU, which will automatically download previous WMT test sets for you. We also release other dev and test sets from previous years. For the new language pairs, we release dev sets in February, prepared in the same way as the test sets.
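For example, a system output can be scored against a previous test set roughly as follows. This is a sketch using sacreBLEU's Python API; the file names and the choice of newstest2020 German-English are purely illustrative.

```python
# Minimal sketch: score detokenized system output with sacreBLEU's Python API.
# A command-line equivalent (which downloads the test set automatically) is
# along the lines of:  sacrebleu -t wmt20 -l de-en -i my-output.detok.txt
import sacrebleu

# Illustrative file names; outputs must be detokenized, one segment per line,
# in the same order as the reference.
with open("my-output.detok.txt", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("newstest2020.de-en.ref.txt", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(bleu.score)
```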
Year | CS-EN | DE-EN | HA-EN | IS-EN | JA-EN | RU-EN | ZH-EN | FR-DE |
---|---|---|---|---|---|---|---|---|
2008 | ✓ | ✓ | | | | | | |
2009 | ✓ | ✓ | | | | | | |
2010 | ✓ | ✓ | | | | | | |
2011 | ✓ | ✓ | | | | | | |
2012 | ✓ | ✓ | | | | ✓ | | |
2013 | ✓ | ✓ | | | | ✓ | | |
2014 | ✓ | ✓ | | | | ✓ | | |
2015 | ✓ | ✓ | | | | ✓ | | |
2016 | ✓ | ✓ | | | | ✓ | | |
2017 | ✓ | ✓ | | | | ✓ | ✓ | |
2018 | ✓ | ✓ | | | | ✓ | ✓ | |
2019 | ✓ | ✓ | | | | ✓ | ✓ | ✓ |
2020 | | ✓ | | | ✓ | ✓ | ✓ | ✓ |
2021 (dev) | | | ✓ | ✓ | | | | |
The 2021 test sets will be created from a sample of online newspapers from July-October 2020. The sources of the test sets will be original text from the online news, whereas the targets will be human-produced translations. This is in contrast to the test sets up to and including 2018, which were a 50-50 mixture of test sets produced in this way, and test sets produced in the reverse direction (i.e. with the original text on the target side).
We have released development data for the tasks that are new this year. It is created in the same way as the test set and included in the development tarball. Note that the dev data contains both forward and reverse translations (clearly marked).
We are switching to an XML format (instead of the previous SGM format) for all dev, test and submission files. It is important to use an XML parser to wrap and unwrap text, in order to ensure correct escaping and de-escaping. We will provide tools.
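To illustrate why a real XML parser matters (entities such as &amp; and &lt; must be escaped and de-escaped consistently), the sketch below pulls segments out of a test file. It assumes segments appear as <seg> elements with id attributes, as in earlier WMT formats; the exact WMT21 schema and the file name may differ.

```python
# Minimal sketch: extract source segments from a WMT-style XML test file.
# ElementTree de-escapes entities (&amp;, &lt;, ...) when parsing and re-escapes
# them when writing, which is the point of using a real XML parser here.
# Element/attribute names and the file name are assumptions based on earlier formats.
import xml.etree.ElementTree as ET

tree = ET.parse("newstest2021.src.xml")  # illustrative file name
for seg in tree.getroot().iter("seg"):
    seg_id = seg.get("id")
    text = (seg.text or "").strip()
    print(f"{seg_id}\t{text}")
```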
The news-test2011 set has three additional Czech translations that you may want to use. You can download them from Charles University.
Extra references (both translated and paraphrased) for the English to German WMT19 test set have been contributed by Google Research.
NB: The cs-en version of newstest2020 should not be used during system development, as it has not been included in the development tarball.
File | CS-EN | DE-EN | HA-EN | IS-EN | JA-EN | RU-EN | ZH-EN | FR-DE | BN-HI | XH-ZU | Notes |
---|---|---|---|---|---|---|---|---|---|---|---|
Europarl v10 | ✓ | ✓ | ✓ | Now with metadata. Text is unchanged. Also the SV-EN is available (maybe helpful for IS-EN). | |||||||
✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Updated for 2021. Further details on ParaCrawl, including tmx files, are available at the ParaCrawl website. The zh-en and fr-de versions are ParaCrawl "bonus releases". The ja-en version of ParaCrawl (JParaCrawl) was prepared by NTT. Note that only the ticked language pairs are available for constrained participants, but the metadata (tmx files) may be used. | | | | |
✓ | ✓ | New. Pre-release of a forthcoming ParaCrawl release, to be available at the ParaCrawl website. | | | | | | | | |
✓ | ✓ | ✓ | ✓ | Same as last year. The fr-de version is here | |||||||
News Commentary v16 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Updated for 2021 | ||||
CzEng 2.0 | ✓ | Register and download CzEng 2.0. Updated: the new CzEng includes synthetic data, and includes all cs-en data supplied for the task. See the CzEng README for more details. | | | | | | | | |
✓ | ru-en | ||||||||||
✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Updated for 2021 | |||
✓ | ✓ | Register and download | |||||||||
✓ | ✓ | Recrawled version of Tilde Rapid corpus. de-en and cs-en contain document information. | |||||||||
✓ | Register and download. Same as the CWMT corpus from last year, new host. | | | | | | | | | |
✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | We release the official version, with added language identification (from cld2). | ||||
✓ | A parallel English-Icelandic corpus of about 3.5M sentence pairs published by The Árni Magnússon Institute for Icelandic Studies | ||||||||||
✓ | ✓ | ✓ | Back-translated news. The cs-en data is contained in CzEng. The zh-en and ru-en data was produced for the University of Edinburgh systems in 2017 and 2018. | ||||||||
✓ | Note: English side is lowercased. | ||||||||||
✓ | |||||||||||
✓ | From IWSLT 2017 Evaluation Campaign. | ||||||||||
✓ | Parallel corpus derived from news articles and speeches on khamenei.ir | ||||||||||
✓ | Parallel corpus extracted from Opus | ||||||||||
✓ | ✓ | Aligned sentence pairs and document pairs from Common Crawl (same method as cc-aligned). |
Corpus | BN | CS | DE | EN | FR | HA | HI | IS | JA | RU | XH | ZH | ZU | Notes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
News crawl | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Updated. Large corpora of crawled news, collected since 2007. Versions up to 2019 are as before. For de, cs and en, versions are available with document boundaries and without sentence-splitting. | |
News discussions | ✓ | ✓ | Corpora crawled from comment sections of online newspapers. Available in English and French (no longer updated). | |||||||||||
Europarl v10 | ✓ | ✓ | ✓ | ✓ | Monolingual version of European parliament crawl. Superset of the parallel version. | |||||||||
News Commentary | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Updated. Monolingual text from the news-commentary crawl. Superset of the parallel version. Use v16. | | | | | |
Common Crawl | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Deduplicated with development and evaluation sentences removed. English was updated 31 January 2016 to remove bad UTF-8. Downloads can be verified with SHA512 checksums (see the sketch after this table). More English is available. | | | |
Extended Common Crawl | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Extended Common Crawl extracted from crawls up to April 2020. | |
Icelandic Gigaword | ✓ | New. Use version 20.05. Both sections are permitted.
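As noted in the Common Crawl row above, downloads can be checked against the published SHA512 checksums. The sketch below shows one way to do this in Python; the file name and expected digest are placeholders.

```python
# Minimal sketch: verify a downloaded corpus file against its published SHA512 checksum.
# The file name and the expected digest are illustrative placeholders.
import hashlib


def sha512sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA512 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


expected = "..."  # paste the published checksum here
actual = sha512sum("commoncrawl.de.gz")  # illustrative file name
print("OK" if actual == expected else "MISMATCH")
```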
All primary systems will be included in the human evaluation. We will collect subjective judgments about the translation quality from annotators, taking the document context into account.
If you participate in the shared task, then you must provide a defined amount of evaluation per language pair submitted. The amount of manual evaluation will be 10 hours per language pair, per team. You can provide evaluation of any of the task's languages, regardless of which language pair you submitted systems to.
For queries, please use the mailing list or contact bhaddow@ed.ac.uk.
This task would not have been possible without the sponsorship of test sets and evaluation from Microsoft, Facebook, Yandex, NTT, the University of Tokyo, LinguaCustodia, as well as funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 825299 (GoURMET) and 825460 (ELITR).
The Icelandic-English task is sponsored by the Language Technology Programme for Icelandic 2019–2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
For WMT21 we have partnered with Toloka to collect more annotations for the human evaluation of the news translation shared task. We are grateful for their support and look forward to our continued collaboration in the future!
In Toloka's own words:
The international data labeling platform Toloka collaborated with the WMT team to improve existing machine translation methods. Toloka's crowdsourcing service was integrated with Appraise, an open-source framework for human-based annotation tasks.
To increase the accuracy of machine translation, we need to systematically compare different MT methods to reference data. However, obtaining sufficient reference data can pose a challenge, especially for rare languages. Toloka solved this problem by providing a global crowdsourcing platform with enough annotators to cover all relevant language pairs. At the same time, the integration preserved the labeling processes that were already set up in Appraise without breaking any tasks.
Collaboration between Toloka and Appraise made it possible to get a relevant pool of annotators, provide them with an interface for labeling and getting rewards, and then combine quality control rules from both systems into a mutually reinforcing set for reliable results.
You can learn more about Toloka on their website: https://toloka.ai/