Shared Task: Lifelong Learning for Machine Translation

This task aims to develop autonomous lifelong learning Machine Translation systems. The initial training process is standard: the system is free to learn its model parameters from all provided bitexts, which correspond to the WMT News task data along with their year tags. The system is then allowed to update its models using the incoming stream of source text (unsupervised adaptation), producing a better model for each subsequent document. In addition, a simulated active learning protocol allows the model to obtain help from a (simulated) human domain expert. Active learning is simulated by providing the reference translation for a chosen example; see the details in the simulated active learning section. The evaluation differs from standard evaluations in that it is performed across time, taking into account the cost of the simulated active learning. For this year the language pairs are:

We provide parallel corpora for all languages as training data. To ensure reproducibility and a fair comparison of the proposed methods, systems must run on the BEAT platform. To ease the integration of participants' systems, a Python environment is provided with a baseline system; see the details in the BEAT section. The lifelong learning machine translation evaluation is based on the series of WMT News translation tasks, on top of which some additional data have been produced.




The goals of the shared task on lifelong learning for MT are:

We hope that both beginners and established research groups will participate in this task.


Deadline for final system submission July 31, 2020
Evaluation period (systems are run on our servers) August, 2020
System description October 10, 2020
Online conference November 19-20, 2020


The autonomous system will be able to ask questions of domain experts when its confidence in the proposed output is low. Only one type of question can be asked, namely: "what is the translation of the sentence S?". Each active-learning request incurs a cost proportional to the number of source words in the query. The quantity of active-learning data required to reach a given performance level will be taken into account in the lifelong evaluation metric (see the evaluation section).
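As a minimal sketch of this protocol, the query cost could be computed as the number of whitespace-separated source tokens; the confidence threshold below is a hypothetical parameter of the participant's system, not part of the task definition:

```python
def query_cost(source_sentence: str) -> int:
    """Cost of requesting a reference translation, assumed here to be
    the number of whitespace-separated tokens in the source sentence."""
    return len(source_sentence.split())


def should_ask_expert(confidence: float, threshold: float = 0.5) -> bool:
    """Hypothetical decision rule: query the simulated expert only when
    the system's confidence in its own output is below the threshold."""
    return confidence < threshold
```

A system would then trade off the expected quality gain of a reference translation against its word-count cost, since the cumulative cost feeds into the final metric.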



The data released for the WMT20 translation task can be freely used for research purposes; we ask only that you cite the WMT20 shared task overview paper and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult the original owners of the data sets.


We aim to use publicly available sources of data wherever possible. Our main sources of training data are the Europarl corpus, the UN corpus, the news-commentary corpus and the ParaCrawl corpus.




The BEAT platform is a European computing e-infrastructure for Open Science that enables reproducible experiments.

Systems will be submitted via the BEAT platform at the following address: [TBA]

What to submit? The code to train and adapt the baseline model, integrated into the BEAT platform, will be provided to the participants.


Evaluation will be done automatically.

The BLEU score across time, including an active-learning penalisation, will be used. Details can be found in the evaluation plan.
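For illustration only (the official formula is defined in the evaluation plan), such a metric might average document-level BLEU over the stream and subtract a penalty proportional to the cumulative active-learning cost; the trade-off weight `lam` below is a hypothetical parameter:

```python
def lifelong_score(bleu_per_doc, al_costs, lam=0.01):
    """Illustrative sketch, not the official metric: mean BLEU across
    the document stream, penalised by the total active-learning cost
    (in source words), weighted by a hypothetical factor `lam`."""
    avg_bleu = sum(bleu_per_doc) / len(bleu_per_doc)
    total_cost = sum(al_costs)
    return avg_bleu - lam * total_cost
```

Under this form, a system that queries the expert heavily must improve its BLEU across time by more than the accumulated penalty to come out ahead.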


This task would not have been possible without funding from the ALLIES project, supported by the EU Chist-ERA programme.