Shared Task: Lifelong Learning for Machine Translation
This task aims at developing autonomous lifelong learning Machine Translation systems. The initial training process is standard: the system is free to learn its model parameters using all provided bitexts, which correspond to the WMT News Task data along with their year tags. The system may then update its models using the incoming stream of source text (unsupervised adaptation), producing a better model for the next document. In addition, a simulated active learning protocol allows the model to obtain help from a (simulated) human domain expert; this help is simulated by providing the reference translation for a training example. See the details in the simulated active learning section. The evaluation will differ from standard evaluations in that it is performed across time and takes into account the cost of the simulated active learning. This year the language pairs are French-English and German-English.
DOWNLOAD THE EVALUATION PLAN
JOIN THE SLACK CHANNEL ALLIES_LLMT_WMT
The goals of the shared task on lifelong learning for MT are:
- To develop systems that can improve autonomously, relying solely on domain expert data, and are thus freed from the need for machine learning expertise
- To investigate the continuous training/adaptation of MT systems
- To provide additional publicly available corpora for machine translation and machine translation evaluation
- To explore methods to perform efficient active learning via a controlled simulated environment
- To foster research on unsupervised adaptation of MT systems
- To assess the effectiveness of document-level approaches
- To evaluate systems across time
| Milestone | Date |
|---|---|
| Deadline for final system submission | July 31, 2020 |
| Evaluation period (systems are run on our servers) | August 2020 |
| System description | October 10, 2020 |
| Online conference | November 19-20, 2020 |
SIMULATED ACTIVE LEARNING
The autonomous system will be able to ask questions of domain experts when its confidence in the proposed output is low. Only one type of question can be asked, namely: "what is the translation of the sentence S?". Each active learning request has a cost proportional to the number of source words in the query. The quantity of active learning data required to reach a given performance level will be taken into account in the lifelong evaluation metric (see the evaluation section).
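The protocol above can be sketched as a simple loop: translate each incoming sentence, query the simulated expert when confidence is low and budget remains, and otherwise adapt on the unlabelled stream. This is an illustrative sketch only, assuming the official evaluation plan's interface; all function names here are hypothetical placeholders, not part of any provided API.

```python
# Hypothetical sketch of the lifelong loop with simulated active learning.
# translate(src) -> (hypothesis, confidence); adapt(src, tgt) updates the
# model in place; ask_expert(src) -> reference translation from the
# simulated human expert. None of these names are official.

def active_learning_cost(source_sentence):
    """Cost of one expert query, proportional to the number of source words."""
    return len(source_sentence.split())

def lifelong_loop(translate, adapt, ask_expert, stream, threshold, budget):
    total_cost = 0
    outputs = []
    for src in stream:
        hyp, conf = translate(src)
        if conf < threshold and total_cost + active_learning_cost(src) <= budget:
            ref = ask_expert(src)                  # simulated expert query
            total_cost += active_learning_cost(src)
            adapt(src, ref)                        # supervised update
            hyp = ref
        else:
            adapt(src, hyp)                        # unsupervised adaptation
        outputs.append(hyp)
    return outputs, total_cost
```

The budget check before each query reflects the fact that expert help is not free: every queried sentence adds its source-word count to the cost charged by the lifelong evaluation metric.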
LICENSING OF DATA
The data released for the WMT20 translation task can be freely used for research purposes; we ask only that you cite the WMT20 shared task overview paper and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult with the original owners of the data sets.
We aim to use publicly available sources of data wherever possible. Our main sources of training data are the Europarl corpus, the UN corpus, the news-commentary corpus and the ParaCrawl corpus.
- BEAT environment: allies_llmt_beat
- Parallel data: available in the following GitHub repository: allies_llmt_data
| File | FR-EN | DE-EN | Notes |
|---|---|---|---|
| Europarl v10 | ✓ | ✓ | See the data repository. Now with metadata and timestamps. |
| News Commentary v15 | ✓ | ✓ | See the data repository. Now with metadata and timestamps. |
BEAT PLATFORM
The BEAT platform is a European computing e-infrastructure for Open Science that allows reproducible experiments.
Systems will be submitted on the BEAT platform at the following address: [TBA]
What to submit? The code to train and adapt the baseline model, integrated into the BEAT platform, will be provided to the participants.
Evaluation will be done automatically. The BLEU score across time, including an active-learning penalisation, will be used. Details can be found in the evaluation plan.
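The exact metric is defined in the evaluation plan; the sketch below shows one plausible way to combine per-document BLEU scores over time with a penalty proportional to the active-learning cost. The averaging scheme and the trade-off weight `alpha` are assumptions for illustration, not the official formulation.

```python
# Illustrative only: the official metric is specified in the evaluation plan.
# Combines BLEU across the document stream with an active-learning penalty.

def penalized_lifelong_score(bleu_per_document, active_learning_cost,
                             total_source_words, alpha=1.0):
    """Mean BLEU over the documents seen across time, minus a penalty
    proportional to the fraction of source words sent to the expert.
    `alpha` is a hypothetical trade-off weight."""
    mean_bleu = sum(bleu_per_document) / len(bleu_per_document)
    penalty = alpha * active_learning_cost / total_source_words
    return mean_bleu - penalty
```

Under this form, a system that queries the expert heavily must gain enough BLEU over time to offset the penalty, which is the trade-off the lifelong evaluation is designed to measure.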