
This conference builds on a series of annual workshops and conferences on statistical machine translation, going back to 2006.

IMPORTANT DATES

Release of training data for shared tasks: January/February, 2017
Evaluation periods for shared tasks: April/May, 2017
Paper submission deadline: June 9th, 2017
Paper notification: June 30th, 2017
Camera-ready version due: July 14th, 2017
Conference in Copenhagen: September 7-8, 2017

OVERVIEW

This year's conference will feature shared tasks on news translation, biomedical translation, automatic post-editing, metrics, neural MT training, and bandit learning, each described below.

In addition to the shared tasks, the conference will also feature scientific papers on topics related to MT. Topics of interest include, but are not limited to:

We encourage authors to evaluate their approaches to the above topics using the common data sets created for the shared tasks.

REGISTRATION

Registration will be handled by EMNLP 2017.

NEWS TRANSLATION TASK

The first shared task will examine translation between English and a range of other languages.

The text for all the test sets will be drawn from news articles. Participants may submit translations for any or all of the language directions. In addition to the common test sets, the conference organizers will provide optional training resources.

All participants who submit entries will have their translations evaluated. We will evaluate translation performance by human judgment. To facilitate the human evaluation, we will require participants in the shared tasks to manually judge some of the submitted translations. For each team, this will amount to ranking 300 sets of 5 translations per language pair submitted.
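As a rough indication of the scale of this judging effort (this arithmetic is illustrative, not an official part of the evaluation protocol): each ranking of 5 translations implicitly yields 10 pairwise comparisons, so 300 rankings amount to 3,000 pairwise judgments per language pair. A minimal Python sketch:

    from itertools import combinations

    def pairwise_judgments(ranking):
        """Expand one best-to-worst ranking of system outputs into
        pairwise 'better beats worse' judgments."""
        return list(combinations(ranking, 2))

    # A hypothetical 5-way ranking of anonymised systems, best first.
    judgments = pairwise_judgments(["sysC", "sysA", "sysE", "sysB", "sysD"])
    print(len(judgments))        # 10 pairwise comparisons per ranking
    print(300 * len(judgments))  # 3000 per language pair submitted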

BIOMEDICAL TRANSLATION TASK

In this second edition of the task, we will evaluate systems for the translation of biomedical documents for a number of language pairs.

Parallel corpora will be available for all language pairs, and monolingual corpora for some languages. Evaluation will be carried out both automatically and manually.

AUTOMATIC POST-EDITING TASK

This shared task will examine automatic methods for correcting errors produced by machine translation (MT) systems. Automatic Post-editing (APE) aims at improving MT output in black-box scenarios, in which the MT system is used "as is" and cannot be modified. From the application point of view, APE components would make it possible to:

In this third edition of the task, the evaluation will focus on English-German (IT domain) and German-English (medical domain).
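Previous editions of the APE task have scored submissions primarily with TER computed against human post-edits; assuming a similar setup here, the sketch below illustrates the idea with a simplified word-level edit distance that omits TER's block-shift operations:

    def simple_ter(hypothesis, reference):
        """Simplified translation error rate: word-level edit distance
        divided by reference length. Real TER also allows block shifts,
        which this sketch omits."""
        hyp, ref = hypothesis.split(), reference.split()
        # Standard dynamic-programming edit distance over tokens.
        d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
        for i in range(len(hyp) + 1):
            d[i][0] = i
        for j in range(len(ref) + 1):
            d[0][j] = j
        for i in range(1, len(hyp) + 1):
            for j in range(1, len(ref) + 1):
                cost = 0 if hyp[i - 1] == ref[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(hyp)][len(ref)] / max(len(ref), 1)

    print(simple_ter("the the house is small", "the house is small"))  # 0.25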

METRICS TASK

The metrics task (also called the evaluation task) will assess how well automatic evaluation metrics can stand in for human judgments of translation quality.

Participants in the shared evaluation task will use their automatic evaluation metrics to score the output from the translation task and the NMT training task. In addition to MT outputs from the other two tasks, the participants will be provided with reference translations. We will measure the correlation of automatic evaluation metrics with the human judgments.
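At the system level, this correlation has typically been reported as Pearson's r between metric scores and human scores. A minimal sketch of that computation, with made-up scores standing in for real submissions:

    from scipy.stats import pearsonr

    # Hypothetical system-level scores: one value per submitted MT system.
    metric_scores = [0.31, 0.27, 0.35, 0.22, 0.29]  # from the automatic metric
    human_scores  = [0.52, 0.41, 0.67, 0.30, 0.44]  # from manual evaluation

    r, p_value = pearsonr(metric_scores, human_scores)
    print(f"system-level Pearson r = {r:.3f} (p = {p_value:.3f})")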

NEURAL MT TRAINING TASK (NMT TRAINING TASK)

This task will assess your team's ability to train a fixed neural MT model given fixed data.

Participants in the NMT training task will be given a complete Neural Monkey configuration file which describes the neural model. Training and validation data with a fixed pre-processing scheme will also be provided (English-to-Czech and Czech-to-English translation).

The participants will be expected to submit the variables file, i.e. the trained neural network, for one or both of the translation directions. We will use the variables and a fixed revision of Neural Monkey to translate the official WMT17 test set. The outputs of the various configurations of the system will be scored using the standard manual evaluation procedure.
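Neural Monkey is implemented on top of TensorFlow, so the submitted variables file is in effect a TensorFlow checkpoint. A minimal sketch of saving and restoring variables in the TensorFlow 1.x style (the toy graph and file name are placeholders, not the task's actual model or naming scheme):

    import tensorflow as tf

    # A toy variable standing in for the real translation model's parameters.
    weights = tf.Variable(tf.random_normal([128, 128]), name="toy_weights")
    saver = tf.train.Saver()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... training updates would happen here ...
        saver.save(sess, "./variables.ckpt")  # the file participants would submit

    # The organisers can later restore the trained values into the same graph:
    # saver.restore(sess, "./variables.ckpt")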

BANDIT LEARNING TASK

Bandit Learning for MT is a framework to train and improve MT systems by learning from weak or partial feedback: instead of a gold-standard human-generated translation, the learner only receives feedback on a single proposed translation (this is why it is called partial), in the form of a translation quality judgement (which can be as weak as a binary acceptance/rejection decision).

In this task, the user feedback will be simulated by a service hosted on Amazon Web Services (AWS), where participants can submit translations, receive feedback, and use this feedback to train an MT model (German-to-English, e-commerce). Reference translations will not be revealed at any point; evaluations are also done via the service. The goal of this task is to find systems that learn efficiently and effectively from this type of feedback, i.e. systems that learn fast and achieve high translation quality without references. A minimal sketch of such a training loop appears below.
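In this sketch, `model` and `feedback_service` are hypothetical stand-ins (the real AWS service API is defined by the task organisers), and the update rule shown is a generic policy-gradient step, one common way to learn from scalar rewards:

    def bandit_training_loop(model, feedback_service, learning_rate=0.01):
        """Learn from partial feedback: no references, only one scalar
        quality judgment per proposed translation."""
        while True:
            source = feedback_service.next_source_sentence()
            if source is None:
                break  # service signals the end of the data stream
            # Sample one translation from the current model (exploration).
            translation, grad_log_prob = model.sample_with_gradient(source)
            # Partial feedback: a single quality judgment, which can be as
            # weak as a binary accept/reject signal.
            reward = feedback_service.submit(source, translation)
            # Policy-gradient step: reinforce well-rewarded translations.
            model.apply_gradient(learning_rate * reward * grad_log_prob)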

POSTER FORMAT

TBD

ANNOUNCEMENTS

Subscribe to the announcement list for WMT via the Google Groups page linked below. This list will be used to announce when the test sets are released, to indicate any corrections to the training sets, and to amend the deadlines as needed.

You can read past announcements on the Google Groups page for WMT. These also include an archive of announcements from earlier workshops.

INVITED TALK

TBC

ORGANIZERS

Ondřej Bojar (Charles University in Prague)
Christian Buck (University of Edinburgh)
Rajen Chatterjee (FBK)
Christian Federmann (MSR)
Barry Haddow (University of Edinburgh)
Matthias Huck (University of Edinburgh)
Antonio Jimeno Yepes (IBM Research Australia)
Julia Kreutzer (Heidelberg University)
Varvara Logacheva (University of Sheffield)
Aurélie Névéol (LIMSI, CNRS)
Mariana Neves (Hasso Plattner Institute)
Philipp Koehn (University of Edinburgh / Johns Hopkins University)
Christof Monz (University of Amsterdam)
Matteo Negri (FBK)
Matt Post (Johns Hopkins University)
Stefan Riezler (Heidelberg University)
Artem Sokolov (Heidelberg University, Amazon Development Center, Berlin)
Lucia Specia (University of Sheffield)
Karin Verspoor (University of Melbourne)
Marco Turchi (FBK)

PROGRAM COMMITTEE

CONTACT

For general questions, comments, etc. please send email to bhaddow@inf.ed.ac.uk.
For task-specific questions, please contact the relevant organisers.

ACKNOWLEDGEMENTS

This conference has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements 645452 (QT21) and 645357 (Cracker).
We thank Yandex for their donation of data for the Russian-English and Turkish-English news tasks, and the University of Helsinki for their donation of data for the Finnish-English news tasks.