This conference builds on a series of annual workshops and conferences on statistical machine translation, going back to 2006:


Release of training data for shared tasks: January/February, 2017
Evaluation periods for shared tasks: April/May, 2017
Paper submission deadline: June 9th, 2017 (Midnight, UTC-11)
Paper notification: June 30th, 2017
Camera-ready version due: July 14th, 2017
Conference in Copenhagen: September 7-8, 2017


This year's conference will feature the following shared tasks:

In addition to the shared tasks, the conference will also feature scientific papers on topics related to MT. Topics of interest include, but are not limited to:

We encourage authors to evaluate their approaches to the above topics using the common data sets created for the shared tasks.


Registration will be handled by EMNLP 2017.


The first shared task will examine translation between the following language pairs:

The text for all the test sets will be drawn from news articles. Participants may submit translations for any or all of the language directions. In addition to the common test sets the conference organizers will provide optional training resources.

All participants who submit entries will have their translations evaluated. We will evaluate translation performance by human judgment. To facilitate the human evaluation, we will require participants in the shared tasks to manually judge some of the submitted translations. For each team, this will amount to ranking 300 sets of 5 translations per language pair submitted.


In this second edition of the task, we will evaluate systems for the translation of biomedical documents for the following language pairs:

Parallel corpora will be available for all language pairs, as well as monolingual corpora for some languages. Evaluation will be carried out both automatically and manually.


This shared task will examine automatic methods for correcting errors produced by machine translation (MT) systems. Automatic Post-editing (APE) aims at improving MT output in black-box scenarios, in which the MT system is used "as is" and cannot be modified. From the application point of view, APE components would make it possible to:

In this third edition of the task, the evaluation will focus on English-German (IT domain) and German-English (Medical domain).


The metrics task (also called evaluation task) will assess automatic evaluation metrics' ability to:

Participants in the shared evaluation task will use their automatic evaluation metrics to score the output from the translation task and the NMT training task. In addition to MT outputs from the other two tasks, the participants will be provided with reference translations. We will measure the correlation of automatic evaluation metrics with the human judgments.
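As a sketch of the correlation computed in the metrics task, the snippet below compares automatic metric scores against human judgments at the system level using Pearson's r. All scores here are invented for illustration; they are not WMT data, and actual evaluations also use other correlation statistics.

```python
# Illustrative sketch only: correlating (invented) automatic metric scores
# with (invented) human judgments, one score per submitted MT system.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical system-level scores (one entry per MT system).
metric_scores = [0.31, 0.45, 0.28, 0.52, 0.40]   # automatic metric
human_scores  = [68.0, 74.5, 65.2, 79.1, 72.3]   # human judgment

print(round(pearson_r(metric_scores, human_scores), 3))
```

A metric whose scores rise and fall with the human judgments, as above, yields a correlation near 1; a metric unrelated to human judgments yields a correlation near 0.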


This task will assess your team's ability to train a fixed neural MT model given fixed data.

Participants in the NMT training task will be given a complete Neural Monkey configuration file describing the neural model. Training and validation data with a fixed pre-processing scheme will also be provided (English-to-Czech and Czech-to-English translation).

The participants will be expected to submit the variables file, i.e. the trained neural network, for one or both of the translation directions. We will use the variables and a fixed revision of Neural Monkey to translate the official WMT17 test set. The outputs of the various configurations of the system will be scored using the standard manual evaluation procedure.


Bandit Learning for MT is a framework for training and improving MT systems by learning from weak or partial feedback: instead of a gold-standard human-generated translation, the learner only receives feedback on a single proposed translation (this is why it is called partial), in the form of a translation quality judgement (which can be as weak as a binary acceptance/rejection decision).

In this task, the user feedback will be simulated by a service hosted on Amazon Web Services (AWS), where participants can submit translations, receive feedback, and use this feedback to train an MT model (German-to-English, e-commerce). Reference translations will not be revealed at any point, and evaluations are also done via the service. The goal of this task is to find systems that learn efficiently and effectively from this type of feedback, i.e. that learn fast and achieve high translation quality without references.
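The learning loop described above can be sketched as a simple multi-armed bandit with binary accept/reject feedback. Everything in this sketch is an invented stand-in: the hidden quality scores play the role of the AWS feedback service, the fixed candidate pool stands in for a participant's MT model, and epsilon-greedy is just one simple exploration strategy.

```python
import random

random.seed(0)

# Hypothetical stand-in for the feedback service: the learner never sees a
# reference translation, only a binary accept/reject signal for the single
# translation it proposed (this is the "partial feedback" setting).
TRUE_QUALITY = {"hyp_a": 0.2, "hyp_b": 0.9, "hyp_c": 0.5}

def simulated_feedback(hypothesis):
    """Binary acceptance decision drawn from a hidden quality score."""
    return 1 if random.random() < TRUE_QUALITY[hypothesis] else 0

# Epsilon-greedy bandit learner over a fixed candidate pool.
counts = {h: 0 for h in TRUE_QUALITY}
values = {h: 0.0 for h in TRUE_QUALITY}   # estimated acceptance rates
epsilon = 0.1

for _ in range(2000):
    if random.random() < epsilon:
        choice = random.choice(list(TRUE_QUALITY))   # explore
    else:
        choice = max(values, key=values.get)         # exploit
    reward = simulated_feedback(choice)
    counts[choice] += 1
    # Incremental mean update of the estimated acceptance rate.
    values[choice] += (reward - values[choice]) / counts[choice]

best = max(values, key=values.get)
print(best, round(values[best], 2))
```

Despite never seeing a reference, the learner's estimates converge toward the hidden acceptance rates, so it ends up preferring the highest-quality candidate; a real submission would instead update the parameters of a full translation model from the same kind of scalar signal.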


Submissions will consist of regular full papers of 6-10 pages, plus additional pages for references, formatted following the EMNLP 2017 guidelines. In addition, shared task participants will be invited to submit short papers (suggested length: 4-6 pages, plus references) describing their systems or their evaluation metrics. Both submission and review processes will be handled electronically. Note that regular papers must be anonymized, while system descriptions do not need to be.

Research papers that have been or will be submitted to other meetings or publications must indicate this at submission time, and must be withdrawn from the other venues if accepted and published at WMT 2017. We will not accept for publication papers that overlap significantly in content or results with papers that have been or will be published elsewhere. It is acceptable to submit work that has been made available as a technical report (or similar, e.g. in arXiv) without citing it. This double submission policy only applies to research papers, so system papers can have significant overlap with other published work, if it is relevant to the system description.

We encourage individuals who are submitting research papers to evaluate their approaches using the training resources provided by this conference and past workshops, so that their experiments can be repeated by others using these publicly available corpora.


A0 Landscape.


Subscribe to the announcement list for WMT by entering your e-mail address below. This list will be used to announce when the test sets are released, to indicate any corrections to the training sets, and to amend the deadlines as needed.

You can read past announcements on the Google Groups page for WMT. These also include an archive of announcements from earlier workshops.


Holger Schwenk (Facebook)
Multilingual Representations and Applications in NLP


Ondřej Bojar (Charles University in Prague)
Christian Buck (University of Edinburgh)
Rajen Chatterjee (FBK)
Christian Federmann (MSR)
Yvette Graham (DCU)
Barry Haddow (University of Edinburgh)
Matthias Huck (University of Edinburgh)
Antonio Jimeno Yepes (IBM Research Australia)
Philipp Koehn (University of Edinburgh / Johns Hopkins University)
Julia Kreutzer (Heidelberg University)
Varvara Logacheva (University of Sheffield)
Christof Monz (University of Amsterdam)
Matteo Negri (FBK)
Aurélie Névéol (LIMSI, CNRS)
Mariana Neves (Federal Institute for Risk Assessment / Hasso Plattner Institute)
Matt Post (Johns Hopkins University)
Stefan Riezler (Heidelberg University)
Artem Sokolov (Heidelberg University, Amazon Development Center, Berlin)
Lucia Specia (University of Sheffield)
Marco Turchi (FBK)
Karin Verspoor (University of Melbourne)



WMT follows the ACL's anti-harassment policy.


For general questions, comments, etc. please send email to bhaddow@inf.ed.ac.uk.
For task-specific questions, please contact the relevant organisers.


This conference has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements 645452 (QT21) and 645357 (Cracker).
We thank Yandex for their donation of data for the Russian-English and Turkish-English news tasks, and the University of Helsinki for their donation for the Finnish-English news tasks.