This conference builds on a series of annual workshops and conferences on statistical machine translation, going back to 2006:
| Release of training data for shared tasks | January/February 2018 |
| Evaluation periods for shared tasks | May/June 2018 |
| Paper submission deadline | July 27th, 2018 |
| Paper notification | August 18th, 2018 |
| Camera-ready version due | August 31st, 2018 |
| Conference in Brussels | October 31 - November 1, 2018 |
This year's conference will feature the following shared tasks:
In addition to the shared tasks, the conference will also feature scientific papers on topics related to MT. Topics of interest include, but are not limited to:
This shared task will examine translation between the following language pairs:
NB: The Kazakh task is postponed (probably until WMT19).

The text for all the test sets will be drawn from news articles. Participants may submit translations for any or all of the language directions. In addition to the common test sets, the conference organizers will provide optional training resources.
All participants who submit entries will have their translations evaluated. We will evaluate translation performance by human judgment. To facilitate the human evaluation, we will require participants in the shared tasks to manually judge some of the submitted translations. For each team, this will take about 8 hours per language pair submitted.
For 2018 we highlight the following innovations in the news task:
In this third edition of the task, we will evaluate systems for the translation of biomedical documents for the following language pairs:
Parallel corpora will be available for all language pairs, and monolingual corpora for some languages. Evaluation will be carried out both automatically and manually.
This shared task will examine automatic methods for correcting errors produced by machine translation (MT) systems. Automatic Post-editing (APE) aims to improve MT output in black-box scenarios, in which the MT system is used "as is" and cannot be modified. From the application point of view, APE components would make it possible to:
In this fourth edition of the task, the evaluation will focus on two subtasks:
Submissions will consist of regular full papers of 6-10 pages, plus additional pages for references, formatted following the EMNLP 2018 guidelines. Supplementary material may be added to research papers. In addition, shared task participants will be invited to submit short papers (suggested length: 4-6 pages, plus references) describing their systems or their evaluation metrics. Both submission and review processes will be handled electronically. Note that regular papers must be anonymized, while system descriptions need not be.
Research papers that have been or will be submitted to other meetings or publications must indicate this at submission time, and must be withdrawn from the other venues if accepted and published at WMT 2018. We will not accept for publication papers that overlap significantly in content or results with papers that have been or will be published elsewhere. It is acceptable to submit work that has been made available as a technical report (or similar, e.g. in arXiv) without citing it. This double submission policy only applies to research papers, so system papers can have significant overlap with other published work, if it is relevant to the system description.
We encourage individuals who are submitting research papers to evaluate their approaches using the training resources provided by this conference and past workshops, so that their experiments can be repeated by others using these publicly available corpora.
Subscribe to the announcement list for WMT by entering your e-mail address below. This list will be used to announce when the test sets are released, to indicate any corrections to the training sets, and to amend the deadlines as needed.

You can read past announcements on the Google Groups page for WMT. These also include an archive of announcements from earlier workshops.
WMT follows the ACL's anti-harassment policy.
For general questions, comments, etc., please send email.
For task-specific questions, please contact the relevant organisers.