EMNLP 2015 TENTH WORKSHOP
ON STATISTICAL MACHINE TRANSLATION

Shared Task: Machine Translation

17-18 September 2015
Lisbon, Portugal


The recurring translation task of the WMT workshops focuses on European language pairs. For the years 2015-2017 our core language pairs will be English to and from Czech and German. In addition, we will introduce a guest language each year that provides a particular translation challenge. For 2015 the guest language will be Finnish. We provide parallel corpora for all languages as training data, and additional resources for download.

GOALS

The goals of the shared translation task are:

We hope that both beginners and established research groups will participate in this task.

TASK DESCRIPTION

We provide training data for five language pairs, and a common framework (including a baseline system). The task is to improve upon current methods. This can be done in many ways; for instance, participants could try to:

Participants will use their systems to translate a test set of unseen sentences in the source language. Translation quality is measured by a manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation.

You may participate in any or all of the following language pairs:

For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set and baseline system.

We also strongly encourage you to participate even if you use your own training corpus, your own sentence alignment, your own language model, or your own decoder.

If you use additional training data or existing translation systems, you must flag that your system uses additional data. We will distinguish system submissions that used the provided training data (constrained) from submissions that used significant additional data resources. Note that basic linguistic tools such as taggers, parsers, or morphological analyzers are allowed in the constrained condition.

Your submission report should highlight the ways in which your methods and data differ from the standard task. We may break down the submitted results into different tracks, based on which resources were used. We are mostly interested in submissions that are constrained to the provided training data, so that the comparison focuses on the methods rather than on the data used. You may submit contrastive runs to demonstrate the benefit of additional training data.

TRAINING DATA

The provided data is mainly taken from version 7 of the Europarl corpus, which is freely available. Please click on the links below to download the sentence-aligned data, or go to the Europarl website for the source release. Note that this is the same data as last year, since Europarl is no longer translated into all 23 official European languages.

Additional training data is taken from the new News Commentary corpus. There are about 50 million words of training data per language from the Europarl corpus and 3 million words from the News Commentary corpus.

A new data resource from 2013 is the Common Crawl corpus, which was collected from web sources. Each parallel corpus comes with an annotation file that gives the source of each sentence pair.

We will be releasing development data for the Finnish-English task, and for the French-English task. The reason for the French-English release is that the text type has changed this year, to a more informal type gathered from user-generated comments.

You may also use the following monolingual corpora released by the LDC:

Note that the released data is not tokenized and includes sentences of any length (including empty sentences). All data is in Unicode (UTF-8) format. The following tools allow the processing of the training data into tokenized format:

These tools are available in the Moses git repository.
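As a minimal illustration of what the tokenization step does, the toy function below separates punctuation from words and normalizes whitespace. This is only a sketch, not the Moses tokenizer itself (`scripts/tokenizer/tokenizer.perl` in the Moses repository), which additionally handles abbreviations, language-specific rules, and Unicode edge cases.

```python
import re

def naive_tokenize(line: str) -> str:
    """Toy stand-in for the Moses tokenizer: split off common
    punctuation marks and collapse runs of whitespace."""
    # Put spaces around common punctuation characters.
    line = re.sub(r'([.,!?;:"()])', r' \1 ', line)
    # Collapse whitespace runs and trim the ends.
    return ' '.join(line.split())

print(naive_tokenize('Hello, world! (A test.)'))
# → Hello , world ! ( A test . )
```

For the shared task you should use the actual Moses scripts, so that your preprocessing matches the baseline system.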

DEVELOPMENT DATA

To evaluate your system during development, we suggest using the 2014 test set. The data is provided in raw text format and in the SGML format used by the NIST scoring tool. We also release the test sets from previous years.
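The NIST-style SGML format wraps each sentence in a `<seg>` element with a numeric id, grouped into `<doc>` elements inside a set-level element (`srcset`, `refset`, or `tstset`). A minimal sketch of producing such a file from plain-text output is below; the exact attribute set here is illustrative and may differ slightly from the official mteval DTD, so check the released SGML files for the authoritative layout.

```python
def wrap_sgml(sentences, setid, srclang, trglang, docid,
              settype='tstset', sysid='example'):
    """Wrap plain-text sentences in NIST-style SGML (a sketch;
    attribute details may differ from the official mteval format)."""
    lines = [f'<{settype} setid="{setid}" srclang="{srclang}" '
             f'trglang="{trglang}">',
             f'<doc docid="{docid}" sysid="{sysid}">']
    # One <seg> element per sentence, with 1-based ids.
    for i, sentence in enumerate(sentences, 1):
        lines.append(f'<seg id="{i}">{sentence}</seg>')
    lines += ['</doc>', f'</{settype}>']
    return '\n'.join(lines)

print(wrap_sgml(['Hello , world !'], 'newstest2014',
                'en', 'de', 'doc1'))
```

In practice you would translate the raw-text source, then wrap your output segments this way before passing the file to the scoring tool.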

News news-test2008
  • English
  • French
  • Spanish
  • German
  • Czech
  • Hungarian
Cleaned version of the 2008 test set.
2051 sentences.
News news-test2009
  • English
  • French
  • Spanish
  • German
  • Czech
  • Hungarian
  • Italian
2525+502 sentences.
News news-test2010
  • English
  • French
  • Spanish
  • German
  • Czech
2489 sentences.
News news-test2011
  • English
  • French
  • Spanish
  • German
  • Czech
3003 sentences.
News news-test2012
  • English
  • French
  • Spanish
  • German
  • Czech
  • Russian
3003 sentences.
News news-test2013
  • English
  • French
  • Spanish
  • German
  • Czech
  • Russian
3000 sentences.
News news-test2014
  • English
  • Czech
  • French
  • German
  • Hindi
  • Russian
3003 sentences.
News news-dev2015
  • English
  • Finnish
1500 sentences.
News discussions
newsdiscuss-dev2015
  • English
  • French
1500 sentences.

The news-test2011 set has three additional Czech translations that you may want to use. You can download them from Charles University.

DOWNLOAD