ACL 2008 THIRD WORKSHOP
ON STATISTICAL MACHINE TRANSLATION

Shared Task: Machine Translation for European Languages

June 19, in conjunction with ACL 2008 in Columbus, Ohio


The translation shared task of this workshop focuses on European language pairs. Translation quality will be evaluated on a shared, unseen test set. We provide a parallel corpus as training data, a baseline system, and additional resources for download. Participants may augment this system or use their own system.

Goals

The shared translation task aims to assess the state of the art in machine translation for European language pairs within a common framework, using a shared, unseen test set. We hope that both beginners and established research groups will participate in this task.

New Challenges This Year

New this year are the Hungarian-English language pair and an additional test set drawn from news text (news-test2008) covering all languages of the task. In addition, the released training data is distributed untokenized and in Unicode (UTF-8) this year (see the notes under Training Data).

Task Description

We provide training data for six European language pairs, and a common framework (including a language model and a baseline system). The task is to improve upon current methods; this can be done in many ways. Participants will use their systems to translate a test set of unseen sentences in the source language. Translation quality is measured by a manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation.

You may participate in any or all of the following language pairs:
  • French-English
  • Spanish-English
  • German-English
  • Czech-English
  • Hungarian-English
  • Spanish-German

For all language pairs we will test translation in both directions. To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set, language model, and baseline system.

We also strongly encourage your participation if you use your own training corpus, your own sentence alignment, your own language model, or your own decoder.

Your submission report should highlight the ways in which your methods and data differ from the standard task. We may break down submitted results into different tracks, based on which resources were used. We are mostly interested in submissions that are constrained to the provided training data, so that the comparison focuses on the methods rather than on the data used. You may submit contrastive runs to demonstrate the benefit of additional training data.

Training Data

The provided data is taken from a new release (version 3) of the Europarl corpus, which is freely available. Please click on the links below to download the sentence-aligned data, or go to the Europarl website for the source release. If you prepare training data from the Europarl corpus directly, please do not take data from Q4/2000 (October-December), since it is reserved for development and test data.
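
If you do prepare your own training set from the source release, a minimal sketch of this filtering step is shown below. It is only an illustration: it assumes file names of the form ep-YY-MM-DD.txt, as used in the Europarl source release, and a hypothetical local directory layout (europarl/txt/en), both of which you should adapt to your copy of the corpus.

import re
from pathlib import Path

# Pattern for date-stamped Europarl source files, e.g. ep-00-10-02.txt
DATE_RE = re.compile(r"ep-(\d\d)-(\d\d)-(\d\d)")

def is_reserved(filename):
    """Return True for files from Q4/2000 (October-December 2000),
    which are reserved for development and test data."""
    m = DATE_RE.search(filename)
    if not m:
        return False
    year, month = m.group(1), int(m.group(2))
    return year == "00" and 10 <= month <= 12

if __name__ == "__main__":
    # hypothetical layout: one directory per language with date-stamped files
    for path in sorted(Path("europarl/txt/en").glob("ep-*.txt")):
        if not is_reserved(path.name):
            print(path)  # keep this file for training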

Additional training data is taken from the new News Commentary corpus. There are about 35-40 million words of training data per language from the Europarl corpus and 1 million words from the News Commentary corpus.

Europarl
  • French-English
  • Spanish-English
  • German-English
  • Spanish-German
  • French monolingual
  • Spanish monolingual
  • German monolingual
  • English monolingual
News Commentary
  • French-English
  • Spanish-English
  • German-English
  • Czech-English
Hunglish
  • Hungarian-English
The corpus was created as part of the Hunglish project, jointly by the Media Research and Education Center at the Budapest University of Technology and Economics and the Corpus Linguistics Department at the Institute of Linguistics of the Hungarian Academy of Sciences.
CzEng
  • Czech-English
You will need to download the current version of the CzEng corpus (version 0.7) from the CzEng web site. Please re-download the data if you accessed it before February 5, 2008.

Note that, unlike in previous years, the released data is not tokenized and includes sentences of any length (including empty sentences). Also, this year all data is in Unicode (UTF-8) format. Tools for processing the training data into tokenized format are provided for download.
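
If you would rather prepare the data yourself, the sketch below is a minimal stand-in for such preprocessing, not the provided tools: assuming UTF-8 plain text with one sentence per line on standard input, it lowercases, splits off common punctuation, and drops empty lines.

import re
import sys

def tokenize(line):
    """Crude tokenization: lowercase, separate common punctuation, collapse whitespace.
    A stand-in for the tokenizer distributed with the task, not a replacement."""
    line = line.strip().lower()
    # put spaces around punctuation marks so they become separate tokens
    line = re.sub(r'([.,;:!?"()\[\]])', r' \1 ', line)
    # collapse runs of whitespace into single spaces
    return " ".join(line.split())

if __name__ == "__main__":
    # read UTF-8 text from stdin, write tokenized text to stdout,
    # skipping the empty sentences present in the untokenized release
    for raw in sys.stdin:
        tok = tokenize(raw)
        if tok:
            print(tok)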

Development Data

To tune your system during development, we provide development sets of 2000 sentences (Europarl) and 1057 sentences (News Commentary). The data is provided in raw text format and in an SGML format that suits the NIST scoring tool.
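
For orientation, the sketch below shows one way to wrap plain-text system output (one segment per line) into an SGML test-set skeleton of the kind the NIST scoring tool reads; the attribute values (setid, sysid, language codes) and the single-document layout are placeholder assumptions, so check the structure against the SGML files we distribute.

import sys
from xml.sax.saxutils import escape

def wrap_sgml(segments, setid="dev2006", srclang="de", trglang="en", sysid="my-system"):
    """Wrap translated segments in a tstset/doc/seg SGML skeleton.
    Attribute names follow the usual NIST mteval convention; verify them
    against the provided SGML development files."""
    out = ['<tstset setid="%s" srclang="%s" trglang="%s">' % (setid, srclang, trglang)]
    out.append('<doc docid="doc1" sysid="%s">' % sysid)
    for i, seg in enumerate(segments, start=1):
        out.append('<seg id="%d"> %s </seg>' % (i, escape(seg.strip())))
    out.append('</doc>')
    out.append('</tstset>')
    return "\n".join(out)

if __name__ == "__main__":
    # read one translated segment per line from stdin, write SGML to stdout
    sys.stdout.write(wrap_sgml(sys.stdin.readlines()) + "\n")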

Europarl dev2006
  • English
  • French
  • Spanish
  • German
This data is identical with the 2005 development test data and the 2006 development data.
News Commentary nc-dev2007
  • English
  • French
  • Spanish
  • German
  • Czech
This is identical with the 2007 development data.
Hunglish hung-dev2008
  • Hungarian
  • English
Note that this dev set is extracted from the official Hunglish corpus. The training corpus we provide (above) does not overlap with the dev set.

Overview of Development and Test Data

To test your system during development, we provide development test sets of 2000 sentences (Europarl) and 1064 sentences (News Commentary -- the Czech-English set is slightly smaller). We encourage people submitting research publications to the workshop to report results on these sets. The data is provided in raw text format and in an SGML format that suits the NIST scoring tool.

Europarl Development Data

Europarl devtest2006
  • English
  • French
  • Spanish
  • German
This data is identical with the 2005 test data and the 2006 development test data.
Europarl test2006
  • English
  • French
  • Spanish
  • German
This data is identical with the 2006 test data (in-domain part).
Europarl test2007
  • English
  • French
  • Spanish
  • German
This data is identical with the 2007 test data (in-domain part).

Other Development Data

News Commentary nc-devtest2007
  • English
  • French
  • Spanish
  • German
  • Czech
This data is identical with the 2006 test data (out-of-domain part).
News Commentary nc-test2007
  • English
  • French
  • Spanish
  • German
  • Czech
This data is identical with the 2007 test data (out-of-domain part).
Hunglish hung-devtest2008
  • Hungarian
  • English
Note that this devtest set is extracted from the official Hunglish corpus. The training corpus we provide (above) does not overlap with the devtest set.

Test Data

Europarl test2008
  • English
  • French
  • Spanish
  • German
News Commentary nc-test2008
  • English
  • Czech
News news-test2008
  • English
  • French
  • Spanish
  • German
  • Czech
  • Hungarian

Download

Evaluation

Evaluation will be done both automatically and by human judgement.
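
For quick sanity checks during development, and not as a replacement for the official scoring scripts (which apply their own normalization and tokenization), here is a self-contained corpus-level BLEU sketch with one reference per segment:

import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU with a single reference per hypothesis.
    Uses naive whitespace tokenization, so scores will differ from the
    official evaluation."""
    clipped = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # total hypothesis n-grams, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_counts, r_counts = ngrams(h, n), ngrams(r, n)
            clipped[n - 1] += sum(min(c, r_counts[g]) for g, c in h_counts.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    brevity = min(1.0, math.exp(1.0 - ref_len / hyp_len))
    return brevity * math.exp(log_prec)

# toy example with a single sentence pair
print(round(corpus_bleu(["the quick brown fox jumps over the dog"],
                        ["the quick brown fox jumps over the lazy dog"]), 4))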

Dates

March 14: Test data released (available on this web site)
March 21: Results submissions (by email to pkoehn@inf.ed.ac.uk)
April 4: Short paper submissions (4 pages)

supported by the EuroMatrix project, P6-IST-5-034291-STP
funded by the European Commission under Framework Programme 6