Shared Task: Machine Translation for European Languages
June 23, in conjunction with ACL 2007 in Prague, Czech Republic
The shared task of the workshop is translation between European language pairs. Translation quality will be evaluated on a shared, unseen test set. We provide a parallel corpus as training data, a baseline system, and additional resources for download. Participants may augment this system or use their own system.
Goals
The goals of the shared translation task are:
- To investigate the applicability of current MT techniques when translating into languages other than English
- To examine special challenges in translating between European languages, such as word order differences between German and English
- To create publicly available corpora for machine translation and machine translation evaluation
- To update performance numbers for European languages in order to provide a basis of comparison in future research
- To offer newcomers a smooth start with hands-on experience in state-of-the-art statistical machine translation methods
We hope that both beginners and established research groups will participate in this task.
New Challenges This Year
- Domain Adaptation: We provide two data sets: Europarl and the much smaller News Commentary corpus (about 1 million words). Last year's competition included a test set from this corpus as a surprise out-of-domain set. Performance numbers were much lower on this corpus, and we pose it as a challenge to improve upon that performance. Test data will be explicitly labeled as Europarl or News Commentary.
- Evaluation Metrics: We will investigate novel evaluation metrics besides BLEU and manual judgments of Adequacy and Fluency (which will again be the official metrics). If you are working on evaluation metrics, please contact us, and we will include your metric in the final analysis.
Task Description
We provide training data for four European language pairs, and a common framework (including a language model and a baseline system). The task is to improve upon current methods. This can be done in many ways. For instance, participants could try to
- improve word alignment quality, phrase extraction, phrase scoring
- add new components to the open source software of the baseline system
- augment the system otherwise (e.g. by preprocessing, reranking, etc.)
- build an entirely new translation system
Participants will use their systems to translate a test set of unseen sentences in the source language. Translation quality is measured by a manual evaluation and various automatic evaluation metrics. Participants agree to contribute about eight hours of work to the manual evaluation.
You may participate in any or all of the following language pairs:
- French-English, and back
- Spanish-English, and back
- German-English, and back
- Czech-English, and back
To have a common framework that allows for comparable results, and also to lower the barrier to entry, we provide a common training set, language model, and baseline system.
We also strongly encourage you to participate if you use your own training corpus, your own sentence alignment, your own language model, or your own decoder.
Your submission report should highlight in which ways your own methods and data differ from the standard task. We may break down submitted results into different tracks, based on what resources were used.
Provided Data
The provided data is taken from a new release (version 3) of the Europarl corpus, which is freely available. Please click on the links below to download the sentence-aligned data, or go to the Europarl website for the source release. If you prepare training data from the Europarl corpus directly, please do not take data from Q4/2000 (October-December), since it is reserved for development and test data.
Additional training data is taken from the new News Commentary corpus. There are about 35-40 million words of training data per language from the Europarl corpus and 1 million words from the News Commentary corpus.
Europarl
- French-English
- Spanish-English
- German-English
- French monolingual
- Spanish monolingual
- German monolingual
- English monolingual
News Commentary
- French-English
- Spanish-English
- German-English
- Czech-English
Note that, in contrast to previous years, the released data is not tokenized and includes sentences of any length (including empty sentences). Also, this year all data is in Unicode (UTF-8) format. The following tools allow processing of the training data into tokenized format (a usage sketch follows the list):
- Tokenizer: tokenizer.perl
- Detokenizer: detokenizer.perl
- Lowercaser: lowercase.perl
- SGML Wrapper: wrap-xml.perl
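As a rough illustration, the commands below sketch how these scripts are typically chained to prepare the training data. The file names and the -l (language) option are assumptions based on common usage of these scripts, not part of the official task description; please check the scripts themselves for the exact interface.

    # Hypothetical preprocessing sketch: tokenize and lowercase the
    # French-English Europarl training data before training a system.
    # File names and the -l flag are assumptions, not official instructions.

    # Tokenize each side of the parallel corpus
    perl tokenizer.perl -l fr < europarl-v3.fr-en.fr > train.tok.fr
    perl tokenizer.perl -l en < europarl-v3.fr-en.en > train.tok.en

    # Lowercase for training (case can be restored later for submission)
    perl lowercase.perl < train.tok.fr > train.lc.fr
    perl lowercase.perl < train.tok.en > train.lc.en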
Development Data
To tune your system during development, we provide development sets of 2000 sentences (Europarl) and 1057 sentences (News Commentary). The data is provided in raw text format and in an SGML format that suits the NIST scoring tool.
Europarl dev2006
- English
- French
- Spanish
- German
This data is identical with the 2005 development test data and the 2006 development data.
News Commentary nc-dev2007
- English
- French
- Spanish
- German
- Czech
This is newly released data.
Development Test Data
To test your system during development, we provide development test sets of 2000 sentences (Europarl) and 1064 sentences (News Commentary -- the Czech-English set is slightly smaller). We encourage people submitting research publications to the workshop to report results on these sets. The data is provided in raw text format and in an SGML format that suits the NIST scoring tool.
Europarl devtest2006
- English
- French
- Spanish
- German
This data is identical with the 2005 test data and the 2006 development test data.
Europarl test2006
- English
- French
- Spanish
- German
This data is identical with the 2006 test data (in-domain part).
News Commentary nc-devtest2007
- English
- French
- Spanish
- German
- Czech
This data is identical with the 2006 test data (out-of-domain part).
Download
The test data is included in the development sets of subsequent years, so please check recent WMT workshop sites for it.
Evaluation
Evaluation will be done both automatically and by human judgment.
- Manual Scoring: We will collect subjective judgments about translation quality from human annotators. If you participate in the evaluation, we ask you to commit about 8 hours of time to do the manual evaluation. The evaluation will be done with an online tool.
- In contrast to previous years, we expect the translated submissions to be recased, detokenized, and wrapped in XML, just as in most other evaluation campaigns (NIST, TC-Star). The official BLEU scoring tool will be the NIST scoring tool (a usage sketch follows this list).
- We will also be applying other automatic evaluation metrics to submissions, and will report results on how well they correlate with human judgments. If you would like to have your evaluation metric tested in this way, please contact Chris Callison-Burch.
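As a rough sketch of preparing and checking a submission in this format: the commands below assume that wrap-xml.perl takes a target language, the source SGML file, and a system name, and that the NIST scoring tool is mteval-v11b.pl with -s/-r/-t options for source, reference, and test files. These are assumptions to verify against the tools' own documentation; the file and system names are purely illustrative.

    # Hypothetical submission/scoring sketch (file names, script arguments,
    # and the mteval version are assumptions, not official instructions).

    # Detokenize the (already recased) system output
    perl detokenizer.perl -l fr < output.tok.fr > output.detok.fr

    # Wrap the plain-text output in the SGML format expected by the NIST tool
    perl wrap-xml.perl fr dev2006-src.en.sgm my-system < output.detok.fr > output.fr.sgm

    # Score against the reference with the NIST BLEU scoring tool
    perl mteval-v11b.pl -s dev2006-src.en.sgm -r dev2006-ref.fr.sgm -t output.fr.sgm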
Dates
March 21: Test data released
April 6: Results submissions (by email to pkoehn@inf.ed.ac.uk)
April 13: Short paper submissions (4 pages)
Supported by the EuroMatrix project, P6-IST-5-034291-STP, funded by the European Commission under Framework Programme 6.