Shared Task: Automatic Post-Editing

OVERVIEW

The second round of the APE shared task follows the first pilot round organised in 2015. The aim is to examine automatic methods for correcting errors produced by an unknown machine translation (MT) system. This has to be done by exploiting knowledge acquired from human post-edits, which are provided as training material.

Goals

The aim of this task is to improve MT output in black-box scenarios, in which the MT system is used "as is" and cannot be modified. From the application point of view, APE components would make it possible to:

cope with systematic errors of an MT system whose decoding process is not accessible;

provide professional translators with improved MT output quality, reducing (human) post-editing effort;

adapt the output of a general-purpose MT system to the lexicon/style requested in a specific application domain.

Task Description

This year the task focuses on the Information Technology (IT) domain, in which English source sentences have been translated into German by an unknown MT system and then manually post-edited by professional translators.

At the training stage, the collected human post-edits have to be used to learn correction rules for the APE systems. At the test stage, they will be used for system evaluation with automatic metrics (TER and BLEU).

Data

Training, development and test data (the same data used for the Sentence-level Quality Estimation task) consist of English-German triplets (source, target and post-edit) belonging to the IT domain and already tokenized.

The training and development sets contain 12,000 and 1,000 triplets respectively, while the test set contains 2,000 instances. All data is provided by the EU project QT21 (http://www.qt21.eu/).

NOTE: Any use of additional data for training your system is allowed (e.g. parallel corpora, post-edited corpora).
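Since the released data are line-aligned plain-text files, a minimal loading script could look like the sketch below (Python). Note that the file names train.src, train.mt and train.pe are assumptions used for illustration only and may differ from the names in the released archives.

    # Minimal sketch for reading the APE triplets, assuming three line-aligned
    # plain-text files (the file names are illustrative, not the official ones).
    def load_triplets(src_path, mt_path, pe_path):
        """Return a list of (source, mt_output, post_edit) triplets."""
        with open(src_path, encoding="utf-8") as src, \
             open(mt_path, encoding="utf-8") as mt, \
             open(pe_path, encoding="utf-8") as pe:
            return [(s.strip(), m.strip(), p.strip())
                    for s, m, p in zip(src, mt, pe)]

    if __name__ == "__main__":
        train = load_triplets("train.src", "train.mt", "train.pe")
        print(len(train), "training triplets")  # expected: 12,000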

Evaluation

Systems' performance will be evaluated with respect to their capability to reduce the distance that separates an automatic translation from its human-revised version.

Such distance will be measured in terms of TER, which will be computed between automatic and human post-edits in case-sensitive mode.

BLEU will also be taken into consideration as a secondary evaluation metric. To gain further insights into final output quality, a subset of the outputs of the submitted systems will also be manually evaluated.

The submitted runs will be ranked based on the average HTER calculated on the test set using the tercom software.

The HTER calculated between the raw MT output and the human post-edits in the test set will be used as the baseline (i.e. the baseline is a system that leaves all test instances unmodified).
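The official scores are computed with the tercom package, but the idea behind the metric can be illustrated with the simplified Python sketch below: the number of word-level edits needed to turn the system output into the human post-edit, divided by the post-edit length. This version counts only insertions, deletions and substitutions and ignores the block-shift operation of full TER, so it is an approximation rather than the official scorer.

    # Simplified TER-like score: word-level edit distance (Levenshtein)
    # divided by the reference length. The official metric (tercom) also
    # allows block shifts, which this sketch does not model.
    def simple_ter(hypothesis: str, post_edit: str) -> float:
        hyp, ref = hypothesis.split(), post_edit.split()
        d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
        for i in range(len(hyp) + 1):
            d[i][0] = i
        for j in range(len(ref) + 1):
            d[0][j] = j
        for i in range(1, len(hyp) + 1):
            for j in range(1, len(ref) + 1):
                cost = 0 if hyp[i - 1] == ref[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution/match
        return d[len(hyp)][len(ref)] / max(len(ref), 1)

Under this view, the baseline score is simply the distance between the raw MT output and the human post-edit, and a useful APE system is one whose output is closer to the post-edit than the MT output it started from.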

Download Links

Training and development data

Test data (gold standard references are released in test_pe.zip)

Evaluation script

Results (systems are ranked according to TER score)

Systems                                      TER     BLEU
AMU_ensemble8-mt+src_PRIMARY                 21.52   67.65
AMU_ensemble4-mt_CONTRASTIVE                 23.06   66.09
FBK_factored_contrastive                     23.92   64.75
FBK_factored-qe_primary                      23.94   64.75
USAAR_OSM_PRIMARY_BOTH                       24.14   64.10
USAAR_CPBOSM_CONTRASTIVE_BOTH                24.14   64.00
CUNI_edit_gen_1_PRIMARY                      24.31   63.32
Baseline_2 (Statistical phrase-based APE)    24.64   63.47
Official Baseline (MT)                       24.76   62.11
DCU_R34_CONTRASTIVE                          26.79   58.60
JUSAAR_SC_PRIMARY_BOTH                       26.92   59.44
JUSAAR_SC_D_CONTRASTIVE_BOTH                 26.97   59.18
DCU_R24_PRIMARY                              28.97   55.19

DIFFERENCES FROM THE FIRST PILOT ROUND

Compared to the pilot round, the main differences are:

the domain of the data (IT instead of news);

the language direction (English-German instead of English-Spanish);

the post-editors (professional translators instead of crowdsourced workers).

Submission Format

The output of your system should contain automatic post-edits of the target sentences in the test set, formatted in the following way:

<METHOD NAME>   <SEGMENT NUMBER>   <APE SEGMENT>

Where METHOD NAME is the identifier of your APE method, SEGMENT NUMBER is the line number of the segment in the test set, and APE SEGMENT is the automatically post-edited target segment. Each field should be delimited by a single tab character.
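For illustration, the first lines of a submission file could look as follows (the method name and the German segments are invented examples, not real data):

UniXY_pt_1_pruned	1	Klicken Sie im Menü Datei auf Speichern .
UniXY_pt_1_pruned	2	Wählen Sie die Option Drucken aus .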

Submission Requirements

Each participating team can submit at most 3 systems, but they have to explicitly indicate which of them represents their primary submission. If none of the runs is marked as primary, the latest submission received will be used as the primary submission.

Submissions should be sent via email to wmt-ape-submission@fbk.eu. Please use the following pattern to name your files:

INSTITUTION-NAME_METHOD-NAME_SUBTYPE, where:

INSTITUTION-NAME is an acronym/short name for your institution, e.g. "UniXY"

METHOD-NAME is an identifier for your method, e.g. "pt_1_pruned"

SUBTYPE indicates whether the submission is primary or contrastive with the two alternative values: PRIMARY, CONTRASTIVE.

You are also invited to submit a short paper (4 to 6 pages) to WMT describing your APE method(s). Submitting a paper is not mandatory; if you choose not to, we ask you to provide an appropriate reference describing your method(s) that we can cite in the WMT overview paper.

Important dates

Release of training data      February 19, 2016
Test set distributed          April 18, 2016
Submission deadline           May 2, 2016 (extended from April 24 and April 26, 2016)
Paper submission deadline     May 15, 2016 (extended from May 8, 2016)
Manual evaluation             May 2016
Notification of acceptance    June 5, 2016
Camera-ready deadline         June 22, 2016

Organisers

Rajen Chatterjee (Fondazione Bruno Kessler)
Matteo Negri (Fondazione Bruno Kessler)
Raphael Rubino (Saarland University)
Marco Turchi (Fondazione Bruno Kessler)
Marcos Zampieri (Saarland University)

Contact

For any information or question about the task, please send an email to wmt-ape@fbk.eu.
To stay up to date on this year's edition of the APE task, you can also join the wmt-ape group.

Supported by the European Commission under the QT21
project (grant number 645452)