Analysis and Visualization

Neural machine translation models operate on high-dimensional representations at every stage of processing. Their abilities and failures are hard to determine by inspecting their millions of parameters. To better understand the behavior of neural machine translation models, researchers have compared their performance to phrase-based systems, explored the linguistic abilities of the models, and developed methods to visualize their processing.

Analysis and Visualization is the main subject of 29 publications.

Publications

Detailed quality assessments:

Bentivogli et al. (2016) considered different linguistic categories when comparing the performance of neural vs. statistical machine translation systems for English-German. Toral and Sánchez-Cartagena (2017) compared broad aspects such as fluency and reordering for nine language directions. Sennrich (2017) developed an automatic method to detect specific morphosyntactic errors. First, a test set is created by taking sentence pairs and modifying each target sentence to exhibit a specific type of error, such as a wrong determiner gender, a wrong verb particle, or a wrong transliteration. Then a neural translation model is evaluated by how often it scores the correct translation higher than the faulty variants. The paper uses this method to compare byte-pair encoding against character-based models on rare and unknown words.
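
A minimal sketch of this contrastive-evaluation protocol in Python; model_score is a hypothetical stand-in for a real model's log-probability scorer p(target | source), and the toy scores exist only to make the example runnable:

    # Contrastive evaluation sketch: the model passes an item only if the
    # correct translation outscores every synthetically corrupted variant.
    def contrastive_accuracy(model_score, test_set):
        """test_set: list of (source, reference, list_of_faulty_targets)."""
        n_correct = 0
        for source, reference, variants in test_set:
            ref_score = model_score(source, reference)
            if all(ref_score > model_score(source, v) for v in variants):
                n_correct += 1
        return n_correct / len(test_set)

    # Toy item: a German target with deliberately wrong determiner genders.
    example = [("the house", "das Haus", ["der Haus", "die Haus"])]
    toy_scores = {"das Haus": -1.0, "der Haus": -5.0, "die Haus": -4.5}
    print(contrastive_accuracy(lambda src, tgt: toy_scores[tgt], example))  # 1.0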

Role of individual neurons:

Shi et al. (2016) correlated the activation values of specific nodes in the hidden state of a simple LSTM encoder-decoder translation model (without attention) with the length of the output, and discovered nodes that count the number of words to ensure proper output length.
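
A rough sketch of this kind of correlation analysis, assuming per-sentence hidden states have already been extracted from a trained model. The arrays below are synthetic, with one planted "length neuron" so the search has something to find:

    import numpy as np

    rng = np.random.default_rng(0)
    n_sentences, hidden_size = 500, 64

    # Simulated data: output length of each sentence and the final hidden
    # state of a (hypothetical) decoder; unit 17 is planted to track length.
    lengths = rng.integers(5, 50, size=n_sentences)
    states = rng.normal(size=(n_sentences, hidden_size))
    states[:, 17] = 0.1 * lengths + rng.normal(scale=0.2, size=n_sentences)

    # Pearson correlation of every hidden unit with output length.
    centered = states - states.mean(axis=0)
    len_centered = lengths - lengths.mean()
    corr = (centered * len_centered[:, None]).sum(axis=0) / (
        np.sqrt((centered ** 2).sum(axis=0)) * np.sqrt((len_centered ** 2).sum())
    )

    # Units with |r| near 1 behave like word counters.
    for unit in np.argsort(-np.abs(corr))[:3]:
        print(f"unit {unit}: r = {corr[unit]:+.3f}")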

Predicting properties from internal representations: To probe intermediate representations, such as encoder and decoder states, a common strategy is to use them as input to a classifier that predicts specific, mostly linguistic, properties.

Belinkov et al. (2017) predicted the part of speech and morphological features of words from the encoder and decoder states associated with them, showing better performance for character-based models, but not much difference across deeper layers.
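
A compact sketch of the probing recipe: freeze the translation model, collect one state per word, and train a simple classifier on top. The encoder states and part-of-speech labels below are simulated; in a real experiment they would come from a trained NMT model and an annotated corpus:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_words, hidden_size, n_tags = 2000, 128, 12

    # Simulated "encoder states" that partially encode a POS tag:
    # each tag contributes a characteristic direction plus noise.
    pos_tags = rng.integers(0, n_tags, size=n_words)
    tag_directions = rng.normal(size=(n_tags, hidden_size))
    states = tag_directions[pos_tags] + rng.normal(scale=1.5,
                                                   size=(n_words, hidden_size))

    # Train the probe on one split, test on the other; held-out accuracy
    # measures how much POS information the states expose linearly.
    split = int(0.8 * n_words)
    probe = LogisticRegression(max_iter=1000)
    probe.fit(states[:split], pos_tags[:split])
    print("probe accuracy:", probe.score(states[split:], pos_tags[split:]))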

Related Topics

New Publications

  • Strobelt et al. (2018)
  • Ott et al. (2018)
  • Malaviya et al. (2017)
  • Ghader and Monz (2017)
  • Tran et al. (2018)
  • Cífka and Bojar (2018)
  • Khayrallah and Koehn (2018)
  • Belinkov et al. (2017)
  • Poliak et al. (2018)
  • Belinkov and Bisk (2018)
  • Ma et al. (2018)
  • Marvin and Koehn (2018)
  • Karpathy et al. (2015)

Analytical Evaluation

  • Burlot and Yvon (2017)
  • Ding et al. (2017)
  • Shi et al. (2016)
  • Sennrich (2016)
  • Hirschmann et al. (2016)
  • Schnober et al. (2016)
  • Guta et al. (2015)
  • Koehn and Knowles (2017)
  • Eger et al. (2016)
  • Niehues et al. (2017)
  • Lee et al. (2017)
