Attention Model

The currently dominant model in neural machine translation is the sequence-to-sequence model with attention.

The attention model is the main subject of 31 publications, 8 of which are discussed here.

Publications

The attention model has its roots in a sequence-to-sequence model.

Cho et al. (2014) use recurrent neural networks for the approach. Sutskever et al. (2014) use an LSTM (long short-term memory) network and feed the source sentence to the encoder in reverse order.
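
To make the basic setup concrete, the following is a minimal sketch (our own illustrative PyTorch code, not taken from any of the cited systems) of such an encoder-decoder: an LSTM encoder reads the reversed source sentence and its final state initializes an LSTM decoder.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder: the encoder's final state seeds the decoder."""
    def __init__(self, src_vocab, tgt_vocab, emb=256, hidden=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Reverse the source sentence before encoding (as in Sutskever et al., 2014).
        src = torch.flip(src, dims=[1])
        _, state = self.encoder(self.src_emb(src))
        # Without attention, the decoder is conditioned only on the final encoder state.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)                     # logits over the target vocabulary

model = Seq2Seq(src_vocab=8000, tgt_vocab=8000)
logits = model(torch.randint(0, 8000, (2, 7)),       # batch of 2 source sentences
               torch.randint(0, 8000, (2, 9)))       # shifted target inputs
print(logits.shape)                                  # torch.Size([2, 9, 8000])
```
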
The seminal work by Bahdanau et al. (2015) adds an alignment model (the so-called "attention mechanism") to link generated output words to source words; the attention is also conditioned on the hidden state that produced the preceding target word. Source words are represented by the two hidden states of recurrent neural networks that process the source sentence left-to-right and right-to-left. Luong et al. (2015) propose variants of this attention mechanism (which they call the "global" attention model) as well as a more constrained "local" attention model, which restricts attention to a window around a predicted source position, weighted by a Gaussian distribution.
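
As a rough illustration of the attention mechanism itself, the sketch below (simplified, with our own layer names and dimensions, not the authors' code) scores each bidirectional encoder state against the previous decoder state with an additive model in the spirit of Bahdanau et al. (2015) and returns the weighted source context.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Additive ("Bahdanau-style") attention: score(s, h_j) = v^T tanh(W s + U h_j)."""
    def __init__(self, dec_dim, enc_dim, att_dim=256):
        super().__init__()
        self.W = nn.Linear(dec_dim, att_dim, bias=False)   # projects the decoder state
        self.U = nn.Linear(enc_dim, att_dim, bias=False)   # projects the encoder states
        self.v = nn.Linear(att_dim, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, dec_dim); enc_states: (batch, src_len, enc_dim)
        scores = self.v(torch.tanh(self.W(dec_state).unsqueeze(1) + self.U(enc_states)))
        weights = torch.softmax(scores.squeeze(-1), dim=-1)      # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_states)    # weighted sum of source states
        return context.squeeze(1), weights

attn = AdditiveAttention(dec_dim=512, enc_dim=1024)   # enc_dim = 2 x 512 (bidirectional)
ctx, w = attn(torch.randn(2, 512), torch.randn(2, 7, 1024))
print(ctx.shape, w.shape)                             # torch.Size([2, 1024]) torch.Size([2, 7])
```
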
To explicitly model the trade-off between source context (the input words) and target context (the already produced target words), Tu et al. (2016) introduce an interpolation weight (called a "context gate") that scales the impact of (a) the source context state and (b) the previous hidden state and the last output word when predicting the next hidden state in the decoder.
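
A simplified reading of such a context gate is sketched below (illustrative code, not the authors' implementation): a sigmoid gate computed from the source context, the previous hidden state, and the last word embedding rescales the source-side and target-side contributions before the next decoder state is formed.

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Sigmoid gate trading off source context against target-side history
    (a simplified reading of the "context gate" of Tu et al., 2016)."""
    def __init__(self, src_dim, hid_dim, emb_dim):
        super().__init__()
        self.gate = nn.Linear(src_dim + hid_dim + emb_dim, hid_dim)
        self.proj_src = nn.Linear(src_dim, hid_dim)
        self.proj_tgt = nn.Linear(hid_dim + emb_dim, hid_dim)

    def forward(self, src_context, prev_state, prev_word_emb):
        z = torch.sigmoid(self.gate(
            torch.cat([src_context, prev_state, prev_word_emb], dim=-1)))
        # z scales the source contribution, (1 - z) the target-side contribution.
        return torch.tanh(
            z * self.proj_src(src_context)
            + (1 - z) * self.proj_tgt(torch.cat([prev_state, prev_word_emb], dim=-1)))

gate = ContextGate(src_dim=1024, hid_dim=512, emb_dim=256)
new_state = gate(torch.randn(2, 1024), torch.randn(2, 512), torch.randn(2, 256))
print(new_state.shape)   # torch.Size([2, 512])
```
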

Deep Models

There are various ways to add layers to the encoder and the decoder of the neural translation model. Wu et al. (2016) first use the traditional bidirectional recurrent neural networks to compute input word representations and then refine them with several stacked recurrent layers. Zhou et al. (2016) alternate between forward and backward recurrent layers. Barone et al. (2017) show good results with 4 stacked layers and 2 deep transition layers each for encoder and decoder, as well as alternating networks for the encoder. A large number of variations (including the use of skip connections, the choice of LSTM vs. GRU, and the number of layers of any type) still need to be explored empirically for various data conditions.
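
As an illustration of such layering choices, the sketch below (hypothetical code of our own, not taken from any of the cited systems) places stacked unidirectional recurrent layers with skip connections on top of a bidirectional first layer, roughly in the spirit of Wu et al. (2016).

```python
import torch
import torch.nn as nn

class DeepEncoder(nn.Module):
    """Bidirectional first layer, then stacked unidirectional layers with skip connections."""
    def __init__(self, emb_dim=256, hid_dim=512, stacked_layers=3):
        super().__init__()
        self.bottom = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.reduce = nn.Linear(2 * hid_dim, hid_dim)   # fold the two directions together
        self.stack = nn.ModuleList(
            [nn.LSTM(hid_dim, hid_dim, batch_first=True) for _ in range(stacked_layers)])

    def forward(self, emb):
        # emb: (batch, src_len, emb_dim) word embeddings
        h, _ = self.bottom(emb)
        h = self.reduce(h)
        for layer in self.stack:
            out, _ = layer(h)
            h = h + out          # residual (skip) connection between stacked layers
        return h

enc = DeepEncoder()
states = enc(torch.randn(2, 7, 256))
print(states.shape)   # torch.Size([2, 7, 512])
```
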

Benchmarks

Discussion

Related Topics

New Publications

  • Indurthi et al. (2019)
  • Mino et al. (2017)
  • Ibraheem et al. (2017)
  • Rikters and Fishel (2017)
  • Li et al. (2018)
  • Malaviya et al. (2018)
  • Shankar et al. (2018)
  • Lin et al. (2018)
  • Yang et al. (2017)

Attention Model

  • Zhang et al. (2017)
  • Yu et al. (2016)
  • Huang et al. (2016)
  • Mi et al. (2016)
  • Calixto et al. (2017)
  • Press and Wolf (2017)
  • Yang et al. (2017)

Advanced Modelling

  • Tu et al. (2017)
  • Gehring et al. (2017)
  • Oda et al. (2017)
  • Wang et al. (2017)
  • Wang et al. (2016)
  • Sountsov and Sarawagi (2016)
  • Shu and Miura (2016)
