Embeddings of words, phrases, sentences, and entire documents have several uses; one of them is working towards interlingual representations of meaning.

Embeddings are the main subject of 30 publications.


Word embeddings have become a common feature in current research in natural language processing. Mikolov et al. (2013) suggest that a simple linear transformation from word embeddings in one language to word embeddings in another language may be used to translate words. Xing et al. (2015) point out inconsistencies in the representation of word embeddings and the objective function for translation transforms between word embeddings, which they address with normalization.
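The two approaches above can be sketched on toy data: a least-squares linear map between embedding spaces in the spirit of Mikolov et al. (2013), and a length-normalized, orthogonality-constrained variant in the spirit of Xing et al. (2015). The vectors and dimensions below are made up for illustration, and the closed-form orthogonal solution (via SVD, the Procrustes solution) is one standard way to impose the constraint, not necessarily the exact procedure of the paper.

```python
import numpy as np

# Toy bilingual embedding spaces: row i of X and row i of Z are the
# embeddings of a source word and its translation (synthetic data).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))                       # source embeddings
true_W = rng.standard_normal((8, 8))
Z = X @ true_W + 0.01 * rng.standard_normal((100, 8))   # target embeddings

# Mikolov-style: least-squares linear map W minimizing ||X W - Z||^2.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

# Xing-style fix: length-normalize the embeddings and constrain W to be
# orthogonal, which makes the training objective consistent with the
# cosine similarity used at retrieval time.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
Zn = Z / np.linalg.norm(Zn_, axis=1, keepdims=True) if False else Z / np.linalg.norm(Z, axis=1, keepdims=True)
U, _, Vt = np.linalg.svd(Xn.T @ Zn)
W_orth = U @ Vt                                         # W_orth @ W_orth.T = I

def translate(vec, W, targets):
    """Map a source vector and return the index of the nearest target vector."""
    mapped = vec @ W
    sims = targets @ mapped / (
        np.linalg.norm(targets, axis=1) * np.linalg.norm(mapped) + 1e-9
    )
    return int(np.argmax(sims))
```

With the low noise level used here, mapping a source vector and taking the cosine-nearest target recovers the paired translation.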
Zhang et al. (2014) learn phrase embeddings using recursive neural networks and auto-encoders, together with a mapping between input and output phrases, to add a score to the phrase translations and to filter the phrase table. Hu et al. (2015) use convolutional neural networks to encode the input and output phrase and pass them to a matching layer that computes their similarity. They include the full input sentence context and use a learning strategy called curriculum learning, which first learns from the easy training examples and then the harder ones.
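The core idea of rescoring or filtering phrase pairs by embedding similarity can be sketched as follows. Mean pooling of word vectors stands in for the recursive and convolutional encoders of the cited papers, and the vocabulary and vectors are invented for the demo.

```python
import numpy as np

# Sketch: score a phrase pair by the similarity of its two phrase encodings.
# Embeddings are synthetic; translation pairs are tied together so that
# matching phrases score high in this demo.
rng = np.random.default_rng(1)
dim = 16
vocab = ["the", "house", "das", "haus", "katze"]
emb = {w: rng.standard_normal(dim) for w in vocab}
emb["das"] = emb["the"] + 0.05 * rng.standard_normal(dim)
emb["haus"] = emb["house"] + 0.05 * rng.standard_normal(dim)

def encode(phrase):
    # Placeholder encoder: average the word vectors of the phrase.
    return np.mean([emb[w] for w in phrase.split()], axis=0)

def match_score(src, tgt):
    # Cosine similarity between the two phrase encodings.
    a, b = encode(src), encode(tgt)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A high score keeps the pair; a low score could be used to filter it
# out of the phrase table.
good = match_score("the house", "das haus")
bad = match_score("the house", "katze")
```

In a real system the score would be added as an extra feature of each phrase-table entry, or compared against a threshold to prune unlikely pairs.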



New Publications

  • Duong et al. (2017)
  • Pilehvar and Collier (2017)
  • Duong et al. (2016)
  • Artetxe et al. (2016)
  • Vulić and Korhonen (2016)
  • Passban et al. (2016)
  • Cao et al. (2016)
  • Sergienya and Schütze (2015)
  • Köhn (2015)
  • Coulmance et al. (2015)
  • Vulić and Moens (2015)
  • Shi et al. (2015)
  • Sachdeva and Sharma (2015)
  • Zhao et al. (2015)
  • Garcia et al. (2014)
  • Hermann and Blunsom (2014)
  • Ha et al. (2014)
  • Zou et al. (2013)
  • Faruqui and Dyer (2014)
  • Chandar A P et al. (2014)
  • Gao et al. (2014)
  • Cho et al. (2014)
  • Levinboim and Chiang (2015)
  • Alkhouli et al. (2014)
  • Huang et al. (2015)
  • Su et al. (2015)