Incorporating discrete translation lexicons into neural machine translation

Philip Arthur, Graham Neubig, Satoshi Nakamura

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.
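The combination step described in the abstract can be sketched briefly. The snippet below is an illustrative reconstruction based only on the abstract, not the paper's actual implementation; the function and variable names (lexicon_probability, combine_bias, combine_interpolate, attention, lex_probs), the epsilon, and the interpolation weight are hypothetical placeholders.

import numpy as np

def lexicon_probability(attention, lex_probs):
    # attention: (src_len,) attention weights over source words at this step
    # lex_probs: (src_len, vocab) lexical translation probabilities p(e | f_j)
    # Weight each source word's lexicon distribution by its attention weight,
    # giving a (vocab,) distribution over candidate target words.
    return attention @ lex_probs

def combine_bias(nmt_logits, p_lex, eps=1e-6):
    # "Bias" combination: add the log lexicon probability to the NMT scores
    # before the softmax; eps guards against log(0).
    logits = nmt_logits + np.log(p_lex + eps)
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def combine_interpolate(p_nmt, p_lex, lam=0.5):
    # Linear interpolation of the lexicon and standard NMT distributions.
    return lam * p_lex + (1.0 - lam) * p_nmt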
Original language: English
Title of host publication: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Editors: Jian Su, Kevin Duh, Xavier Carreras
Place of Publication: Austin, Texas
Publisher: Association for Computational Linguistics (ACL)
Pages: 1557-1567
Number of pages: 11
DOIs
Publication status: Published - Nov 2016
Externally published: Yes
Event: Empirical Methods in Natural Language Processing 2016 - Austin, United States of America
Duration: 1 Nov 2016 - 5 Nov 2016
https://www.aclweb.org/mirror/emnlp2016/
https://www.aclweb.org/anthology/volumes/D16-1/ (Proceedings)

Conference

Conference: Empirical Methods in Natural Language Processing 2016
Abbreviated title: EMNLP 2016
Country/Territory: United States of America
City: Austin
Period: 1/11/16 - 5/11/16