Sequence to sequence mixture model for diverse machine translation

Xuanli He, Gholamreza Haffari, Mohammad Norouzi

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

8 Citations (Scopus)


Sequence to sequence (SEQ2SEQ) models often lack diversity in their generated translations. This can be attributed to the limitation of SEQ2SEQ models in capturing lexical and syntactic variations in a parallel corpus resulting from different styles, genres, topics, or ambiguity of the translation process. In this paper, we develop a novel sequence to sequence mixture (S2SMIX) model that improves both translation diversity and quality by adopting a committee of specialized translation models rather than a single translation model. Each mixture component selects its own training dataset via optimization of the marginal log-likelihood, which leads to a soft clustering of the parallel corpus. Experiments on four language pairs demonstrate the superiority of our mixture model compared to a SEQ2SEQ baseline with standard or diversity-boosted beam search. Our mixture model uses negligible additional parameters and incurs no extra computation cost during decoding.

Original language: English
Title of host publication: CoNLL 2018 - The 22nd Conference on Computational Natural Language Learning - Proceedings of the Conference
Editors: Miikka Silfverberg
Place of publication: Stroudsburg PA USA
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 10
ISBN (electronic): 9781948087728
Publication status: Published - 2018
Event: Conference on Natural Language Learning 2018 - Brussels, Belgium
Duration: 31 Oct 2018 - 1 Nov 2018
Conference number: 22nd

Publication series

Name: CoNLL 2018 - 22nd Conference on Computational Natural Language Learning, Proceedings


Conference: Conference on Natural Language Learning 2018
Abbreviated title: CoNLL 2018
