Generalised unsupervised domain adaptation of neural machine translation with cross-lingual data selection

Trang Vu, Xuanli He, Dinh Phung, Reza Haffari

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

This paper considers the unsupervised domain adaptation problem for neural machine translation (NMT), where we assume access only to monolingual text in either the source or target language in the new domain. We propose a cross-lingual data selection method to extract in-domain sentences on the missing language side from a large generic monolingual corpus. Our method trains an adaptive layer on top of multilingual BERT with a contrastive learning objective to align the representations of the source and target languages. This enables the domain classifier to transfer between languages in a zero-shot manner. Once in-domain data is detected by the classifier, the NMT model is adapted to the new domain by jointly learning translation and domain discrimination tasks. We evaluate our cross-lingual data selection method on NMT across five diverse domains in three language pairs, as well as a real-world scenario of translation for COVID-19. The results show that our proposed method outperforms other selection baselines by up to +1.5 BLEU.
Original language: English
Title of host publication: 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Editors: Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Place of Publication: Stroudsburg PA USA
Publisher: Association for Computational Linguistics (ACL)
Pages: 3335–3346
Number of pages: 12
ISBN (Electronic): 9781955917094
Publication status: Published - 2021