Generalised unsupervised domain adaptation of neural machine translation with cross-lingual data selection

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

6 Citations (Scopus)


This paper considers the unsupervised domain adaptation problem for neural machine translation (NMT), where we assume access only to monolingual text in either the source or the target language in the new domain. We propose a cross-lingual data selection method to extract in-domain sentences on the missing language side from a large generic monolingual corpus. Our method trains an adaptive layer on top of multilingual BERT with contrastive learning to align the representations of the source and target languages. This enables the domain classifier to transfer between the languages in a zero-shot manner. Once in-domain data is detected by the classifier, the NMT model is adapted to the new domain by jointly learning the translation and domain discrimination tasks. We evaluate our cross-lingual data selection method on NMT across five diverse domains in three language pairs, as well as a real-world COVID-19 translation scenario. The results show that our proposed method outperforms other selection baselines by up to +1.5 BLEU.
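The contrastive alignment step described in the abstract can be viewed as an InfoNCE-style objective over a batch of parallel sentence pairs: each source sentence representation is pulled toward its paired target representation and pushed away from the other targets in the batch. The sketch below is a minimal numpy illustration of that loss, assuming `src` and `tgt` are already-computed sentence embeddings (e.g. from an adaptive layer over multilingual BERT); it is not the paper's implementation, and the function name and temperature value are illustrative.

```python
import numpy as np

def info_nce_loss(src, tgt, temperature=0.1):
    """InfoNCE contrastive loss for cross-lingual alignment.

    src, tgt: (batch, dim) arrays of sentence embeddings, where row i of
    src and row i of tgt come from a parallel sentence pair. Each source
    embedding should score highest against its own target embedding.
    """
    # L2-normalise so the dot product is cosine similarity
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    # (batch, batch) similarity matrix; diagonal entries are positives
    logits = src @ tgt.T / temperature
    # Softmax cross-entropy with the diagonal as the gold label
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

In this sketch, embeddings of parallel pairs that are close in the shared space yield a low loss, which is the property that lets a domain classifier trained on one language's representations be applied zero-shot to the other.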

Original language: English
Title of host publication: 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Editors: Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Place of Publication: Stroudsburg PA USA
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 12
ISBN (Electronic): 9781955917094
Publication status: Published - 2021
Event: Empirical Methods in Natural Language Processing 2021 - Online, Punta Cana, Dominican Republic
Duration: 7 Nov 2021 - 11 Nov 2021


Conference: Empirical Methods in Natural Language Processing 2021
Abbreviated title: EMNLP 2021
Country/Territory: Dominican Republic
City: Punta Cana
