Domain adaptation by joint distribution invariant projections

Sentao Chen, Mehrtash Harandi, Xiaona Jin, Xiaowei Yang

Research output: Contribution to journal › Article › Research › peer-review

42 Citations (Scopus)

Abstract

Domain adaptation addresses the learning problem where the training data are sampled from a source joint distribution (source domain), while the test data are sampled from a different target joint distribution (target domain). Because of this joint distribution mismatch, a discriminative classifier naively trained on the source domain often generalizes poorly to the target domain. In this article, we therefore present a Joint Distribution Invariant Projections (JDIP) approach to solve this problem. The proposed approach exploits linear projections to directly match the source and target joint distributions under the L2-distance. Since traditional kernel density estimators for distribution estimation tend to become unreliable as the dimensionality increases, we propose a least-squares method to estimate the L2-distance without estimating the two joint distributions, leading to a quadratic problem with an analytic solution. Furthermore, we introduce a kernel version of JDIP to account for inherent nonlinearity in the data. We show that the proposed learning problems can be naturally cast as optimization problems defined on the product of Riemannian manifolds. To be comprehensive, we also establish an error bound that theoretically explains how our method works and contributes to reducing the target-domain generalization error. Extensive empirical evidence demonstrates the benefits of our approach over state-of-the-art domain adaptation methods on several visual data sets.
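The key computational idea the abstract describes, estimating the L2-distance between two distributions by a least-squares fit of their density difference rather than by estimating the densities themselves, can be illustrated with a short sketch. The snippet below follows a standard least-squares density-difference construction from two sample sets; the Gaussian basis, the bandwidth sigma, the regularizer lam, and the function name l2_distance_estimate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l2_distance_estimate(X_p, X_q, sigma=1.0, lam=1e-3):
    """Sketch: least-squares estimate of ||p - q||_{L2}^2 from samples
    X_p ~ p and X_q ~ q. The density difference f = p - q is modeled as
    a linear combination of Gaussian basis functions centered at the
    samples; the regularized least-squares fit is analytic, so no
    density estimation is needed. Hyperparameters are assumptions."""
    C = np.vstack([X_p, X_q])  # basis centers
    d = C.shape[1]

    # H[j, k] = integral over R^d of psi_j(x) * psi_k(x) dx, which is
    # closed-form for Gaussian bases:
    # (pi * sigma^2)^{d/2} * exp(-||c_j - c_k||^2 / (4 sigma^2))
    sq_cc = np.sum((C[:, None, :] - C[None, :, :]) ** 2, axis=2)
    H = (np.pi * sigma**2) ** (d / 2) * np.exp(-sq_cc / (4 * sigma**2))

    # h[j] = E_p[psi_j(x)] - E_q[psi_j(x)], estimated by sample means
    def design(X):
        sq = np.sum((X[:, None, :] - C[None, :, :]) ** 2, axis=2)
        return np.exp(-sq / (2 * sigma**2))
    h = design(X_p).mean(axis=0) - design(X_q).mean(axis=0)

    # Regularized least squares has the analytic solution
    # theta = (H + lam * I)^{-1} h, i.e. a quadratic problem
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)

    # Plug-in estimate of the squared L2-distance integral (p - q)^2 dx
    return 2 * h @ theta - theta @ H @ theta
```

In the JDIP setting described above, such an estimate would be evaluated on projected source and target pairs and minimized over the linear projection, for example by the Riemannian optimization the abstract mentions; that outer loop is omitted from this sketch.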

Original language: English
Pages (from-to): 8264-8277
Number of pages: 14
Journal: IEEE Transactions on Image Processing
Volume: 29
Publication status: Published - 2020

Keywords

  • dimensionality reduction
  • domain adaptation
  • joint distribution matching
  • L2-distance
  • Riemannian optimization