TY - JOUR
T1 - Domain adaptation by joint distribution invariant projections
AU - Chen, Sentao
AU - Harandi, Mehrtash
AU - Jin, Xiaona
AU - Yang, Xiaowei
N1 - Funding Information:
Manuscript received February 29, 2020; revised July 10, 2020; accepted July 26, 2020. Date of publication August 5, 2020; date of current version August 13, 2020. This work was supported in part by the National Natural Science Foundation of China under Grant 61906069; in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2019A1515011411 and Grant 2019A1515011700; in part by the Project Funded by the China Postdoctoral Science Foundation under Grant 2019M662912; in part by the Science and Technology Program of Guangzhou under Grant 202002030355; and in part by the Fundamental Research Funds for the Central Universities under Grant 2019MS088. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Raja Bala. (Corresponding authors: Sentao Chen; Xiaowei Yang.) Sentao Chen, Xiaona Jin, and Xiaowei Yang are with the School of Software Engineering, South China University of Technology, Guangzhou 510006, China (e-mail: [email protected]).
Publisher Copyright:
© 1992-2012 IEEE.
Copyright:
Copyright 2020 Elsevier B.V., All rights reserved.
PY - 2020
Y1 - 2020
AB - Domain adaptation addresses the learning problem where the training data are sampled from a source joint distribution (source domain), while the test data are sampled from a different target joint distribution (target domain). Because of this joint distribution mismatch, a discriminative classifier naively trained on the source domain often generalizes poorly to the target domain. In this article, we therefore present a Joint Distribution Invariant Projections (JDIP) approach to solve this problem. The proposed approach exploits linear projections to directly match the source and target joint distributions under the L2-distance. Since traditional kernel density estimators for distribution estimation tend to be less reliable as the dimensionality increases, we propose a least-squares method that estimates the L2-distance without estimating the two joint distributions, leading to a quadratic problem with an analytic solution. Furthermore, we introduce a kernel version of JDIP to account for inherent nonlinearity in the data. We show that the proposed learning problems can be naturally cast as optimization problems defined on the product of Riemannian manifolds. To be comprehensive, we also establish an error bound that theoretically explains how our method works and how it contributes to reducing the target-domain generalization error. Extensive empirical evidence demonstrates the benefits of our approach over state-of-the-art domain adaptation methods on several visual data sets.
KW - dimensionality reduction
KW - domain adaptation
KW - joint distribution matching
KW - L2-distance
KW - Riemannian optimization
UR - http://www.scopus.com/inward/record.url?scp=85089939594&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.3013167
DO - 10.1109/TIP.2020.3013167
M3 - Article
C2 - 32755860
AN - SCOPUS:85089939594
SN - 1057-7149
VL - 29
SP - 8264
EP - 8277
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -