Abstract
Domain adaptation is concerned with the problem of generalizing a classification model to a target domain with little or no labeled data by leveraging the abundant labeled data from a related source domain. Because the source and target domains possess different joint probability distributions, model generalization is challenging. In this article, we introduce domain neural adaptation (DNA): an approach that exploits a nonlinear deep neural network to 1) match the source and target joint distributions in the network activation space and 2) learn the classifier in an end-to-end manner. Specifically, we employ the relative chi-square divergence to compare the two joint distributions and show that the divergence can be estimated by seeking the maximum of a quadratic functional over the reproducing kernel Hilbert space. The analytic solution to this maximization problem enables us to express the divergence estimate explicitly as a function of the neural network mapping. We optimize the network parameters to minimize the estimated joint distribution divergence and the classification loss, yielding a classification model that generalizes well to the target domain. Empirical results on several visual datasets demonstrate that our solution is statistically better than its competitors.
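To make the construction concrete, the sketch below illustrates one way such an estimator can look, assuming a standard quadratic (chi-square-type) variational objective with the witness function restricted to an RKHS spanned by kernel functions at the source samples. The function names (`gaussian_kernel`, `divergence_estimate`), the kernel choice, the ridge parameter `lam`, and the exact functional are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical sketch: a closed-form, differentiable divergence estimate
# between source and target network activations. Maximizing the quadratic
# functional 2 h^T alpha - alpha^T (K + lam I) alpha over the RKHS
# coefficients alpha has the analytic solution alpha* = (K + lam I)^{-1} h,
# with maximum value h^T (K + lam I)^{-1} h, which we return.
import torch

def gaussian_kernel(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Pairwise Gaussian (RBF) kernel matrix between the rows of a and b.
    sq_dists = torch.cdist(a, b) ** 2
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def divergence_estimate(z_src: torch.Tensor, z_tgt: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    # Witness function represented as g(.) = sum_i alpha_i k(z_src_i, .).
    n = z_src.size(0)
    K = gaussian_kernel(z_src, z_src)  # n x n Gram matrix on source activations
    # Gap between the kernel mean embeddings of the two activation samples.
    h = gaussian_kernel(z_tgt, z_src).mean(dim=0) - K.mean(dim=0)
    alpha = torch.linalg.solve(K + lam * torch.eye(n, dtype=K.dtype, device=K.device), h)
    return h @ alpha  # differentiable w.r.t. the activations

# End-to-end objective (sketch): classification loss plus weighted divergence,
# where logits, z_src, z_tgt come from the shared network and mu is a trade-off weight:
# loss = torch.nn.functional.cross_entropy(logits, labels) + mu * divergence_estimate(z_src, z_tgt)
```

Minimizing this estimate together with the source classification loss, as the abstract describes, would pull the two activation distributions together while keeping the classifier accurate. Note that the paper matches the *joint* (feature, label) distributions, which would additionally involve label or prediction information; this sketch omits that for brevity.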
Original language | English |
---|---|
Pages (from-to) | 8630-8641 |
Number of pages | 12 |
Journal | IEEE Transactions on Neural Networks and Learning Systems |
Volume | 34 |
Issue number | 11 |
DOIs | |
Publication status | Published - Nov 2023 |
Keywords
- Adaptation models
- Data models
- DNA
- Domain adaptation
- Hilbert space
- joint distribution matching
- Kernel
- neural network
- Neural networks
- Probability distribution
- relative chi-square (RCS) divergence
- reproducing kernel Hilbert space (RKHS)