TY - GEN
T1 - Joint Distribution Adaptation via Wasserstein Adversarial Training
AU - Wang, Xiaolu
AU - Zhang, Wenyong
AU - Shen, Xin
AU - Liu, Huikang
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/18
Y1 - 2021/7/18
N2 - This paper considers the unsupervised domain adaptation problem, in which we seek a good prediction function on the unlabeled target domain by utilizing the information provided in the labeled source domain. A common approach to domain adaptation is to learn a representation space in which the distributional discrepancy between the source and target domains is small. Existing methods generally tend to match the marginal distributions of the two domains, leaving the label information in the source domain not fully exploited. In this paper, we propose a representation learning approach for domain adaptation, termed JODAWAT. We aim to adapt the joint distributions of the feature-label pairs in the shared representation space for both domains. In particular, we minimize the Wasserstein distance between the source and target domains while also guaranteeing the prediction performance on the source domain. The proposed approach results in a minimax adversarial training procedure that incorporates a novel split gradient penalty term. A generalization bound on the target domain is provided to reveal the efficacy of representation learning for joint distribution adaptation. We conduct extensive evaluations of JODAWAT and test its classification accuracy on multiple synthetic and real datasets. The experimental results demonstrate that our proposed method achieves superior performance compared with various domain adaptation methods.
KW - Wasserstein distance
KW - adversarial training
KW - domain adaptation
KW - generalization error bound
KW - transfer learning
UR - https://www.scopus.com/pages/publications/85116409779
U2 - 10.1109/IJCNN52387.2021.9533304
DO - 10.1109/IJCNN52387.2021.9533304
M3 - Conference contribution
AN - SCOPUS:85116409779
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 International Joint Conference on Neural Networks, IJCNN 2021
Y2 - 18 July 2021 through 22 July 2021
ER -