TY - GEN
T1 - Classifier Decoupled Training for Black-Box Unsupervised Domain Adaptation
AU - Chen, Xiangchuang
AU - Shen, Yunhang
AU - Luo, Xuan
AU - Zhang, Yan
AU - Li, Ke
AU - Lin, Shaohui
N1 - Publisher Copyright:
© 2024, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2024
Y1 - 2024
N2 - Black-box unsupervised domain adaptation (B2UDA) is a challenging task in unsupervised domain adaptation, where the source model is treated as a black box and only its output is accessible. Previous works have treated the source model as a pseudo-labeling tool and formulated B2UDA as a noisy-label learning (LNL) problem. However, they have ignored the gap between the “shift noise” caused by domain shift and the hypothesis noise in LNL. To alleviate the negative impact of shift noise on B2UDA, we propose a novel framework called Classifier Decoupling Training (CDT), which introduces two additional classifiers to assist model training with a new label-confidence sampling. First, we introduce a self-training classifier, discarded during testing, to learn robust feature representations from low-confidence samples, while the final classifier is trained only on a few high-confidence samples. This step decouples the training of high-confidence and low-confidence samples to mitigate the impact of noisy labels on the final classifier while avoiding overfitting to the few confident samples. Second, an adversarial classifier optimizes the feature distribution of low-confidence samples to be biased toward high-confidence samples through adversarial training, which greatly reduces intra-class variation. Third, we further propose a novel ETP-entropy Sampling (E2S) to collect class-balanced high-confidence samples, which leverages the early-time training phenomenon in LNL. Extensive experiments on several benchmarks show that the proposed CDT achieves 88.2%, 71.6%, and 81.3% accuracy on Office-31, Office-Home, and VisDA-17, respectively, outperforming state-of-the-art methods.
AB - Black-box unsupervised domain adaptation (B2UDA) is a challenging task in unsupervised domain adaptation, where the source model is treated as a black box and only its output is accessible. Previous works have treated the source model as a pseudo-labeling tool and formulated B2UDA as a noisy-label learning (LNL) problem. However, they have ignored the gap between the “shift noise” caused by domain shift and the hypothesis noise in LNL. To alleviate the negative impact of shift noise on B2UDA, we propose a novel framework called Classifier Decoupling Training (CDT), which introduces two additional classifiers to assist model training with a new label-confidence sampling. First, we introduce a self-training classifier, discarded during testing, to learn robust feature representations from low-confidence samples, while the final classifier is trained only on a few high-confidence samples. This step decouples the training of high-confidence and low-confidence samples to mitigate the impact of noisy labels on the final classifier while avoiding overfitting to the few confident samples. Second, an adversarial classifier optimizes the feature distribution of low-confidence samples to be biased toward high-confidence samples through adversarial training, which greatly reduces intra-class variation. Third, we further propose a novel ETP-entropy Sampling (E2S) to collect class-balanced high-confidence samples, which leverages the early-time training phenomenon in LNL. Extensive experiments on several benchmarks show that the proposed CDT achieves 88.2%, 71.6%, and 81.3% accuracy on Office-31, Office-Home, and VisDA-17, respectively, outperforming state-of-the-art methods.
KW - Adversarial learning
KW - Domain adaptation
KW - Noisy label
UR - https://www.scopus.com/pages/publications/85180780498
U2 - 10.1007/978-981-99-8435-0_2
DO - 10.1007/978-981-99-8435-0_2
M3 - Conference contribution
AN - SCOPUS:85180780498
SN - 9789819984343
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 16
EP - 30
BT - Pattern Recognition and Computer Vision - 6th Chinese Conference, PRCV 2023, Proceedings
A2 - Liu, Qingshan
A2 - Wang, Hanzi
A2 - Ji, Rongrong
A2 - Ma, Zhanyu
A2 - Zheng, Weishi
A2 - Zha, Hongbin
A2 - Chen, Xilin
A2 - Wang, Liang
PB - Springer Science and Business Media Deutschland GmbH
T2 - 6th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2023
Y2 - 13 October 2023 through 15 October 2023
ER -