Prediction of common labels for universal domain adaptation

  • Xinxin Shan
  • Tai Ma
  • Ying Wen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Universal domain adaptation (UniDA) is an unsupervised domain adaptation setting that selectively transfers knowledge between domains with different label sets. However, existing methods do not predict the common labels shared by the two domains; instead, they manually set a threshold to discriminate private samples, so they depend on the target domain to finely tune that threshold and ignore the problem of negative transfer. In this paper, to address these problems, we propose a novel classification model named Prediction of Common Labels (PCL) for UniDA, in which the common labels are predicted by Category Separation via Clustering (CSC). Notably, we devise a new evaluation metric, category separation accuracy, to measure the performance of category separation. To weaken negative transfer, we select source samples according to the predicted common labels and use them to fine-tune the model for better domain alignment. At test time, target samples are discriminated using the predicted common labels and the clustering results. Experimental results on three widely used benchmark datasets demonstrate the effectiveness of the proposed method.
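The two ideas the abstract highlights can be sketched in a few lines. The sketch below is an assumption-laden illustration, not the paper's implementation: it reads "category separation accuracy" as the fraction of samples correctly separated into common vs. private categories, and it shows source samples being filtered by predicted common labels before fine-tuning. The function names and the exact metric definition are hypothetical.

```python
import numpy as np

def category_separation_accuracy(is_common_true, is_common_pred):
    """Fraction of samples whose common/private separation is correct.

    A plausible reading of the paper's metric: a binary accuracy over
    the common-vs-private decision. The paper's exact definition may differ.
    """
    is_common_true = np.asarray(is_common_true, dtype=bool)
    is_common_pred = np.asarray(is_common_pred, dtype=bool)
    return float(np.mean(is_common_true == is_common_pred))

def select_source_samples(features, labels, predicted_common_labels):
    """Keep only source samples whose class was predicted to be common.

    Fine-tuning on this subset aligns the domains on shared classes only,
    which is the abstract's stated mechanism for weakening negative transfer.
    """
    mask = np.isin(labels, list(predicted_common_labels))
    return features[mask], labels[mask]
```

For example, with four samples whose true separation is `[common, common, private, private]` and a prediction of `[common, private, private, private]`, the metric is 0.75; filtering a source set with labels `[0, 1, 2, 1, 3]` by predicted common labels `{1, 2}` keeps the three samples labeled 1, 2, 1.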

Original language: English
Pages (from-to): 463-471
Number of pages: 9
Journal: Neural Networks
Volume: 165
State: Published - Aug 2023

Keywords

  • Cross-domain classification
  • Deep learning
  • Prediction of common label
  • Universal domain adaptation
