Attention Residual Fusion Network with Contrast for Source-free Domain Adaptation

Renrong Shao, Wei Zhang*, Jun Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Source-free domain adaptation (SFDA) involves training a model on a source domain and then adapting it to a related target domain without access to the source data or labels during adaptation. The complexity of scene information and the absence of the source data make SFDA a difficult task. Recent studies have shown promising results, but many domain adaptation approaches concentrate on domain shift and neglect the effects of negative transfer, which may impede improvements in model performance during adaptation. In this paper, to address this issue, we propose a novel framework, the Attention Residual Fusion Network (ARFNet), based on contrastive learning for SFDA, which alleviates negative transfer and domain shift during adaptation by exploiting attention residual fusion, global-local attention contrast, and dynamic centroid evaluation. Concretely, the attention mechanism is first used to capture the discriminative region of the target object. Then, in each block, attention features are decomposed into spatial-wise and channel-wise attention. The spatial-wise attention is aggregated with the original semantic features to achieve cross-layer attention residual fusion progressively, while the channel-wise attention is used for self-distillation. During adaptation, we contrast global and local representations to improve the model's ability to perceive different categories, enabling it to discriminate intra-class and inter-class variations. Finally, a dynamic centroid evaluation strategy is employed to identify trustworthy centroids and pseudo-labels for self-supervised self-distillation, aiming to accurately approximate the source-domain centers and produce reliable pseudo-labels that mitigate domain shift. To validate the efficacy of our method, we conduct comprehensive experiments on five benchmarks of varying scales, i.e., Office-31, Office-Home, VisDA-C, DomainNet-126, and CUB-Paintings.
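The block-level decomposition of attention features into spatial-wise and channel-wise components can be sketched as follows. This is a generic, hypothetical illustration in NumPy: the function name, the mean-pooling squeezes, the sigmoid gating, and the residual reweighting are all assumptions for exposition, not the paper's exact ARFNet blocks.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decompose_attention(feat):
    """Hypothetical sketch: split a feature map of shape (C, H, W)
    into channel-wise and spatial-wise attention, then fuse the
    spatial attention back into the original semantics residually."""
    # Channel-wise attention: squeeze the spatial dimensions
    # (global average pooling over H and W), one weight per channel.
    channel_att = sigmoid(feat.mean(axis=(1, 2)))      # shape (C,)
    # Spatial-wise attention: squeeze the channel dimension,
    # one weight per spatial location.
    spatial_att = sigmoid(feat.mean(axis=0))           # shape (H, W)
    # Residual fusion: reweight the original features by the
    # spatial attention and add them back (identity + attention).
    fused = feat + feat * spatial_att[None, :, :]      # shape (C, H, W)
    return channel_att, spatial_att, fused
```

In this reading, the fused map would feed the next block (cross-layer residual fusion), while the channel-wise vector would serve as a compact signal for self-distillation.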
Experimental results show that our method outperforms existing techniques, achieving superior performance across SFDA benchmarks. Code is available at https://github.com/RoryShao/ARFNet.git.
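As one way to read the dynamic centroid evaluation step, the following is a minimal, hypothetical sketch of centroid-based pseudo-labeling in the style of common SFDA pipelines: class centroids are initialized from prediction-weighted target features, samples are assigned to their nearest centroid by cosine similarity, and centroids are re-estimated from the hard assignments. All names and these specific design choices are assumptions, not the authors' exact strategy.

```python
import numpy as np

def pseudo_labels_via_centroids(features, probs, num_iters=2, eps=1e-8):
    """Hypothetical sketch of centroid-based pseudo-labeling.

    features: (N, D) target-domain feature vectors
    probs:    (N, K) current softmax predictions over K classes
    Returns pseudo-labels (N,) and class centroids (K, D).
    """
    # L2-normalize features so dot products are cosine similarities.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    # Initial centroids: probability-weighted mean of target features.
    centroids = probs.T @ f / (probs.sum(axis=0, keepdims=True).T + eps)
    for _ in range(num_iters):
        c = centroids / (np.linalg.norm(centroids, axis=1, keepdims=True) + eps)
        # Assign each sample to its nearest centroid (cosine similarity).
        labels = np.argmax(f @ c.T, axis=1)
        # Re-estimate centroids from the hard pseudo-label assignments.
        onehot = np.eye(centroids.shape[0])[labels]
        centroids = onehot.T @ f / (onehot.sum(axis=0, keepdims=True).T + eps)
    return labels, centroids
```

A "dynamic" evaluation in the spirit of the abstract would additionally score how trustworthy each centroid and pseudo-label is (e.g., by assignment confidence) before using them for self-distillation; that filtering step is omitted here.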

Keywords

  • Source-free domain adaptation
  • Contrastive learning
  • Self-distillation
  • Self-supervised learning

