Semantic driven attention network with attribute learning for unsupervised person re-identification

Simin Xu, Lingkun Luo, Jilin Hu, Bin Yang, Shiqiang Hu

Research output: Contribution to journal › Article › peer-review

26 Scopus citations

Abstract

Unsupervised domain adaptation (UDA) person re-identification (re-ID) aims to transfer knowledge from a labeled source domain to an unlabeled target domain, where the two domains contain disjoint identities captured across multiple camera views. Traditional UDA re-ID techniques therefore suffer from negative transfer caused by the inevitable noise introduced by varying backgrounds, while the foregrounds lack sufficiently reliable identity knowledge to guarantee high-quality cross-domain re-ID. To remedy the negative transfer caused by varying backgrounds, we propose a semantic driven attention network (SDA) reinforced by a novel body structure estimation (BSE) mechanism, which endows the model with the semantic capability to distinguish foreground from background. To obtain reliable feature representations within the foreground regions, we further propose a novel label refinery mechanism that dynamically optimizes traditional attribute learning, strengthening personal attribute features and thus yielding high-quality UDA re-ID. Extensive experiments demonstrate the effectiveness of our method on the unsupervised domain adaptation person re-ID task on three large-scale datasets: Market-1501, DukeMTMC-reID and MSMT17.
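To make the two ideas in the abstract concrete, the sketch below illustrates (a) a semantic-driven attention module that re-weights backbone feature maps with a predicted foreground probability map, and (b) a simple label-refinery update that blends hard attribute pseudo-labels with the model's current predictions. This is a minimal illustration, not the authors' released implementation: the module names, the 1x1-convolution mask head, the residual re-weighting, and the blending coefficient alpha are all assumptions for exposition; the paper's exact BSE branch and refinery rule may differ.

import torch
import torch.nn as nn


class SemanticDrivenAttention(nn.Module):
    """Re-weights backbone feature maps with a foreground probability map,
    standing in for the body-structure-guided attention described in the paper."""

    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 conv predicting a single-channel foreground logit per location (illustrative).
        self.mask_head = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) backbone features.
        fg_prob = torch.sigmoid(self.mask_head(feats))  # (B, 1, H, W) foreground map
        # Residual attention: keep the original signal, emphasize foreground regions.
        return feats * (1.0 + fg_prob)


def refine_attribute_labels(pseudo: torch.Tensor,
                            predicted: torch.Tensor,
                            alpha: float = 0.6) -> torch.Tensor:
    """Blend hard attribute pseudo-labels with current predictions,
    a generic label-refinery style update (hypothetical coefficient alpha)."""
    return alpha * pseudo + (1.0 - alpha) * predicted


if __name__ == "__main__":
    sda = SemanticDrivenAttention(in_channels=256)
    feats = torch.randn(4, 256, 24, 8)              # toy feature maps
    attended = sda(feats)                           # (4, 256, 24, 8)
    pseudo = torch.randint(0, 2, (4, 10)).float()   # 10 binary attributes per person
    pred = torch.sigmoid(torch.randn(4, 10))
    soft_labels = refine_attribute_labels(pseudo, pred)
    print(attended.shape, soft_labels.shape)

In this reading, the attention output feeds the re-ID embedding head so background clutter contributes less to cross-domain matching, while the softened attribute labels supervise the attribute branch as training progresses.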

Original language: English
Article number: 109354
Journal: Knowledge-Based Systems
Volume: 252
DOIs
State: Published - 27 Sep 2022
Externally published: Yes

Keywords

  • Attribute learning
  • Domain adaptation
  • Person re-identification
  • Semantic driven attention
