Dual Mutual Learning for Cross-Modality Person Re-Identification

Research output: Contribution to journal › Article › peer-review

69 Scopus citations

Abstract

Cross-modality person re-identification (Re-ID) is more challenging than traditional visible-light Re-ID because of the large cross-modality gap between heterogeneous images. To alleviate this problem, existing methods often adopt a dual-path learning framework equipped with a metric loss to learn discriminative features. Despite their effectiveness, these methods inevitably degrade intra-modality discrimination when optimizing for cross-modality discrimination, which substantially hinders further improvement of the feature representations. To mitigate this degeneration, we propose a Dual Mutual Learning (DML) method for cross-modality Re-ID that conducts mutual learning between the cross-modality branch and each of the two single-modality branches. We design a triple-branch deep model containing an RGB branch, an IR branch, and a cross-modality branch. The cross-modality branch learns a modality-invariant feature subspace for appearance similarity measurement, while the RGB and IR branches provide attention supervision to the cross-modality branch for attention feature alignment, thereby enhancing intra-modality discrimination. Experimental results on two standard benchmarks demonstrate that DML is superior to state-of-the-art methods.
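The abstract describes attention supervision from the single-modality (RGB/IR) branches to the cross-modality branch. The paper's exact attention definition and loss are not given here, so the following is a minimal, hypothetical sketch assuming channel-pooled spatial attention maps aligned with a mean-squared-error loss; function names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def spatial_attention(feat):
    """Channel-pooled spatial attention for a (C, H, W) feature map,
    L1-normalized over spatial positions (a common choice; assumed here)."""
    att = np.abs(feat).mean(axis=0)          # (H, W) activation magnitude
    return att / (att.sum() + 1e-12)         # normalize to a distribution

def attention_align_loss(att_single, att_cross):
    """MSE between a single-modality attention map (supervision signal)
    and the cross-modality branch's attention map."""
    return float(np.mean((att_single - att_cross) ** 2))

# Toy example: random feature maps standing in for branch outputs.
rng = np.random.default_rng(0)
f_rgb   = rng.standard_normal((64, 8, 4))    # RGB-branch feature map
f_cross = rng.standard_normal((64, 8, 4))    # cross-modality-branch feature map

att_rgb   = spatial_attention(f_rgb)
att_cross = spatial_attention(f_cross)
loss = attention_align_loss(att_rgb, att_cross)
```

In a full model, an analogous term from the IR branch would be added and both alignment losses back-propagated only into the cross-modality branch, so the single-modality branches act as attention teachers.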

Original language: English
Pages (from-to): 5361-5373
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 32
Issue number: 8
DOIs
State: Published - 1 Aug 2022

Keywords

  • Cross-modality
  • attention alignment
  • mutual learning
  • person re-identification
