TY - GEN
T1 - Fine-grained Learning for Visible-Infrared Person Re-identification
AU - Qi, Mengzan
AU - Chan, Sixian
AU - Hang, Chen
AU - Zhang, Guixu
AU - Li, Zhi
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Visible-Infrared Person Re-identification aims to retrieve specific identities across different modalities. To relieve the modality discrepancy, previous works mainly concentrate on aligning the distribution of high-level features while disregarding fine-grained information. In this paper, we propose a novel Fine-grained Information Exploration Network (FIENet) to learn discriminative representations and further alleviate the modality discrepancy. Firstly, we propose a Progressive Feature Aggregation Module (PFAM) to progressively aggregate mid-level features, and a Multi-Perception Interaction Module (MPIM) to achieve interaction among diverse perceptions. Combining PFAM and MPIM extracts more fine-grained information, enabling FIENet to focus effectively on discriminative human parts in both modalities. Secondly, in terms of the feature center, we introduce an Identity-Guided Center Loss (IGCL) to supervise identity representation with intra-identity and inter-identity information. Finally, extensive experiments demonstrate that our method achieves state-of-the-art performance.
AB - Visible-Infrared Person Re-identification aims to retrieve specific identities across different modalities. To relieve the modality discrepancy, previous works mainly concentrate on aligning the distribution of high-level features while disregarding fine-grained information. In this paper, we propose a novel Fine-grained Information Exploration Network (FIENet) to learn discriminative representations and further alleviate the modality discrepancy. Firstly, we propose a Progressive Feature Aggregation Module (PFAM) to progressively aggregate mid-level features, and a Multi-Perception Interaction Module (MPIM) to achieve interaction among diverse perceptions. Combining PFAM and MPIM extracts more fine-grained information, enabling FIENet to focus effectively on discriminative human parts in both modalities. Secondly, in terms of the feature center, we introduce an Identity-Guided Center Loss (IGCL) to supervise identity representation with intra-identity and inter-identity information. Finally, extensive experiments demonstrate that our method achieves state-of-the-art performance.
KW - fine-grained information
KW - modality discrepancy
KW - visible-infrared person re-identification
UR - https://www.scopus.com/pages/publications/85171157779
U2 - 10.1109/ICME55011.2023.00412
DO - 10.1109/ICME55011.2023.00412
M3 - Conference contribution
AN - SCOPUS:85171157779
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
SP - 2417
EP - 2422
BT - Proceedings - 2023 IEEE International Conference on Multimedia and Expo, ICME 2023
PB - IEEE Computer Society
T2 - 2023 IEEE International Conference on Multimedia and Expo, ICME 2023
Y2 - 10 July 2023 through 14 July 2023
ER -