TY - GEN
T1 - Self-Mimic Mutual-Distillation for Cross-Modality Person Re-Identification
AU - Zhang, Demao
AU - Hong, Ming
AU - Ye, Zhou
AU - Wang, Zheng
AU - Zhang, Zhizhong
AU - Luo, Xiaotong
AU - Xie, Yuan
AU - Qu, Yanyun
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Cross-modality person re-identification is a newly emerging and challenging problem, as there is a significant gap between visible and infrared images. Although recent methods have rapidly narrowed this gap, intra-modality variance is often ignored before inter-modality alignment. In this paper, we study this problem from the knowledge distillation perspective and design a self-mimic mutual-distillation method to reduce the discrepancy of each person, proceeding from intra-modality feature alignment to cross-modality feature alignment. For intra-modality feature alignment, a self-mimic mechanism is implemented to simultaneously learn globally viewed, stable, and distinguishable prototypes for each ID and to minimize the intra-modality discrepancy. For inter-modality feature alignment, mutual distillation is conducted to minimize the cross-modality distribution discrepancy of each person. Extensive experimental results on SYSU-MM01 and RegDB demonstrate that the proposed method achieves the best performance, outperforming state-of-the-art methods by a large margin without adding extra network parameters to the baseline. In particular, on the SYSU-MM01 dataset, our method achieves 64.8% Rank-1 and 60.2% mAP, with significant gains over the latest related method.
AB - Cross-modality person re-identification is a newly emerging and challenging problem, as there is a significant gap between visible and infrared images. Although recent methods have rapidly narrowed this gap, intra-modality variance is often ignored before inter-modality alignment. In this paper, we study this problem from the knowledge distillation perspective and design a self-mimic mutual-distillation method to reduce the discrepancy of each person, proceeding from intra-modality feature alignment to cross-modality feature alignment. For intra-modality feature alignment, a self-mimic mechanism is implemented to simultaneously learn globally viewed, stable, and distinguishable prototypes for each ID and to minimize the intra-modality discrepancy. For inter-modality feature alignment, mutual distillation is conducted to minimize the cross-modality distribution discrepancy of each person. Extensive experimental results on SYSU-MM01 and RegDB demonstrate that the proposed method achieves the best performance, outperforming state-of-the-art methods by a large margin without adding extra network parameters to the baseline. In particular, on the SYSU-MM01 dataset, our method achieves 64.8% Rank-1 and 60.2% mAP, with significant gains over the latest related method.
KW - Cross-Modality Person Re-identification
KW - Mutual Learning
KW - Self-mimic
UR - https://www.scopus.com/pages/publications/85137676314
U2 - 10.1109/ICME52920.2022.9859875
DO - 10.1109/ICME52920.2022.9859875
M3 - Conference contribution
AN - SCOPUS:85137676314
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
BT - ICME 2022 - IEEE International Conference on Multimedia and Expo 2022, Proceedings
PB - IEEE Computer Society
T2 - 2022 IEEE International Conference on Multimedia and Expo, ICME 2022
Y2 - 18 July 2022 through 22 July 2022
ER -