TY - JOUR
T1 - Improving Domain-Adaptive Person Re-Identification by Dual-Alignment Learning with Camera-Aware Image Generation
AU - Zhang, Chenyang
AU - Tang, Yongqiang
AU - Zhang, Zhizhong
AU - Li, Ding
AU - Yang, Xuebing
AU - Zhang, Wensheng
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2021/11/1
Y1 - 2021/11/1
N2 - Domain adaptation in person re-identification (re-ID) has always been challenging, especially due to the lack of supervision information on the target domain. Existing methods generally introduce extra supervision via adversarial learning techniques, then add all the augmented data to the training process to optimize the re-ID model. However, direct utilization of all the generated data not only incurs additional computational cost but also ignores the potential correlation between the original and generated data. In this article, we propose a novel dual-alignment learning framework (DAL) with camera-aware image generation to efficiently and effectively tackle this issue. Specifically, we propose a camera transfer matching module to generate additional training images with different camera styles, and construct matching pairs, each containing an original image and one corresponding camera-transferred image. To strengthen the correlation of the images in each matching pair, we align the pseudo-labels via a clustering algorithm to reduce the pseudo-label distribution discrepancy between the original and generated images. Besides, to avoid model degeneration caused by inaccurate pseudo-labels on unlabelled data, we maximize the mutual information to align the image feature representations of each matching pair. DAL allows us to decrease the camera variance and enhance the discrimination ability of the re-ID model. Extensive experiments on three large-scale benchmarks demonstrate the superiority of DAL over state-of-the-art methods.
AB - Domain adaptation in person re-identification (re-ID) has always been challenging, especially due to the lack of supervision information on the target domain. Existing methods generally introduce extra supervision via adversarial learning techniques, then add all the augmented data to the training process to optimize the re-ID model. However, direct utilization of all the generated data not only incurs additional computational cost but also ignores the potential correlation between the original and generated data. In this article, we propose a novel dual-alignment learning framework (DAL) with camera-aware image generation to efficiently and effectively tackle this issue. Specifically, we propose a camera transfer matching module to generate additional training images with different camera styles, and construct matching pairs, each containing an original image and one corresponding camera-transferred image. To strengthen the correlation of the images in each matching pair, we align the pseudo-labels via a clustering algorithm to reduce the pseudo-label distribution discrepancy between the original and generated images. Besides, to avoid model degeneration caused by inaccurate pseudo-labels on unlabelled data, we maximize the mutual information to align the image feature representations of each matching pair. DAL allows us to decrease the camera variance and enhance the discrimination ability of the re-ID model. Extensive experiments on three large-scale benchmarks demonstrate the superiority of DAL over state-of-the-art methods.
KW - Person re-identification
KW - convolutional neural networks
KW - generative adversarial networks
KW - mutual information
UR - https://www.scopus.com/pages/publications/85098797560
U2 - 10.1109/TCSVT.2020.3047095
DO - 10.1109/TCSVT.2020.3047095
M3 - Article
AN - SCOPUS:85098797560
SN - 1051-8215
VL - 31
SP - 4334
EP - 4346
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 11
ER -