TY - GEN
T1 - Explicit Invariant Feature Induced Cross-Domain Crowd Counting
AU - Cai, Yiqing
AU - Chen, Lianggangxu
AU - Guan, Haoyue
AU - Lin, Shaohui
AU - Lu, Changhong
AU - Wang, Changbo
AU - He, Gaoqi
N1 - Publisher Copyright:
Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2023/6/27
Y1 - 2023/6/27
AB - Cross-domain crowd counting has shown progressively improved performance. However, most methods fail to explicitly consider the transferability of different features between source and target domains. In this paper, we propose an innovative explicit Invariant Feature induced Cross-domain Knowledge Transformation framework to address the inconsistent domain-invariant features of different domains. The main idea is to explicitly extract domain-invariant features from both the source and target domains, which builds a bridge to transfer richer knowledge between the two domains. The framework consists of three parts: global feature decoupling (GFD), relation exploration and alignment (REA), and graph-guided knowledge enhancement (GKE). In the GFD module, domain-invariant features are efficiently decoupled from domain-specific ones in the two domains, which allows the model to distinguish crowd features from backgrounds in complex scenes. In the REA module, both an inter-domain relation graph (Inter-RG) and an intra-domain relation graph (Intra-RG) are built. Specifically, Inter-RG aggregates multi-scale domain-invariant features between the two domains and further aligns local-level invariant features. Intra-RG preserves task-related specific information to assist the domain alignment. Furthermore, the GKE strategy models the confidence of pseudo-labels to further enhance adaptability to the target domain. Extensive experiments show that our method achieves state-of-the-art performance on the standard benchmarks. Code is available at https://github.com/caiyiqing/IF-CKT.
UR - https://www.scopus.com/pages/publications/85167715782
U2 - 10.1609/aaai.v37i1.25098
DO - 10.1609/aaai.v37i1.25098
M3 - Conference contribution
AN - SCOPUS:85167715782
T3 - Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
SP - 259
EP - 267
BT - AAAI-23 Technical Tracks 1
A2 - Williams, Brian
A2 - Chen, Yiling
A2 - Neville, Jennifer
PB - AAAI press
T2 - 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Y2 - 7 February 2023 through 14 February 2023
ER -