TY - JOUR
T1 - Towards Balanced Representation Learning with Semantic Anchor Regularization
AU - Wang, Chengjie
AU - Nie, Qiang
AU - Chen, Ying
AU - Li, Jialin
AU - Liu, Yong
AU - Jiang, Xi
AU - Ge, Yanqi
AU - Wu, Yunsheng
AU - Zheng, Feng
AU - Ma, Lizhuang
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
PY - 2025/10
Y1 - 2025/10
N2 - Representation learning refers to the process of learning meaningful and informative features from raw data, of which one good criterion is to attain intra-class compactness and inter-class separability in the semantic space. However, real-world data are often imbalanced and noisy. Existing methods such as prototype-based learning and contrastive learning are tightly coupled to the feature learning process and susceptible to imbalanced data distributions. In this paper, we disentangle the representation regularization from the feature learning process and propose a semantic anchor regularization (SAR) that is generated from predefined anchors. These anchors serve as an independent third-party measurement and are made semantic-aware by sharing the task head with feature learning. By controlling the separability between semantic anchors and pulling the learned representations toward these semantic anchors, intra-class compactness and inter-class separability can be intuitively achieved. In essence, SAR performs in the manner of visual-language alignment but is more flexible. Extensive results on classification, segmentation, long-tailed learning, and semi-supervised learning demonstrate SAR's effectiveness for different downstream tasks.
AB - Representation learning refers to the process of learning meaningful and informative features from raw data, of which one good criterion is to attain intra-class compactness and inter-class separability in the semantic space. However, real-world data are often imbalanced and noisy. Existing methods such as prototype-based learning and contrastive learning are tightly coupled to the feature learning process and susceptible to imbalanced data distributions. In this paper, we disentangle the representation regularization from the feature learning process and propose a semantic anchor regularization (SAR) that is generated from predefined anchors. These anchors serve as an independent third-party measurement and are made semantic-aware by sharing the task head with feature learning. By controlling the separability between semantic anchors and pulling the learned representations toward these semantic anchors, intra-class compactness and inter-class separability can be intuitively achieved. In essence, SAR performs in the manner of visual-language alignment but is more flexible. Extensive results on classification, segmentation, long-tailed learning, and semi-supervised learning demonstrate SAR's effectiveness for different downstream tasks.
KW - Representation learning
KW - Semantic anchor regularization
UR - https://www.scopus.com/pages/publications/105011059202
U2 - 10.1007/s11263-025-02519-y
DO - 10.1007/s11263-025-02519-y
M3 - Article
AN - SCOPUS:105011059202
SN - 0920-5691
VL - 133
SP - 7293
EP - 7311
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
IS - 10
ER -