TY - JOUR
T1 - Affine Subspace Robust Low-Rank Self-Representation
T2 - From Matrix to Tensor
AU - Tang, Yongqiang
AU - Xie, Yuan
AU - Zhang, Wensheng
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2023/8/1
Y1 - 2023/8/1
AB - Low-rank self-representation-based subspace learning has proven effective in a broad range of applications. Nevertheless, existing studies focus mainly on exploring the global linear subspace structure and cannot adequately handle the case where the samples lie only approximately (i.e., with data errors) in several more general affine subspaces. To overcome this drawback, in this paper we propose to introduce affine and nonnegative constraints into low-rank self-representation learning. Although both constraints are simple, we provide their underlying theoretical insight from a geometric perspective: together, they restrict each sample to be expressed as a convex combination of other samples in the same subspace. In this way, while exploring the global affine subspace structure, we can also account for the specific local distribution of data in each subspace. To comprehensively demonstrate the benefits of introducing the two constraints, we instantiate three low-rank self-representation methods, ranging from single-view low-rank matrix learning to multi-view low-rank tensor learning. We carefully design solution algorithms to optimize the three proposed approaches efficiently. Extensive experiments are conducted on three typical tasks: single-view subspace clustering, multi-view subspace clustering, and multi-view semi-supervised classification. The notably superior experimental results verify the effectiveness of our proposals.
KW - Affine subspace
KW - low-rank representation
KW - low-rank tensor
KW - multi-view learning
KW - subspace clustering
UR - https://www.scopus.com/pages/publications/85151362298
DO - 10.1109/TPAMI.2023.3257407
M3 - Article
C2 - 37028386
AN - SCOPUS:85151362298
SN - 0162-8828
VL - 45
SP - 9357
EP - 9373
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 8
ER -