TY - JOUR
T1 - Nonlinear dimensionality reduction based on dictionary learning
AU - Zheng, Si Long
AU - Li, Yuan Xiang
AU - Wei, Xian
AU - Peng, Xi Shuai
N1 - Publisher Copyright:
Copyright © 2016 Acta Automatica Sinica. All rights reserved.
PY - 2016/7/1
Y1 - 2016/7/1
N2 - Most classic dimensionality reduction (DR) algorithms, such as principal component analysis (PCA) and isometric mapping (ISOMAP), focus on finding a low-dimensional embedding of the original data that is often not reversible. Making the DR process reversible remains challenging in many applications. Sparse representation (SR) has shown its power in signal reconstruction and denoising. To tackle the problem of large-scale dataset processing, this work develops a differentiable model for invertible DR based on SR. In mapping a high-dimensional input signal to a low-dimensional feature, we aim to preserve important geometric properties (such as inner products, distances, and angles) so that reliable reconstruction from the low-dimensional space back to the original high-dimensional space is possible. We employ an algorithm called concentrated dictionary learning (CDL) to train a high-dimensional dictionary that concentrates its energy in a low-dimensional subspace. We then design a pair of dictionaries, D and P, where D is used to obtain the sparse representation and P is a direct down-sampling of D; CDL ensures that P captures most of the energy of D. The signal reconstruction problem is thus transformed into training the dictionaries D and P, and the mapping from the input signal X to the feature Y becomes a process of energy retention from D to P. Experimental results show that, without the restrictions of linear projection under the restricted isometry property (RIP), CDL can reconstruct images from a lower-dimensional space and outperforms state-of-the-art DR methods (such as Gaussian random compressive sensing). In addition, for noise-corrupted images, CDL achieves better compression performance than JPEG2000.
AB - Most classic dimensionality reduction (DR) algorithms, such as principal component analysis (PCA) and isometric mapping (ISOMAP), focus on finding a low-dimensional embedding of the original data that is often not reversible. Making the DR process reversible remains challenging in many applications. Sparse representation (SR) has shown its power in signal reconstruction and denoising. To tackle the problem of large-scale dataset processing, this work develops a differentiable model for invertible DR based on SR. In mapping a high-dimensional input signal to a low-dimensional feature, we aim to preserve important geometric properties (such as inner products, distances, and angles) so that reliable reconstruction from the low-dimensional space back to the original high-dimensional space is possible. We employ an algorithm called concentrated dictionary learning (CDL) to train a high-dimensional dictionary that concentrates its energy in a low-dimensional subspace. We then design a pair of dictionaries, D and P, where D is used to obtain the sparse representation and P is a direct down-sampling of D; CDL ensures that P captures most of the energy of D. The signal reconstruction problem is thus transformed into training the dictionaries D and P, and the mapping from the input signal X to the feature Y becomes a process of energy retention from D to P. Experimental results show that, without the restrictions of linear projection under the restricted isometry property (RIP), CDL can reconstruct images from a lower-dimensional space and outperforms state-of-the-art DR methods (such as Gaussian random compressive sensing). In addition, for noise-corrupted images, CDL achieves better compression performance than JPEG2000.
KW - Compressed sensing (CS)
KW - Dictionary learning
KW - Dimensionality reduction (DR)
KW - Sparse representation (SR)
UR - https://www.scopus.com/pages/publications/84980028136
U2 - 10.16383/j.aas.2016.c150557
DO - 10.16383/j.aas.2016.c150557
M3 - Article
AN - SCOPUS:84980028136
SN - 0254-4156
VL - 42
SP - 1065
EP - 1076
JO - Zidonghua Xuebao/Acta Automatica Sinica
JF - Zidonghua Xuebao/Acta Automatica Sinica
IS - 7
ER -