TY - GEN
T1 - Joint learning dictionary and discriminative features for high dimensional data
AU - Wei, Xian
AU - Li, Yuanxiang
AU - Shen, Hao
AU - Kleinsteuber, Martin
AU - Murphey, Yi Lu
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/1/1
Y1 - 2016/1/1
N2 - Recently, sparse representation (SR) over a redundant dictionary has become a popular way of representing data. It has been verified as an efficient and useful tool to promote the discrimination between signals. This work develops a joint learning approach to find low-dimensional discriminative features for high-dimensional data. To avoid the high computational cost of direct sparse coding on large-scale input data, we first learn SR in an orthogonally projected space over a task-driven sparsifying dictionary. We then exploit the discriminative projection on SR. The whole learning process is treated as an optimization problem of trace quotient maximization, which involves an orthogonal projection on the original data space, a dictionary, and a discriminative projection on sparse codes. The related cost function is well defined on a product manifold of the Stiefel manifold, the Oblique manifold, and the Grassmann manifold. Finally, we employ a stochastic gradient descent algorithm on the smooth product manifold to maximize the cost function. Our numerical experiments on visual recognition demonstrate the effectiveness of the proposed algorithm in comparison with the state of the art.
AB - Recently, sparse representation (SR) over a redundant dictionary has become a popular way of representing data. It has been verified as an efficient and useful tool to promote the discrimination between signals. This work develops a joint learning approach to find low-dimensional discriminative features for high-dimensional data. To avoid the high computational cost of direct sparse coding on large-scale input data, we first learn SR in an orthogonally projected space over a task-driven sparsifying dictionary. We then exploit the discriminative projection on SR. The whole learning process is treated as an optimization problem of trace quotient maximization, which involves an orthogonal projection on the original data space, a dictionary, and a discriminative projection on sparse codes. The related cost function is well defined on a product manifold of the Stiefel manifold, the Oblique manifold, and the Grassmann manifold. Finally, we employ a stochastic gradient descent algorithm on the smooth product manifold to maximize the cost function. Our numerical experiments on visual recognition demonstrate the effectiveness of the proposed algorithm in comparison with the state of the art.
UR - https://www.scopus.com/pages/publications/85019099438
U2 - 10.1109/ICPR.2016.7899661
DO - 10.1109/ICPR.2016.7899661
M3 - Conference contribution
AN - SCOPUS:85019099438
T3 - Proceedings - International Conference on Pattern Recognition
SP - 366
EP - 371
BT - 2016 23rd International Conference on Pattern Recognition, ICPR 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd International Conference on Pattern Recognition, ICPR 2016
Y2 - 4 December 2016 through 8 December 2016
ER -