Abstract
Advances in modern science and engineering have led to unprecedented amounts of data for information processing. Of particular interest is semi-supervised learning, where only a few labeled training samples are available among large volumes of unlabeled data. Graph-based algorithms using Laplacian regularization have achieved state-of-the-art performance, but can incur huge memory and computational costs. In this paper, we introduce L1-norm penalization on the low-rank factorized kernel for efficient, globally optimal model selection in graph-based semi-supervised learning. An important novelty is that our formulation can be transformed into a standard LASSO regression. On one hand, this makes it possible to employ advanced sparse solvers to handle large-scale problems; on the other hand, a globally optimal subset of basis functions can be chosen adaptively for any desired strength of the model-complexity penalty, in contrast to some current approaches that pre-determine the basis without coupling it to the learning task. Our algorithm performs competitively with state-of-the-art algorithms on a variety of benchmark data sets. In particular, it is orders of magnitude faster than exact algorithms and achieves a good trade-off between accuracy and scalability.
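The abstract does not spell out the formulation, but the reduction it describes can be sketched. Below is a minimal illustration in Python, assuming a Gaussian kernel, a rank-r eigendecomposition as the low-rank factorization, the kernel weights themselves as the similarity graph, and scikit-learn's Lasso as the sparse solver; the toy data, the hyperparameters (`gamma`, `alpha`, `r`), and all variable names are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy semi-supervised setup: n points in the plane, only n_lab labeled.
n, n_lab, r = 200, 10, 20
X = rng.normal(size=(n, 2))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=n))     # hypothetical +/-1 labels
lab = rng.choice(n, size=n_lab, replace=False)

# Gaussian kernel over all points and a rank-r factorization K ~= Phi Phi^T.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-D2 / 2.0)
w, V = np.linalg.eigh(K)                            # ascending eigenvalues
Phi = V[:, -r:] * np.sqrt(np.maximum(w[-r:], 0.0))  # n x r feature map

# Graph Laplacian L = D - W built from the kernel similarities.
Wg = K - np.diag(np.diag(K))                        # drop self-loops
Lap = np.diag(Wg.sum(axis=1)) - Wg

# The Laplacian penalty gamma * f^T L f with f = Phi @ beta is quadratic
# in beta.  Writing L = S^T S and stacking S into the design matrix turns
#   min ||y_lab - Phi_lab beta||^2 + gamma ||S Phi beta||^2 + lam ||beta||_1
# into a standard LASSO on augmented data (A, b).
gamma = 1e-2                                        # hypothetical weight
ew, EV = np.linalg.eigh(Lap)
S = np.sqrt(np.maximum(ew, 0.0))[:, None] * EV.T    # satisfies S^T S = Lap

A = np.vstack([Phi[lab], np.sqrt(gamma) * (S @ Phi)])
b = np.concatenate([y[lab], np.zeros(n)])

# Any off-the-shelf sparse solver now applies; alpha is a hypothetical choice.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000).fit(A, b)
f_all = Phi @ model.coef_                           # predictions on all points
print(f"nonzero basis coefficients: {(model.coef_ != 0).sum()} of {r}")
print(f"accuracy on all points: {(np.sign(f_all) == y).mean():.2f}")
```

The stacking step is what makes the abstract's claim concrete: once the Laplacian term is absorbed into the least-squares residual, any standard LASSO solver can be applied, and the solver's sparsity pattern selects the subset of basis functions adaptively rather than fixing it in advance.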
| Original language | English |
|---|---|
| Pages (from-to) | 265-272 |
| Number of pages | 8 |
| Journal | Neurocomputing |
| Volume | 129 |
| DOIs | |
| State | Published - 10 Apr 2014 |
| Externally published | Yes |
Keywords
- Graph Laplacian
- Low-rank approximation
- Manifold regularization
- Regularized least squares
- Semi-supervised learning
- Sparse regression