Gradient learning of symmetric positive-definite matrix regression

Research output: Contribution to journal › Article › peer-review

Abstract

Non-Euclidean data are frequently encountered nowadays owing to advances in data-collection techniques. Under the Tikhonov regularization framework, this paper focuses on gradient learning in a regression setting where the response is a symmetric positive-definite (SPD) matrix and the predictor is a Euclidean vector. We endow the SPD manifold with the Log-Euclidean metric to transform the model on the manifold to a Euclidean space, and we compute the gradients by solving a linear system under the assumption that the gradient function resides in a reproducing kernel Hilbert space. We further simplify the algorithm, reducing the dimension of the linear system via singular value decomposition. Theoretical properties are investigated as well, including the approximation error of the size-reduced algorithm and the error bound of the gradient estimate. Numerical experiments demonstrate the validity of our SPD gradient learning algorithm for variable selection and sufficient dimension reduction. A real-world dataset of New York taxi networks is studied to illustrate the applicability of the algorithm.
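
To make the pipeline concrete, the sketch below illustrates the two ingredients the abstract describes: mapping SPD responses to a Euclidean space via the Log-Euclidean matrix logarithm (with a Frobenius-preserving half-vectorization), and estimating gradients by solving the Tikhonov-regularized linear system of standard RKHS gradient learning. This is a minimal sketch, not the paper's algorithm: the RBF kernel, the locality weights w_ij, the bandwidths s_k and s_w, the regularization parameter lam, and the synthetic example are all assumptions, and the paper's SVD-based reduction of the system size is omitted.

    import numpy as np
    from scipy.linalg import expm, logm

    def log_euclidean_vec(Y):
        # Map each SPD matrix to a Euclidean vector: matrix logarithm
        # (Log-Euclidean) followed by half-vectorization; off-diagonal
        # entries are scaled by sqrt(2) so the Euclidean inner product
        # of the vectors matches the Frobenius inner product of the logs.
        d = Y.shape[1]
        iu = np.triu_indices(d)
        w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
        return np.array([logm(S).real[iu] * w for S in Y])

    def rbf(X, Z, s):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * s**2))

    def gradient_learning(X, Ylog, s_k=1.0, s_w=1.0, lam=1e-4):
        # Solve the first-order condition of the regularized
        # gradient-learning objective
        #   (1/n^2) sum_{i,j} w_ij (y_j - y_i - g(x_i)^T (x_j - x_i))^2
        #     + lam * ||g||_K^2
        # under the representer expansion g(x) = sum_i K(x, x_i) c_i.
        # Vectorizing the n x p coefficient matrix yields one np x np
        # linear system shared by all response coordinates.
        n, p = X.shape
        q = Ylog.shape[1]
        K = rbf(X, X, s_k)                  # kernel Gram matrix
        W = rbf(X, X, s_w)                  # locality weights w_ij
        A = lam * np.kron(np.eye(p), K)     # regularization block
        R = np.zeros((n * p, q))            # one RHS per response coordinate
        for i in range(n):
            dX = X - X[i]                   # rows are x_j - x_i
            dY = Ylog - Ylog[i]             # rows are y_j - y_i
            ki = K[:, i]
            Bi = (W[i][:, None] * dX).T @ dX / n**2
            A += np.kron(Bi, np.outer(ki, ki))
            R += np.kron((W[i][:, None] * dX).T @ dY / n**2, ki[:, None])
        Cvec = np.linalg.solve(A, R)
        # C[i, :, a] is the coefficient c_i for response coordinate a
        return Cvec.reshape(n, p, q, order="F")

    # Synthetic check (hypothetical): SPD responses that depend on the
    # first predictor only, so the estimated gradients should load on it.
    rng = np.random.default_rng(0)
    n, p, d = 60, 4, 3
    X = rng.uniform(-1.0, 1.0, (n, p))
    A1 = rng.standard_normal((d, d)); A1 = (A1 + A1.T) / 2.0
    Y = np.array([expm(x[0] * A1) for x in X])
    C = gradient_learning(X, log_euclidean_vec(Y))
    print(np.sqrt((C**2).sum(axis=(0, 2))))  # importance per predictor

Aggregated norms of the estimated gradient coefficients, as printed in the last line, are the usual basis for variable selection, and the span of the estimated gradients underlies the sufficient-dimension-reduction use mentioned in the abstract.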

Original language: English
Article number: 184
Journal: Statistics and Computing
Volume: 35
Issue number: 6
State: Published - Dec 2025

Keywords

  • Gradient learning
  • Log-Euclidean metric
  • Non-Euclidean data
  • Sufficient dimension reduction
  • Symmetric positive-definite matrix
