Statistical Rates of Convergence for Functional Partially Linear Support Vector Machines for Classification

  • Yingying Zhang
  • Yan Yong Zhao
  • Heng Lian (corresponding author)

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

In this paper, we consider the learning rate of support vector machines with both a functional predictor and a high-dimensional vector-valued predictor. As in the literature on learning in reproducing kernel Hilbert spaces, a source condition and a capacity condition are used to characterize the convergence rate of the estimator. It is highly non-trivial to establish the possibly faster rate of the linear part. Using a key basic inequality comparing losses at two carefully constructed points, we establish a learning rate for the linear part that is the same as if the functional part were known. The proof relies on empirical processes and a Rademacher complexity bound in the semi-nonparametric setting as analytic tools, Young's inequality for operators, and a novel "approximate convexity" assumption.
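To make the model concrete, here is a minimal, self-contained sketch of a functional partially linear SVM of the form studied in the abstract: the decision function combines an integral of the functional predictor against a slope function with a linear term in the vector predictor, f(X, z) = ∫X(t)b(t)dt + z′γ. The data-generating setup, the regularization parameters, and the plain subgradient-descent fit below are illustrative assumptions, not the authors' estimator or theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each observation has a functional predictor X_i(t),
# discretized on a grid of T points, and a vector predictor z_i in R^p.
n, T, p = 200, 50, 5
t_grid = np.linspace(0, 1, T)
X = rng.standard_normal((n, T))              # discretized functional predictors
Z = rng.standard_normal((n, p))              # vector-valued predictors
b_true = np.sin(2 * np.pi * t_grid)          # assumed true slope function b(t)
g_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])  # assumed true linear coefficients
scores = X @ b_true / T + Z @ g_true         # Riemann sum for the integral term
y = np.where(scores + 0.1 * rng.standard_normal(n) > 0, 1.0, -1.0)

# Fit by subgradient descent on the averaged regularized hinge loss
# (a generic solver, chosen for brevity; not the paper's algorithm).
b, g = np.zeros(T), np.zeros(p)
lam, lr = 0.01, 0.1
for _ in range(500):
    margin = y * (X @ b / T + Z @ g)
    active = margin < 1                       # observations violating the margin
    grad_b = -(y[active, None] * X[active]).sum(axis=0) / (n * T) + lam * b
    grad_g = -(y[active, None] * Z[active]).sum(axis=0) / n + lam * g
    b -= lr * grad_b
    g -= lr * grad_g

acc = float(np.mean(np.sign(X @ b / T + Z @ g) == y))
```

The paper's question, in this notation, is how fast the estimates of b(t) and γ converge as n grows; its key result is that the rate for the linear coefficients γ is unaffected by having to estimate the functional part b(t) alongside it.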

Original language: English
Journal: Journal of Machine Learning Research
Volume: 23
State: Published - 1 May 2022

Keywords

  • Convergence rate
  • Prediction risk
  • Rademacher complexity
  • Support vector classification
