
Kolmogorov–Smirnov learning by neural networks with a nonconvex surrogate loss

  • Fang Fang (corresponding author)
  • Sizhe Wang
  • Yumeng Chen
  • East China Normal University

Research output: Contribution to journal › Article › peer-review

Abstract

The Kolmogorov–Smirnov (KS) statistic has been widely used in many areas to evaluate the performance of binary classification. However, almost no classification algorithm tries to optimise it directly at the training stage, due to the computational and theoretical challenges posed by the special form of KS. In this paper, we propose a novel Kolmogorov–Smirnov neural Network (KSNet) that uses KS as the optimisation objective. The non-smoothness of the empirical KS is overcome by introducing a smooth nonconvex surrogate function. KSNet offers great potential to improve the KS on test data, especially for imbalanced data, and shows encouraging robustness to data noise. Theoretically, we establish a non-asymptotic excess risk bound for KSNet with a ReLU-activated feedforward neural network and show its Bayes-risk consistency. Experiments on a variety of real datasets confirm the advantages of KSNet over many existing methods.
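The abstract does not give the paper's exact surrogate, but the general idea can be sketched: the empirical KS is the maximum gap between the two class-conditional score CDFs, and its non-smoothness comes from the step indicator inside each empirical CDF. A minimal illustration, assuming a sigmoid-smoothed indicator with bandwidth `h` (the function names and the bandwidth choice here are illustrative, not the paper's):

```python
import numpy as np

def empirical_ks(scores, labels):
    """Empirical KS: max gap between the class-conditional score CDFs."""
    pos = np.sort(scores[labels == 1])
    neg = np.sort(scores[labels == 0])
    thresholds = np.unique(scores)
    cdf_pos = np.searchsorted(pos, thresholds, side="right") / len(pos)
    cdf_neg = np.searchsorted(neg, thresholds, side="right") / len(neg)
    return np.max(np.abs(cdf_pos - cdf_neg))

def smoothed_ks(scores, labels, h=0.05):
    """Smooth surrogate: replace the step indicator 1{s <= t} with a
    sigmoid of bandwidth h, making the objective differentiable in the
    scores (and hence usable as a neural-network training loss)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    thresholds = np.unique(scores)
    cdf_pos = sigmoid((thresholds[:, None] - pos[None, :]) / h).mean(axis=1)
    cdf_neg = sigmoid((thresholds[:, None] - neg[None, :]) / h).mean(axis=1)
    return np.max(np.abs(cdf_pos - cdf_neg))
```

As `h` shrinks, the sigmoid approaches the step function and the surrogate approaches the empirical KS; note the surrogate remains nonconvex in the scores, matching the paper's framing.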

Original language: English
Journal: Journal of Nonparametric Statistics
DOIs
State: Accepted/In press - 2026

Keywords

  • Binary classification
  • ReLU neural network
  • imbalanced data
  • large scale data
  • non-asymptotic risk

