Monotonic learning in the PAC framework: A new perspective

  • Ming Li
  • Chenyi Zhang*
  • Qin Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Monotone learning describes learning processes in which the expected error consistently decreases as the amount of training data increases. However, recent studies challenge this conventional wisdom, revealing significant gaps in our understanding of generalization in machine learning. Addressing these gaps is crucial for advancing the theoretical foundations of the field. In this work, we use Probably Approximately Correct (PAC) learning theory to construct a theoretical error distribution that approximates a learning algorithm's actual performance, and we rigorously prove that this distribution is monotone as the sample size increases. We identify two scenarios under which deterministic algorithms based on Empirical Risk Minimization (ERM) are monotone: (1) the hypothesis space is finite, or (2) the hypothesis space has finite VC dimension. Experiments on three classical learning problems validate our findings, demonstrating that the monotonicity of each algorithm's generalization error is guaranteed because its theoretical error upper bound converges monotonically to the minimum generalization error.
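The finite-hypothesis scenario mentioned in the abstract corresponds to the classical PAC/Hoeffding generalization bound for ERM, which shrinks monotonically as the sample size grows. A minimal sketch of that textbook bound (the function name and the specific constants below are illustrative, not taken from the paper itself):

```python
import math

def pac_bound(h_size: int, m: int, delta: float = 0.05) -> float:
    """Classical uniform-convergence upper bound on the gap between
    true and empirical risk for ERM over a finite hypothesis class of
    size h_size, given m i.i.d. samples and confidence 1 - delta."""
    return math.sqrt((math.log(h_size) + math.log(2 / delta)) / (2 * m))

# The bound is strictly decreasing in the sample size m,
# illustrating the monotone convergence the abstract describes.
bounds = [pac_bound(h_size=1000, m=m) for m in (100, 1000, 10000)]
assert all(b1 > b2 for b1, b2 in zip(bounds, bounds[1:]))
```

Because the bound depends on m only through the factor 1/sqrt(m), its monotone decrease holds for any fixed hypothesis-class size and confidence level.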

Original language: English
Article number: 114504
Journal: Knowledge-Based Systems
Volume: 330
State: Published - 25 Nov 2025

Keywords

  • Machine learning
  • Monotonicity
  • PAC
