
The Teaching Dimension of Regularized Kernel Learners

Research output: Contribution to journal › Conference article › peer-review

Abstract

Teaching dimension (TD) is a fundamental theoretical property for understanding machine teaching algorithms. It measures the sample complexity of teaching a target hypothesis to a learner. The TD of linear learners has been studied extensively, whereas results for teaching non-linear learners are rare. A recent result investigates the TD of polynomial and Gaussian kernel learners. Unfortunately, the theoretical bounds therein show that the TD is high when teaching those non-linear learners. Inspired by the fact that regularization can reduce learning complexity in machine learning, a natural question is whether a similar effect arises in machine teaching. To answer this question, this paper proposes a unified theoretical framework termed STARKE to analyze the TD of regularized kernel learners. On the basis of STARKE, we derive a generic result for any type of kernel. Furthermore, we show that the TD of regularized linear and regularized polynomial kernel learners can be strictly reduced. For regularized Gaussian kernel learners, we reveal that, although their TD is infinite, their ϵ-approximate TD can be exponentially reduced compared with that of the unregularized learners. Extensive experimental results on teaching optimization-based learners verify the theoretical findings.
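For readers unfamiliar with the concept, the teaching dimension of a single target hypothesis can be stated as follows. This is the classical definition from the machine-teaching literature (it is background context, not a formula quoted from the paper itself): the TD of a target hypothesis h* with respect to a hypothesis class H is the size of the smallest teaching set that leaves h* as the only consistent hypothesis in H.

```latex
\mathrm{TD}(h^{\ast}, \mathcal{H})
  \;=\;
  \min_{S}\,\bigl\{\, |S| \;:\;
    h^{\ast} \text{ is the unique } h \in \mathcal{H}
    \text{ consistent with the labeled set } S \,\bigr\}
```

The paper's question can then be read as: how does adding a regularizer to the learner's objective change this minimal teaching-set size for kernelized hypothesis classes?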

Original language: English
Pages (from-to): 17984-18002
Number of pages: 19
Journal: Proceedings of Machine Learning Research
Volume: 162
Publication status: Published - 2022
Event: 39th International Conference on Machine Learning, ICML 2022 - Baltimore, United States
Duration: 17 Jul 2022 → 23 Jul 2022
