
Zeroth-Order Methods for Constrained Nonconvex Nonsmooth Stochastic Optimization

  • Zhuanghua Liu
  • Cheng Chen
  • Luo Luo*
  • Bryan Kian Hsiang Low

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper studies nonconvex nonsmooth optimization over a closed convex set. Most previous works tackle such problems by transforming the constrained problem into an unconstrained one, but they provide only asymptotic convergence analysis for their methods. In this work, we provide a non-asymptotic convergence analysis for constrained nonconvex nonsmooth optimization. We first generalize the classical gradient mapping and the Frank-Wolfe gap to the nonsmooth setting. We then introduce novel notions of approximate stationarity based on these generalized quantities. We also propose several stochastic zeroth-order algorithms for the problem, together with non-asymptotic guarantees for attaining the proposed approximate stationarity. Finally, we conduct numerical experiments that demonstrate the effectiveness of our algorithms.
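To illustrate the general flavor of the setting, the following is a minimal sketch of a stochastic zeroth-order projected update: a standard two-point randomized gradient estimator (built only from function evaluations) combined with a Euclidean projection onto a convex set. This is a generic textbook construction under assumed names (`zo_gradient`, `project_box`), not the paper's actual algorithms or its notions of generalized gradient mapping.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, rng=None):
    # Two-point zeroth-order gradient estimator: probe f along a random
    # Gaussian direction u and form a finite-difference estimate.
    # (A standard randomized-smoothing estimator, not the paper's scheme.)
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def project_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the closed convex set [lo, hi]^d.
    return np.clip(x, lo, hi)

def zo_projected_step(f, x, eta=0.1, rng=None):
    # One projected zeroth-order update: x <- P_C(x - eta * g).
    g = zo_gradient(f, x, rng=rng)
    return project_box(x - eta * g)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: np.abs(x).sum()   # a simple nonsmooth objective
    x = np.full(5, 0.9)             # start inside the box [-1, 1]^5
    for _ in range(200):
        x = zo_projected_step(f, x, eta=0.05, rng=rng)
    # The iterate stays feasible throughout, and the objective shrinks.
    print(f(x))
```

Because the method never queries a gradient oracle, it applies even when the objective is nonsmooth and only noisy function values are available, which is the regime the paper targets.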

Original language: English
Pages (from-to): 30842-30872
Number of pages: 31
Journal: Proceedings of Machine Learning Research
Volume: 235
State: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024

