Abstract
This paper studies nonconvex nonsmooth optimization over a closed convex set. Most previous works tackle such problems by transforming the constrained problem into an unconstrained one, but they provide only asymptotic convergence analyses for their methods. In this work, we provide a non-asymptotic convergence analysis for constrained nonconvex nonsmooth optimization. We first generalize the classical gradient mapping and the Frank-Wolfe gap to the nonsmooth setting, and then introduce novel notions of approximate stationarity based on these generalized quantities. We also propose several stochastic zeroth-order algorithms for the problem, together with non-asymptotic guarantees for reaching the proposed approximate stationary points. Finally, we conduct numerical experiments that demonstrate the effectiveness of our algorithms.
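For context, the two stationarity measures the abstract refers to have standard forms in the smooth case; the paper's contribution is extending them to nonsmooth objectives. In the smooth setting, for a closed convex set $\mathcal{C}$, step size $\eta > 0$, and Euclidean projection $\Pi_{\mathcal{C}}$, the classical definitions are:

```latex
% Classical (smooth-case) stationarity measures over a closed convex set C.
% The paper generalizes both to the nonsmooth setting.
\[
  G_\eta(x) \;=\; \frac{1}{\eta}\Bigl(x - \Pi_{\mathcal{C}}\bigl(x - \eta \nabla f(x)\bigr)\Bigr)
  \qquad \text{(gradient mapping)}
\]
\[
  \mathcal{G}(x) \;=\; \max_{y \in \mathcal{C}} \bigl\langle \nabla f(x),\, x - y \bigr\rangle
  \qquad \text{(Frank-Wolfe gap)}
\]
```

To illustrate what "stochastic zeroth-order" means here, below is a minimal sketch (not the paper's actual algorithm) of a two-point Gaussian-smoothing gradient estimator combined with projected descent; the objective `f`, smoothing radius `mu`, and projection `proj_C` are illustrative placeholders:

```python
import numpy as np

def zo_grad_estimate(f, x, mu=1e-3, num_samples=10, rng=None):
    """Two-point randomized (Gaussian smoothing) gradient estimator.

    Uses only function evaluations of f -- no gradients -- which is
    what "zeroth-order" refers to. `mu` is the smoothing radius.
    """
    rng = rng or np.random.default_rng()
    d = x.size
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_samples

def projected_zo_descent(f, proj_C, x0, eta=0.1, iters=100):
    """Projected descent driven by zeroth-order gradient estimates.

    `proj_C` is the Euclidean projection onto the closed convex set C.
    A generic sketch only; the paper's methods and rates differ.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = zo_grad_estimate(f, x)
        x = proj_C(x - eta * g)
    return x

# Example: minimize a nonsmooth objective over the unit Euclidean ball.
if __name__ == "__main__":
    f = lambda x: np.abs(x).sum() + 0.5 * (x ** 2).sum()
    proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
    x_out = projected_zo_descent(f, proj_ball, x0=np.ones(5), eta=0.05, iters=500)
    print(x_out)
```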
| Original language | English |
|---|---|
| Pages (from-to) | 30842-30872 |
| Number of pages | 31 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 235 |
| State | Published - 2024 |
| Event | 41st International Conference on Machine Learning (ICML 2024), Vienna, Austria, 21–27 Jul 2024 |