Distributed and Parallel ADMM for Structured Nonconvex Optimization Problem

Xiangfeng Wang, Junchi Yan, Bo Jin*, Wenhao Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

22 Scopus citations

Abstract

Nonconvex optimization problems have recently attracted significant attention. However, both efficient algorithms and solid theory remain limited, and the difficulty is even more pronounced for structured large-scale problems arising in many real-world applications. This article proposes an application-driven algorithmic framework for structured nonconvex optimization problems using distributed and parallel techniques, which jointly handles high-dimensional model parameters and distributed training data. The theoretical convergence of our algorithm is established under moderate assumptions. We apply the proposed method to popular multitask applications, including a multitask reinforcement learning problem. The promising performance demonstrates that our framework is both effective and efficient.
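The paper's specific algorithm is not reproduced on this page, but the distributed consensus structure that ADMM-style frameworks build on can be illustrated with a minimal sketch. The following toy example uses scaled-form consensus ADMM on a convex least-squares problem split across hypothetical workers (all names, data, and the quadratic losses are illustrative stand-ins, not the paper's nonconvex objective); the per-worker x-updates are independent and could run in parallel, with the z-update acting as the single communication step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: N workers, each holding a local least-squares task
# f_i(x) = 0.5 * ||A_i x - b_i||^2 (an illustrative convex stand-in
# for the structured losses treated in the paper).
N, d, m = 4, 5, 20
A = [rng.standard_normal((m, d)) for _ in range(N)]
b = [rng.standard_normal(m) for _ in range(N)]

rho = 1.0                              # ADMM penalty parameter
x = [np.zeros(d) for _ in range(N)]    # local worker variables
u = [np.zeros(d) for _ in range(N)]    # scaled dual variables
z = np.zeros(d)                        # global consensus variable

for _ in range(500):
    # x-updates: closed-form for quadratics, independent per worker
    for i in range(N):
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
    # z-update: averaging step (one communication round)
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
    # dual updates: local to each worker
    for i in range(N):
        u[i] += x[i] - z

# For this convex toy, z should match the centralized solution
A_full = np.vstack(A)
b_full = np.concatenate(b)
x_star, *_ = np.linalg.lstsq(A_full, b_full, rcond=None)
print(np.linalg.norm(z - x_star))
```

In the nonconvex setting the article targets, the subproblems generally lack closed-form solutions and convergence requires the moderate assumptions mentioned in the abstract; this sketch only shows the communication pattern.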

Original language: English
Pages (from-to): 4540-4552
Number of pages: 13
Journal: IEEE Transactions on Cybernetics
Volume: 51
Issue number: 9
DOIs
State: Published - Sep 2021

Keywords

  • Distributed
  • large-scale optimization
  • multitask reinforcement learning
  • nonconvex optimization
  • parallel
