Distributed Online Optimization Based on One-Step Gradient Descent and Multi-Step Consensus

  • Yingjie Zhou
  • Xinyu Wang
  • Tao Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We propose a distributed online optimization algorithm with the ability to learn continuously. In each round, the algorithm first performs one-step gradient descent with a fixed step size to track the time-varying optimal solutions, and then applies multi-step consensus to ensure collaboration between neighboring nodes. For strongly convex and smooth objective functions, we provide a dynamic regret analysis of the proposed algorithm and show that the dynamic regret is upper bounded by a term depending on the initial values, the path variation of the optimal solutions, and a linear growth term. The coefficient of the linear growth term can be made arbitrarily small by adjusting the gradient descent step size. We also demonstrate the performance of the proposed algorithm through numerical simulations.
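
The abstract describes the per-round structure precisely enough to sketch: one local gradient step with a fixed step size, followed by several consensus rounds over the communication graph. The following Python snippet is a minimal sketch of that structure under standard assumptions (a doubly stochastic mixing matrix W, local gradients supplied by each node); the function name distributed_online_step and the drifting quadratic losses in the toy usage are illustrative choices, not the paper's notation or experiments.

```python
import numpy as np

def distributed_online_step(x, grads, W, alpha, K):
    """One round of the scheme described in the abstract: every node takes
    one gradient step on its local loss with a fixed step size alpha, then
    the network runs K consensus (weighted-averaging) rounds over W."""
    y = x - alpha * grads          # one-step gradient descent at each node
    for _ in range(K):             # multi-step consensus with mixing matrix W
        y = W @ y
    return y

# Toy usage (illustrative, not from the paper): 4 nodes on a ring tracking
# drifting quadratic targets f_{i,t}(x) = ||x - b_{i,t}||^2, grad = 2(x - b_{i,t}).
n, d, T, alpha, K = 4, 2, 50, 0.1, 3
W = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic ring weights
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.zeros((n, d))
for t in range(T):
    b = np.sin(0.1 * t) + np.arange(n, dtype=float)[:, None] * np.ones(d)
    grads = 2.0 * (x - b)          # local gradients at the current iterates
    x = distributed_online_step(x, grads, W, alpha, K)
```

A larger K drives the nodes' iterates closer together between gradient steps, while the fixed step size alpha trades tracking speed against the linear growth term in the regret bound, consistent with the abstract's statement that its coefficient can be made arbitrarily small.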

Original language: English
Title of host publication: 2024 18th International Conference on Control, Automation, Robotics and Vision, ICARCV 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 840-845
Number of pages: 6
ISBN (Electronic): 9798331518493
DOIs
State: Published - 2024
Event: 18th International Conference on Control, Automation, Robotics and Vision, ICARCV 2024 - Dubai, United Arab Emirates
Duration: 12 Dec 2024 - 15 Dec 2024

Publication series

Name: 2024 18th International Conference on Control, Automation, Robotics and Vision, ICARCV 2024

Conference

Conference: 18th International Conference on Control, Automation, Robotics and Vision, ICARCV 2024
Country/Territory: United Arab Emirates
City: Dubai
Period: 12/12/24 - 15/12/24

Keywords

  • Distributed online optimization
  • continuous learning ability
  • dynamic regret
