TY - JOUR
T1 - Distributed Stochastic Optimization With Unbounded Subgradients Over Randomly Time-Varying Networks
AU - Chen, Yan
AU - Fradkov, Alexander L.
AU - Fu, Keli
AU - Fu, Xiaozheng
AU - Li, Tao
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic optimization in which networked nodes cooperatively minimize a sum of convex cost functions. The network is modeled by a sequence of time-varying random digraphs, with each node representing a local optimizer and each edge representing a communication link. We consider a distributed subgradient optimization algorithm with noisy measurements of the local cost functions' subgradients and with additive and multiplicative noises in the information exchanged between each pair of nodes. Using the stochastic Lyapunov method, convex analysis, algebraic graph theory, and martingale convergence theory, we prove that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then the algorithm step sizes can be designed so that all nodes' states converge to the global optimal solution almost surely.
AB - Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic optimization in which networked nodes cooperatively minimize a sum of convex cost functions. The network is modeled by a sequence of time-varying random digraphs, with each node representing a local optimizer and each edge representing a communication link. We consider a distributed subgradient optimization algorithm with noisy measurements of the local cost functions' subgradients and with additive and multiplicative noises in the information exchanged between each pair of nodes. Using the stochastic Lyapunov method, convex analysis, algebraic graph theory, and martingale convergence theory, we prove that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then the algorithm step sizes can be designed so that all nodes' states converge to the global optimal solution almost surely.
KW - Additive and multiplicative communication noise
KW - distributed stochastic convex optimization
KW - random graph
KW - subgradient
UR - https://www.scopus.com/pages/publications/85215567443
U2 - 10.1109/TAC.2024.3525182
DO - 10.1109/TAC.2024.3525182
M3 - Article
AN - SCOPUS:85215567443
SN - 0018-9286
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
ER -