Convergence of asynchronous distributed gradient methods over stochastic networks
We consider distributed optimization problems in which a number of agents seek the global optimum of a sum of cost functions through only local information sharing. In this paper, we are particularly interested in scenarios where agents are operating asynchronously over stochastic networks s...
Saved in:
Main Authors: Xu, Jinming; Zhu, Shanying; Soh, Yeng Chai; Xie, Lihua
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/145309
Institution: Nanyang Technological University
Similar Items
- A Bregman splitting scheme for distributed optimization over networks
  by: Xu, Jinming, et al.
  Published: (2020)
- Innovation compression for communication-efficient distributed optimization with linear convergence
  by: Zhang, Jiaqi, et al.
  Published: (2023)
- A dual splitting approach for distributed resource allocation with regularization
  by: Xu, Jinming, et al.
  Published: (2020)
- Gradient-free distributed optimization with exact convergence
  by: Pang, Yipeng, et al.
  Published: (2022)
- Exponential convergence of distributed optimization for heterogeneous linear multi-agent systems over unbalanced digraphs
  by: Li, Li, et al.
  Published: (2022)