Convergence of asynchronous distributed gradient methods over stochastic networks

We consider distributed optimization problems in which a number of agents seek the global optimum of a sum of cost functions through only local information sharing. In this paper, we are particularly interested in scenarios where agents operate asynchronously over stochastic networks subject to random failures.


Bibliographic Details
Main Authors: Xu, Jinming, Zhu, Shanying, Soh, Yeng Chai, Xie, Lihua
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects:
Online Access:https://hdl.handle.net/10356/145309
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-145309
record_format dspace
spelling sg-ntu-dr.10356-1453092020-12-17T02:08:21Z Convergence of asynchronous distributed gradient methods over stochastic networks Xu, Jinming Zhu, Shanying Soh, Yeng Chai Xie, Lihua School of Electrical and Electronic Engineering Engineering::Electrical and electronic engineering Convergence Distributed Optimization We consider distributed optimization problems in which a number of agents seek the global optimum of a sum of cost functions through only local information sharing. In this paper, we are particularly interested in scenarios where agents operate asynchronously over stochastic networks subject to random failures. Most existing algorithms require coordinated and decaying stepsizes to ensure a zero gap between each agent's estimate and the exact optimum, which prevents asynchronous implementation and leads to slower convergence. To address this issue, we develop a new asynchronous distributed gradient method (AsynDGM) based on consensus theory. The proposed algorithm not only allows for asynchronous implementation in a completely distributed manner but also, most importantly, is able to seek the exact optimum even with constant stepsizes. We show that the assumption of bounded gradients, which is widely used in the literature, can be dropped by instead imposing the standard Lipschitz continuity condition on gradients. Moreover, we derive an upper bound on the stepsize within which the proposed AsynDGM achieves a linear convergence rate for strongly convex functions with Lipschitz gradients. A canonical sensor fusion example is provided to illustrate the effectiveness of the proposed algorithm. 2020-12-17T02:08:21Z 2020-12-17T02:08:21Z 2018 Journal Article Xu, J., Zhu, S., Soh, Y. C., & Xie, L. (2018). Convergence of asynchronous distributed gradient methods over stochastic networks. IEEE Transactions on Automatic Control, 63(2), 434-448. 
doi:10.1109/TAC.2017.2730481 1558-2523 https://hdl.handle.net/10356/145309 10.1109/TAC.2017.2730481 2 63 434 448 en IEEE Transactions on Automatic Control © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TAC.2017.2730481
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering
Convergence
Distributed Optimization
spellingShingle Engineering::Electrical and electronic engineering
Convergence
Distributed Optimization
Xu, Jinming
Zhu, Shanying
Soh, Yeng Chai
Xie, Lihua
Convergence of asynchronous distributed gradient methods over stochastic networks
description We consider distributed optimization problems in which a number of agents seek the global optimum of a sum of cost functions through only local information sharing. In this paper, we are particularly interested in scenarios where agents operate asynchronously over stochastic networks subject to random failures. Most existing algorithms require coordinated and decaying stepsizes to ensure a zero gap between each agent's estimate and the exact optimum, which prevents asynchronous implementation and leads to slower convergence. To address this issue, we develop a new asynchronous distributed gradient method (AsynDGM) based on consensus theory. The proposed algorithm not only allows for asynchronous implementation in a completely distributed manner but also, most importantly, is able to seek the exact optimum even with constant stepsizes. We show that the assumption of bounded gradients, which is widely used in the literature, can be dropped by instead imposing the standard Lipschitz continuity condition on gradients. Moreover, we derive an upper bound on the stepsize within which the proposed AsynDGM achieves a linear convergence rate for strongly convex functions with Lipschitz gradients. A canonical sensor fusion example is provided to illustrate the effectiveness of the proposed algorithm.
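To make the abstract's idea concrete, the following is a minimal synchronous sketch of a consensus-based distributed gradient method with gradient tracking, which illustrates how constant stepsizes can still reach the exact optimum. This is an assumption-laden illustration, not the authors' AsynDGM: the network, the quadratic local costs, the mixing weights, and the stepsize are all invented here for demonstration, and the asynchronous, stochastic-network features of the paper are omitted.

```python
# Illustrative sketch only (NOT the paper's AsynDGM): synchronous
# consensus + gradient-tracking iteration over a fixed 4-agent ring.
# Assumed local costs f_i(x) = (x - a_i)^2 / 2, so the global optimum
# of sum_i f_i is the mean of the a_i.

n = 4
a = [1.0, 3.0, 5.0, 7.0]           # local data; exact optimum is mean(a) = 4.0

# Doubly stochastic mixing matrix for a ring topology (assumed weights).
W = [[0.5, 0.25, 0.0, 0.25],
     [0.25, 0.5, 0.25, 0.0],
     [0.0, 0.25, 0.5, 0.25],
     [0.25, 0.0, 0.25, 0.5]]

def grad(i, x):
    # Gradient of the i-th local cost at x.
    return x - a[i]

alpha = 0.2                         # constant stepsize (assumed small enough)
x = [0.0] * n                       # local estimates of the optimum
y = [grad(i, x[i]) for i in range(n)]  # trackers of the average gradient

for _ in range(200):
    # Consensus step on estimates, then a step along the tracked gradient.
    x_new = [sum(W[i][j] * x[j] for j in range(n)) - alpha * y[i]
             for i in range(n)]
    # Consensus step on trackers, corrected by the local gradient change.
    y = [sum(W[i][j] * y[j] for j in range(n))
         + grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)]
    x = x_new

print([round(v, 3) for v in x])
```

With these assumed parameters every agent's estimate approaches the exact optimum 4.0; with a plain decaying-stepsize method (no tracking), constant stepsizes would instead leave a persistent gap, which is the limitation the abstract highlights.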
author2 School of Electrical and Electronic Engineering
author_facet School of Electrical and Electronic Engineering
Xu, Jinming
Zhu, Shanying
Soh, Yeng Chai
Xie, Lihua
format Article
author Xu, Jinming
Zhu, Shanying
Soh, Yeng Chai
Xie, Lihua
author_sort Xu, Jinming
title Convergence of asynchronous distributed gradient methods over stochastic networks
title_short Convergence of asynchronous distributed gradient methods over stochastic networks
title_full Convergence of asynchronous distributed gradient methods over stochastic networks
title_fullStr Convergence of asynchronous distributed gradient methods over stochastic networks
title_full_unstemmed Convergence of asynchronous distributed gradient methods over stochastic networks
title_sort convergence of asynchronous distributed gradient methods over stochastic networks
publishDate 2020
url https://hdl.handle.net/10356/145309
_version_ 1688665378553593856