Gradient-free distributed optimization with exact convergence
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/163543
Institution: Nanyang Technological University
Summary: In this paper, a gradient-free distributed algorithm is introduced to solve a set-constrained optimization problem over a directed communication network. Specifically, at each time-step, the agents locally compute a so-called pseudo-gradient to guide the updates of the decision variables, so the method applies in settings where gradient information is unknown, unavailable, or nonexistent. A surplus-based method is adopted to remove the doubly stochastic requirement on the weighting matrix, which enables the implementation of the algorithm on graphs that admit no doubly stochastic weighting matrix. Regarding convergence, the proposed algorithm achieves exact convergence to the optimal value for any positive, non-increasing, and non-summable step-sizes. Furthermore, when the step-size is also square-summable, the algorithm is guaranteed to converge exactly to an optimal solution. In addition to the standard convergence analysis, the convergence rate of the proposed algorithm is investigated. Finally, the effectiveness of the proposed algorithm is verified through numerical simulations.
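To make these ingredients concrete, the following is a minimal numerical sketch, assuming a two-point randomized pseudo-gradient estimator and a simplified surplus-style correction over a directed ring; the toy objectives, the mixing weights A and B, and the gain eps below are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_gradient(f, x, mu=1e-4):
    # Two-point randomized estimate of grad f(x): only function values are used,
    # so no gradient information is required.
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# Toy problem (assumed): n agents jointly minimize sum_i f_i(x) with
# f_i(x) = ||x - c_i||^2, whose minimizer is the mean of the points c_i.
n, d = 4, 2
centers = rng.standard_normal((n, d))
fs = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in centers]

# Directed ring: agent i receives only from agent (i - 1) mod n, so no
# doubly stochastic weighting matrix is available.
A = np.zeros((n, n))  # row-stochastic weights for the decision variables
B = np.zeros((n, n))  # column-stochastic weights for the surplus variables
for i in range(n):
    A[i, i] = A[i, (i - 1) % n] = 0.5
    B[i, i] = B[(i + 1) % n, i] = 0.5

x = rng.standard_normal((n, d))  # decision variables, one row per agent
s = np.zeros((n, d))             # surplus variables, offsetting the mixing imbalance
eps = 0.05                       # surplus coupling gain (assumed small enough)

for k in range(1, 20001):
    alpha = 1.0 / k              # positive, non-increasing, non-summable, square-summable
    g = np.stack([pseudo_gradient(fs[i], x[i]) for i in range(n)])
    x_next = A @ x + eps * s - alpha * g
    # The surplus absorbs whatever the row-stochastic mixing adds or removes,
    # so sum(x) + sum(s) changes only through the pseudo-gradient step.
    s = B @ s + (x - A @ x - eps * s)
    x = x_next

print("consensus estimate:", x.mean(axis=0))
print("true minimizer    :", centers.mean(axis=0))
```

With the step-size alpha_k = 1/k, which satisfies all of the conditions above, the agents' iterates drift toward the minimizer of the summed objective while the surplus variables compensate for the directed, non-doubly-stochastic mixing.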