Distributed online convex optimization with time-varying coupled inequality constraints

Bibliographic Details
Main Authors: Yi, Xinlei, Li, Xiuxian, Xie, Lihua, Johansson, Karl H.
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language:English
Published: 2023
Online Access:https://hdl.handle.net/10356/164461
Institution: Nanyang Technological University
Description
Summary: This paper considers distributed online optimization with time-varying coupled inequality constraints. The global objective function is composed of local convex cost and regularization functions, and the coupled constraint function is the sum of local convex functions. A distributed online primal-dual dynamic mirror descent algorithm is proposed to solve this problem, where the local cost, regularization, and constraint functions are held privately and revealed only after each time slot. Without assuming Slater's condition, we first derive regret and constraint violation bounds for the algorithm and show how they depend on the stepsize sequences, the accumulated dynamic variation of the comparator sequence, the number of agents, and the network connectivity. As a result, under some natural decreasing stepsize sequences, we prove that the algorithm achieves sublinear dynamic regret and constraint violation if the accumulated dynamic variation of the optimal sequence also grows sublinearly. We also prove that the algorithm achieves sublinear static regret and constraint violation under mild conditions. Assuming Slater's condition, we show that the algorithm achieves smaller bounds on the constraint violation. In addition, smaller bounds on the static regret are achieved when the objective function is strongly convex. Finally, numerical simulations are provided to illustrate the effectiveness of the theoretical results.
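The record stops at the abstract, so the sketch below is only a rough illustration of the kind of update the abstract describes: after committing its decision, each agent mixes dual variables with its neighbors, ascends along its own time-varying constraint, and takes a mirror-descent step on its local Lagrangian. It is not the paper's algorithm; the Euclidean mirror map (plain projected gradient instead of a general Bregman divergence), the box decision set, scalar local constraints, and all function and variable names are assumptions made for this example.

import numpy as np

# Illustrative sketch of a distributed online primal-dual update with
# time-varying coupled constraints. NOT the paper's exact algorithm:
# it assumes a Euclidean mirror map (so mirror descent reduces to
# projected gradient), scalar local constraints g_{i,t}, a box decision
# set X = [-1, 1]^d, and a doubly stochastic mixing matrix W.

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the assumed box decision set X."""
    return np.clip(x, lo, hi)

def distributed_primal_dual_step(x, q, W, grad_f, grad_g, g_val, alpha, gamma):
    """One synchronous round for all n agents.

    x      : (n, d) current primal iterates x_{i,t}
    q      : (n,)   current nonnegative dual iterates q_{i,t}
    W      : (n, n) doubly stochastic weight matrix of the network
    grad_f : (n, d) gradients of the local cost-plus-regularization at x_{i,t}
    grad_g : (n, d) gradients of the local constraint functions at x_{i,t}
    g_val  : (n,)   constraint values g_{i,t}(x_{i,t}) revealed after time t
    alpha  : primal stepsize alpha_t
    gamma  : dual stepsize gamma_t
    """
    # Dual update: mix neighbors' multipliers over the network, ascend
    # along the observed constraint value, project onto q >= 0.
    q_mixed = W @ q
    q_next = np.maximum(0.0, q_mixed + gamma * g_val)

    # Primal update: descend along the gradient of the local Lagrangian
    # f_{i,t}(x) + q_{i,t+1} * g_{i,t}(x), then project back onto X.
    x_next = project_box(x - alpha * (grad_f + q_next[:, None] * grad_g))
    return x_next, q_next

In a simulation one would draw the local cost and constraint functions for each round, commit x before they are revealed, and track the accumulated loss against a comparator sequence and the accumulated coupled constraint values, i.e. empirical counterparts of the regret and constraint violation bounds discussed in the abstract.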