Augmented distributed optimization for networked systems


Bibliographic Details
Main Author: Xu, Jinming
Other Authors: Soh Yeng Chai
Format: Theses and Dissertations
Language: English
Published: 2016
Subjects:
Online Access:https://hdl.handle.net/10356/68805
Institution: Nanyang Technological University
id sg-ntu-dr.10356-68805
record_format dspace
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
topic DRNTU::Science::Mathematics::Applied mathematics::Optimization
DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Control engineering
description The dissertation develops several new schemes and algorithms for solving distributed optimization problems in large-scale networked systems, in which a group of agents collaboratively seeks the global optimum through peer-to-peer communication networks. The problem arises in various application domains, such as coordinated control, resource allocation and sensor fusion. Common features of these areas are that the system in question typically has a large number of agents involved without any centralized coordinator, and that resources such as sensing, communication and computation are usually scattered throughout the network. As a result, the agents have to coordinate their behavior with each other through only local information exchange to achieve a desired network (system) objective. For coordinated control of large-scale networked systems, we propose a novel distributed simultaneous perturbation approach (D-SPA) to solve the distributed optimization problem based on simultaneous perturbation techniques as well as consensus strategies. The proposed method is model-free and requires little knowledge of the coupling structure of the problem to be optimized. Using singular perturbation and averaging theory, we show that the proposed scheme converges to a neighborhood of the Pareto optimum of the problem so long as the energy of the perturbation signals is sufficiently small. To illustrate its effectiveness, we apply the proposed approach to a simulated offshore wind farm for energy maximization and compare it comprehensively with the existing state-of-the-art technique. For coordinated estimation in large-scale statistical signal processing, on the other hand, most existing distributed algorithms require a perfect synchronization mechanism and a decaying stepsize to achieve the exact optimum, which prevents asynchronous implementation and results in slower convergence rates.
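The model-free, simultaneous-perturbation idea underlying approaches such as D-SPA can be illustrated with a minimal single-agent SPSA sketch. This is the classic estimator, not the thesis's distributed scheme; the objective `f`, the problem dimension, and the gain sequences below are assumptions for the demo. The key point is that two function evaluations along one random ±1 direction estimate the entire gradient, so no model of the objective is needed.

```python
import numpy as np

# Minimal sketch of a simultaneous-perturbation (SPSA-style) gradient
# estimate: perturb all coordinates at once with random +/-1 signs and
# use two function evaluations to approximate the full gradient.
# The quadratic f and the gain sequences are illustrative assumptions.

rng = np.random.default_rng(0)

def f(x):
    # "Black-box" objective; a simple quadratic with minimizer at 1.
    return np.sum((x - 1.0) ** 2)

x = np.zeros(4)
for k in range(1, 2001):
    a_k = 0.1 / k ** 0.602        # decaying step gain (standard SPSA choice)
    c_k = 0.1 / k ** 0.101        # decaying perturbation magnitude
    delta = rng.choice([-1.0, 1.0], size=x.shape)  # random +/-1 directions
    # Two evaluations estimate every gradient component simultaneously:
    g_hat = (f(x + c_k * delta) - f(x - c_k * delta)) / (2 * c_k) * (1.0 / delta)
    x = x - a_k * g_hat

print(x)   # approaches the minimizer [1, 1, 1, 1]
```

The cross-coordinate terms in the estimate average out over iterations, which is why only two evaluations per step suffice regardless of dimension.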
In addition, the boundedness of the (sub)gradients is often assumed for convergence analysis, which is quite restrictive for unconstrained problems. To overcome these issues, we propose two augmented distributed algorithms, both of which involve an extra consensus step in each iteration. Specifically, a general and efficient distributed algorithm, termed Distributed Forward-Backward Bregman Splitting (D-FBBS), is proposed to simultaneously solve the primal problem and its dual, based on the Bregman method and operator splitting. The proposed algorithm allows agents to communicate asynchronously and thus lends itself to stochastic networks. It belongs to the family of general proximal point algorithms and is shown to have close connections with some well-known existing algorithms for fixed networks, while generally differing from them in how stochastic networks are handled. To further tackle asynchrony in computation, we propose a new augmented distributed gradient method (AugDGM) built on the well-known distributed gradient method. Both algorithms converge to the exact optimum over stochastic networks, even with a constant stepsize and without assuming bounded (sub)gradients. Under proper assumptions, we establish for the D-FBBS algorithm a non-ergodic convergence rate of o(1/k) in terms of the fixed-point residual for fixed networks and an ergodic convergence rate of O(1/k) for stochastic networks. For the asynchronous version of AugDGM (AsynDGM), we obtain an ergodic convergence rate of O(1/√k) in terms of the objective error for strongly convex functions with Lipschitz gradients over both fixed and stochastic networks. Some examples of sensor fusion problems are provided to illustrate the effectiveness of the proposed algorithms.
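The "extra consensus step" that allows a constant stepsize in methods of the AugDGM family is commonly known as gradient tracking: each agent mixes with its neighbors both its local estimate and a running tracker of the network-average gradient. A minimal sketch in that spirit follows; the ring network, quadratic local objectives, Metropolis-style weights and stepsize are assumptions for illustration, not the thesis's exact setup.

```python
import numpy as np

# Illustrative gradient-tracking sketch: a distributed gradient step
# augmented with consensus on a tracked average gradient, so the exact
# optimum is reached with a constant stepsize.

n = 5                                     # number of agents
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # f_i(x) = 0.5 * (x - b_i)^2

def grad(x):
    # Stacked local gradients; global optimum is mean(b) = 3.
    return x - b

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

alpha = 0.3                               # constant stepsize
x = np.zeros(n)                           # local estimates
y = grad(x)                               # gradient trackers, y_0 = grad(x_0)

for _ in range(200):
    x_new = W @ (x - alpha * y)           # consensus on the gradient step
    y = W @ (y + grad(x_new) - grad(x))   # consensus on the tracked gradient
    x = x_new

print(x)   # all agents approach the global optimum mean(b) = 3
```

Without the tracker `y`, a plain distributed gradient step with constant stepsize would only reach a neighborhood of the optimum; the tracking update is what removes that residual error.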
spelling sg-ntu-dr.10356-68805 2023-07-04T16:37:25Z
school School of Electrical and Electronic Engineering
centre Centre for Modelling and Control of Complex Systems
degree DOCTOR OF PHILOSOPHY (EEE)
accessioned 2016-06-01T08:44:11Z
type Thesis
doi 10.32657/10356/68805
extent 150 p. application/pdf