Data-driven control and operation of active distribution system

Bibliographic Details
Main Author: Yan, Rudai
Other Authors: Xu Yan
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/173466
Institution: Nanyang Technological University
Description
Summary: The presence of renewable energy generation units in an active distribution system reduces reliance on fossil fuels and promotes a more environmentally friendly way of operating. However, there are uncertainties associated with primary sources such as solar and wind energy, which create challenges for control and operation. Specifically, the unpredictable and rapid fluctuations in renewable energy sources (RESs) can cause significant deviations in system frequency and voltage, potentially leading to instability. Moreover, the two-way flow of power and the inherent uncertainties in generation and loads make it difficult for traditional power system control and operation methods to perform effectively, especially in systems with a high penetration of RESs. To ensure a reliable and secure power supply, this thesis proposes data-driven approaches to coordinate distributed energy resources (DERs) for frequency and voltage regulation in distribution systems. Microgrids (MGs) play a key role in integrating renewable and clean energy resources at the distribution level. The thesis first proposes a data-driven method for distributed frequency control of islanded microgrids based on multi-agent quantum deep reinforcement learning (MAQDRL). The proposed method combines the conventional deep reinforcement learning (DRL) framework with quantum machine learning and can adaptively obtain the optimal cooperative control strategy. The microgrid secondary frequency control is organized in a distributed way: each agent performs its control action based only on local and neighboring information. The proposed method effectively regulates the frequency with better time-delay tolerance and displays a quantum advantage in parameter reduction. To reduce the computational burden of the multi-agent design, a graph reinforcement learning based data-driven method for frequency control of an islanded AC microgrid is further proposed.
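The distributed secondary control described above, where each agent acts only on local and neighboring measurements, can be illustrated with a minimal consensus-style sketch. The function name and the gains `kp` and `kc` are hypothetical illustration choices, not values from the thesis:

```python
def secondary_control_update(freq_dev, nbr_devs, kp=0.5, kc=0.2):
    """One distributed secondary-control step for a single agent.

    freq_dev : this agent's local frequency deviation (Hz)
    nbr_devs : frequency deviations reported by neighboring agents
    kp, kc   : hypothetical local and consensus gains

    The agent corrects its own deviation and adds a consensus term that
    pulls it toward its neighbors' deviations, using no global information.
    """
    consensus = sum(d - freq_dev for d in nbr_devs)
    return -kp * freq_dev + kc * consensus
```

In the thesis, such gains (or the nonlinear control law itself) are learned by the DRL agents rather than fixed by hand; this sketch only shows the information pattern, i.e. local plus neighboring signals.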
The secondary-control signals are nonlinear combinations of local and neighboring information, determined by a DRL agent. Specifically, the proposed method embeds a graph attention network (GAT) into the policy network to decide how neighboring features are aggregated. After training, the weights of the GAT model can be transferred to the controllers to realize distributed control. Then, to defend against side-channel attacks and make the framework practical for industrial applications, measurement-device-independent quantum key distribution (MDI-QKD) is introduced into the MG distributed control. The following contributions are made: 1) a novel QKD-based quantum-secure control architecture is established to assure data-transmission security in MG distributed control; 2) MDI-QKD with an asymmetric protocol is used to form a scalable QKD network for MG control; and 3) a fast parameter-optimization method based on a deep neural network (DNN) is proposed to adjust the parameters of QKD systems in real time. Thirdly, for voltage/var control (VVC), which can regulate voltage profiles and minimize energy loss in the network, the thesis proposes a new multi-agent safe graph reinforcement learning method to optimize the reactive power output of PV inverters in real time. The network is divided into several zones, and a decentralized framework is proposed for coordinated control of the reactive power output in each zone to regulate voltage profiles and minimize network energy loss. The VVC problem is formulated as a multi-agent decentralized partially observable constrained Markov decision process. Each zone has a central control agent that embeds graph convolutional networks (GCNs) in the policy network to improve decision-making capability. The GCN extracts graph-structured features from the active distribution network (ADN) topology, reflecting the relationship between VVC and grid topology, and can filter noise and impute missing data.
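The GAT-based neighbor aggregation mentioned above can be sketched for a single attention head with scalar features. The function name and the example parameters `w` and `a` are hypothetical; a real GAT layer uses learned matrices and multiple heads:

```python
import math

def gat_aggregate(h_self, h_neighbors, w, a):
    """Single-head graph-attention aggregation over scalar features.

    h_self      : this node's feature
    h_neighbors : features of neighboring nodes
    w           : scalar stand-in for the shared linear transform
    a           : pair of attention parameters (for self and neighbor parts)

    Attention logits use a LeakyReLU on the transformed feature pair; the
    softmax-normalized weights decide how much each neighbor contributes.
    """
    z = [w * h for h in [h_self] + list(h_neighbors)]
    leaky = lambda x: x if x > 0 else 0.2 * x
    logits = [leaky(a[0] * z[0] + a[1] * zj) for zj in z]
    m = max(logits)                       # numerically stable softmax
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    alpha = [e / s for e in exps]
    # convex combination of transformed self and neighbor features
    return sum(al * zj for al, zj in zip(alpha, z))
```

After training, these attention weights are exactly what could be handed to each local controller, so that neighbor aggregation runs without a central coordinator.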
The training process includes primal-dual policy optimization to rigorously satisfy voltage safety constraints. Finally, a multi-objective multi-agent deep reinforcement learning (MOMADRL) framework is proposed. By incorporating multiple actors with a parallel training scheme, multiple policies can be learned concurrently to obtain Pareto-optimal solutions. The proposed MOMADRL method is then applied to PV-inverter-based decentralized VVC of distribution networks with two conflicting objectives, namely, simultaneously minimizing network power loss and bus voltage deviation.
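The primal-dual mechanism for enforcing voltage safety constraints can be sketched with a single Lagrangian-multiplier update. The function name and the step size `lr_lam` are hypothetical illustration choices:

```python
def primal_dual_step(reward, violation, lam, lr_lam=0.05):
    """One dual-ascent step of constrained policy optimization.

    reward    : the policy's unconstrained objective (e.g. negative loss)
    violation : voltage-constraint violation (positive means unsafe)
    lam       : current Lagrange multiplier

    The policy maximizes the Lagrangian reward - lam * violation, while
    the multiplier grows whenever the constraint is violated and relaxes
    toward zero otherwise (projected dual ascent keeps it nonnegative).
    """
    lagrangian = reward - lam * violation
    lam_next = max(0.0, lam + lr_lam * violation)
    return lagrangian, lam_next
```

Over training, the multiplier settles at a value that prices voltage violations just highly enough that the learned VVC policy stays within the safe band.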