Data-driven voltage control of active distribution networks

Bibliographic Details
Main Author: Guo, Chenxi
Other Authors: Xu Yan
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2023
Online Access: https://hdl.handle.net/10356/170604
Institution: Nanyang Technological University
Description
Summary: As traditional fossil fuel reserves diminish and environmental concerns over air pollution and greenhouse gas emissions grow, global demand for renewable energy continues to escalate. However, as renewables are integrated more extensively into distribution networks, numerous problems arise, with voltage violation being one of the most significant challenges. Voltage/var control (VVC) is therefore introduced to address this issue. Photovoltaic (PV) inverters are increasingly used in VVC because of their capability to provide fast reactive power support, whereas traditional optimization-based methods struggle with real-time operation and suffer from modeling restrictions. For these reasons, this thesis first presents a PV-inverter-based decentralized VVC framework to provide faster and more flexible control actions. A data-driven method based on multi-agent deep reinforcement learning, multi-agent twin delayed deep deterministic policy gradient (MATD3), is then proposed to solve the decentralized VVC problem. Simulations on the IEEE 33-bus distribution network demonstrate that the proposed method achieves both faster response times and sound control performance compared with the deep deterministic policy gradient (DDPG) method and conventional optimization-based approaches.
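
The record above gives no implementation details, but as a rough sketch of the MATD3 machinery it names, the PyTorch fragment below shows the two signature TD3 ingredients in a multi-agent, centralized-training/decentralized-execution setting: twin critics trained on a clipped double-Q target with target-policy smoothing, and delayed deterministic actor updates. It also includes the standard inverter capability constraint that motivates using PV inverters for reactive support. All names (Agent, critic_update, q_limit), network sizes, and hyperparameters here are illustrative assumptions, not the thesis's actual design.

```python
import torch
import torch.nn as nn

# Toy dimensions (assumed): 3 PV-inverter agents, each observing local
# quantities and issuing one reactive-power setpoint in [-1, 1] p.u.
N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 1
JOINT_OBS, JOINT_ACT = N_AGENTS * OBS_DIM, N_AGENTS * ACT_DIM
GAMMA, POLICY_NOISE, NOISE_CLIP = 0.99, 0.2, 0.5

def q_limit(s_rating, p_out):
    # Standard inverter capability constraint: |q| <= sqrt(S^2 - P^2),
    # the reactive headroom left after active-power output P (tensor inputs).
    return (s_rating ** 2 - p_out ** 2).clamp(min=0.0).sqrt()

def net(inp, out, squash=False):
    layers = [nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(),
              nn.Linear(64, out)]
    if squash:                      # squash actor output into [-1, 1] p.u.
        layers.append(nn.Tanh())
    return nn.Sequential(*layers)

class Agent:
    """One PV-inverter agent: decentralized actor, centralized twin critics."""
    def __init__(self):
        self.actor = net(OBS_DIM, ACT_DIM, squash=True)
        self.actor_t = net(OBS_DIM, ACT_DIM, squash=True)
        self.actor_t.load_state_dict(self.actor.state_dict())
        # TD3's twin critics, fed the joint observation-action vector (CTDE).
        self.q1 = net(JOINT_OBS + JOINT_ACT, 1)
        self.q2 = net(JOINT_OBS + JOINT_ACT, 1)
        self.q1_t = net(JOINT_OBS + JOINT_ACT, 1)
        self.q2_t = net(JOINT_OBS + JOINT_ACT, 1)
        self.q1_t.load_state_dict(self.q1.state_dict())
        self.q2_t.load_state_dict(self.q2.state_dict())
        self.opt_a = torch.optim.Adam(self.actor.parameters(), lr=1e-3)
        self.opt_c = torch.optim.Adam(
            list(self.q1.parameters()) + list(self.q2.parameters()), lr=1e-3)

def critic_update(agent, agents, obs, act, rew, obs2):
    """Clipped double-Q step. obs/obs2: (B, N_AGENTS, OBS_DIM),
    act: (B, N_AGENTS, ACT_DIM), rew: (B, 1)."""
    with torch.no_grad():
        # Target-policy smoothing: clipped Gaussian noise on target actions.
        act2 = torch.stack([a.actor_t(obs2[:, i])
                            for i, a in enumerate(agents)], dim=1)
        noise = (torch.randn_like(act2) * POLICY_NOISE).clamp(-NOISE_CLIP,
                                                              NOISE_CLIP)
        act2 = (act2 + noise).clamp(-1.0, 1.0)
        x2 = torch.cat([obs2.flatten(1), act2.flatten(1)], dim=1)
        # Min over the twin target critics curbs Q-value overestimation.
        target = rew + GAMMA * torch.min(agent.q1_t(x2), agent.q2_t(x2))
    x = torch.cat([obs.flatten(1), act.flatten(1)], dim=1)
    loss = (nn.functional.mse_loss(agent.q1(x), target)
            + nn.functional.mse_loss(agent.q2(x), target))
    agent.opt_c.zero_grad(); loss.backward(); agent.opt_c.step()

def actor_update(agent, i, agents, obs):
    """Delayed deterministic policy-gradient step for agent i
    (run once every few critic steps, as in TD3)."""
    act = torch.stack(
        [a.actor(obs[:, j]) if j == i else a.actor(obs[:, j]).detach()
         for j, a in enumerate(agents)], dim=1)
    x = torch.cat([obs.flatten(1), act.flatten(1)], dim=1)
    loss = -agent.q1(x).mean()
    agent.opt_a.zero_grad(); loss.backward(); agent.opt_a.step()
```

A full training loop would add a replay buffer, exploration noise, soft (Polyak) target updates, and an environment wrapping a power-flow solver for the 33-bus feeder; none of those details appear in the record above, so they are omitted from this sketch.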