A soft actor-critic deep reinforcement learning method for multi-timescale coordinated operation of microgrids

This paper develops a multi-timescale coordinated operation method for microgrids based on modern deep reinforcement learning. Considering the complementary characteristics of different storage devices, the proposed approach achieves multi-timescale coordination of a battery and a supercapacitor by introducing a hierarchical two-stage dispatch model. The first stage makes an initial decision, irrespective of uncertainties, using hourly predicted data to minimize the operational cost. The second stage generates corrective actions for the first-stage decisions to compensate for real-time renewable generation fluctuations. The first stage is formulated as a non-convex deterministic optimization problem, while the second stage is modeled as a Markov decision process solved by an entropy-regularized deep reinforcement learning method, the Soft Actor-Critic. The Soft Actor-Critic method efficiently addresses the exploration–exploitation dilemma and suppresses variations, which improves the robustness of the decisions. Simulation results demonstrate that the different types of energy storage devices are used at the two stages to achieve multi-timescale coordinated operation, confirming the effectiveness of the proposed method.
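
For context, the entropy-regularized objective underlying the Soft Actor-Critic method mentioned in the abstract is, in its standard formulation (a general result, not reproduced from this paper),

    J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \big]

where \rho_\pi is the state-action distribution induced by the policy \pi and the temperature \alpha weights the entropy bonus against the reward, which controls the exploration-exploitation trade-off.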

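As a rough, hypothetical sketch of how the hierarchical two-stage coordination could be organized in code (the names, horizons, power limits, and the stubbed corrective policy below are assumptions for illustration; the paper's actual model is a non-convex first-stage optimization plus a trained Soft Actor-Critic policy):

    import numpy as np

    HOURS = 24            # first-stage horizon: one day of hourly forecasts (assumed)
    STEPS_PER_HOUR = 12   # second-stage resolution: 5-minute intervals (assumed)

    def first_stage_schedule(forecast_net_load):
        # Stage 1: hourly battery dispatch from the hourly forecasts.
        # The paper formulates this as a non-convex deterministic optimization;
        # here the battery simply follows the forecast net load as a stand-in.
        return np.clip(forecast_net_load, -50.0, 50.0)   # kW, assumed battery limits

    def corrective_policy(state):
        # Stage 2: intra-hour supercapacitor correction. A trained Soft Actor-Critic
        # policy would map the real-time state to this set-point; the stub just
        # offsets the forecast error.
        error = state["actual_net_load"] - state["forecast_net_load"]
        return float(np.clip(error, -20.0, 20.0))        # kW, assumed supercapacitor limits

    rng = np.random.default_rng(0)
    forecast = 30.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, HOURS))  # toy hourly net load (kW)
    battery_plan = first_stage_schedule(forecast)

    for h in range(HOURS):
        for _ in range(STEPS_PER_HOUR):
            actual = forecast[h] + rng.normal(0.0, 5.0)   # real-time renewable fluctuation
            state = {"forecast_net_load": forecast[h], "actual_net_load": actual}
            sc_power = corrective_policy(state)
            # Battery follows the slow hourly plan; supercapacitor absorbs fast deviations.
            imbalance = actual - battery_plan[h] - sc_power

In the paper, the second-stage corrective policy is learned with Soft Actor-Critic rather than hand-coded as in the stub above.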

Bibliographic Details
Main Authors: Hu, Chunchao; Cai, Zexiang; Zhang, Yanxu; Yan, Rudai; Cai, Yu; Cen, Bowei
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published in: Protection and Control of Modern Power Systems, 7(1), 2022
ISSN: 2367-0983
DOI: 10.1186/s41601-022-00252-z
Subjects: Engineering::Electrical and electronic engineering; Microgrid Operation; Hybrid Energy Storage System
Online Access: https://hdl.handle.net/10356/164380
Institution: Nanyang Technological University
Version: Published version
Citation: Hu, C., Cai, Z., Zhang, Y., Yan, R., Cai, Y. & Cen, B. (2022). A soft actor-critic deep reinforcement learning method for multi-timescale coordinated operation of microgrids. Protection and Control of Modern Power Systems, 7(1). https://dx.doi.org/10.1186/s41601-022-00252-z
Scopus ID: 2-s2.0-85135412381
Funding: The work was supported by the Guangdong Provincial Key Laboratory of New Technology for Smart Grid Funded Project under Grant No. 2020b1212070025.
Rights: © The Author(s) 2022. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.