A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility

With the high penetration of wind power connected to integrated electricity and district heating systems (IEDHSs), wind power curtailment still inevitably occurs under traditional IEDHS dispatch. Exploiting the flexibility of the IEDHS is considered a beneficial way to further promote the integration of wind power. In the district heating network, thermal inertia can be utilized to improve such flexibility. Therefore, an IEDHS dispatch model that considers the thermal inertia of the district heating network and the operational flexibility of generators is proposed in this paper. In addition, to avoid the tendency of traditional reinforcement learning (RL) to fall into local optima when solving high-dimensional problems, a double-deck deep RL (D3RL) framework is proposed. D3RL combines a deep deterministic policy gradient (DDPG) agent in the upper level with a conventional optimization solver in the lower level to simplify the action and reward design. In the simulations, the proposed model, which accounts for the transmission time-delay characteristics of the district heating network and the operational flexibility of generators, is verified in four scheduling scenarios. The superiority of the proposed D3RL method is further validated on a larger IEDHS. Numerical results show that the proposed scheduling model can exploit the heat storage capability of heating pipelines, reduce operating costs, improve operational flexibility, and encourage wind power utilization. Compared with traditional RL, the proposed optimization method improves training speed and convergence performance.
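For orientation, the sketch below illustrates the double-deck idea described in the abstract: an upper-level agent proposes a continuous action (here a CHP heat set-point), and a lower-level conventional solver completes the economic dispatch and returns the operating cost that would form the agent's reward. This is a minimal, hypothetical illustration only: the DDPG actor is replaced by a random-policy placeholder, all parameter values and names are invented for the example, and none of it is taken from the paper's implementation.

```python
# A minimal, hypothetical sketch of the double-deck idea (not the paper's code):
# an upper-level agent picks a continuous action, a lower-level LP solver finishes
# the economic dispatch, and the resulting cost defines the agent's reward.
import numpy as np
from scipy.optimize import linprog


def lower_level_dispatch(chp_heat_setpoint, elec_demand, wind_forecast):
    """Toy lower-level LP: meet electric demand with CHP power and wind at
    minimum cost, given the heat set-point chosen by the upper level."""
    # Decision variables: x = [chp_power, wind_used, wind_curtailed] (MW)
    chp_cost, curtail_penalty = 30.0, 100.0            # illustrative $/MWh figures
    c = np.array([chp_cost, 0.0, curtail_penalty])
    A_eq = np.array([[1.0, 1.0, 0.0],                  # chp_power + wind_used = demand
                     [0.0, 1.0, 1.0]])                 # wind_used + curtailed = forecast
    b_eq = np.array([elec_demand, wind_forecast])
    # CHP electric output is tied to the heat set-point through a simple feasible band
    p_min = 0.5 * chp_heat_setpoint
    p_max = p_min + 40.0
    bounds = [(p_min, p_max), (0.0, wind_forecast), (0.0, wind_forecast)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun if res.success else 1e6             # operating cost; penalize infeasibility


def upper_level_policy(state, rng):
    """Placeholder for the DDPG actor: here just a random heat set-point in [0, 60] MW."""
    return float(rng.uniform(0.0, 60.0))


# One interaction step of the double-deck loop
rng = np.random.default_rng(0)
state = {"elec_demand": 80.0, "wind_forecast": 50.0}   # illustrative MW values
action = upper_level_policy(state, rng)
cost = lower_level_dispatch(action, state["elec_demand"], state["wind_forecast"])
reward = -cost                                         # a DDPG agent would train on this signal
print(f"heat set-point = {action:.1f} MW, operating cost = {cost:.1f} $")
```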


Bibliographic Details
Main Authors: Zhang, Bin; Ghias, Amer M. Y. M.; Chen, Zhe
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Electrical and electronic engineering; Integrated Energy Systems; Renewable Energy
Online Access: https://hdl.handle.net/10356/164664
Institution: Nanyang Technological University
id sg-ntu-dr.10356-164664
record_format dspace
spelling sg-ntu-dr.10356-164664 (indexed 2023-02-08T01:25:00Z)
title A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility
authors Zhang, Bin; Ghias, Amer M. Y. M.; Chen, Zhe
school School of Electrical and Electronic Engineering
subjects Engineering::Electrical and electronic engineering; Integrated Energy Systems; Renewable Energy
funders Ministry of Education (MOE); Nanyang Technological University
funding note Published version. This work was supported by the School of Electrical and Electronic Engineering at Nanyang Technological University, Ministry of Education, Singapore, under Grant AcRF TIER 1 RG50/21.
date available 2023-02-08T01:25:00Z
date issued 2022
type Journal Article
citation Zhang, B., Ghias, A. M. Y. M. & Chen, Z. (2022). A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility. Energy Reports, 8, 15067-15080. https://dx.doi.org/10.1016/j.egyr.2022.11.028
issn 2352-4847
handle https://hdl.handle.net/10356/164664
doi 10.1016/j.egyr.2022.11.028
scopus 2-s2.0-85141923468
volume 8
pages 15067-15080
language en
grant AcRF TIER 1 RG50/21
journal Energy Reports
rights © 2022 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
format application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering
Integrated Energy Systems
Renewable Energy
description With the high penetration of wind power connected to integrated electricity and district heating systems (IEDHSs), wind power curtailment still inevitably occurs under traditional IEDHS dispatch. Exploiting the flexibility of the IEDHS is considered a beneficial way to further promote the integration of wind power. In the district heating network, thermal inertia can be utilized to improve such flexibility. Therefore, an IEDHS dispatch model that considers the thermal inertia of the district heating network and the operational flexibility of generators is proposed in this paper. In addition, to avoid the tendency of traditional reinforcement learning (RL) to fall into local optima when solving high-dimensional problems, a double-deck deep RL (D3RL) framework is proposed. D3RL combines a deep deterministic policy gradient (DDPG) agent in the upper level with a conventional optimization solver in the lower level to simplify the action and reward design. In the simulations, the proposed model, which accounts for the transmission time-delay characteristics of the district heating network and the operational flexibility of generators, is verified in four scheduling scenarios. The superiority of the proposed D3RL method is further validated on a larger IEDHS. Numerical results show that the proposed scheduling model can exploit the heat storage capability of heating pipelines, reduce operating costs, improve operational flexibility, and encourage wind power utilization. Compared with traditional RL, the proposed optimization method improves training speed and convergence performance.
author2 School of Electrical and Electronic Engineering
format Article
author Zhang, Bin
Ghias, Amer M. Y. M.
Chen, Zhe
author_sort Zhang, Bin
title A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility
title_short A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility
title_full A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility
title_fullStr A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility
title_full_unstemmed A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility
title_sort double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertial and operational flexibility
publishDate 2023
url https://hdl.handle.net/10356/164664
_version_ 1759058784519454720