Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems

The random charging and dynamic traveling behaviors of massive plug-in electric vehicles (PEVs) pose challenges to the efficient and safe operation of transportation-electrification coupled systems (TECSs). To realize real-time scheduling of urban PEV fleet charging demand, this paper proposes a PEV...

Full description

Saved in:
Bibliographic Details
Main Authors: Xing, Qiang, Chen, Zhong, Wang, Ruisheng, Zhang, Ziqi
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2023
Subjects:
Online Access:https://hdl.handle.net/10356/169372
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-169372
record_format dspace
spelling sg-ntu-dr.10356-1693722023-07-21T15:40:42Z Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems Xing, Qiang Chen, Zhong Wang, Ruisheng Zhang, Ziqi School of Electrical and Electronic Engineering Engineering::Electrical and electronic engineering Transportation-Electrification Coupled System Charging Station Recommendation The random charging and dynamic traveling behaviors of massive plug-in electric vehicles (PEVs) pose challenges to the efficient and safe operation of transportation-electrification coupled systems (TECSs). To realize real-time scheduling of urban PEV fleet charging demand, this paper proposes a PEV decision-making guidance (PEVDG) strategy based on bi-level deep reinforcement learning, reducing user charging costs while ensuring the stable operation of distribution networks (DNs). Given the discrete time-series characteristics and the heterogeneity of decision actions, the PEVDG problem is decoupled into a bi-level finite Markov decision process, in which the upper and lower layers handle charging station (CS) recommendation and path navigation, respectively. Specifically, the upper-layer agent learns the mapping between the environment state and the optimal CS by perceiving the PEV charging requirements, CS equipment resources and DN operating conditions. The action decision of the upper layer is then embedded into the state space of the lower-layer agent. Meanwhile, the lower-level agent determines the optimal road segment for path navigation by capturing the real-time PEV state and the transportation network information. Further, two reward mechanisms are developed to reward and penalize the decision-making of the two agents.
Then two extension mechanisms (i.e., dynamic adjustment of learning rates and adaptive selection of neural network units) are embedded into the Rainbow algorithm, which is based on the DQN architecture, yielding a modified Rainbow algorithm that solves the bi-level decision-making problem. The average rewards for the upper and lower levels are ¥-90.64 and ¥13.24, respectively. The average equilibrium degree of the charging service and the average charging cost are 0.96 and ¥42.45, respectively. Case studies are conducted within a practical urban zone with the TECS. Extensive experimental results show that the proposed methodology improves the generalization and learning ability of the two agents, and facilitates the collaborative operation of traffic and electrical networks. Published version This research was funded by the National Natural Science Foundation of China (52077035). 2023-07-17T03:48:01Z 2023-07-17T03:48:01Z 2023 Journal Article Xing, Q., Chen, Z., Wang, R. & Zhang, Z. (2023). Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems. Frontiers in Energy Research, 10. https://dx.doi.org/10.3389/fenrg.2022.944313 2296-598X https://hdl.handle.net/10356/169372 10.3389/fenrg.2022.944313 2-s2.0-85147044236 10 en Frontiers in Energy Research © 2023 Xing, Chen, Wang and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering
Transportation-Electrification Coupled System
Charging Station Recommendation
spellingShingle Engineering::Electrical and electronic engineering
Transportation-Electrification Coupled System
Charging Station Recommendation
Xing, Qiang
Chen, Zhong
Wang, Ruisheng
Zhang, Ziqi
Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems
description The random charging and dynamic traveling behaviors of massive plug-in electric vehicles (PEVs) pose challenges to the efficient and safe operation of transportation-electrification coupled systems (TECSs). To realize real-time scheduling of urban PEV fleet charging demand, this paper proposes a PEV decision-making guidance (PEVDG) strategy based on bi-level deep reinforcement learning, reducing user charging costs while ensuring the stable operation of distribution networks (DNs). Given the discrete time-series characteristics and the heterogeneity of decision actions, the PEVDG problem is decoupled into a bi-level finite Markov decision process, in which the upper and lower layers handle charging station (CS) recommendation and path navigation, respectively. Specifically, the upper-layer agent learns the mapping between the environment state and the optimal CS by perceiving the PEV charging requirements, CS equipment resources and DN operating conditions. The action decision of the upper layer is then embedded into the state space of the lower-layer agent. Meanwhile, the lower-level agent determines the optimal road segment for path navigation by capturing the real-time PEV state and the transportation network information. Further, two reward mechanisms are developed to reward and penalize the decision-making of the two agents. Then two extension mechanisms (i.e., dynamic adjustment of learning rates and adaptive selection of neural network units) are embedded into the Rainbow algorithm, which is based on the DQN architecture, yielding a modified Rainbow algorithm that solves the bi-level decision-making problem. The average rewards for the upper and lower levels are ¥-90.64 and ¥13.24, respectively. The average equilibrium degree of the charging service and the average charging cost are 0.96 and ¥42.45, respectively.
Case studies are conducted within a practical urban zone with the TECS. Extensive experimental results show that the proposed methodology improves the generalization and learning ability of the two agents, and facilitates the collaborative operation of traffic and electrical networks.
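The key structural idea in the abstract is that the upper-level agent's action (the recommended charging station) is embedded into the lower-level agent's state space before the lower agent picks a road segment. The following is a minimal, hypothetical sketch of that coupling only: the toy epsilon-greedy policies, tabular Q-values, and state encodings are illustrative assumptions, not the authors' modified Rainbow (DQN-based) implementation.

```python
import random

NUM_STATIONS = 3   # assumed number of candidate charging stations
NUM_SEGMENTS = 4   # assumed number of road segments per decision step

def upper_policy(pev_state, q_upper, epsilon=0.1):
    """Upper level: map the environment state (PEV charging demand,
    CS resources, DN conditions, here collapsed into one index) to a
    recommended charging station via epsilon-greedy selection."""
    if random.random() < epsilon:
        return random.randrange(NUM_STATIONS)
    return max(range(NUM_STATIONS), key=lambda a: q_upper[pev_state][a])

def lower_policy(pev_state, chosen_cs, q_lower, epsilon=0.1):
    """Lower level: the upper-level action is embedded into the
    lower-level state, and the agent picks the next road segment."""
    state = (pev_state, chosen_cs)  # embed upper action into lower state
    if random.random() < epsilon:
        return random.randrange(NUM_SEGMENTS)
    return max(range(NUM_SEGMENTS), key=lambda a: q_lower[state][a])

# Toy Q-tables (all zeros) for a single greedy demonstration step.
q_upper = {0: [0.0] * NUM_STATIONS}
q_lower = {(0, cs): [0.0] * NUM_SEGMENTS for cs in range(NUM_STATIONS)}

cs = upper_policy(0, q_upper, epsilon=0.0)            # greedy CS choice
segment = lower_policy(0, cs, q_lower, epsilon=0.0)   # greedy segment choice
print(cs, segment)
```

In the paper, each policy would instead be a trained Rainbow-style value network and each level would receive its own reward signal; the sketch only shows the data flow between the two decision layers.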
author2 School of Electrical and Electronic Engineering
author_facet School of Electrical and Electronic Engineering
Xing, Qiang
Chen, Zhong
Wang, Ruisheng
Zhang, Ziqi
format Article
author Xing, Qiang
Chen, Zhong
Wang, Ruisheng
Zhang, Ziqi
author_sort Xing, Qiang
title Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems
title_short Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems
title_full Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems
title_fullStr Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems
title_full_unstemmed Bi-level deep reinforcement learning for PEV decision-making guidance by coordinating transportation-electrification coupled systems
title_sort bi-level deep reinforcement learning for pev decision-making guidance by coordinating transportation-electrification coupled systems
publishDate 2023
url https://hdl.handle.net/10356/169372
_version_ 1773551264292929536