Data-driven bi-level predictive energy management strategy for fuel cell buses with algorithm fusion

Bibliographic Details
Main Authors: Li, Menglin, Liu, Haoran, Yan, Mei, Wu, Jingda, Jin, Lisheng, He, Hongwen
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language:English
Published: 2024
Subjects:
Online Access:https://hdl.handle.net/10356/173532
Institution: Nanyang Technological University
Description
Summary: This paper aims to answer how to effectively integrate data-driven methods into the traditional predictive energy management algorithm rather than replacing it outright. Given the challenge of selecting an appropriate prediction horizon for predictive energy management, this study bridges traditional predictive energy management with machine learning approaches, presenting a novel bi-level predictive energy management strategy for fuel cell buses with multiple prediction horizons. In the upper layer, the core parameter of the traditional model predictive control energy management framework, the prediction horizon, is optimized using two distinct data-driven methods. The first method employs deep learning, using deep neural networks to establish a mapping between vehicle states and the optimal prediction horizon. The second method uses reinforcement learning, obtaining the best prediction horizon under varying vehicle states through intelligent-agent exploration. In the lower layer, predictive energy management of the fuel cell bus is carried out over the horizon optimized by the upper layer. Finally, the proposed strategy is validated using test data from actual fuel cell buses. The results demonstrate that the two data-driven methods, based on optimal ΔSoC approximation and deep reinforcement learning respectively, select a prediction horizon better suited to energy saving according to the vehicle states. Regarding energy consumption, the multi-horizon predictive energy management based on deep reinforcement learning reduces energy consumption by 7.62%, 4.55%, 4.60%, and 7.80% compared with predictive energy management using fixed prediction horizons of 5 s, 10 s, 15 s, and 20 s, respectively. Furthermore, it outperforms the multi-horizon predictive energy management based on optimal ΔSoC approximation by 3.59%.
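The bi-level structure described in the summary can be sketched in miniature: an upper layer that picks a prediction horizon from the candidate set {5, 10, 15, 20} s given a (discretized) vehicle state, and a lower layer that allocates power over that horizon. Everything below is an illustrative assumption, not the paper's method: the tabular `q_table` stands in for the trained DRL agent, and `predictive_ems` uses a toy power-demand model, a fixed 30 kW fuel cell cap, and toy SoC dynamics in place of the actual MPC formulation.

```python
HORIZONS = [5, 10, 15, 20]  # candidate prediction horizons in seconds, per the abstract

def select_horizon(state, q_table):
    """Upper layer (stand-in for the DRL agent): return the horizon with
    the highest learned value for this discretized vehicle state."""
    values = q_table.get(state, {h: 0.0 for h in HORIZONS})
    return max(values, key=values.get)

def predictive_ems(speed_forecast, horizon, soc):
    """Lower layer (toy placeholder for MPC): split the demanded power
    between fuel cell and battery over the chosen horizon."""
    window = speed_forecast[:horizon]            # predicted speeds over the horizon
    demand = [0.5 * v for v in window]           # toy demand model: kW from km/h
    fc_power = [min(p, 30.0) for p in demand]    # assumed fuel cell cap: 30 kW
    batt_power = [p - f for p, f in zip(demand, fc_power)]  # battery covers the rest
    soc_next = soc - 0.001 * sum(batt_power)     # toy SoC dynamics
    return fc_power, batt_power, soc_next
```

In use, the upper layer would be queried at each decision step (e.g., `h = select_horizon('cruise', q_table)`), and the lower layer would then solve the receding-horizon problem over `h` seconds before the window shifts forward, mirroring the multi-horizon scheme the abstract describes.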