Optimization strategy based on deep reinforcement learning for home energy management

With the development of the smart grid and the smart home, massive amounts of data become available, providing the basis for algorithm training in artificial intelligence applications. These continuously improving conditions are expected to enable the home energy management system (HEMS) to cope with the increasing complexity and uncertainty on the end-user side of the power grid. In this paper, a home energy management optimization strategy based on deep Q-learning (DQN) and double deep Q-learning (DDQN) is proposed to schedule home energy appliances. The applied algorithms are model-free and can help customers reduce electricity consumption by taking a series of actions in response to a dynamic environment. In the tests, DDQN proves more suitable than DQN for minimizing cost in a HEMS. In the course of the implementation, the generalization and reward settings of the algorithms are discussed and analyzed in detail. The results are compared with those of Particle Swarm Optimization (PSO) to validate the performance of the proposed algorithm. The effectiveness of the applied data-driven methods is validated using a real-world database combined with a household energy storage model.
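The abstract's key technical point, that DDQN minimizes HEMS cost better than DQN, comes down to how the bootstrap target is formed. The following minimal Python sketch (not the authors' implementation; the action discretization, discount factor, and reward values are illustrative assumptions) shows the two targets side by side for a toy appliance-scheduling step:

    import numpy as np

    rng = np.random.default_rng(0)

    n_actions = 3   # assumed discretization: appliance off / low / high power
    gamma = 0.95    # assumed discount factor
    batch = 4       # transitions sampled from a replay buffer

    # Stand-ins for the online and target networks' Q-values at the next states.
    q_online_next = rng.normal(size=(batch, n_actions))
    q_target_next = rng.normal(size=(batch, n_actions))

    # Reward modelled here as the negative electricity cost of the scheduling step.
    reward = -rng.uniform(0.1, 1.0, size=batch)
    done = np.zeros(batch)   # 1.0 marks the end of a scheduling horizon

    # DQN target: the target network both selects and evaluates the next action.
    dqn_target = reward + gamma * (1.0 - done) * q_target_next.max(axis=1)

    # DDQN target: the online network selects, the target network evaluates.
    greedy_a = q_online_next.argmax(axis=1)
    ddqn_target = reward + gamma * (1.0 - done) * q_target_next[np.arange(batch), greedy_a]

    print("DQN targets :", np.round(dqn_target, 3))
    print("DDQN targets:", np.round(ddqn_target, 3))

In DDQN the online network selects the greedy next action and the target network evaluates it, which reduces the overestimation bias of plain DQN; this is the mechanism behind the cost advantage reported in the abstract.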

Bibliographic Details
Main Authors: Liu, Yuankun; Zhang, Dongxia; Gooi, Hoay Beng
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2021
Subjects: Engineering::Electrical and electronic engineering; Deep Reinforcement Learning; Demand Response
Online Access:https://hdl.handle.net/10356/148706
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-148706
Journal: CSEE Journal of Power and Energy Systems, vol. 6, no. 3, pp. 572-582, 2020
Citation: Liu, Y., Zhang, D. & Gooi, H. B. (2020). Optimization strategy based on deep reinforcement learning for home energy management. CSEE Journal of Power and Energy Systems, 6(3), 572-582. https://dx.doi.org/10.17775/CSEEJPES.2019.02890
ISSN: 2096-0042
DOI: 10.17775/CSEEJPES.2019.02890
Scopus ID: 2-s2.0-85091667722
Version: Published version (application/pdf)
Rights: © 2019 The Author(s) (published by CSEE). This is an open-access article distributed under the terms of the Creative Commons Attribution License.