Demand-side scheduling based on multi-agent deep actor-critic learning for smart grids
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online. The goal is to minimize the overall cost under a real-time pricing scheme. While previous works have introduced centralized approaches in which the scheduling algorithm has full observability, we propose the formulation of a smart grid environment as a Markov game. Each household is a decentralized agent with partial observability, which allows scalability and privacy preservation in a realistic setting. The grid operator produces a price signal that varies with the energy demand. We propose an extension to a multi-agent deep actor-critic algorithm to address partial observability and the perceived non-stationarity of the environment from the agent's viewpoint. This algorithm learns a centralized critic that coordinates the training of decentralized agents. Our approach thus uses centralized learning but decentralized execution. Simulation results show that our online deep reinforcement learning method can reduce both the peak-to-average ratio of total energy consumed and the cost of electricity for all households, based purely on instantaneous observations and a price signal.
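For orientation, the sketch below illustrates the centralized-training, decentralized-execution pattern the abstract describes: each household runs its own actor on a partial observation, while a centralized critic that sees all observations supplies the training baseline. This is a minimal PyTorch sketch, not the authors' implementation; the network sizes, the one-step toy environment, and the demand-dependent price function are assumptions made for illustration only.

```python
# Minimal sketch (assumed details, not the authors' implementation) of a
# centralized-critic, decentralized-actor update for household agents.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, N_ACTIONS = 3, 8, 4  # hypothetical sizes

class Actor(nn.Module):
    """Decentralized policy: one household's partial observation -> action logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    """Centralized value function over all observations; used during training only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_AGENTS * OBS_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, all_obs):
        return self.net(all_obs).squeeze(-1)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()
params = [p for a in actors for p in a.parameters()] + list(critic.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    # Toy one-step environment: observations stand in for appliance backlogs, etc.
    obs = torch.randn(N_AGENTS, OBS_DIM)
    dists = [actor(o) for actor, o in zip(actors, obs)]
    acts = torch.stack([d.sample() for d in dists])   # energy units scheduled now
    demand = acts.float()
    price = 1.0 + 0.5 * demand.sum()                  # toy price, rises with total load
    ret = -(price * demand).sum()                     # joint electricity cost -> shared return

    # Centralized critic supplies the baseline; each actor updates from its own log-prob.
    value = critic(obs.flatten())
    advantage = (ret - value).detach()
    policy_loss = -sum(d.log_prob(a) for d, a in zip(dists, acts)) * advantage
    critic_loss = (ret - value).pow(2)
    opt.zero_grad()
    (policy_loss + critic_loss).backward()
    opt.step()
```

The shared, demand-dependent price couples the agents' rewards, which is what makes the environment appear non-stationary from any single agent's viewpoint and motivates training a centralized critic even though execution remains fully decentralized.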
Saved in:
Main Authors: Lee, Joash; Wang, Wenbo; Niyato, Dusit
Other Authors: Interdisciplinary Graduate School (IGS)
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects: Engineering::Computer science and engineering; Engineering::Mechanical engineering; Reinforcement Learning; Smart Grid
Online Access: https://hdl.handle.net/10356/144903
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-144903
record_format: dspace
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering; Engineering::Mechanical engineering; Reinforcement Learning; Smart Grid
author2: Interdisciplinary Graduate School (IGS)
format: Conference or Workshop Item
author: Lee, Joash; Wang, Wenbo; Niyato, Dusit
title: Demand-side scheduling based on multi-agent deep actor-critic learning for smart grids
publishDate: 2020
url: https://hdl.handle.net/10356/144903
conference: IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (IEEE SmartGridComm 2020)
organizations: Interdisciplinary Graduate School (IGS); Energy Research Institute @ NTU (ERI@N)
type: Conference Paper (Accepted version)
funders: Agency for Science, Technology and Research (A*STAR); Ministry of Education (MOE); National Research Foundation (NRF)
funding: This research is supported by the National Research Foundation (NRF), Singapore, under Singapore Energy Market Authority (EMA), Energy Resilience, NRF2017EWT-EP003-041; Singapore NRF2015-NRF-ISF001-2277; Singapore NRF National Satellite of Excellence, Design Science and Technology for Secure Critical Infrastructure NSoE DeST-SCI2019-0007; the A*STAR-NTU-SUTD Joint Research Grant on Artificial Intelligence for the Future of Manufacturing RGANS1906; the Wallenberg AI, Autonomous Systems and Software Program and Nanyang Technological University (WASP/NTU) under grant M4082187 (4080); Singapore Ministry of Education (MOE) Tier 1 (RG16/20); NTU-WeBank JRI (NWJ-2020-004); and Alibaba Group through the Alibaba Innovative Research (AIR) Program and the Alibaba-NTU Singapore Joint Research Institute (JRI).
date_available: 2020-12-03
citation: Lee, J., Wang, W., & Niyato, D. (2020). Demand-side scheduling based on multi-agent deep actor-critic learning for smart grids. Proceedings of the IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (IEEE SmartGridComm 2020). doi:10.1109/SmartGridComm47815.2020.9302935
doi: 10.1109/SmartGridComm47815.2020.9302935
rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/SmartGridComm47815.2020.9302935
file_format: application/pdf