INCREASING SMART MICROGRID RENEWABLE FRACTION BY CONTROLLING THE CHARGING-DISCHARGING OF THE BATTERY ENERGY STORAGE SYSTEM (BESS) USING A DEEP Q-LEARNING ALGORITHM
Main Author:
Format: Final Project
Language: Indonesia
Online Access: https://digilib.itb.ac.id/gdl/view/68150
Institution: Institut Teknologi Bandung
Summary: Along with the development of renewable energy sources (RES), the development of microgrids (MGs) is the most likely answer to rising electrical energy demand and the depletion of fossil fuels. However, the intermittent nature of RES is a problem for MGs, because power intermittency degrades the power quality the MG produces. One solution to this power quality problem is to apply control to MG components, one of which is the battery energy storage system (BESS). In an energy management system there are three levels of control: primary control, secondary control, and tertiary control.
By applying secondary control to the BESS's charging and discharging actions, the MG's power quality can be improved through the release and absorption of active and reactive power. In addition, controlling BESS charging and discharging can increase the renewable fraction (RF) of the MG. To achieve this, this research applies an optimization-based energy management controller built on deep Q-learning, a reinforcement learning method. The control algorithm is then placed in a microgrid digital twin (MGDT) framework that maps physical objects to digital objects. BESS scheduling with the optimization-based energy management algorithm using deep Q-learning yielded an average RF value of 44.6%. Furthermore, compared with a rule-based energy management algorithm, the RF value of the optimization-based algorithm is 2.2% higher.
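To make the approach concrete, below is a minimal deep Q-learning sketch for scheduling BESS charging and discharging. It is an illustrative assumption, not the thesis's implementation: the toy environment (`ToyBessEnv`), its PV and load profiles, the three-action space (charge / idle / discharge), the reward, and the network size are all hypothetical. The reward uses the share of the load covered by PV plus battery discharge in each step as a crude per-step proxy for the renewable fraction, which is commonly defined as renewable energy delivered to the load divided by total energy delivered to the load.

```python
# Minimal, illustrative deep Q-learning sketch for BESS charge/discharge scheduling.
# Hypothetical environment and hyperparameters; not the thesis's actual setup.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class ToyBessEnv:
    """Hypothetical single-bus microgrid: one PV source, one load, one battery."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.soc = 0.5          # battery state of charge, 0..1
        self.t = 0
        self._draw_exogenous()
        return self._state()

    def _draw_exogenous(self):
        # crude daily PV shape and noisy load, both in per-unit
        self.pv = max(np.sin(np.pi * self.t / 24.0), 0.0)
        self.load = 0.6 + 0.2 * self.rng.random()

    def _state(self):
        return np.array([self.soc, self.pv, self.load], dtype=np.float32)

    def step(self, action):
        # actions: 0 = charge, 1 = idle, 2 = discharge (fixed 0.1 p.u. exchange)
        batt_out = {0: -0.1, 1: 0.0, 2: 0.1}[action]
        self.soc = float(np.clip(self.soc - batt_out, 0.0, 1.0))
        # reward: share of the load covered by PV plus battery discharge,
        # used as a crude per-step proxy for the renewable fraction
        reward = min(self.pv + max(batt_out, 0.0), self.load) / self.load
        self.t += 1
        done = self.t >= 24
        self._draw_exogenous()
        return self._state(), reward, done


# Small Q-network: 3 state features -> Q-values for 3 actions.
q_net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=5000)
gamma, epsilon, batch_size = 0.95, 0.1, 64

env = ToyBessEnv()
for episode in range(200):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy action selection from the Q-network
        if random.random() < epsilon:
            action = random.randrange(3)
        else:
            with torch.no_grad():
                action = int(q_net(torch.from_numpy(state)).argmax())
        next_state, reward, done = env.step(action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        # one gradient step on a random replay minibatch
        if len(replay) >= batch_size:
            s, a, r, s2, d = map(np.array, zip(*random.sample(replay, batch_size)))
            s, s2 = torch.from_numpy(s), torch.from_numpy(s2)
            a = torch.from_numpy(a).long().unsqueeze(1)
            r, d = torch.from_numpy(r).float(), torch.from_numpy(d).float()
            q = q_net(s).gather(1, a).squeeze(1)
            with torch.no_grad():
                target = r + gamma * (1.0 - d) * q_net(s2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The sketch keeps only the standard deep Q-learning ingredients the abstract implies: an epsilon-greedy policy, an experience replay buffer, and temporal-difference targets r + γ·max Q(s', a'). A full implementation would also need a separate target network, realistic SOC and power-limit constraints, and measured PV and load data.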