BUILDING ENERGY MANAGEMENT OPTIMIZATION WITH REINFORCEMENT LEARNING METHOD TO IMPROVE PHOTOVOLTAIC SELF-CONSUMPTION
Main Author:
Format: Theses
Language: Indonesia
Online Access: https://digilib.itb.ac.id/gdl/view/80006
Institution: Institut Teknologi Bandung
Summary: Global climate change has prompted the Indonesian government to improve the national electricity energy mix, in part through the installation of microgrids in buildings. However, the intermittent nature of PV production can reduce grid reliability. From the demand side, moreover, the electricity consumption profile of university buildings is complex and dynamic, requiring a building energy management system. One performance parameter of such a system is self-consumption, the share of PV production consumed directly on site. Photovoltaic self-consumption can therefore be increased through energy management control in the building, in particular by managing the charging and discharging of a Battery Energy Storage System (BESS).
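Since the summary turns on the self-consumption metric and on rule-based BESS dispatch, a minimal MATLAB sketch may help fix the ideas. All names (pvPower, loadPower, params fields, and so on) are illustrative assumptions, not code from the thesis, and the dispatch rule is only one plausible form of the reference controller described above.

```matlab
% Minimal sketch (hypothetical names): self-consumption as the share of PV
% production consumed on site, either directly or via BESS discharge.
function sc = selfConsumption(pvPower, loadPower, battDischarge)
    directUse  = min(pvPower, loadPower);    % PV consumed immediately (kW)
    consumedPV = directUse + battDischarge;  % plus PV recovered from the BESS
    sc = sum(consumedPV) / sum(pvPower);     % fraction of PV used on site
end

% Hypothetical rule-based dispatch of the kind used as a reference: charge
% the BESS from PV surplus, discharge to cover the deficit, within SoC limits.
function [pBatt, soc] = ruleBasedStep(pvPower, loadPower, soc, p)
    surplus = pvPower - loadPower;           % kW; positive means excess PV
    if surplus > 0
        pBatt = min([surplus, p.pChargeMax, (p.socMax - soc)*p.capacity/p.dt]);
    else
        pBatt = max([surplus, -p.pDischargeMax, (p.socMin - soc)*p.capacity/p.dt]);
    end
    soc = soc + pBatt*p.dt/p.capacity;       % update state of charge
end
```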
In this research, a controller based on a Reinforcement Learning (RL) approach with the Proximal Policy Optimization (PPO) learning algorithm is built. The methodology involves creating a building and microgrid model together with PV production and electricity consumption profiles for September, December, March, and June. A rule-based controller is then constructed as a reference, after which an RL control agent is developed with specified learning rates and a purpose-built reward function. The controllers are then evaluated and analyzed in terms of self-consumption improvement. All stages are conducted in the MATLAB and Simulink environment.
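The abstract gives no implementation details, but a hedged sketch of how a PPO agent with the reported learning rates might be configured in MATLAB's Reinforcement Learning Toolbox follows. The actor and critic objects, the selfConsumedEnergy term, and the weight lambda are assumed placeholders, and the reward shaping is purely illustrative rather than the thesis's actual function.

```matlab
% Hypothetical PPO configuration using the learning rates the abstract
% reports (actor 0.001, critic 0.0001); the actor and critic networks are
% assumed to be defined elsewhere.
actorOpts  = rlOptimizerOptions(LearnRate=1e-3);
criticOpts = rlOptimizerOptions(LearnRate=1e-4);
agentOpts  = rlPPOAgentOptions( ...
    ActorOptimizerOptions=actorOpts, ...
    CriticOptimizerOptions=criticOpts);
agent = rlPPOAgent(actor, critic, agentOpts);

% Purely illustrative reward shaping: reward PV energy consumed on site and
% penalise state-of-charge excursions below the allowed minimum.
reward = selfConsumedEnergy - lambda*max(0, socMin - soc);
```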
The results show that the optimum RL agent is achieved with actor and critic learning rates of 0.001 and 0.0001, respectively. The rule-based reference controller achieves excellent self-consumption improvement, followed by a good improvement from the RL controller, with increases in the range of 2.42% to 16.7%. The RL controller also proves better than the rule-based controller at preserving BESS health, by limiting deep discharge to within the maximum Depth of Discharge.
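For readers unfamiliar with the Depth of Discharge (DoD) constraint mentioned above, the relation is simply a floor on state of charge. The 0.8 figure and the socTrace vector below are assumed for illustration, not values from the thesis.

```matlab
% Illustrative DoD check: a maximum depth of discharge caps how far the BESS
% may be drained, e.g. maxDoD = 0.8 implies a state-of-charge floor of 0.2.
maxDoD = 0.8;                        % assumed value for illustration
socMin = 1 - maxDoD;                 % deepest permissible state of charge
assert(all(socTrace >= socMin), "BESS drained past the maximum DoD")
```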