Exploration of network centrality in goal conditioned reinforcement learning



Bibliographic Details
Main Author: Sharma Divyansh
Other Authors: Arvind Easwaran
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175302
Institution: Nanyang Technological University
Description
Summary: This final year project explores the domain of Goal Conditioned Reinforcement Learning (GCRL), with a particular focus on addressing the challenges presented by sparse-reward environments, which are common in real-world scenarios. The paper begins by laying a foundation in the basic principles of Reinforcement Learning (RL) and Markov Decision Processes (MDPs), setting the stage for a deeper investigation into GCRL. Through the implementation and analysis of two policy-gradient algorithms, REINFORCE and REINFORCE with baseline, the paper conducts four experiments. The first illustrates the difficulty of converging to an optimal policy in sparse-reward settings. The second evaluates the exploration capabilities of Hindsight Experience Replay (HER), noting its limitations in the absence of proper guidance. The third confirms the hypothesis that introducing sub-goals can significantly improve sample efficiency, demonstrated through the manual placement of a sub-goal. Building on this, the fourth introduces a novel approach to sub-goal generation based on betweenness centrality, demonstrating both a strategy for automatically discovering effective sub-goals and a bridge between reinforcement learning and graph theory. Overall, the paper contributes to the understanding of GCRL, particularly in overcoming the hurdles of sparse rewards, and proposes a sub-goal generation method that computes betweenness centrality over observed transitions.
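
To make the sub-goal generation idea in the summary concrete, below is a minimal sketch, not taken from the project itself: it assumes the agent's observed transitions are logged as (state, next_state) pairs, builds a directed graph from them, and uses networkx's betweenness_centrality to select the state lying on the most shortest paths as a sub-goal. The function name select_subgoal and the toy transition log are illustrative assumptions.

    # Hypothetical sketch of betweenness-centrality sub-goal selection;
    # names and the networkx dependency are assumptions, not the project's code.
    import networkx as nx

    def select_subgoal(transitions):
        """Return the state with the highest betweenness centrality.

        transitions: iterable of (state, next_state) pairs observed
        while the agent explores the environment.
        """
        graph = nx.DiGraph()
        graph.add_edges_from(transitions)
        # States lying on many shortest paths between other states act as
        # bottlenecks of the transition graph, making them natural sub-goals.
        centrality = nx.betweenness_centrality(graph)
        return max(centrality, key=centrality.get)

    # Toy usage: in this chain of observed transitions, the middle state
    # (1, 1) bridges the two halves and is selected as the sub-goal.
    log = [((0, 0), (0, 1)), ((0, 1), (1, 1)), ((1, 1), (2, 1)), ((2, 1), (2, 2))]
    print(select_subgoal(log))  # -> (1, 1)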