Exploration of network centrality in goal-conditioned reinforcement learning

Bibliographic Details
Main Author: Sharma, Divyansh
Other Authors: Easwaran, Arvind
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Online Access: https://hdl.handle.net/10356/175302
Institution: Nanyang Technological University
Description
Summary: This final year project explores the domain of Goal-Conditioned Reinforcement Learning (GCRL), with a particular focus on the challenges posed by sparse-reward environments, which are common in real-world scenarios. The paper begins by laying a solid foundation in the basic principles of Reinforcement Learning (RL) and Markov Decision Processes (MDPs), setting the stage for a deeper investigation into GCRL. Through the implementation and analysis of two policy-gradient algorithms, REINFORCE and REINFORCE with baseline, the paper conducts four experiments. The first illustrates the difficulty of converging to an optimal policy in sparse-reward settings. The second evaluates the exploration capabilities of Hindsight Experience Replay (HER), noting its limitations in the absence of proper guidance. The third confirms the hypothesis that introducing sub-goals can significantly improve sample efficiency, a finding obtained through the manual placement of a sub-goal. Building on this, the fourth experiment introduces a novel approach to sub-goal generation based on betweenness centrality, demonstrating both an effective strategy for automatically discovering useful sub-goals and a bridge between reinforcement learning and graph theory. Overall, this paper contributes to the understanding of GCRL, particularly in overcoming the hurdles of sparse rewards, and proposes a sub-goal generation method that applies betweenness centrality to the graph of observed transitions.
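
To make the proposed idea concrete: the summary describes sub-goal generation as computing betweenness centrality over observed transitions. The record itself contains no code, so the following is a minimal illustrative sketch of that idea in Python, not the author's actual implementation. It assumes discrete, hashable states and uses the networkx library; the function name propose_subgoal and the toy states are hypothetical.

import networkx as nx

def propose_subgoal(transitions):
    # Build a directed graph whose edges are the (state, next_state)
    # pairs observed while the agent explores the environment.
    g = nx.DiGraph()
    g.add_edges_from(transitions)
    # Score every visited state by betweenness centrality; states that
    # lie on many shortest paths (e.g. bottleneck doorways between
    # rooms) receive the highest scores.
    centrality = nx.betweenness_centrality(g)
    # Return the highest-scoring state as the candidate sub-goal.
    return max(centrality, key=centrality.get)

# Toy example: two rooms connected by a single door. The door lies on
# every path between the rooms, so it emerges as the sub-goal.
transitions = [("room1_a", "room1_b"), ("room1_b", "door"),
               ("door", "room2_a"), ("room2_a", "room2_b")]
print(propose_subgoal(transitions))  # -> "door"

The appeal of this formulation, as the summary suggests, is that the sub-goal is self-discovered from the agent's own experience rather than placed manually, as in the third experiment.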