Exploration of network centrality in goal conditioned reinforcement learning

This final year project explores Goal Conditioned Reinforcement Learning (GCRL), with a particular focus on the challenges posed by sparse reward environments, which are common in real-world scenarios. The paper first lays a foundation in the basic principles of Reinforcement Learning (RL) and Markov Decision Processes (MDPs), setting the stage for a deeper investigation into GCRL. Through the implementation and analysis of two policy gradient algorithms, REINFORCE and REINFORCE with baseline, the paper presents four experiments. The first illustrates the difficulty of converging to an optimal policy in sparse reward settings. The second evaluates the exploration capabilities of Hindsight Experience Replay (HER), noting its limitations in the absence of proper guidance. The third confirms the hypothesis that introducing sub-goals can significantly improve sample efficiency, demonstrated through the manual placement of a sub-goal. Building on this, the fourth experiment introduces a novel approach to sub-goal generation based on betweenness centrality, demonstrating both a successful strategy for self-discovered, effective sub-goal identification and a bridge between reinforcement learning and graph theory. Overall, this paper contributes to the understanding of GCRL, particularly in overcoming the hurdles of sparse rewards, and proposes a sub-goal generation method that applies betweenness centrality to observed transitions.
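
The two algorithms named above follow the standard policy gradient recipe. Below is a minimal sketch of the REINFORCE-with-baseline update, assuming PyTorch, a discrete-action policy network, and a separate state-value network serving as the baseline; it illustrates the technique, not the project's actual code.

    import torch

    def reinforce_with_baseline_update(policy, value_fn, optimizer,
                                       states, actions, rewards, gamma=0.99):
        """states: list of state tensors, actions: list of ints,
        rewards: list of floats, all from one episode; optimizer covers
        the parameters of both networks."""
        # Discounted returns G_t, computed back to front.
        returns = []
        g = 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.insert(0, g)
        returns = torch.tensor(returns)

        states = torch.stack(states)
        actions = torch.tensor(actions)

        values = value_fn(states).squeeze(-1)     # baseline b(s_t)
        advantages = returns - values.detach()    # G_t - b(s_t)

        log_probs = torch.distributions.Categorical(
            logits=policy(states)).log_prob(actions)

        # Policy loss is -E[(G_t - b(s_t)) log pi(a_t|s_t)]; the value
        # loss regresses the baseline toward the observed returns.
        policy_loss = -(advantages * log_probs).mean()
        value_loss = torch.nn.functional.mse_loss(values, returns)

        optimizer.zero_grad()
        (policy_loss + value_loss).backward()
        optimizer.step()

Subtracting the learned baseline reduces the variance of the gradient estimate without biasing it, which is the standard motivation for preferring REINFORCE with baseline over plain REINFORCE.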

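HER, evaluated in the second experiment, counters reward sparsity by relabeling failed trajectories with goals that were actually reached. A minimal sketch of the common "future" relabeling strategy follows; the function and its arguments are illustrative assumptions, not the project's code.

    import random

    def her_relabel(episode, reward_fn, k=4):
        """episode: list of (state, action, next_state, goal) tuples.
        reward_fn(next_state, goal): reward under a relabeled goal."""
        relabeled = []
        for t, (s, a, s_next, _goal) in enumerate(episode):
            # Sample up to k achieved states from the remainder of the
            # trajectory and pretend each was the goal all along.
            future = episode[t:]
            for _ in range(min(k, len(future))):
                new_goal = random.choice(future)[2]  # an achieved next_state
                relabeled.append((s, a, s_next, new_goal,
                                  reward_fn(s_next, new_goal)))
        return relabeled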

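The proposed method applies betweenness centrality to a graph built from observed transitions, on the intuition that states many shortest paths pass through (doorways, bottlenecks) make effective sub-goals. A minimal sketch of the idea using networkx follows; the dependency, the state labels, and the choice to return the single highest-centrality state are simplifying assumptions.

    import networkx as nx

    def propose_subgoal(transitions):
        """transitions: iterable of observed (state, next_state) pairs."""
        g = nx.DiGraph()
        g.add_edges_from(transitions)
        # Betweenness centrality: the fraction of shortest paths between
        # other node pairs that pass through each node.
        centrality = nx.betweenness_centrality(g)
        return max(centrality, key=centrality.get)  # most bridge-like state

    # Tiny two-room abstraction: 'd' lies on every inter-room path,
    # so it is returned as the sub-goal.
    transitions = [('a', 'b'), ('b', 'd'), ('d', 'e'), ('e', 'f'),
                   ('c', 'b'), ('d', 'c')]
    print(propose_subgoal(transitions))  # -> 'd'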

Bibliographic Details
Main Author: Sharma Divyansh
Other Authors: Arvind Easwaran (arvinde@ntu.edu.sg)
School: School of Computer Science and Engineering, Hardware & Embedded Systems Lab (HESL)
Format: Final Year Project (FYP)
Degree: Bachelor's degree
Project Code: SCSE23-0619
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Reinforcement learning; Goal conditioned reinforcement learning; Policy gradient algorithms; Network centrality
Online Access: https://hdl.handle.net/10356/175302
Citation: Sharma Divyansh (2024). Exploration of network centrality in goal conditioned reinforcement learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175302
Institution: Nanyang Technological University