Combining PSR theory with distributional reinforcement learning
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2020
Online Access: https://hdl.handle.net/10356/139946
Institution: Nanyang Technological University
Summary: This work focuses on using Distributional Reinforcement Learning (DRL) in a partially observable environment that is modelled via Predictive State Representation (PSR) theory. We aim to integrate the benefits of DRL and PSR to obtain a model-based reinforcement learning method capable of providing complete (distributional) performance information about a policy using an observation-only environment model. PSR theory is one of the advanced techniques for modelling a dynamical system in a partially observable environment. Unlike traditional partially observable Markov models, such as the POMDP, which capture the uncertainty of the environment using belief states, a PSR model describes the partially observable environment through the probabilities of executable and observable future events. Distributional Reinforcement Learning, proposed by Marc G. Bellemare and colleagues, is a learning paradigm that aims to improve learning by modelling the return as a probability distribution instead of a scalar expectation.
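To make the two ingredients in the summary concrete, the sketch below is a minimal Python illustration, not code from the thesis, and every name in it is hypothetical. It shows the PSR idea that predictions are conditional probabilities of future action-observation sequences, p(test | history) = p(history, test) / p(history), and the DRL idea of a categorical distributional Bellman backup in the style of Bellemare et al.'s C51, where a return distribution over a fixed support is shifted by the reward, scaled by the discount, and projected back onto the support.

```python
import numpy as np

# --- PSR idea: state as predictions of future events (illustrative only) ---
def psr_prediction(p_seq, history, test):
    """p(test | history) = p(history + test) / p(history).

    `p_seq` is a hypothetical oracle returning the probability of an
    action-observation sequence; in a learned PSR these quantities are
    maintained from a small set of core tests rather than queried directly.
    """
    return p_seq(history + test) / p_seq(history)

# --- DRL idea: categorical distributional Bellman backup (C51-style) ---
N_ATOMS, V_MIN, V_MAX = 51, -10.0, 10.0           # illustrative support
support = np.linspace(V_MIN, V_MAX, N_ATOMS)
delta_z = (V_MAX - V_MIN) / (N_ATOMS - 1)

def distributional_backup(next_probs, reward, gamma):
    """One application of the distributional Bellman operator.

    The next state's return distribution (probabilities over `support`)
    is shifted by `reward`, scaled by `gamma`, and projected back onto
    the fixed support, so the result is again a categorical distribution.
    """
    target = np.zeros_like(next_probs)
    tz = np.clip(reward + gamma * support, V_MIN, V_MAX)  # z' = r + gamma*z
    b = (tz - V_MIN) / delta_z                            # fractional index
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    for j in range(N_ATOMS):
        if lo[j] == hi[j]:            # atom lands exactly on a support point
            target[lo[j]] += next_probs[j]
        else:                         # split mass between the two neighbours
            target[lo[j]] += next_probs[j] * (hi[j] - b[j])
            target[hi[j]] += next_probs[j] * (b[j] - lo[j])
    return target

# Example: back up a point-mass return distribution through one transition.
next_probs = np.zeros(N_ATOMS)
next_probs[30] = 1.0                  # next-state return is support[30] w.p. 1
updated = distributional_backup(next_probs, reward=1.0, gamma=0.99)
assert abs(updated.sum() - 1.0) < 1e-9  # projection preserves total mass
```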