Receding horizon cache and extreme learning machine based reinforcement learning


Bibliographic Details
Main Authors: Shao, Zhifei, Er, Meng Joo, Huang, Guang-Bin
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language:English
Published: 2013
Subjects:
Online Access:https://hdl.handle.net/10356/97140
http://hdl.handle.net/10220/11704
Institution: Nanyang Technological University
Description
Summary: Function approximators have been extensively used in Reinforcement Learning (RL) to deal with large or continuous state-space problems. However, batch-learning Neural Networks (NN), one of the most common approximators, have rarely been applied to RL. In this paper, possible reasons for this are laid out and a solution is proposed. Specifically, a Receding Horizon Cache (RHC) structure is designed to collect training data for the NN by dynamically archiving state-action pairs and actively updating their Q-values, which makes batch-learning NN much easier to implement. Together with the Extreme Learning Machine (ELM), a new RL-with-function-approximation algorithm, termed RHC- and ELM-based RL (RHC-ELM-RL), is proposed. A mountain car task was carried out to test RHC-ELM-RL and compare its performance with that of other algorithms.
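The abstract's two ingredients can be illustrated with a minimal sketch: a fixed-capacity cache of (state-action, Q-value) training pairs, and an ELM regressor whose random hidden layer is never trained and whose output weights are solved in closed form by least squares. All class names, the oldest-first eviction policy, and the toy target below are assumptions for illustration; the paper's actual RHC design and Q-value update rule may differ.

```python
import numpy as np

class ELM:
    """Extreme Learning Machine regressor: random hidden layer,
    output weights solved in closed form (no iterative training)."""
    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))  # fixed random weights
        self.b = rng.normal(size=n_hidden)              # fixed random biases
        self.beta = None                                # learned output weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Closed-form least squares via the Moore-Penrose pseudoinverse.
        self.beta = np.linalg.pinv(self._hidden(X)) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

class Cache:
    """Fixed-size archive of (state-action features, Q-value) pairs.
    Oldest-first eviction stands in for the receding-horizon behaviour
    (an assumption; the paper's eviction/update scheme may differ)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.X, self.y = [], []

    def add(self, sa, q):
        self.X.append(np.asarray(sa, dtype=float))
        self.y.append(float(q))
        if len(self.X) > self.capacity:
            self.X.pop(0)
            self.y.pop(0)

    def batch(self):
        return np.stack(self.X), np.array(self.y)

# Toy usage: archive pairs, then batch-fit the ELM to the cached Q-values.
cache = Cache(capacity=100)
rng = np.random.default_rng(1)
for _ in range(100):
    sa = rng.uniform(-1.0, 1.0, size=3)   # stand-in (state, action) features
    cache.add(sa, np.sin(sa).sum())       # stand-in target Q-value
X, y = cache.batch()
model = ELM(n_inputs=3, n_hidden=50).fit(X, y)
mse = float(np.mean((model.predict(X) - y) ** 2))
```

Because the output weights come from a single pseudoinverse solve, refitting after each cache update is cheap, which is the property that makes batch learning practical in this setting.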