An efficient approach to model-based hierarchical reinforcement learning
We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction to efficiently solve large, complex problems. Our framework adopts a new transition dynamics learning algorithm that identifies the common action-feature combinations of the subtasks, and evaluates the subtask execution choices through simulation. The framework is sample efficient, and tolerates uncertain and incomplete problem characterization of the subtasks. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against the state-of-the-art algorithms, and scales well in very large problems.
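The abstract describes learning shared transition dynamics across subtasks and evaluating subtask execution choices by simulation. Below is a minimal, hypothetical Python sketch of that general idea only: an R-MAX-style "known/unknown" transition model shared by MAXQ-like subtasks, with Monte Carlo rollouts in the learned model to score a subtask's execution. The class and function names (SharedModel, evaluate_subtask, etc.) and all parameters are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the authors' actual algorithm): a shared tabular
# transition/reward model with R-MAX-style optimism for unknown pairs, and
# rollout-based evaluation of a subtask policy in that learned model.
import random
from collections import defaultdict

class SharedModel:
    """Tabular transition/reward model shared by all subtasks (R-MAX flavour)."""
    def __init__(self, m=5, r_max=1.0):
        self.m = m                    # visits needed before a (s, a) pair is "known"
        self.r_max = r_max            # optimistic reward assumed for unknown pairs
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': count}
        self.reward_sum = defaultdict(float)                 # (s, a) -> total reward

    def update(self, s, a, r, s_next):
        """Record one real transition observed by any subtask."""
        self.counts[(s, a)][s_next] += 1
        self.reward_sum[(s, a)] += r

    def known(self, s, a):
        return sum(self.counts[(s, a)].values()) >= self.m

    def sample(self, s, a):
        """Sample (reward, next_state) from the learned model; optimistic if unknown."""
        n = sum(self.counts[(s, a)].values())
        if n < self.m:
            return self.r_max, s      # unknown pair: assume best case, self-loop
        r = self.reward_sum[(s, a)] / n
        next_states = list(self.counts[(s, a)])
        weights = list(self.counts[(s, a)].values())
        return r, random.choices(next_states, weights=weights)[0]

def evaluate_subtask(model, policy, start_state, is_terminal,
                     horizon=50, rollouts=20, gamma=0.95):
    """Estimate a subtask's value by simulating its policy in the shared model."""
    total = 0.0
    for _ in range(rollouts):
        s, discount = start_state, 1.0
        for _ in range(horizon):
            if is_terminal(s):
                break
            a = policy(s)
            r, s = model.sample(s, a)
            total += discount * r
            discount *= gamma
    return total / rollouts
```

In the paper's framework the learned dynamics are tied to common action-feature combinations of the subtasks; the flat (state, action) table above is used only to keep the sketch short and self-contained.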
Main Authors: LI, Zhuoru; NARAYAN, Akshay; LEONG, Tze-Yun
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2017
Subjects: Reinforcement learning; Hierarchical reinforcement learning; MAXQ; R-MAX; Model-based reinforcement learning; Benchmark problems; Feature combination; Levels of abstraction; Model-based approach; Problem characterization; Robotic environments; State-of-the-art algorithms; Artificial Intelligence and Robotics; Operations Research, Systems Engineering and Industrial Engineering; Theory and Algorithms
Online Access: https://ink.library.smu.edu.sg/sis_research/4398 https://ink.library.smu.edu.sg/context/sis_research/article/5401/viewcontent/14771_66644_1_PB.pdf
Institution: Singapore Management University
Language: English
id: sg-smu-ink.sis_research-5401
record_format: dspace
spelling: sg-smu-ink.sis_research-5401 2020-03-25T03:26:25Z An efficient approach to model-based hierarchical reinforcement learning LI, Zhuoru; NARAYAN, Akshay; LEONG, Tze-Yun. We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction to efficiently solve large, complex problems. Our framework adopts a new transition dynamics learning algorithm that identifies the common action-feature combinations of the subtasks, and evaluates the subtask execution choices through simulation. The framework is sample efficient, and tolerates uncertain and incomplete problem characterization of the subtasks. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against the state-of-the-art algorithms, and scales well in very large problems. 2017-02-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/4398 https://ink.library.smu.edu.sg/context/sis_research/article/5401/viewcontent/14771_66644_1_PB.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Reinforcement learning; Hierarchical reinforcement learning; MAXQ; R-MAX; Model-based reinforcement learning; Benchmark problems; Feature combination; Levels of abstraction; Model-based approach; Problem characterization; Robotic environments; State-of-the-art algorithms; Artificial Intelligence and Robotics; Operations Research, Systems Engineering and Industrial Engineering; Theory and Algorithms
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Reinforcement learning; Hierarchical reinforcement learning; MAXQ; R-MAX; Model-based reinforcement learning; Benchmark problems; Feature combination; Levels of abstraction; Model-based approach; Problem characterization; Robotic environments; State-of-the-art algorithms; Artificial Intelligence and Robotics; Operations Research, Systems Engineering and Industrial Engineering; Theory and Algorithms
spellingShingle: Reinforcement learning; Hierarchical reinforcement learning; MAXQ; R-MAX; Model-based reinforcement learning; Benchmark problems; Feature combination; Levels of abstraction; Model-based approach; Problem characterization; Robotic environments; State-of-the-art algorithms; Artificial Intelligence and Robotics; Operations Research, Systems Engineering and Industrial Engineering; Theory and Algorithms; LI, Zhuoru; NARAYAN, Akshay; LEONG, Tze-Yun; An efficient approach to model-based hierarchical reinforcement learning
description: We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction to efficiently solve large, complex problems. Our framework adopts a new transition dynamics learning algorithm that identifies the common action-feature combinations of the subtasks, and evaluates the subtask execution choices through simulation. The framework is sample efficient, and tolerates uncertain and incomplete problem characterization of the subtasks. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against the state-of-the-art algorithms, and scales well in very large problems.
format: text
author: LI, Zhuoru; NARAYAN, Akshay; LEONG, Tze-Yun
author_facet: LI, Zhuoru; NARAYAN, Akshay; LEONG, Tze-Yun
author_sort: LI, Zhuoru
title: An efficient approach to model-based hierarchical reinforcement learning
title_short: An efficient approach to model-based hierarchical reinforcement learning
title_full: An efficient approach to model-based hierarchical reinforcement learning
title_fullStr: An efficient approach to model-based hierarchical reinforcement learning
title_full_unstemmed: An efficient approach to model-based hierarchical reinforcement learning
title_sort: efficient approach to model-based hierarchical reinforcement learning
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2017
url: https://ink.library.smu.edu.sg/sis_research/4398 https://ink.library.smu.edu.sg/context/sis_research/article/5401/viewcontent/14771_66644_1_PB.pdf
_version_: 1770574697419767808