Methods for autonomously decomposing and performing long-horizon sequential decision tasks
Sequential decision-making over long timescales and in complex task environments is an important problem in Artificial Intelligence (AI). An effective approach to tackling this problem is to autonomously decompose a long-horizon task into a sequence of simpler subtasks or subgoals. The thesis refers to this approach as Autonomous Task Decomposition (ATD) and studies it in three settings: multi-agent coordination using model-free Hierarchical Reinforcement Learning (HRL), single-agent goal-reaching using model-free HRL, and single-agent goal-reaching using model-based planning. The objective of the thesis is to develop novel methods addressing three important challenges related to ATD: (1) effective multi-agent HRL under sparse global rewards and complex inter-dependencies among agents; (2) efficient unification of autonomous subgoal discovery and single-agent HRL without slow learning; and (3) learning models for planning-based ATD that produce more rewarding and feasible plans. To this end, the thesis introduces three novel ATD methods. First, Inter Subtask Empowerment based Multi-agent Options (ISEMO) enables effective multi-agent HRL by using auxiliary rewards that capture the inter-dependencies among HRL agents and their (handcrafted) subtasks; ISEMO leads to better coordinated performance of the inter-dependent agents on a complex Search & Rescue task than a standard multi-agent HRL method. Second, End-to-End Hierarchical Reinforcement Learning with Integrated Discovery of Salient Subgoals (LIDOSS) efficiently unifies subgoal discovery and HRL for single-agent goal-reaching by integrating a probability-based subgoal discovery heuristic with the subgoal selection policy; LIDOSS accelerates end-to-end learning and achieves higher goal-reaching success rates than a state-of-the-art HRL method. Finally, Learning Subgoal Graph using Value-based Subgoal Discovery and Automatic Pruning (LSGVP) learns subgoal graph-based planning models that produce more rewarding and feasible plans for single-agent goal-reaching; LSGVP uses cumulative reward-based subgoal discovery and automatic pruning of erroneous connections in the subgoal graph, and achieves higher positive cumulative rewards and higher success rates than other state-of-the-art subgoal graph-based planning methods while being more data-efficient than model-free HRL.
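The abstract's core ATD idea — plan a long-horizon task as a sequence of subgoals over a learned graph, and (as in LSGVP) automatically prune unreliable graph connections before planning — can be sketched with a toy example. Everything below (the graph, the `reach_prob` estimates, the threshold, and both function names) is invented for illustration and is only loosely inspired by the abstract; it is not the thesis's actual algorithm.

```python
# Toy sketch: subgoal-graph planning with pruning of weak edges.
# A real system would learn the graph and reachability estimates from
# experience; here both are hard-coded for illustration.
from collections import deque


def prune_infeasible_edges(graph, reach_prob, threshold=0.5):
    """LSGVP-style automatic pruning (greatly simplified): drop edges
    whose estimated reachability falls below a threshold."""
    return {
        u: [v for v in nbrs if reach_prob.get((u, v), 0.0) >= threshold]
        for u, nbrs in graph.items()
    }


def shortest_subgoal_plan(graph, start, goal):
    """Breadth-first search over the pruned subgoal graph: returns the
    subgoal sequence from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None


graph = {"start": ["a", "b"], "a": ["goal"], "b": ["goal"], "goal": []}
# Suppose learned estimates say the b -> goal transition rarely succeeds.
reach_prob = {("start", "a"): 0.9, ("start", "b"): 0.8,
              ("a", "goal"): 0.85, ("b", "goal"): 0.2}
pruned = prune_infeasible_edges(graph, reach_prob)
plan = shortest_subgoal_plan(pruned, "start", "goal")
print(plan)  # ['start', 'a', 'goal']
```

Pruning the low-reachability edge before planning is what keeps the planner from emitting an infeasible plan through `b`; each subgoal in the returned sequence would then be handed to a low-level goal-reaching policy.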
Saved in:
Main Author: | Pateria, Shubham |
---|---|
Other Authors: | Quek Hiok Chai |
Format: | Thesis-Doctor of Philosophy |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
Online Access: | https://hdl.handle.net/10356/155182 |
Institution: | Nanyang Technological University |
Record ID: | sg-ntu-dr.10356-155182 |
---|---|
School: | School of Computer Science and Engineering |
Supervisor: | Quek Hiok Chai (ASHCQUEK@ntu.edu.sg) |
Degree: | Doctor of Philosophy |
Issued: | 2022-02-11 |
Citation: | Pateria, S. (2022). Methods for autonomously decomposing and performing long-horizon sequential decision tasks. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/155182 |
DOI: | 10.32657/10356/155182 |
Rights: | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). |