Exploiting Belief Bounds: Practical POMDPs for Personal Assistant Agents
Agents or agent teams deployed to assist humans often face the challenges of monitoring the state of key processes in their environment (including the state of their human users themselves) and making periodic decisions based on such monitoring. POMDPs appear well suited to enable agents to address these challenges, given the uncertain environment and cost of actions, but optimal policy generation for POMDPs is computationally expensive. This paper introduces three key techniques to speed up POMDP policy generation that exploit the notion of progress or dynamics in personal assistant domains. Policy computation is restricted to the belief-space polytope that remains reachable given the progress structure of a domain. We introduce new algorithms, in particular one based on applying Lagrangian methods to compute a bounded belief-space support in polynomial time. Our techniques are complementary to many existing exact and approximate POMDP policy generation algorithms. Indeed, we illustrate this by enhancing two of the fastest existing algorithms for exact POMDP policy generation. The order-of-magnitude speedups demonstrate the utility of our techniques in facilitating the deployment of POMDPs within agents assisting human users.
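For context on the idea the abstract describes, the Python sketch below illustrates what it means to restrict policy computation to reachable beliefs: starting from an initial belief, only a bounded region of the belief simplex can ever be visited. The matrices `T` (transition) and `O` (observation) and the enumeration-based bounding are hypothetical illustrations, not the paper's method; the paper instead computes bounded belief-space supports in polynomial time via Lagrangian methods.

```python
# Minimal sketch of restricting attention to the reachable belief region of a
# POMDP. This is NOT the paper's algorithm (which uses Lagrangian methods to
# bound the belief support in polynomial time); it is a naive illustration
# that bounds reachable beliefs by explicit enumeration.
import numpy as np

def belief_update(b, a, o, T, O):
    """Standard Bayesian belief update:
    b'(s') proportional to O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_next = O[a][:, o] * (b @ T[a])
    z = b_next.sum()
    return b_next / z if z > 0 else None  # None: observation o impossible from b

def reachable_belief_bounds(b0, T, O, horizon):
    """Enumerate all beliefs reachable from b0 within `horizon` steps and
    return per-state [min, max] bounds: a crude box around the reachable
    belief polytope. Enumeration grows exponentially with the horizon, which
    is exactly the cost the paper's polynomial-time bounding avoids."""
    n_actions = len(T)
    n_obs = O[0].shape[1]
    frontier = [np.asarray(b0, dtype=float)]
    seen = []
    for _ in range(horizon):
        successors = []
        for b in frontier:
            for a in range(n_actions):
                for o in range(n_obs):
                    b_next = belief_update(b, a, o, T, O)
                    if b_next is not None:
                        successors.append(b_next)
        seen.extend(frontier)
        frontier = successors
    seen.extend(frontier)
    beliefs = np.vstack(seen)
    return beliefs.min(axis=0), beliefs.max(axis=0)
```

A solver could then prune value-function backups to beliefs inside these bounds; the paper's contribution is obtaining comparable bounds without the exponential enumeration above.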
Main Authors: VARAKANTHAM, Pradeep; Maheswaran, Rajiv; Tambe, Milind
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2005-07-01
DOI: 10.1145/1082473.1082621
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Subjects: meeting rescheduling; task allocation; partially observable Markov decision process (POMDP); Artificial Intelligence and Robotics; Business; Operations Research, Systems Engineering and Industrial Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/938
https://ink.library.smu.edu.sg/context/sis_research/article/1937/viewcontent/p774_varakantham.pdf
Institution: Singapore Management University