Reinforcement learning framework for modeling spatial sequential decisions under uncertainty (extended abstract)

We consider the problem of trajectory prediction, where a trajectory is an ordered sequence of location visits and corresponding timestamps. The problem arises when an agent makes sequential decisions to visit a set of spatial locations of interest. Each location bears a stochastic utility, and the agent has a limited budget to spend. Given the agent's observed partial trajectory, our goal is to predict the remaining trajectory. We propose a solution framework that accounts for both the uncertainty of utility and the budget constraint. We use reinforcement learning (RL) to model the underlying decision processes and inverse RL to learn the utility distributions of the locations. We then propose two decision models to make predictions: one based on the long-term optimal planning of RL, the other on myopic heuristics. We finally apply the framework to predict real-world human trajectories and are able to explain the underlying processes of the observed actions.
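The record contains no code, but the two decision models named in the abstract are concrete enough to sketch. Below is a minimal, hypothetical Python illustration of a budget-constrained MDP over locations: plan_value solves the Bellman recursion V(location, budget, visited) = max over affordable next locations of E[U(next)] + gamma * V(next state), corresponding to the long-term optimal planning model, while predict_myopic greedily ranks immediate utility per unit of cost. All names, the toy travel costs, and the fixed utility means are illustrative assumptions; in the paper, the utility distributions are learned from observed trajectories via inverse RL.

from functools import lru_cache
import random

# Hypothetical toy setup (not the authors' data): a handful of locations,
# a total budget, and a discount factor for long-term planning.
N_LOCATIONS = 5
BUDGET = 6
GAMMA = 0.95

random.seed(0)
# Stand-in for the utility distributions that, per the abstract, inverse RL
# would learn from observed trajectories; here each location just gets a
# fixed expected utility.
utility_mean = {loc: random.uniform(0.0, 1.0) for loc in range(N_LOCATIONS)}

def travel_cost(a, b):
    # Assumed cost of moving between locations (1, 2, or 3 budget units).
    return 1 + abs(a - b) % 3

def feasible_moves(loc, budget, visited):
    # Unvisited locations that the remaining budget can still pay for.
    return [l for l in range(N_LOCATIONS)
            if l not in visited and travel_cost(loc, l) <= budget]

@lru_cache(maxsize=None)
def plan_value(loc, budget, visited):
    # Long-term planning model: Bellman recursion
    #   V(loc, b, visited) = max_next E[U(next)] + GAMMA * V(next, b - cost, ...)
    # solved exactly by memoized search over the (small) state space.
    best = 0.0
    for nxt in feasible_moves(loc, budget, visited):
        best = max(best, utility_mean[nxt] + GAMMA *
                   plan_value(nxt, budget - travel_cost(loc, nxt),
                              visited | {nxt}))
    return best

def predict_planning(loc, budget, visited):
    # Choose the next visit by one-step lookahead on the optimal value.
    moves = feasible_moves(loc, budget, visited)
    if not moves:
        return None
    return max(moves, key=lambda nxt: utility_mean[nxt] + GAMMA *
               plan_value(nxt, budget - travel_cost(loc, nxt), visited | {nxt}))

def predict_myopic(loc, budget, visited):
    # Myopic heuristic: best immediate utility per unit of cost, no lookahead.
    moves = feasible_moves(loc, budget, visited)
    if not moves:
        return None
    return max(moves, key=lambda nxt: utility_mean[nxt] / travel_cost(loc, nxt))

def complete_trajectory(policy, start, budget):
    # Roll a policy forward to complete a partial trajectory, mirroring the
    # prediction task described in the abstract.
    trajectory, visited, loc = [start], frozenset({start}), start
    while True:
        nxt = policy(loc, budget, visited)
        if nxt is None:
            return trajectory
        budget -= travel_cost(loc, nxt)
        visited, loc = visited | {nxt}, nxt
        trajectory.append(nxt)

print("planning:", complete_trajectory(predict_planning, 0, BUDGET))
print("myopic:  ", complete_trajectory(predict_myopic, 0, BUDGET))

The contrast the abstract draws is visible here: the planning policy may trade immediate utility for affordable future visits, while the myopic policy commits to the locally best utility-to-cost ratio at each step.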

Bibliographic Details
Main Authors: LE, Truc Viet; LIU, Siyuan; LAU, Hoong Chuin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2016
Collection: Research Collection School Of Computing and Information Systems
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects: reinforcement learning; budget constraint; stochastic utility; Markov decision process; sequential decisions; trajectory prediction; Artificial Intelligence and Robotics; Computer Sciences; Operations Research, Systems Engineering and Industrial Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/3403
https://ink.library.smu.edu.sg/context/sis_research/article/4404/viewcontent/ReinforcementLearningRameworkSSD_AAMAS_2016.pdf
Institution: Singapore Management University