A reinforcement learning framework for trajectory prediction under uncertainty and budget constraint
We consider the problem of trajectory prediction, where a trajectory is an ordered sequence of location visits and corresponding timestamps. The problem arises when an agent makes sequential decisions to visit a set of spatial locations of interest. Each location bears a stochastic utility and the agent has a limited budget to spend. Given the agent's observed partial trajectory, our goal is to predict the agent's remaining trajectory. We propose a solution framework to the problem that incorporates both the stochastic utility of each location and the budget constraint. We first cluster the agents into groups of homogeneous behaviors called "agent types". Depending on its type, each agent's trajectory is then transformed into a discrete-state sequence representation. Based on such representations, we use reinforcement learning (RL) to model the underlying decision processes and inverse RL to learn the utility distributions of the spatial locations. We finally propose two decision models to make predictions: one is based on long-term optimal planning of RL and the other uses myopic heuristics. We apply the framework to predict real-world human trajectories collected in a large theme park and are able to explain the underlying processes of the observed actions.
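To make the budget-constrained decision process concrete, here is a minimal sketch of the abstract's two prediction models: long-term optimal planning over a budget-augmented state space, and a greedy myopic heuristic. The locations, utilities, costs, and budget below are hypothetical illustrations, not the paper's data or implementation; in the actual framework, the utilities would be distributions learned by inverse RL rather than fixed numbers.

```python
# Hedged sketch of the two decision models described in the abstract.
# All names and values here are hypothetical, for illustration only.

# Hypothetical inputs: expected utility of each location (as inverse RL
# might estimate) and the budget cost of visiting it.
utility = {"A": 5.0, "B": 3.0, "C": 8.0}
cost = {"A": 2, "B": 1, "C": 4}
BUDGET = 6  # total budget the agent can spend

def plan_long_term(remaining, visited=frozenset()):
    """Return (value, trajectory) of the best remaining trajectory.

    Exhaustive dynamic programming over (visited set, remaining budget):
    the 'long-term optimal planning' model in miniature.
    """
    best_value, best_traj = 0.0, []
    for loc in utility:
        if loc in visited or cost[loc] > remaining:
            continue
        future_value, future_traj = plan_long_term(
            remaining - cost[loc], visited | {loc})
        value = utility[loc] + future_value
        if value > best_value:
            best_value, best_traj = value, [loc] + future_traj
    return best_value, best_traj

def plan_myopic(remaining):
    """Greedy heuristic: repeatedly visit the affordable location with
    the highest utility-per-cost ratio (the 'myopic' model)."""
    visited, traj = set(), []
    while True:
        feasible = [l for l in utility
                    if l not in visited and cost[l] <= remaining]
        if not feasible:
            return traj
        loc = max(feasible, key=lambda l: utility[l] / cost[l])
        traj.append(loc)
        visited.add(loc)
        remaining -= cost[loc]

print(plan_long_term(BUDGET))  # (13.0, ['A', 'C']) with these numbers
print(plan_myopic(BUDGET))     # ['B', 'A'] -- total utility 8.0
```

Note how the myopic policy collects only 8.0 utility versus the planner's 13.0 here: the trade-off between lookahead quality and computational cost is exactly what motivates proposing both decision models.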
Main Authors: LE, Truc Viet; LIU, Siyuan; LAU, Hoong Chuin
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2016-09-01
DOI: 10.3233/978-1-61499-672-9-347
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Subjects: Artificial Intelligence and Robotics; Computer Sciences; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/3364 ; https://ink.library.smu.edu.sg/context/sis_research/article/4366/viewcontent/ReinforcementLearningFramework.pdf
Institution: Singapore Management University