Understanding Sequential Decisions via Inverse Reinforcement Learning
The execution of an agent's complex activities, comprising sequences of simpler actions, sometimes leads to a clash between conflicting functions that must be optimized. These functions represent satisfaction, short-term and long-term objectives, costs, and individual preferences. How these functions are weighted is usually unknown, even to the decision maker. But if we could understand individual motivations and compare them across individuals, we could actively change the environment so as to increase satisfaction and/or improve performance. In this work, we approach the problem of providing high-level and intelligible descriptions of the motivations of an agent, based on observations of that agent during the fulfillment of a series of complex activities (called sequential decisions in our work). We propose a novel algorithm for the analysis of observational records, and we present a methodology that allows researchers to converge towards a summary description of an agent's behaviors through the minimization of an error measure between the current description and the observed behaviors. The work was validated not only on a synthetic dataset representing the motivations of a passenger in a public transportation network, but also on real taxi drivers' behaviors from their trips in an urban network. Our results show that the method is not only useful, but also performs much better than previous methods in terms of accuracy, efficiency and scalability.
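As a rough illustration of the kind of inference the abstract describes (and not the algorithm proposed in the paper), the sketch below treats the unknown weighting of an agent's objectives as a linear reward over state features and searches for weights whose induced behavior minimizes an error against the observed behavior. The toy MDP, the feature labels, and the random-search loop are all assumptions made for this example.

```python
# Minimal inverse-reinforcement-learning-style sketch (illustrative only):
# reward r(s) = w . phi(s); find a weighting w whose induced behavior is
# closest to the observed behavior, i.e. minimize an error measure between
# the current description (candidate w) and the observations.
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP: 4 states, 2 actions, deterministic transitions (assumed for brevity).
n_states, n_actions, gamma = 4, 2, 0.9
P = np.zeros((n_states, n_actions, n_states))
P[0, 0, 1] = P[0, 1, 2] = 1.0
P[1, 0, 3] = P[1, 1, 0] = 1.0
P[2, 0, 3] = P[2, 1, 0] = 1.0
P[3, 0, 3] = P[3, 1, 3] = 1.0
phi = np.array([[1.0, 0.0],   # per-state features, e.g. hypothetical "cost"
                [0.0, 1.0],   # vs. "comfort" objectives
                [0.5, 0.5],
                [1.0, 1.0]])

def greedy_policy(w, iters=100):
    """Value iteration for reward r(s) = w . phi(s); returns a greedy policy."""
    r = phi @ w
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = r[:, None] + gamma * np.einsum("san,n->sa", P, V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(policy, start=0, horizon=50):
    """Discounted feature counts from rolling the policy out from a start state."""
    mu, s = np.zeros(phi.shape[1]), start
    for t in range(horizon):
        mu += (gamma ** t) * phi[s]
        s = P[s, policy[s]].argmax()   # deterministic next state
    return mu

# "Observed behavior": demonstrations generated by a hidden true weighting.
w_true = np.array([0.2, 0.8])
mu_expert = feature_expectations(greedy_policy(w_true))

# Crude random search over candidate weightings, keeping the one whose induced
# behavior is closest to the observations (the error measure being minimized).
best_w, best_err = None, np.inf
for _ in range(500):
    w = rng.random(2)
    w /= w.sum()
    err = np.linalg.norm(mu_expert - feature_expectations(greedy_policy(w)))
    if err < best_err:
        best_w, best_err = w, err

print("recovered weighting:", np.round(best_w, 2), "error:", round(best_err, 3))
```

Note that several weightings can induce the same observed behavior, so the search recovers a weighting consistent with the demonstrations rather than a unique one; resolving that ambiguity is part of what dedicated IRL methods address.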
Main Authors: LIU, Siyuan; ARAUJO, Miguel; BRUNSKILL, Emma; ROSSETTI, Rosaldo; BARROS, Joao; KRISHNAN, Ramayya
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2013
Collection: Research Collection School Of Computing and Information Systems
Subjects: Artificial Intelligence and Robotics; Theory and Algorithms
DOI: 10.1109/MDM.2013.28
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Online Access:
https://ink.library.smu.edu.sg/sis_research/3474
https://ink.library.smu.edu.sg/context/sis_research/article/4475/viewcontent/C55___Understanding_Sequential_Decisions_via_Inverse_Reinforcement_Learning__MDM2013_.pdf
Institution: Singapore Management University
Record ID: sg-smu-ink.sis_research-4475