Understanding Sequential Decisions via Inverse Reinforcement Learning

Bibliographic Details
Main Authors: LIU, Siyuan, ARAUJO, Miguel, BRUNSKILL, Emma, ROSSETTI, Rosaldo, BARROS, Joao, KRISHNAN, Ramayya
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2013
Online Access:https://ink.library.smu.edu.sg/sis_research/3474
https://ink.library.smu.edu.sg/context/sis_research/article/4475/viewcontent/C55___Understanding_Sequential_Decisions_via_Inverse_Reinforcement_Learning__MDM2013_.pdf
Institution: Singapore Management University
Description
Summary: The execution of an agent's complex activities, comprising sequences of simpler actions, sometimes requires trading off conflicting functions that must be optimized. These functions capture satisfaction, short- and long-term objectives, costs, and individual preferences. How these functions are weighted is usually unknown, even to the decision maker. But if we could understand individual motivations and compare such motivations across individuals, then we could actively change the environment so as to increase satisfaction and/or improve performance. In this work, we approach the problem of providing high-level and intelligible descriptions of the motivations of an agent, based on observations of that agent during the fulfillment of a series of complex activities (called sequential decisions in our work). A novel algorithm for the analysis of observational records is proposed. We also present a methodology that allows researchers to converge towards a summary description of an agent's behaviors through the minimization of an error measure between the current description and the observed behaviors. This work was validated not only on a synthetic dataset representing the motivations of a passenger in a public transportation network, but also on real taxi drivers' behaviors from their trips in an urban network. Our results show that our method is not only useful, but also performs much better than previous methods in terms of accuracy, efficiency, and scalability.
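The abstract describes recovering an agent's unknown weighting of objective functions from observed sequential decisions. The paper's own algorithm is not reproduced here; as a minimal sketch of the underlying inverse-reinforcement-learning idea, the toy example below matches discounted feature expectations (in the spirit of apprenticeship learning) on a small chain MDP. The MDP, the one-hot features, and the learning rate are all illustrative assumptions, not the authors' method.

```python
import numpy as np

# Toy deterministic chain MDP with 5 states; actions: 0 = left, 1 = right.
# Reward is linear in a one-hot state feature, so r(s) = w[s].
# We observe an "expert" who always moves right and try to recover a
# weight vector w whose greedy policy reproduces that behaviour.
N_STATES, GAMMA, HORIZON = 5, 0.9, 10

def step(s, a):
    """Deterministic transition: move one state left or right along the chain."""
    return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

def phi(s):
    """One-hot state feature vector."""
    f = np.zeros(N_STATES)
    f[s] = 1.0
    return f

def feature_expectations(policy, start=0):
    """Discounted feature counts accumulated by rolling out a deterministic policy."""
    mu, s = np.zeros(N_STATES), start
    for t in range(HORIZON):
        mu += GAMMA ** t * phi(s)
        s = step(s, policy[s])
    return mu

def greedy_policy(w):
    """Value iteration under reward w, then act greedily on the resulting values."""
    V = np.zeros(N_STATES)
    for _ in range(200):
        V = np.array([w[s] + GAMMA * max(V[step(s, 0)], V[step(s, 1)])
                      for s in range(N_STATES)])
    return [int(np.argmax([V[step(s, 0)], V[step(s, 1)]]))
            for s in range(N_STATES)]

# "Observed" expert behaviour: always move right (toward state 4).
mu_expert = feature_expectations([1] * N_STATES)

# IRL loop: nudge w to close the gap between the expert's feature
# expectations and those induced by the current greedy policy.
w = np.zeros(N_STATES)
for _ in range(50):
    w += 0.1 * (mu_expert - feature_expectations(greedy_policy(w)))

learned = greedy_policy(w)  # should now also move right in states 0-3
```

The recovered weights concentrate on the expert's goal state, which is enough for the greedy policy to imitate the demonstrations; realistic settings such as the taxi data in the paper would additionally need stochastic dynamics, richer features, and estimation from many noisy trajectories.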