A review of inverse reinforcement learning theory and recent advances

A major challenge faced by the machine learning community is decision making under uncertainty. Reinforcement Learning (RL) techniques provide a powerful solution to this problem: an RL agent interacts with a dynamic environment and finds a policy through a reward function, without relying on target labels as in Supervised Learning (SL). However, a fundamental assumption of existing RL algorithms is that the reward function, the most succinct representation of the designer's intention, must be provided beforehand. In practice, the reward function can be very hard to specify and exhausting to tune for large and complex problems, and this has inspired the development of Inverse Reinforcement Learning (IRL), an extension of RL that tackles the problem directly by learning the reward function from expert demonstrations. IRL introduces a new way of learning policies by deriving the expert's intentions, in contrast to learning policies directly, which can be redundant and generalize poorly. In this paper, the original IRL algorithms and their close variants, as well as their recent advances, are reviewed and compared.
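For context, the contrast the abstract draws between RL and IRL is commonly formalized with a linear reward model; the sketch below uses generic notation (feature map \(\phi\), weight vector \(w\), discount factor \(\gamma\), expert policy \(\pi_E\)) and is offered as an illustration of that standard setup, not as notation taken from the paper itself:

% Reward assumed linear in state features; \mu(\pi) are discounted feature expectations.
\[
R(s) = w^{\top}\phi(s), \qquad
\mu(\pi) = \mathbb{E}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_t)\ \middle|\ \pi\right]
\]
% Forward RL: R given, find \pi maximizing w^{\top}\mu(\pi).
% Inverse RL: demonstrations from \pi_E given, find w under which the expert is (near-)optimal:
\[
\text{find } w \ \text{ such that } \ w^{\top}\mu(\pi_E) \ \ge\ w^{\top}\mu(\pi) \quad \text{for all policies } \pi .
\]

Under this view, forward RL searches over policies for a fixed reward, while IRL searches over reward weights that explain the observed expert behavior, after which any standard RL solver can recover a policy.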

Bibliographic Details
Main Authors: Shao, Zhifei; Er, Meng Joo
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2013
Subjects: DRNTU::Engineering::Electrical and electronic engineering
Online Access: https://hdl.handle.net/10356/96908
http://hdl.handle.net/10220/12003
Institution: Nanyang Technological University
id sg-ntu-dr.10356-96908
institution Nanyang Technological University
building NTU Library
country Singapore
collection DR-NTU
language English
topic DRNTU::Engineering::Electrical and electronic engineering
author Shao, Zhifei
author Er, Meng Joo
author2 School of Electrical and Electronic Engineering
format Conference or Workshop Item
conference IEEE Congress on Evolutionary Computation (2012 : Brisbane, Australia)
title A review of inverse reinforcement learning theory and recent advances
citation Shao, Z., & Er, M. J. (2012). A review of inverse reinforcement learning theory and recent advances. 2012 IEEE Congress on Evolutionary Computation (CEC).
doi 10.1109/CEC.2012.6256507
rights © 2012 IEEE.
publishDate 2013
url https://hdl.handle.net/10356/96908
url http://hdl.handle.net/10220/12003