Transferring expectations in model-based reinforcement learning
We study how to automatically select and adapt multiple abstractions or representations of the world to support model-based reinforcement learning. We address the challenges of transfer learning in heterogeneous environments with varying tasks. We present an efficient, online framework that, through...
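The abstract only gestures at the approach, so here is a minimal, generic sketch (not the paper's algorithm) of one idea it describes: maintaining several candidate world models built on different state representations, scoring them online against observed transitions, and selecting the one that currently predicts best for model-based planning. Every name here (AbstractionModel, select_model, phi, the toy chain dynamics) is a hypothetical illustration, not taken from the paper.

```python
import math
import random

class AbstractionModel:
    """Counting model P(next_state | phi(state), action) for one state abstraction."""
    def __init__(self, name, phi, n_states):
        self.name = name
        self.phi = phi              # abstraction: raw state -> abstract state
        self.n_states = n_states    # size of the raw state space (for smoothing)
        self.counts = {}            # (abstract_state, action) -> {next_state: count}
        self.log_score = 0.0        # running predictive log-likelihood

    def prob(self, s, a, s_next):
        dist = self.counts.get((self.phi(s), a), {})
        total = sum(dist.values())
        # Laplace smoothing so unseen transitions keep nonzero probability
        return (dist.get(s_next, 0) + 1.0) / (total + self.n_states)

    def update(self, s, a, s_next):
        # Score first (online, prequential evaluation), then learn from the transition.
        self.log_score += math.log(self.prob(s, a, s_next))
        key = (self.phi(s), a)
        self.counts.setdefault(key, {})
        self.counts[key][s_next] = self.counts[key].get(s_next, 0) + 1

def select_model(models):
    """Choose the representation that currently explains the observed data best."""
    return max(models, key=lambda m: m.log_score)

if __name__ == "__main__":
    random.seed(0)
    N = 10
    models = [
        AbstractionModel("identity", lambda s: s, N),      # full raw state
        AbstractionModel("parity", lambda s: s % 2, N),    # coarse abstraction
    ]
    s = 0
    for _ in range(500):
        a = random.choice([-1, +1])
        s_next = max(0, min(N - 1, s + a))   # toy 1-D chain dynamics
        for m in models:
            m.update(s, a, s_next)
        s = s_next
    best = select_model(models)
    print(f"representation used for planning: {best.name} (log-score {best.log_score:.1f})")
```

In this toy run the identity representation wins because the chain's boundary effects are only predictable from the exact state; the point is simply that candidate representations can be compared online by how well they explain the agent's experience.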
Main Authors: Nguyen, Trung Thanh; Silander, Tomi; LEONG, Tze-Yun
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2012
Online Access: https://ink.library.smu.edu.sg/sis_research/3049
Institution: Singapore Management University
Similar Items
- SEAPoT-RL: Selective exploration algorithm for policy transfer in RL
  by: NARAYAN, Akshay, et al.
  Published: (2017)
- Scalable transfer learning in heterogeneous, dynamic environments
  by: Nguyen, Trung Thanh, et al.
  Published: (2017)
- Transition-informed reinforcement learning for large-scale Stackelberg mean-field games
  by: LI, Pengdeng, et al.
  Published: (2024)
- Implicit curriculum in ProcGen made explicit
  by: TAN, Zhenxiong
  Published: (2024)
- Reinforced adaptation network for partial domain adaptation
  by: WU, Keyu, et al.
  Published: (2023)