Knowledge Transfer for Deep Reinforcement Learning with Hierarchical Experience Replay
The process of transferring knowledge from multiple reinforcement learning policies into a single multi-task policy via a distillation technique is known as policy distillation. When policy distillation is performed in a deep reinforcement learning setting, due to the large number of parameters and the huge state s...
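The abstract names policy distillation; as a minimal illustrative sketch (not this paper's hierarchical experience replay method), the standard distillation objective trains a student network to match each teacher's softened action distribution, typically via a KL divergence over temperature-scaled Q-values. All function and variable names below are assumptions for illustration:

```python
import numpy as np

def softmax(x, tau=1.0):
    # Temperature-scaled softmax over action values (numerically stable).
    z = x / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_q, student_q, tau=0.01):
    # KL(teacher || student) between softened action distributions,
    # the objective commonly used in policy distillation. A small
    # temperature sharpens the teacher's policy toward its greedy action.
    p = softmax(teacher_q, tau)   # teacher's (sharpened) policy
    q = softmax(student_q, tau)   # student's policy
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean())
```

Minimizing this loss over states sampled from each teacher's replay memory yields one multi-task student; how that replay memory is organized is the subject of the paper, which this sketch does not model.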
Main Authors: Yin, Haiyan; Pan, Sinno Jialin
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2017
Online Access: https://hdl.handle.net/10356/83043 http://hdl.handle.net/10220/42453 https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14478
Institution: Nanyang Technological University
Similar Items
- Hindsight-Combined and Hindsight-Prioritized Experience Replay, by Tan, Renzo Roel P, et al. (2020)
- Goal modelling for deep reinforcement learning agents, by Leung, Jonathan, et al. (2022)
- Deep-attack over the deep reinforcement learning, by Li, Yang, et al. (2022)
- Deep reinforcement learning for solving vehicle routing problems, by Li Jingwen (2022)
- Action selection for composable modular deep reinforcement learning, by Gupta, Vaibhav, et al. (2021)