Knowledge Transfer for Deep Reinforcement Learning with Hierarchical Experience Replay
Policy distillation is the process of transferring knowledge from multiple reinforcement learning policies into a single multi-task policy via a distillation technique. In a deep reinforcement learning setting, due to the large parameter size and the huge state s...
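The abstract's definition of policy distillation can be made concrete with a short sketch. The snippet below is a minimal illustration, assuming the common temperature-softened KL formulation of policy distillation (Rusu et al., 2016); it does not reproduce this paper's hierarchical experience replay scheme, and the network sizes, temperature value, and minibatch are illustrative assumptions.

```python
# Minimal sketch of a policy-distillation loss: the student Q-network is trained
# to match a temperature-softened softmax over the teacher's Q-values via KL
# divergence. Shapes, temperature, and sampling are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(teacher_q: torch.Tensor,
                      student_q: torch.Tensor,
                      tau: float = 0.01) -> torch.Tensor:
    """KL(teacher || student) on softened action distributions.

    teacher_q, student_q: (batch, num_actions) Q-value tensors.
    tau: temperature; a small tau sharpens the teacher's distribution.
    """
    teacher_p = F.softmax(teacher_q / tau, dim=1)      # soft targets from teacher
    student_log_p = F.log_softmax(student_q, dim=1)    # student log-probabilities
    # 'batchmean' averages the KL divergence over the minibatch
    return F.kl_div(student_log_p, teacher_p, reduction="batchmean")

if __name__ == "__main__":
    # Usage: states would be sampled from an experience replay buffer, then fed
    # through teacher and student Q-networks; here random Q-values stand in.
    teacher_q = torch.randn(32, 6)                       # e.g. a 6-action task
    student_q = torch.randn(32, 6, requires_grad=True)
    loss = distillation_loss(teacher_q, student_q)
    loss.backward()
    print(loss.item())
```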
Main Authors: | Yin, Haiyan, Pan, Sinno Jialin |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2017 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/83043 http://hdl.handle.net/10220/42453 https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14478 |
Similar Items
- Hindsight-Combined and Hindsight-Prioritized Experience Replay
  by: Tan, Renzo Roel P, et al.
  Published: (2020)
- Goal modelling for deep reinforcement learning agents
  by: Leung, Jonathan, et al.
  Published: (2022)
- Deep-attack over the deep reinforcement learning
  by: Li, Yang, et al.
  Published: (2022)
- DEEP REINFORCEMENT LEARNING FOR SOLVING VEHICLE ROUTING PROBLEMS
  by: LI JINGWEN
  Published: (2022)
- Action selection for composable modular deep reinforcement learning
  by: GUPTA, Vaibhav, et al.
  Published: (2021)