Hindsight-Combined and Hindsight-Prioritized Experience Replay


Full Description

Bibliographic Details
Main Authors: Tan, Renzo Roel P, Ikeda, Kazushi, Vergara, John Paul
Format: text
Published: Archīum Ateneo 2020
Subjects:
Online Access: https://archium.ateneo.edu/mathematics-faculty-pubs/146
https://link.springer.com/chapter/10.1007%2F978-3-030-63833-7_36
Institution: Ateneo De Manila University
Item Description
Summary: Reinforcement learning has proved to be of great utility; execution, however, may be costly due to sampling inefficiency. An efficient method for training is experience replay, which recalls past experiences. Several experience replay techniques, namely, combined experience replay, hindsight experience replay, and prioritized experience replay, have been crafted, while their relative merits are unclear. This study proposes hybrid algorithms – hindsight-combined and hindsight-prioritized experience replay – and evaluates their performance against published baselines. Experimental results demonstrate the superior performance of hindsight-combined experience replay on an OpenAI Gym benchmark. Further, insight into the nonconvergence of hindsight-prioritized experience replay is presented towards the improvement of the approach.
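The combined experience replay idea referenced in the summary, guaranteeing that the newest transition appears in every sampled minibatch, can be sketched as follows. This is an illustrative Python snippet under assumed names and structure, not the authors' implementation:

```python
import random
from collections import deque

class CombinedReplayBuffer:
    """Minimal replay buffer illustrating combined experience replay:
    every sampled minibatch always includes the most recent transition."""

    def __init__(self, capacity):
        # A bounded deque discards the oldest transitions once full.
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # transition is assumed to be (state, action, reward, next_state, done).
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Draw batch_size - 1 past transitions uniformly at random from all
        # but the newest entry, then append the newest one (the "combined" step).
        past = list(self.buffer)[:-1]
        batch = random.sample(past, min(batch_size - 1, len(past)))
        batch.append(self.buffer[-1])
        return batch
```

A hindsight-combined variant, as studied in the paper, would additionally relabel stored transitions with achieved goals before they enter the buffer.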