Probabilistic guided exploration for reinforcement learning in self-organizing neural networks

Bibliographic Details
Main Authors: Wang, Peng, Zhou, Weigui Jair, Wang, Di, Tan, Ah-Hwee
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2019
Subjects:
Online Access: https://hdl.handle.net/10356/89871
http://hdl.handle.net/10220/49724
Description
Summary: Exploration is essential in reinforcement learning: it expands the search space of potential solutions to a given problem for performance evaluation. In particular, a carefully designed exploration strategy may help the agent learn faster by taking advantage of what it has learned previously. However, many reinforcement learning mechanisms still adopt simple exploration strategies that select actions purely at random among all feasible actions. In this paper, we propose novel mechanisms to improve the existing knowledge-based exploration strategy with a probabilistic guided approach to action selection. We conduct extensive experiments in a Minefield navigation simulator, and the results show that our proposed probabilistic guided exploration approach significantly improves the convergence rate.
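
The summary contrasts pure random action selection with a probabilistic guided alternative that exploits previously learned knowledge. The abstract does not spell out the weighting scheme used in the paper, so the Python sketch below only illustrates the general idea, not the authors' method: during exploratory steps it samples actions in proportion to a softmax over learned action values (an assumed weighting) instead of uniformly at random. All names here (select_action, q_values, tau) are hypothetical.

    import numpy as np

    def softmax(x, tau=1.0):
        # Numerically stable softmax over action preferences;
        # tau controls how strongly learned values bias exploration.
        z = (np.asarray(x, dtype=float) - np.max(x)) / tau
        e = np.exp(z)
        return e / e.sum()

    def select_action(q_values, epsilon=0.1, guided=True, rng=None):
        # With probability 1 - epsilon, exploit the greedy action.
        # Otherwise explore: either uniformly at random ("pure random"),
        # or weighted by the learned values ("probabilistic guided").
        rng = rng or np.random.default_rng()
        if rng.random() > epsilon:
            return int(np.argmax(q_values))
        if guided:
            return int(rng.choice(len(q_values), p=softmax(q_values)))
        return int(rng.integers(len(q_values)))

Under this sketch, a call like select_action([0.2, 0.9, 0.1]) still favors the second action even on exploratory steps, which is the intuition behind guided exploration converging faster than uniform random exploration.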