Probabilistic guided exploration for reinforcement learning in self-organizing neural networks
Exploration is essential in reinforcement learning, as it expands the search space of potential solutions to a given problem for performance evaluation. Specifically, a carefully designed exploration strategy may help the agent learn faster by taking advantage of what it has learned previously. H...
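As a rough illustration of the general idea in the abstract, namely biasing exploration with previously learned knowledge rather than exploring uniformly at random, the sketch below samples exploratory actions from a softmax over learned Q-values. This is a generic, hypothetical example in a tabular Q-learning setting, not the self-organizing neural network method proposed in the paper; all function names and parameters are assumptions for illustration only.

```python
import numpy as np

def softmax(q, temperature=1.0):
    """Numerically stable softmax over a vector of Q-values."""
    z = (q - q.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

def guided_action(q_values, epsilon=0.1, temperature=1.0, rng=None):
    """Epsilon-style exploration, but the exploratory action is drawn from a
    softmax over the learned Q-values (guided by prior knowledge) instead of
    being sampled uniformly at random."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < epsilon:
        probs = softmax(np.asarray(q_values, dtype=float), temperature)
        return int(rng.choice(len(q_values), p=probs))
    return int(np.argmax(q_values))

# Example: learned Q-values for one state; exploration favors actions 1 and 2.
q = np.array([0.2, 1.5, 0.9, -0.3])
action = guided_action(q, epsilon=0.3)
```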
Main Authors: Wang, Peng; Zhou, Weigui Jair; Wang, Di; Tan, Ah-Hwee
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2019
Online Access: https://hdl.handle.net/10356/89871 http://hdl.handle.net/10220/49724
Similar Items
- Probabilistic guided exploration for reinforcement learning in self-organizing neural networks
  by: WANG, Peng, et al.
  Published: (2018)
- Knowledge-based exploration for reinforcement learning in self-organizing neural networks
  by: TENG, Teck-Hou, et al.
  Published: (2012)
- A self-organizing neural architecture integrating desire, intention and reinforcement learning
  by: TAN, Ah-hwee, et al.
  Published: (2010)
- Integrating temporal difference methods and self-organizing neural networks for reinforcement learning with delayed evaluative feedback
  by: TAN, Ah-hwee, et al.
  Published: (2008)
- Hierarchical control of multi-agent reinforcement learning team in real-time strategy (RTS) games
  by: ZHOU, Weigui Jair, et al.
  Published: (2021)