Probabilistic guided exploration for reinforcement learning in self-organizing neural networks
Exploration is essential in reinforcement learning: it expands the search space of potential solutions to a given problem for performance evaluation. Specifically, a carefully designed exploration strategy may help the agent learn faster by taking advantage of what it has learned previously. H...
Main Authors: Wang, Peng; Zhou, Weigui Jair; Wang, Di; Tan, Ah-Hwee
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2019
Online Access: https://hdl.handle.net/10356/89871 ; http://hdl.handle.net/10220/49724
Institution: Nanyang Technological University
Similar Items
- Probabilistic guided exploration for reinforcement learning in self-organizing neural networks
  by: WANG, Peng, et al. Published: (2018)
- Knowledge-based exploration for reinforcement learning in self-organizing neural networks
  by: TENG, Teck-Hou, et al. Published: (2012)
- A self-organizing neural architecture integrating desire, intention and reinforcement learning
  by: TAN, Ah-hwee, et al. Published: (2010)
- Self-organizing neural models integrating rules and reinforcement learning
  by: TENG, Teck-Hou, et al. Published: (2008)
- Hierarchical control of multi-agent reinforcement learning team in real-time strategy (RTS) games
  by: ZHOU, Weigui Jair, et al. Published: (2021)