Probabilistic guided exploration for reinforcement learning in self-organizing neural networks
Exploration is essential in reinforcement learning: it expands the search space of potential solutions to a given problem for performance evaluation. In particular, a carefully designed exploration strategy may help the agent learn faster by taking advantage of what it has learned previously. ...
Main Authors: Wang, Peng; Zhou, Weigui Jair; Wang, Di; Tan, Ah-Hwee
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2019
Subjects: Reinforcement Learning; Self-organizing Neural Networks; Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/89871 http://hdl.handle.net/10220/49724
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-89871
record_format: dspace
spelling: sg-ntu-dr.10356-89871 (2020-03-07T11:48:46Z). Probabilistic guided exploration for reinforcement learning in self-organizing neural networks. Wang, Peng; Zhou, Weigui Jair; Wang, Di; Tan, Ah-Hwee. School of Computer Science and Engineering. 2018 IEEE International Conference on Agents (ICA). Subjects: Reinforcement Learning; Self-organizing Neural Networks; Engineering::Computer science and engineering. Funding: NRF (National Research Foundation, Singapore). Accepted version. Dates: 2019-08-21T03:56:40Z; 2019-12-06T17:35:31Z; 2018-07-01; 2018. Type: Conference Paper. Citation: Wang, P., Zhou, W. J., Wang, D., & Tan, A.-H. (2018). Probabilistic guided exploration for reinforcement learning in self-organizing neural networks. 2018 IEEE International Conference on Agents (ICA). doi:10.1109/agents.2018.8460067. URLs: https://hdl.handle.net/10356/89871 http://hdl.handle.net/10220/49724. DOI: 10.1109/agents.2018.8460067. Record: 209585. Language: en. Rights: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/agents.2018.8460067. Extent: 4 p. Format: application/pdf.
institution: Nanyang Technological University
building: NTU Library
country: Singapore
collection: DR-NTU
language: English
topic: Reinforcement Learning; Self-organizing Neural Networks; Engineering::Computer science and engineering
description: Exploration is essential in reinforcement learning: it expands the search space of potential solutions to a given problem for performance evaluation. In particular, a carefully designed exploration strategy may help the agent learn faster by taking advantage of what it has learned previously. However, many reinforcement learning mechanisms still adopt simple exploration strategies that select actions purely at random among all feasible actions. In this paper, we propose novel mechanisms to improve the existing knowledge-based exploration strategy with a probabilistic guided approach to action selection. We conduct extensive experiments in a Minefield navigation simulator, and the results show that our proposed probabilistic guided exploration approach significantly improves the convergence rate.
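The abstract names the key idea, selecting actions in proportion to previously learned knowledge rather than uniformly at random, but does not spell out the mechanism. As a rough, hypothetical illustration only (not the authors' implementation), the Python sketch below samples a feasible action from a softmax over its current value estimate; the function name, the temperature parameter, and the use of Q-values are assumptions made for the example.

```python
import numpy as np

def guided_explore(q_values, feasible, temperature=1.0, rng=None):
    """Sample one feasible action with probability proportional to a
    softmax over its learned value estimate (higher value -> more likely),
    instead of sampling uniformly at random among all feasible actions."""
    rng = rng or np.random.default_rng()
    q = np.asarray([q_values[a] for a in feasible], dtype=float)
    # Subtract the max before exponentiating, for numerical stability.
    probs = np.exp((q - q.max()) / temperature)
    probs /= probs.sum()
    return feasible[rng.choice(len(feasible), p=probs)]

# Toy usage: five feasible actions with value estimates learned so far.
q_values = {0: 0.1, 1: 0.7, 2: 0.3, 3: 0.05, 4: 0.6}
print(guided_explore(q_values, feasible=[0, 1, 2, 3, 4], temperature=0.5))
```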
author2: School of Computer Science and Engineering
format: Conference or Workshop Item
author: Wang, Peng; Zhou, Weigui Jair; Wang, Di; Tan, Ah-Hwee
author_sort: Wang, Peng
title: Probabilistic guided exploration for reinforcement learning in self-organizing neural networks
publishDate: 2019
url: https://hdl.handle.net/10356/89871 http://hdl.handle.net/10220/49724
_version_: 1681049019953971200