Knowledge-based exploration for reinforcement learning in self-organizing neural networks
Exploration is necessary during reinforcement learning to discover new solutions in a given problem space. Most reinforcement learning systems, however, adopt a simple strategy of randomly selecting an action among all the available actions. This paper proposes a novel exploration strategy, known as Knowledge-based Exploration, for guiding the exploration of a family of self-organizing neural networks in reinforcement learning. Specifically, exploration is directed towards unexplored and favorable action choices while steering away from negative action choices that are likely to fail. This is achieved by using the learned knowledge of the agent to identify prior action choices leading to low Q-values in similar situations. Consequently, the agent is expected to learn the right solutions in a shorter time, improving overall learning efficiency. Using a Pursuit-Evasion problem domain, we evaluate the efficacy of the knowledge-based exploration strategy in terms of task performance, rate of learning, and model complexity. Comparison with random exploration and three other heuristic-based directed exploration strategies shows that Knowledge-based Exploration is significantly more effective and robust for reinforcement learning in real time.
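The exploration rule described in the abstract — prefer unexplored actions, then favorable ones, and steer away from actions whose learned Q-values are low in similar situations — can be sketched as follows. This is a minimal illustrative sketch, not the authors' self-organizing-network implementation; the function name, the dictionary-based Q-table, and the `low_q` threshold are all assumptions introduced for illustration.

```python
import random

# Toy Q-table mapping (state, action) -> learned value.
# Actions never tried in a state are simply absent from the table.

def knowledge_based_explore(state, actions, q_table, low_q=0.0, rng=random):
    """Select an exploratory action for `state`.

    Preference order, following the strategy sketched in the abstract:
    1. unexplored actions (no Q-value recorded yet),
    2. explored actions with favorable (above-threshold) Q-values,
    3. only as a last resort, any action, including known-bad ones.
    """
    unexplored = [a for a in actions if (state, a) not in q_table]
    if unexplored:
        return rng.choice(unexplored)
    favorable = [a for a in actions if q_table[(state, a)] > low_q]
    if favorable:
        return rng.choice(favorable)
    return rng.choice(actions)  # every action looks bad; fall back to random
```

For example, with `q = {("s0", "left"): -0.8, ("s0", "right"): 0.5}`, calling `knowledge_based_explore("s0", ["left", "right", "up"], q)` returns `"up"` because it is unexplored, while restricting the actions to `["left", "right"]` returns `"right"`, the only favorable choice. In the paper the "similar situations" matching is done by the self-organizing network's generalization rather than by exact state lookup as here.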
| Main Authors: | TENG, Teck-Hou; TAN, Ah-hwee |
|---|---|
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2012 |
| Subjects: | Reinforcement Learning; Self-Organizing Neural Network; Directed Exploration; Rule-Based System; Artificial Intelligence and Robotics; Databases and Information Systems |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/6275 https://ink.library.smu.edu.sg/context/sis_research/article/7278/viewcontent/Knowledge_based_Exploration___IAT_2012.pdf |
| Institution: | Singapore Management University |
| Language: | English |
| id | sg-smu-ink.sis_research-7278 |
|---|---|
| record_format | dspace |
| spelling | sg-smu-ink.sis_research-7278 2021-11-23T08:04:06Z Knowledge-based exploration for reinforcement learning in self-organizing neural networks TENG, Teck-Hou; TAN, Ah-hwee. Exploration is necessary during reinforcement learning to discover new solutions in a given problem space. Most reinforcement learning systems, however, adopt a simple strategy of randomly selecting an action among all the available actions. This paper proposes a novel exploration strategy, known as Knowledge-based Exploration, for guiding the exploration of a family of self-organizing neural networks in reinforcement learning. Specifically, exploration is directed towards unexplored and favorable action choices while steering away from negative action choices that are likely to fail. This is achieved by using the learned knowledge of the agent to identify prior action choices leading to low Q-values in similar situations. Consequently, the agent is expected to learn the right solutions in a shorter time, improving overall learning efficiency. Using a Pursuit-Evasion problem domain, we evaluate the efficacy of the knowledge-based exploration strategy in terms of task performance, rate of learning, and model complexity. Comparison with random exploration and three other heuristic-based directed exploration strategies shows that Knowledge-based Exploration is significantly more effective and robust for reinforcement learning in real time. 2012-12-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/6275 info:doi/10.1109/WI-IAT.2012.154 https://ink.library.smu.edu.sg/context/sis_research/article/7278/viewcontent/Knowledge_based_Exploration___IAT_2012.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Reinforcement Learning; Self-Organizing Neural Network; Directed Exploration; Rule-Based System; Artificial Intelligence and Robotics; Databases and Information Systems |
| institution | Singapore Management University |
| building | SMU Libraries |
| continent | Asia |
| country | Singapore Singapore |
| content_provider | SMU Libraries |
| collection | InK@SMU |
| language | English |
| topic | Reinforcement Learning; Self-Organizing Neural Network; Directed Exploration; Rule-Based System; Artificial Intelligence and Robotics; Databases and Information Systems |
| spellingShingle | Reinforcement Learning; Self-Organizing Neural Network; Directed Exploration; Rule-Based System; Artificial Intelligence and Robotics; Databases and Information Systems; TENG, Teck-Hou; TAN, Ah-hwee; Knowledge-based exploration for reinforcement learning in self-organizing neural networks |
| description | Exploration is necessary during reinforcement learning to discover new solutions in a given problem space. Most reinforcement learning systems, however, adopt a simple strategy of randomly selecting an action among all the available actions. This paper proposes a novel exploration strategy, known as Knowledge-based Exploration, for guiding the exploration of a family of self-organizing neural networks in reinforcement learning. Specifically, exploration is directed towards unexplored and favorable action choices while steering away from negative action choices that are likely to fail. This is achieved by using the learned knowledge of the agent to identify prior action choices leading to low Q-values in similar situations. Consequently, the agent is expected to learn the right solutions in a shorter time, improving overall learning efficiency. Using a Pursuit-Evasion problem domain, we evaluate the efficacy of the knowledge-based exploration strategy in terms of task performance, rate of learning, and model complexity. Comparison with random exploration and three other heuristic-based directed exploration strategies shows that Knowledge-based Exploration is significantly more effective and robust for reinforcement learning in real time. |
| format | text |
| author | TENG, Teck-Hou; TAN, Ah-hwee |
| author_facet | TENG, Teck-Hou; TAN, Ah-hwee |
| author_sort | TENG, Teck-Hou |
| title | Knowledge-based exploration for reinforcement learning in self-organizing neural networks |
| title_short | Knowledge-based exploration for reinforcement learning in self-organizing neural networks |
| title_full | Knowledge-based exploration for reinforcement learning in self-organizing neural networks |
| title_fullStr | Knowledge-based exploration for reinforcement learning in self-organizing neural networks |
| title_full_unstemmed | Knowledge-based exploration for reinforcement learning in self-organizing neural networks |
| title_sort | knowledge-based exploration for reinforcement learning in self-organizing neural networks |
| publisher | Institutional Knowledge at Singapore Management University |
| publishDate | 2012 |
| url | https://ink.library.smu.edu.sg/sis_research/6275 https://ink.library.smu.edu.sg/context/sis_research/article/7278/viewcontent/Knowledge_based_Exploration___IAT_2012.pdf |
| _version_ | 1770575914328915968 |