Q-learning algorithms using fuzzy inference system
53 p.
Main Author: | Kattur Rajarethinam Ganeshrajhan |
---|---|
Other Authors: | Er Meng Joo |
Format: | Theses and Dissertations |
Published: | 2011 |
Subjects: | DRNTU::Engineering |
Online Access: | http://hdl.handle.net/10356/47027 |
Institution: | Nanyang Technological University |
Description: | Reinforcement learning is a class of algorithms that learn by direct experimentation, maximizing the numeric reward received from the environment. The most popular reinforcement learning algorithm is Q-learning, which stores a Q-value representing the quality of an action taken in a specific state. Q-learning generalizes poorly in continuous environments; this limitation is addressed by adopting a fuzzy inference system (FIS) in Fuzzy Q-Learning (FQL). Since FQL involves only parameter modification, structure identification is added in Dynamic Fuzzy Q-Learning (DFQL) through automatic generation of fuzzy rules. Although DFQL is an online algorithm in which rules are generated automatically, some rules can become redundant, so pruning of redundant rules is adopted in Dynamic Self-Generated Fuzzy Q-Learning (DSGFQL). Adding an Extended Self-Organizing Map to DSGFQL yields the Enhanced Dynamic Self-Generated Fuzzy Q-Learning (EDSGFQL) scheme. Another method replaces the structure identification of DFQL with an incremental topology-preserving map, giving Incremental-Topological-Preserving-Map-Based Fuzzy Q-Learning. These methods do not generate continuous actions; Continuous Action Q-Learning does, and incorporating the FIS into it gives Fuzzy Continuous Action Q-Learning. |
School: | School of Electrical and Electronic Engineering |
Degree: | Master of Science (Computer Control and Automation) |
Thesis Date: | 2009 |
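The description above builds on the standard tabular Q-learning update and on the FQL idea of attaching Q-values to fuzzy rules. The sketch below is a minimal illustration of those two ingredients, not the thesis's own code; the action set, hyperparameters, and helper names are assumptions made for the example.

```python
# Minimal Q-learning sketch (illustrative only; hyperparameters and action set are assumed).
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
actions = [0, 1]                          # hypothetical discrete action set
Q = defaultdict(float)                    # Q[(state, action)] -> estimated return

def choose_action(state):
    # Epsilon-greedy selection: explore with probability epsilon, otherwise exploit.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    # Standard Q-learning update: move Q(s, a) toward the bootstrapped target.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def fuzzy_action(firing_strengths, rule_actions):
    # FQL-style composition (assumed form): the global continuous action is the
    # firing-strength-weighted average of the actions selected by the fuzzy rules.
    total = sum(firing_strengths)
    return sum(w * a for w, a in zip(firing_strengths, rule_actions)) / total
```

In this reading, a fuzzy inference system provides the generalization over continuous states that the plain table lacks: each rule's firing strength weights both the composed action and the corresponding Q-value update.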