DEVELOPMENT OF CNN-BASED DEEP Q-LEARNING FOR PATH PLANNING SIMULATOR
Main Author: | Rionaldo Pasaribu, Jeremy |
---|---|
Format: | Final Project |
Language: | Indonesian |
Subjects: | Path Planning, Convolutional Neural Network, Deep Q-Learning, Deep Q-Network, Feed Forward Neural Network |
Online Access: | https://digilib.itb.ac.id/gdl/view/85048 |
Institution: | Institut Teknologi Bandung |
Description:

This Final Project Report examines the development of a Deep Q-Learning (DQL) model based on a Convolutional Neural Network (CNN) for a path planning simulator. Path planning is the process of finding an optimal, collision-free route from a starting point to an endpoint in a given environment. The Q-learning method used in traditional path planning struggles to store large Q-tables as the number of states and actions grows. To address this issue, DQL is used, replacing the Q-table with a neural network. This research aims to build a DQL model with a CNN architecture and compare it against a Feed Forward Neural Network (FFNN) architecture. The CNN is chosen because, unlike the FFNN, it can recognize spatial patterns in the simulation environment. The study examines two DQL models: an FFNN model following the method of Sumarudin et al. (2023) and a CNN model using an image-based representation of the environment. The experiments are conducted in two stages, training and testing, on a single maze and on ten mazes. On a single maze, the FFNN model's testing success rate of 0.846 surpasses that of the CNN model; on ten mazes, the CNN model's testing success rate of 0.5040 exceeds that of the FFNN model.

The results show that the FFNN model performs better when tested on a single maze, while the CNN model performs better when tested on ten mazes. The report recommends further work on testing with continually changing mazes and on hyperparameter optimization to achieve more effective model performance.
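
To make the core idea in the abstract concrete, the sketch below shows how a Deep Q-Network replaces the Q-table: a network maps the current state to one Q-value per action and is trained toward the Bellman target instead of writing entries into a table. This is a minimal illustration, not the report's implementation; the maze size, the three-channel image encoding (walls, agent, goal), the layer sizes, the reward values, and the hyperparameters are all assumptions. The report's FFNN variant would differ mainly in consuming a flattened state vector through fully connected layers rather than a maze image through convolutions.

```python
# Minimal CNN-based DQN sketch for a grid-maze path planner.
# All sizes, encodings, and hyperparameters are assumptions for
# illustration; they are not the configuration used in the report.
import torch
import torch.nn as nn

GRID = 10          # assumed maze size: 10 x 10 cells
N_ACTIONS = 4      # assumed action set: up, down, left, right
GAMMA = 0.9        # assumed discount factor


class CnnQNetwork(nn.Module):
    """Maps a 3-channel maze image (walls, agent, goal) to Q-values."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * GRID * GRID, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),  # one Q-value per action
        )

    def forward(self, x):
        return self.net(x)


# Encode a maze state as an image instead of a table index: channel 0
# marks wall cells, channel 1 the agent cell, channel 2 the goal cell.
def encode_state(walls, agent, goal):
    img = torch.zeros(3, GRID, GRID)
    img[0] = torch.as_tensor(walls, dtype=torch.float32)
    img[1, agent[0], agent[1]] = 1.0
    img[2, goal[0], goal[1]] = 1.0
    return img.unsqueeze(0)  # add batch dimension


policy_net = CnnQNetwork()
target_net = CnnQNetwork()
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

# One learning step on a single transition (s, a, r, s', done).
# Instead of updating a Q-table entry, the network's Q(s, a) is pushed
# toward the Bellman target r + gamma * max_a' Q_target(s', a').
walls = torch.zeros(GRID, GRID)                 # dummy maze with no walls
state = encode_state(walls, agent=(0, 0), goal=(GRID - 1, GRID - 1))
next_state = encode_state(walls, agent=(0, 1), goal=(GRID - 1, GRID - 1))
action = torch.tensor([3])      # e.g. "right"
reward = torch.tensor([-0.1])   # assumed per-step penalty
done = torch.tensor([0.0])      # 1.0 once the goal is reached

q_sa = policy_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
with torch.no_grad():
    best_next = target_net(next_state).max(1).values
    target = reward + GAMMA * (1.0 - done) * best_next

loss = nn.functional.mse_loss(q_sa, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

During ε-greedy action selection, the same network is queried with the current state and the action with the highest predicted Q-value is usually taken. A replay buffer and periodic target-network synchronization, both standard in DQN training, are omitted here for brevity.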