Development of a learning algorithm for convolutional neural networks
Over the years, deep learning has become one of the most popular topics in computer science. By training artificial neural networks on large datasets, deep learning algorithms can learn to recognize patterns and features in data and use this learning to make intelligent decisions. That is why it has...
Saved in:
Main Author: | Fu, Jiadi |
---|---|
Other Authors: | Cheah Chien Chern |
Format: | Thesis-Master by Coursework |
Language: | English |
Published: | Nanyang Technological University, 2023 |
Subjects: | Engineering::Electrical and electronic engineering |
Online Access: | https://hdl.handle.net/10356/164462 |
Institution: | Nanyang Technological University |
Record ID: | sg-ntu-dr.10356-164462 |
Citation: | Fu, J. (2022). Development of a learning algorithm for convolutional neural networks. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/164462 |
Degree: | Master of Science (Computer Control and Automation) |
Supervisor: | Cheah Chien Chern, School of Electrical and Electronic Engineering |
Description: |
Over the years, deep learning has become one of the most popular topics in computer science. By training artificial neural networks on large datasets, deep learning algorithms learn to recognize patterns and features in data and use this learning to make intelligent decisions. This is why deep learning has been widely used in natural language processing and image processing.
Different training methods suit different scenarios, and layer-wise training is one typical approach. In layer-wise training, the layers of a model are trained one at a time rather than all at once. The reduced complexity can make layer-wise training more efficient than training the entire model at once; however, it brings potential issues such as suboptimal models and overfitting, so it is effective only when applied to appropriate scenarios and methods.
This report explores the possibility of improving the performance of layer-wise learning by using adjustable ReLU activations and autoencoder structures. To evaluate this training method, two groups of models were constructed, based on VGG-11 and an autoencoder, and applied to classification and image-reconstruction tasks respectively.
The experimental results demonstrate the feasibility of the approach. The optimized models improve learning efficiency and convergence speed. In addition to achieving accuracy similar to that of the original end-to-end models, layer-wise training speeds up convergence and reduces gradient computation between layers, making it a more effective training method. |
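To make the idea of layer-wise training concrete, here is a minimal NumPy sketch of greedy layer-wise pretraining with stacked autoencoders, in the spirit of (but not taken from) the thesis: each layer is trained as a small tied-weight autoencoder on the frozen output of the previous layer, so gradients never flow across layers. The sigmoid activations, tied weights, layer sizes, and learning rate here are arbitrary assumptions for illustration; the thesis itself uses adjustable ReLU and VGG-11/autoencoder architectures.

```python
import numpy as np

def train_autoencoder_layer(X, hidden_dim, lr=0.1, epochs=200, seed=0):
    """Train one tied-weight autoencoder layer by full-batch gradient descent.

    Encoder: H = sigmoid(X @ W + b); decoder: X_hat = H @ W.T + c.
    Minimizes 0.5 * ||X_hat - X||^2.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, size=(d, hidden_dim))
    b = np.zeros(hidden_dim)
    c = np.zeros(d)
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # encoder activations
        X_hat = H @ W.T + c                       # linear decoder (tied W)
        err = X_hat - X                           # reconstruction error
        dPre = (err @ W) * H * (1.0 - H)          # gradient at encoder pre-activation
        dW = X.T @ dPre + err.T @ H               # W appears in encoder and decoder
        W -= lr * dW / n
        b -= lr * dPre.sum(axis=0) / n
        c -= lr * err.sum(axis=0) / n
    return W, b

def greedy_layerwise_pretrain(X, layer_dims):
    """Train layers one at a time: each autoencoder sees only the frozen
    representation produced by the layers trained before it."""
    params = []
    rep = X
    for h in layer_dims:
        W, b = train_autoencoder_layer(rep, h)
        params.append((W, b))
        rep = 1.0 / (1.0 + np.exp(-(rep @ W + b)))  # freeze layer, map data forward
    return params, rep

X = np.random.default_rng(1).normal(size=(64, 16))
params, codes = greedy_layerwise_pretrain(X, [8, 4])
```

Because each call to `train_autoencoder_layer` only backpropagates through one layer, the per-step gradient computation stays constant as depth grows, which is the efficiency argument the abstract makes; in practice the pretrained stack is usually fine-tuned end to end afterwards.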