Layer-wise deep learning for object classifications

Bibliographic Details
Main Author: Xu, Lei
Other Authors: Cheah Chien Chern
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2023
Online Access: https://hdl.handle.net/10356/168048
Institution: Nanyang Technological University
Description
Summary: Although global backpropagation has become the mainstream training method for convolutional neural networks, it still has some inherent disadvantages, such as backward locking and memory reuse problems. Moreover, a neural network trained by global backpropagation is often regarded as a black box and is hard to interpret. In view of this, layer-wise learning has recently attracted attention as an alternative to the global backpropagation training approach. In this dissertation, we first applied the layer-wise learning method to the ResNet-18 model and then evaluated its performance on common benchmark datasets. The experimental results demonstrated the better convergence ability of the layer-wise learning method with the ResNet-18 network, and also showed a reasonable trade-off between performance and the number of parameters in the network. Although the testing accuracy was slightly lower than that of global backpropagation, the layer-wise learning method showed that a structure with fewer layers can achieve reasonable accuracy. It also shows the potential of employing layer-wise learning to determine the appropriate number of layers. We then modified the hierarchical structure of the original ResNet-18 model to improve its performance. According to the experiments, the modified network further reduced the number of parameters in the network, with slightly lower performance than global backpropagation with SGD and similar or better performance than the original layer-wise learning method.

Keywords: Deep learning, Layer-Wise Learning, ResNet, CNNs, Separability.
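For context, greedy layer-wise schemes of the kind the abstract refers to are commonly implemented by training one block at a time against a local auxiliary classifier, with earlier blocks frozen so that no gradient has to propagate through the whole network. The following is a minimal PyTorch sketch under that assumption; the block and head definitions, the optimizer settings, and the function names (make_block, make_head, train_layerwise) are illustrative and are not taken from the thesis.

    import torch
    import torch.nn as nn

    def make_block(in_ch, out_ch):
        # Simplified conv block standing in for one ResNet stage.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def make_head(ch, num_classes):
        # Local auxiliary classifier used only to train its block.
        return nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, num_classes)
        )

    def train_layerwise(blocks, heads, loader, epochs=1, device="cpu"):
        criterion = nn.CrossEntropyLoss()
        for i, (block, head) in enumerate(zip(blocks, heads)):
            block.to(device)
            head.to(device)
            opt = torch.optim.SGD(
                list(block.parameters()) + list(head.parameters()), lr=0.01
            )
            for _ in range(epochs):
                for x, y in loader:
                    x, y = x.to(device), y.to(device)
                    # Earlier blocks are frozen: forward through them
                    # in eval mode without tracking gradients.
                    with torch.no_grad():
                        for prev in blocks[:i]:
                            x = prev.eval()(x)
                    out = head(block(x))   # local forward pass
                    loss = criterion(out, y)
                    opt.zero_grad()
                    loss.backward()        # gradient stops at this block
                    opt.step()

As a usage example, one might build four such blocks, pair each with a head, and pass a CIFAR-10 DataLoader; after all blocks are trained, the final head serves as the classifier. The relevant design choice is that each backward pass ends at the current block, which sidesteps the backward-locking problem and allows activation memory for earlier blocks to be released, at the cost of purely local rather than end-to-end optimization.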