Techniques in enhancing computation and understanding of convolutional neural networks
Main Author:
Other Authors:
Format: Thesis - Master by Research
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/154072
Institution: Nanyang Technological University
Summary: Convolutional Neural Networks (CNNs) are effective in solving a large number of complex tasks. The performance of CNNs currently equals or even surpasses human performance in a wide range of real-world problems. Such high performance is achieved at the cost of high computational and storage requirements. To satisfy these computational requirements, specialized hardware such as Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs) is required. Moreover, CNNs are mainly used as black-box tools, and only a few attempts have been made to understand them.
This thesis presents two studies that address the lack of understanding and the high computational requirements of CNNs.
The first study, introduced in Chapter 3, investigates and proposes a method for enhancing CNN computation by reducing the number of computational operations performed.
We propose a new method for computation enhancement in CNNs that substitutes Multiply and Accumulate (MAC) operations with a codebook lookup. The proposed method, Quantized-by-Lookup Network (QL-Net), combines several concepts: (i) codebook construction, (ii) a layer-wise retraining strategy, and (iii) substitution of the MAC operations with a lookup of the convolution responses at inference time.
The proposed QL-Net achieves competitive accuracy on datasets such as MNIST and CIFAR-10.
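The codebook-lookup idea can be illustrated with a minimal sketch. This is not the thesis's actual QL-Net pipeline: the quantization levels, the toy 1-D filter, and the helper names (`quantize`, `conv_by_lookup`) are all assumptions made for illustration. The point is only that once activations are quantized to a small set of levels, every possible input patch maps to a convolution response that can be precomputed once offline, so inference reduces to table lookups instead of MACs.

```python
import numpy as np
from itertools import product

# Assumed quantization levels (illustrative, not from the thesis)
levels = np.array([-1.0, 0.0, 1.0])

def quantize(x, levels):
    """Map each value to the index of its nearest quantization level."""
    return np.abs(x[..., None] - levels).argmin(axis=-1)

# A toy 1x3 convolution filter
w = np.array([0.5, -1.0, 0.25])

# Build the codebook: precompute the response for every possible
# combination of quantized patch values (3 levels ** 3 taps = 27 entries).
codebook = {}
for idx in product(range(len(levels)), repeat=len(w)):
    patch = levels[list(idx)]
    codebook[idx] = float(patch @ w)  # the MAC happens once, offline

def conv_by_lookup(x):
    """Inference-time 'convolution': table lookups instead of MACs."""
    q = quantize(x, levels)
    out = []
    for i in range(len(x) - len(w) + 1):
        key = tuple(q[i:i + len(w)])
        out.append(codebook[key])  # no multiply-accumulate here
    return np.array(out)

x = np.array([0.9, -1.1, 0.1, 1.0, -0.2])
print(conv_by_lookup(x))  # matches a direct MAC on the quantized input
```

A real CNN layer would index a much larger codebook per output channel, and the layer-wise retraining step mentioned above would compensate for the quantization error; this sketch omits both.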
The second study improves the understanding of CNNs by examining the importance of each learned feature for the recognition of an individual object class.
The experimental work in Chapter 4 extends the current understanding of the CNN filters' roles, their mutual interactions, and their relationship to classification accuracy. Additionally, the study shows that the classification accuracy of some classes from the target objects' set can be improved by removing the subset of filters with the least contribution to these classes.