Exploring low complexity embedded architectures for deep neural networks

Bibliographic Details
Main Author: Chatterjee, Soham
Other Authors: Basu, Arindam
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/150553
Institution: Nanyang Technological University
Description
Summary: Deep neural networks have shown significant improvements in computer vision applications over the last few years. These gains have mostly come from pre-trained models such as Inception-v4, ResNet-152, and VGG-19, but they have been accompanied by an increase in the size and computational complexity of the models. This makes such models difficult to deploy in energy-constrained mobile applications, which have become increasingly important with the advent of the Internet of Things (IoT). The problem is especially acute in battery-powered IoT systems, where executing complex neural networks can consume a large share of the energy budget. Several software methods have therefore been proposed to reduce this complexity, such as depthwise separable convolutions and quantization. A very different computing paradigm, spiking neural networks (SNN), has also been proposed as a way to obtain a parameterizable tradeoff between accuracy and classification energy. The security of such edge-deployed neural networks is a further concern, since IoT devices are easily accessible to attackers.

In this work, the effect of depthwise separable convolutions and Dynamic Fixed Point (DFP) weight quantization on both model accuracy and complexity is studied for a DNN used to classify traffic images captured by a neuromorphic vision sensor. Initial results show that DFP weight quantization can significantly reduce the computational complexity of neural networks with less than a 2% drop in accuracy.

Finally, the vulnerability of neural networks to side-channel and cold boot attacks is studied. Trained models are deployed to edge devices such as the Neural Compute Stick, the EdgeTPU DevBoard, and the EdgeTPU accelerator, and then attacked to retrieve the model weights, architecture, and other parameters. We show that cold boot attacks can recover the model architecture and weights, as well as the original model accuracy. We further show that side-channel attacks can isolate and identify the execution of individual neurons in a model. Since quantized networks have fewer and smaller weight values, they should be easier to attack; conversely, larger neural networks with complex architectures and dataflows should be comparatively safer from side-channel attacks.
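The two complexity-reduction techniques named in the summary are concrete enough to sketch. The minimal NumPy example below is illustrative only and not code from the thesis: the function names, the per-tensor shared fractional length, and the 8-bit default are assumptions. It shows how DFP quantization fixes one shared exponent per weight tensor, and how the depthwise separable factorization shrinks a convolution's parameter count:

```python
import numpy as np

def dfp_quantize(w, bits=8):
    """Quantize a tensor to Dynamic Fixed Point (DFP).

    Every value in the tensor shares one fractional length (the
    "dynamic" part), chosen so the largest magnitude fits in a
    signed `bits`-bit integer.
    """
    max_abs = float(np.max(np.abs(w)))
    int_bits = int(np.floor(np.log2(max_abs))) + 1 if max_abs > 0 else 0
    frac_bits = bits - 1 - int_bits              # bits left for the fraction
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.clip(np.round(w * scale), lo, hi)     # stored as integers on-device
    return q / scale, frac_bits                  # dequantized view + shared exponent

def conv_params(k, c_in, c_out):
    """Parameter counts: standard vs. depthwise separable convolution."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out      # depthwise + 1x1 pointwise
    return standard, separable

std, sep = conv_params(k=3, c_in=128, c_out=128)
print(f"standard: {std}, separable: {sep} ({std / sep:.1f}x fewer parameters)")

w = np.random.randn(128, 128).astype(np.float32) * 0.1
w_q, fl = dfp_quantize(w, bits=8)
print(f"fractional length: {fl}, max abs error: {np.max(np.abs(w - w_q)):.5f}")
```

For a 3x3 convolution with 128 input and 128 output channels, the factorization replaces 147,456 parameters with 17,536, roughly an 8x reduction; this is the kind of complexity saving, alongside the reduced weight precision from DFP, that the summary refers to.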