Detecting adversarial samples for deep neural networks through mutation testing

Bibliographic Details
Main Author: Tan, Kye Yen
Other Authors: Chang Chip Hong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2020
Subjects:
Online Access:https://hdl.handle.net/10356/138719
Institution: Nanyang Technological University
Description
Summary: Deep Neural Networks (DNNs) are adept at many tasks, with the well-known task of image recognition using a subset of DNNs called Convolutional Neural Networks (CNNs). However, they are prone to adversarial attacks: malicious modifications made to input samples that cause the DNN to fail at its task. In the case of image recognition, which is the focus of this project, adversarial attacks result in misclassification of images by the CNN. These attacks are conducted by deliberately adding perturbations imperceptible to humans to images before they are fed into the CNN. This is a serious breach of security in CNNs which may lead to disastrous consequences in security-reliant applications. Finding a defence mechanism against these attacks is imperative to ensuring the safe operation of CNNs. The first line of defence for CNNs against adversarial attacks is the detection of adversarial images. This method of defence has been the subject of scrutiny, with the aim of achieving not only high accuracy but also real-time operation. Currently, achieving a high detection rate is computationally intensive, leading to increased time to detect the adversaries. Therefore, in this final year project, two methods were proposed to detect adversarial images with lower computational effort. The first method employs the network prediction inconsistency concept, which has shown that adversarial inputs are more sensitive to model mutation than natural inputs. It optimizes the previous mutation testing method by applying partial mutation to the statistically determined most distinguishable areas of the CNN, instead of blindly applying random mutations. These targeted mutations in the CNN cause changes in the output prediction that identify the input as adversarial. The second method makes use of the difference in layer-wise firing-neuron-rate distributions between adversarial and normal images to build a decision tree for adversarial detection. Both methods have shown reasonable detection rates.
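
As an illustration only, the following Python sketch shows the general model-mutation check that the first method builds on: copies of the network are perturbed in selected layers, and an input is flagged as adversarial when its predicted label changes across the mutants more often than a threshold. The layer selection, noise level, mutant count and threshold below are assumptions for illustration; the thesis's statistically determined partial-mutation strategy is not reproduced here.

# Illustrative sketch only: label-change-rate check over partially mutated
# copies of a PyTorch model. All parameters here are assumed values.
import copy
import torch

def mutate_model(model, layer_names, noise_std=0.05):
    """Return a copy of `model` with Gaussian noise added to the weights of
    the selected layers (a simple stand-in for 'partial mutation')."""
    mutant = copy.deepcopy(model)
    with torch.no_grad():
        for name, param in mutant.named_parameters():
            if any(name.startswith(prefix) for prefix in layer_names):
                param.add_(noise_std * torch.randn_like(param))
    return mutant

def label_change_rate(model, x, layer_names, n_mutants=50):
    """Fraction of mutated models whose prediction differs from the original."""
    model.eval()
    changed = 0
    with torch.no_grad():
        original = model(x).argmax(dim=1)
        for _ in range(n_mutants):
            mutant = mutate_model(model, layer_names)
            mutant.eval()
            if not torch.equal(mutant(x).argmax(dim=1), original):
                changed += 1
    return changed / n_mutants

def is_adversarial(model, x, layer_names, threshold=0.2):
    """Flag the input as adversarial if its label change rate exceeds a
    threshold, reflecting the observation that adversarial inputs are more
    sensitive to model mutation than natural inputs."""
    return label_change_rate(model, x, layer_names) > threshold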
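
Similarly, a minimal sketch of the idea behind the second method, assuming a PyTorch model with ReLU-style activations: the fraction of firing (non-zero) neurons is collected per layer via forward hooks and used as a feature vector for an off-the-shelf decision tree classifier (scikit-learn here). The feature definition and classifier settings are illustrative assumptions, not the thesis implementation.

# Illustrative sketch only: layer-wise firing-neuron rates as features for a
# decision-tree adversarial detector.
import torch
from sklearn.tree import DecisionTreeClassifier

def layerwise_firing_rates(model, x, layers):
    """Fraction of non-zero (firing) activations in each chosen layer for x."""
    rates = []
    hooks = []

    def hook(_module, _inputs, output):
        rates.append((output > 0).float().mean().item())

    for layer in layers:
        hooks.append(layer.register_forward_hook(hook))
    model.eval()
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return rates

# Example usage (labels: 0 = natural, 1 = adversarial), assuming `layers` is a
# list of the model's activation modules, e.g. every nn.ReLU in the network:
# features = [layerwise_firing_rates(model, x, layers) for x in calibration_inputs]
# detector = DecisionTreeClassifier(max_depth=5).fit(features, labels)
# detector.predict([layerwise_firing_rates(model, x_new, layers)])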