Vulnerability analysis on noise-injection based hardware attack on deep neural networks

Despite their superior accuracy on most vision recognition tasks, deep neural networks are susceptible to adversarial examples. Recent studies show that adding carefully crafted small perturbations to the input layer can mislead a classifier into arbitrary categories. However, most adversarial attack algorithms concentrate only on the inputs of the model; the effect of tampering with internal nodes is seldom studied. An adversarial attack, if extended to the deployed hardware system, can perturb or alter intermediate data during real-time processing. To investigate the vulnerability of deep neural network hardware under such attacks, we comprehensively evaluate 10 popular DNN models by injecting noise into each layer of these models. Our experimental results indicate that more accurate networks are more prone to disturbance of selected internal layers. For traditional convolutional network structures (the AlexNet and VGG families), the last convolution layer is the most assailable. For state-of-the-art architectures (the Inception, ResNet and DenseNet families), perturbing as little as 0.1% of elements, or one element per channel, can subvert the original predictions, and over 65% of the computational layers suffer from this vulnerability. Our findings reveal that optimizing for accuracy, model size and computational efficiency can inadvertently sacrifice the robustness of a deep learning system.
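The evaluation methodology summarized in the abstract — injecting a small amount of noise into the output of an individual internal layer and checking whether the top-1 prediction is subverted — can be illustrated with a short sketch. The following Python/PyTorch snippet is a minimal illustration under assumptions of our own (ResNet-50 as the model, an arbitrary mid-network convolution as the perturbed layer, a 0.1% element fraction, and a heuristic noise scale); it is not the authors' implementation.

import torch
import torchvision.models as models

# Pretrained model to probe (assumption: ResNet-50; weights are downloaded on first use).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def make_noise_hook(fraction=0.001, scale=0.5):
    # Return a forward hook that adds noise to roughly `fraction` of the layer's
    # output elements, scaled relative to the layer's mean activation magnitude.
    def hook(module, inputs, output):
        mask = (torch.rand_like(output) < fraction).float()
        noise = scale * output.abs().mean() * mask * torch.randn_like(output)
        return output + noise  # returning a tensor replaces the module's output
    return hook

x = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed input image
with torch.no_grad():
    clean_pred = model(x).argmax(dim=1)

    # Attach the hook to one internal convolution layer and rerun the same input.
    handle = model.layer3[0].conv2.register_forward_hook(make_noise_hook())
    perturbed_pred = model(x).argmax(dim=1)
    handle.remove()

print("prediction flipped:", bool((clean_pred != perturbed_pred).item()))

Sweeping such a hook over every layer of a model, and over a set of models, yields the kind of per-layer vulnerability profile the paper reports.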


Bibliographic Details
Main Authors: Liu, Wenye, Wang, Si, Chang, Chip-Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects: Engineering::Electrical and electronic engineering; Deep Neural Networks; Hardware Attacks
Online Access:https://hdl.handle.net/10356/136863
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-136863 (DSpace record, DR-NTU)
Conference: 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)
Research Centre: Centre for Integrated Circuits and Systems
Citation: Liu, W., Wang, S. & Chang, C.-H. (2019). Vulnerability analysis on noise-injection based hardware attack on deep neural networks. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). https://hdl.handle.net/10356/136863
Type: Conference Paper (Accepted version)
Funding: MOE (Min. of Education, S’pore), grant MOE-2015-T2-2-013
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
File format: application/pdf
Record created: 2020-02-03
Building: NTU Library
Country: Singapore
Collection: DR-NTU