Vulnerability analysis on noise-injection based hardware attack on deep neural networks

Despite superior accuracy on most vision recognition tasks, deep neural networks are susceptible to adversarial examples. Recent studies show that adding carefully crafted small perturbations to the input layer can mislead a classifier into arbitrary categories. However, most adversarial attack algorithms concentrate only on the inputs of the model; the effect of tampering with internal nodes is seldom studied. An adversarial attack, if extended to a deployed hardware system, can perturb or alter intermediate data during real-time processing. To investigate the vulnerability implications of deep neural network hardware under potential adversarial attacks, we comprehensively evaluate 10 popular DNN models by injecting noise into each layer of these models. Our experimental results indicate that more accurate networks are more prone to disturbances of selected internal layers. For traditional convolutional network structures (the AlexNet and VGG families), the last convolution layer is the most assailable. For state-of-the-art architectures (the Inception, ResNet and DenseNet families), perturbing as little as 0.1% of the elements (or one element per channel) can subvert the original predictions, and over 65% of computational layers suffer from this vulnerability. Our findings reveal that optimizing for accuracy, model size and computational efficiency can unknowingly sacrifice the robustness of a deep learning system.
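As an illustration of the layer-wise noise-injection evaluation described in the abstract, the sketch below registers a forward hook on each convolutional layer of a pretrained torchvision ResNet-50 and checks whether perturbing roughly 0.1% of that layer's activations flips the top-1 prediction. This is a minimal sketch, not the authors' evaluation code: the model choice, noise scale, the `make_noise_hook` helper and the random stand-in input are all illustrative assumptions.

```python
# Hypothetical sketch of layer-wise noise injection via PyTorch forward hooks.
import torch
import torchvision.models as models

def make_noise_hook(fraction=0.001, scale=1.0):
    """Return a hook that perturbs roughly `fraction` of a layer's output elements."""
    def hook(module, inputs, output):
        mask = (torch.rand_like(output) < fraction).float()   # select ~0.1% of elements
        noise = scale * output.detach().abs().mean() * torch.randn_like(output)
        return output + mask * noise                           # replace the layer output
    return hook

# Pretrained ImageNet weights (downloaded on first use); any of the 10 models could be swapped in.
model = models.resnet50(weights="IMAGENET1K_V1").eval()
x = torch.randn(1, 3, 224, 224)  # stand-in for a real ImageNet image

with torch.no_grad():
    clean_top1 = model(x).argmax(dim=1)

# Inject noise into one convolutional layer at a time and record prediction flips.
flipped = []
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        handle = module.register_forward_hook(make_noise_hook())
        with torch.no_grad():
            noisy_top1 = model(x).argmax(dim=1)
        handle.remove()
        if not torch.equal(noisy_top1, clean_top1):
            flipped.append(name)

print(f"{len(flipped)} conv layers flipped the top-1 prediction")
```

Repeating this over a labeled test set, rather than a single random input, would give the per-layer vulnerability statistics of the kind reported in the paper.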

Summary

Bibliographic Details
Main Authors: Liu, Wenye, Wang, Si, Chang, Chip-Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects:
Online Access: https://hdl.handle.net/10356/136863
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-136863
record_format dspace
spelling sg-ntu-dr.10356-136863 2020-02-03T01:55:56Z Vulnerability analysis on noise-injection based hardware attack on deep neural networks Liu, Wenye Wang, Si Chang, Chip-Hong School of Electrical and Electronic Engineering 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST) Centre for Integrated Circuits and Systems Engineering::Electrical and electronic engineering Deep Neural Networks Hardware Attacks Despite superior accuracy on most vision recognition tasks, deep neural networks are susceptible to adversarial examples. Recent studies show that adding carefully crafted small perturbations to the input layer can mislead a classifier into arbitrary categories. However, most adversarial attack algorithms concentrate only on the inputs of the model; the effect of tampering with internal nodes is seldom studied. An adversarial attack, if extended to a deployed hardware system, can perturb or alter intermediate data during real-time processing. To investigate the vulnerability implications of deep neural network hardware under potential adversarial attacks, we comprehensively evaluate 10 popular DNN models by injecting noise into each layer of these models. Our experimental results indicate that more accurate networks are more prone to disturbances of selected internal layers. For traditional convolutional network structures (the AlexNet and VGG families), the last convolution layer is the most assailable. For state-of-the-art architectures (the Inception, ResNet and DenseNet families), perturbing as little as 0.1% of the elements (or one element per channel) can subvert the original predictions, and over 65% of computational layers suffer from this vulnerability. Our findings reveal that optimizing for accuracy, model size and computational efficiency can unknowingly sacrifice the robustness of a deep learning system. MOE (Min. of Education, S’pore) Accepted version 2020-02-03T01:55:56Z 2020-02-03T01:55:56Z 2019 Conference Paper Liu, W., Wang, S. & Chang, C.-H. (2019). Vulnerability analysis on noise-injection based hardware attack on deep neural networks. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). https://hdl.handle.net/10356/136863 en MOE-2015-T2-2-013 © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. application/pdf
institution Nanyang Technological University
building NTU Library
country Singapore
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering
Deep Neural Networks
Hardware Attacks
spellingShingle Engineering::Electrical and electronic engineering
Deep Neural Networks
Hardware Attacks
Liu, Wenye
Wang, Si
Chang, Chip-Hong
Vulnerability analysis on noise-injection based hardware attack on deep neural networks
description Despite superior accuracy on most vision recognition tasks, deep neural networks are susceptible to adversarial examples. Recent studies show that adding carefully crafted small perturbations to the input layer can mislead a classifier into arbitrary categories. However, most adversarial attack algorithms concentrate only on the inputs of the model; the effect of tampering with internal nodes is seldom studied. An adversarial attack, if extended to a deployed hardware system, can perturb or alter intermediate data during real-time processing. To investigate the vulnerability implications of deep neural network hardware under potential adversarial attacks, we comprehensively evaluate 10 popular DNN models by injecting noise into each layer of these models. Our experimental results indicate that more accurate networks are more prone to disturbances of selected internal layers. For traditional convolutional network structures (the AlexNet and VGG families), the last convolution layer is the most assailable. For state-of-the-art architectures (the Inception, ResNet and DenseNet families), perturbing as little as 0.1% of the elements (or one element per channel) can subvert the original predictions, and over 65% of computational layers suffer from this vulnerability. Our findings reveal that optimizing for accuracy, model size and computational efficiency can unknowingly sacrifice the robustness of a deep learning system.
author2 School of Electrical and Electronic Engineering
author_facet School of Electrical and Electronic Engineering
Liu, Wenye
Wang, Si
Chang, Chip-Hong
format Conference or Workshop Item
author Liu, Wenye
Wang, Si
Chang, Chip-Hong
author_sort Liu, Wenye
title Vulnerability analysis on noise-injection based hardware attack on deep neural networks
title_short Vulnerability analysis on noise-injection based hardware attack on deep neural networks
title_full Vulnerability analysis on noise-injection based hardware attack on deep neural networks
title_fullStr Vulnerability analysis on noise-injection based hardware attack on deep neural networks
title_full_unstemmed Vulnerability analysis on noise-injection based hardware attack on deep neural networks
title_sort vulnerability analysis on noise-injection based hardware attack on deep neural networks
publishDate 2020
url https://hdl.handle.net/10356/136863
_version_ 1681039253408055296