Imperceptible misclassification attack on deep learning accelerator by glitch injection

The convergence of edge computing and deep learning empowers endpoint hardware and edge devices to perform inference locally with the help of deep neural network (DNN) accelerators. This trend of edge intelligence invites new attack vectors that are methodologically different from well-known software-oriented deep learning attacks such as adversarial examples. Current studies of threats on DNN hardware focus mainly on the manipulation of model parameters. Such manipulation is not stealthy, as it leaves non-erasable traces or creates conspicuous output patterns. In this paper, we present and investigate an imperceptible misclassification attack on DNN hardware that introduces infrequent, instantaneous glitches into the clock signal. Compared with falsifying model parameters through permanent faults, intermittently corrupting targeted intermediate results of the convolution layer(s) leaves no trace. We demonstrate our attack on nine state-of-the-art ImageNet models running on a Xilinx FPGA-based deep learning accelerator. With no knowledge of the models, our attack achieves over 98% misclassification on eight of the nine models by injecting glitches into only 10% of the computation clock cycles. Given the model details and inputs, all test images applied to ResNet50 can be misclassified with glitches injected into no more than 1.7% of the clock cycles.

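The attack itself requires access to the accelerator's clock line, but the numerical effect it exploits is easy to picture in software. The sketch below is a hypothetical simulation, not the authors' method or their FPGA setup: it runs a toy quantized convolution and flips one bit of the accumulated partial sum in a random fraction of output computations, mimicking the wrong values that a timing violation from a clock glitch can latch in a MAC pipeline. All names and parameters (conv2d_glitched, glitch_rate, flip_bit) are illustrative assumptions.

import numpy as np

# Hypothetical fault-injection model of the effect described in the
# abstract: a glitched clock cycle latches a wrong partial sum, modelled
# here as a single bit flip in the accumulator of a toy convolution.
rng = np.random.default_rng(0)

def conv2d_glitched(x, w, glitch_rate=0.10, flip_bit=14):
    """Naive single-channel 2-D convolution; each output accumulation is
    corrupted with probability glitch_rate by flipping one bit."""
    H, W = x.shape
    k = w.shape[0]
    out = np.zeros((H - k + 1, W - k + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            acc = int(np.sum(x[i:i + k, j:j + k] * w))  # one MAC result
            if rng.random() < glitch_rate:
                acc ^= 1 << flip_bit  # glitched cycle latches a wrong value
            out[i, j] = acc
    return out

# 8-bit-style quantized activations and small signed weights, as on a
# typical integer DNN accelerator.
x = rng.integers(0, 256, size=(16, 16), dtype=np.int32)
w = rng.integers(-8, 8, size=(3, 3), dtype=np.int32)

clean = conv2d_glitched(x, w, glitch_rate=0.0)
faulty = conv2d_glitched(x, w, glitch_rate=0.10)
print(f"corrupted outputs: {np.mean(clean != faulty):.1%}")
print(f"max absolute deviation: {np.max(np.abs(clean - faulty))}")

In this toy setting, roughly one output in ten deviates by the weight of the flipped bit while the stored weights remain untouched, which is the sense in which the paper's intermittent corruption leaves no trace in the model itself.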

Bibliographic Details
Main Authors: Liu, Wenye; Chang, Chip-Hong; Zhang, Fan; Lou, Xiaoxuan
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2021
Subjects: Engineering::Electrical and electronic engineering; Machine Learning; Hardware
Online Access:https://hdl.handle.net/10356/145856
Institution: Nanyang Technological University
Conference: 2020 57th ACM/IEEE Design Automation Conference (DAC)
Research Centre: Centre for Integrated Circuits and Systems
Type: Conference Paper (Accepted version)
Pages: 1-6
ISBN: 978-1-7281-1085-1
DOI: 10.1109/DAC18072.2020.9218577
Citation: Liu, W., Chang, C.-H., Zhang, F., & Lou, X. (2020). Imperceptible misclassification attack on deep learning accelerator by glitch injection. Proceedings of the 2020 57th ACM/IEEE Design Automation Conference (DAC), 1-6. doi:10.1109/DAC18072.2020.9218577
Funding: This research is supported by the Singapore Ministry of Education (MOE) AcRF Tier 1 Grant No. 2018-T1-001-131 (MOE2018-T1-001-131, RG87/18).
Rights: © 2020 Association for Computing Machinery (ACM). All rights reserved. This paper was published in the 2020 57th ACM/IEEE Design Automation Conference (DAC) and is made available with permission of ACM.