Stealthy and robust glitch injection attack on deep learning accelerator for target with variational viewpoint

Deep neural network (DNN) accelerators overcome the power and memory walls for executing neural-net models locally on edge-computing devices to support sophisticated AI applications. The advocacy of the 'model once, run optimized anywhere' paradigm introduces a potential new security threat to edge intelligence that is methodologically different from the well-known adversarial examples. Existing adversarial examples modify the input samples presented to an AI application, either digitally or physically, to cause a misclassification. Nevertheless, these input-based perturbations are neither robust nor surreptitious on multi-view targets. To generate a good adversarial example for misclassifying a real-world target under variational viewing angle, lighting and distance, a considerable number of samples of the target are required to extract the rare anomalies that can cross the decision boundary. The feasible perturbations are substantial and visually perceptible. In this paper, we propose a new glitch injection attack on DNN accelerators that is capable of misclassifying a target under variational viewpoints. The glitches injected into the computation clock signal induce transitory but disruptive errors in the intermediate results of the multiply-and-accumulate (MAC) operations. The attack pattern for each target of interest consists of sparse instantaneous glitches, which can be derived from just one sample of the target. Two modes of attack patterns are derived, and their effectiveness is demonstrated on four representative ImageNet models implemented on the Deep-learning Processing Unit (DPU) of an FPGA edge platform and its DNN development toolchain. The attack success rates are evaluated on 118 objects in 61 diverse sensing conditions, including 25 viewing angles (-60° to 60°), 24 illumination directions and 12 color temperatures. In the covert mode, the success rates of our attack exceed those of existing stealthy adversarial examples by more than 16.3%, with only two glitches injected over tens of thousands to a million cycles for one complete inference. In the robust mode, the attack success rates on all four DNNs exceed 96.2%, with an average glitch intensity of 1.4% and a maximum glitch intensity of 10.2%.
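
As a rough illustration of the fault model described in the abstract (not the authors' implementation or the DPU toolchain), the sketch below simulates how a few clock glitches, each corrupting one intermediate multiply-and-accumulate (MAC) result, perturb a dot-product accumulation; the cycle indices, the flipped bit position, and the notion of "glitch intensity" as the fraction of glitched cycles are hypothetical choices made only for this example.

```python
import numpy as np

def mac_with_glitches(weights, activations, glitch_cycles, bit=12):
    """Accumulate weights·activations one MAC per 'cycle'; on glitched cycles,
    flip one bit of the freshly computed partial product to mimic a transient
    timing error in the MAC datapath (illustrative fault model only)."""
    acc = 0
    for cycle, (w, a) in enumerate(zip(weights, activations)):
        product = int(w) * int(a)
        if cycle in glitch_cycles:
            product ^= 1 << bit  # single-bit upset in the intermediate result
        acc += product
    return acc

rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=1024)  # 8-bit quantized weights (assumption)
x = rng.integers(0, 256, size=1024)     # 8-bit activations (assumption)

clean = mac_with_glitches(w, x, glitch_cycles=set())
faulty = mac_with_glitches(w, x, glitch_cycles={37, 501})  # two sparse glitches

# "Glitch intensity" here: fraction of MAC cycles that were glitched.
intensity = 2 / len(w) * 100
print(f"clean={clean} faulty={faulty} delta={faulty - clean} intensity={intensity:.2f}%")
```

In the attack described by the paper, the glitches act on the accelerator's computation clock during inference of a full DNN rather than on a toy dot product, and the glitch positions are chosen per target from a single sample of that target.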

Bibliographic Details
Main Authors: Liu, Wenye; Chang, Chip-Hong; Zhang, Fan
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2021
Subjects: Engineering::Electrical and electronic engineering::Integrated circuits; Engineering::Electrical and electronic engineering::Computer hardware, software and systems; Artificial Intelligence; Deep Learning
Online Access:https://hdl.handle.net/10356/146196
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-146196
Other Affiliations: Zhejiang University; Centre for Integrated Circuits and Systems
Version: Accepted version
Citation: Liu, W., Chang, C.-H., & Zhang, F. (2020). Stealthy and robust glitch injection attack on deep learning accelerator for target with variational viewpoint. IEEE Transactions on Information Forensics and Security, 16, 1928-1942. doi:10.1109/TIFS.2020.3046858
ISSN: 1556-6021
DOI: 10.1109/TIFS.2020.3046858
Funding: National Research Foundation (NRF), Singapore. This research is supported by the National Research Foundation, Singapore, under its National Cybersecurity Research & Development Programme / Cyber-Hardware Forensic & Assurance Evaluation R&D Programme (Award: CHFA-GC1-AW01).
Rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TIFS.2020.3046858
Content Provider: NTU Library
Collection: DR-NTU