A new lightweight in-situ adversarial sample detector for edge deep neural network

The flourishing of Internet of Things (IoT) has rekindled on-premise computing to allow data to be analyzed closer to the source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers and commercially available toolkits have evolved to facilitate the de...

Full description

Saved in:
Bibliographic Details
Main Authors: Wang, Si, Liu, Wenye, Chang, Chip-Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2021
Subjects:
Online Access:https://hdl.handle.net/10356/148567
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-148567
record_format dspace
spelling sg-ntu-dr.10356-1485672024-07-25T02:25:04Z A new lightweight in-situ adversarial sample detector for edge deep neural network Wang, Si Liu, Wenye Chang, Chip-Hong School of Electrical and Electronic Engineering Engineering Edge AI Security Adversarial Examples The flourishing of Internet of Things (IoT) has rekindled on-premise computing to allow data to be analyzed closer to the source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers and commercially available toolkits have evolved to facilitate the development and deployment of applications that use AI at its core. This paradigm shift in deep learning computations does not, however, reduce the vulnerability of deep neural networks (DNNs) to adversarial attacks, but instead forces defenses into a difficult game of catch-up. This is because existing methodologies rely mainly on offline analysis to detect adversarial inputs, assuming that the deep learning model is implemented on a 32-bit floating-point graphical processing unit (GPU) instance. In this paper, we propose a new hardware-oriented approach for in-situ detection of adversarial inputs feeding through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) implemented on the edge. Our method exploits controlled glitch injection into the clock signal of the DNN accelerator to maximize the information gain for the discrimination of adversarial and benign inputs. A light gradient boosting machine (lightGBM) is constructed by analyzing the prediction probability of unmutated and mutated models and the label change inconsistency between the adversarial and benign samples in the training dataset. With negligible hardware overhead, the glitch injection circuit and the trained lightGBM detector can be easily implemented alongside the deep learning model on a Xilinx ZU9EG chip. 
The effectiveness of the proposed detector is validated against four state-of-the-art adversarial attacks on two different types and scales of DNN models, VGG16 and ResNet50, for a thousand-class visual object recognition application. The results show a significant increase in true positive rate and a substantial reduction in false positive rate on the Fast Gradient Sign Method (FGSM), Iterative-FGSM (I-FGSM), C&W and universal perturbation attacks compared with modern software-oriented adversarial sample detection methods. National Research Foundation (NRF) This research is supported by the National Research Foundation, Singapore, under its National Cybersecurity Research & Development Programme / Cyber-Hardware Forensic & Assurance Evaluation R&D Programme (Award: CHFA-GC1-AW01). 2021-07-01T07:01:08Z 2021-07-01T07:01:08Z 2021 Journal Article Wang, S., Liu, W. & Chang, C. (2021). A new lightweight in-situ adversarial sample detector for edge deep neural network. IEEE Journal of Emerging and Selected Topics in Circuits and Systems, 11(2), 252-266. https://dx.doi.org/10.1109/JETCAS.2021.3076101 2156-3357 https://hdl.handle.net/10356/148567 10.1109/JETCAS.2021.3076101 2 11 252 266 en CHFA-GC1-AW01 IEEE Journal of Emerging and Selected Topics in Circuits and Systems doi:10.21979/N9/8LWB8D © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/JETCAS.2021.3076101. application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering
Edge AI Security
Adversarial Examples
spellingShingle Engineering
Edge AI Security
Adversarial Examples
Wang, Si
Liu, Wenye
Chang, Chip-Hong
A new lightweight in-situ adversarial sample detector for edge deep neural network
description The flourishing of Internet of Things (IoT) has rekindled on-premise computing to allow data to be analyzed closer to the source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers and commercially available toolkits have evolved to facilitate the development and deployment of applications that use AI at its core. This paradigm shift in deep learning computations does not, however, reduce the vulnerability of deep neural networks (DNNs) to adversarial attacks, but instead forces defenses into a difficult game of catch-up. This is because existing methodologies rely mainly on offline analysis to detect adversarial inputs, assuming that the deep learning model is implemented on a 32-bit floating-point graphical processing unit (GPU) instance. In this paper, we propose a new hardware-oriented approach for in-situ detection of adversarial inputs feeding through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) implemented on the edge. Our method exploits controlled glitch injection into the clock signal of the DNN accelerator to maximize the information gain for the discrimination of adversarial and benign inputs. A light gradient boosting machine (lightGBM) is constructed by analyzing the prediction probability of unmutated and mutated models and the label change inconsistency between the adversarial and benign samples in the training dataset. With negligible hardware overhead, the glitch injection circuit and the trained lightGBM detector can be easily implemented alongside the deep learning model on a Xilinx ZU9EG chip. The effectiveness of the proposed detector is validated against four state-of-the-art adversarial attacks on two different types and scales of DNN models, VGG16 and ResNet50, for a thousand-class visual object recognition application. 
The results show a significant increase in true positive rate and a substantial reduction in false positive rate on the Fast Gradient Sign Method (FGSM), Iterative-FGSM (I-FGSM), C&W and universal perturbation attacks compared with modern software-oriented adversarial sample detection methods.
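The description above explains the detection idea: run each input through the model twice, once under the nominal clock and once under controlled clock-glitch mutation, then classify the input from the two prediction-probability vectors and any label inconsistency. The following is a minimal, self-contained sketch of that feature-extraction-plus-classification flow, not the authors' implementation: the feature names, thresholds, and the simple stand-in decision rule (used here in place of a trained lightGBM model, so the example needs no external library) are all illustrative assumptions.

```python
# Sketch of mutation-based adversarial-input detection features, as
# described in the abstract. Hypothetical feature set and thresholds;
# in the paper a trained lightGBM classifier makes the final decision.

def extract_features(p_nominal, p_glitched):
    """Build detection features from the softmax outputs of the
    unmutated (nominal-clock) run and the mutated (glitch-injected) run."""
    top_nom = max(range(len(p_nominal)), key=p_nominal.__getitem__)
    top_gli = max(range(len(p_glitched)), key=p_glitched.__getitem__)
    return {
        "p_top_nominal": p_nominal[top_nom],       # confidence, clean run
        "p_top_glitched": p_glitched[top_gli],     # confidence, mutated run
        "label_changed": int(top_nom != top_gli),  # label inconsistency
        "conf_drop": p_nominal[top_nom] - p_glitched[top_nom],
    }

def is_adversarial(feats, drop_thresh=0.3):
    # Stand-in decision rule (a trained lightGBM model would go here):
    # adversarial inputs tend to flip labels or lose confidence when the
    # accelerator's computation is mutated by clock glitches.
    return feats["label_changed"] == 1 or feats["conf_drop"] > drop_thresh

# Benign-looking input: prediction stays stable under glitch injection.
benign = extract_features([0.9, 0.05, 0.05], [0.85, 0.10, 0.05])
# Adversarial-looking input: top label flips when the model is mutated.
suspect = extract_features([0.6, 0.30, 0.10], [0.20, 0.70, 0.10])
print(is_adversarial(benign), is_adversarial(suspect))  # False True
```

In a real deployment these feature vectors would be collected for known benign and adversarial training samples, and a lightGBM classifier (e.g. `lightgbm.LGBMClassifier`) would be fitted on them, replacing the hand-set threshold rule above.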
author2 School of Electrical and Electronic Engineering
author_facet School of Electrical and Electronic Engineering
Wang, Si
Liu, Wenye
Chang, Chip-Hong
format Article
author Wang, Si
Liu, Wenye
Chang, Chip-Hong
author_sort Wang, Si
title A new lightweight in-situ adversarial sample detector for edge deep neural network
title_short A new lightweight in-situ adversarial sample detector for edge deep neural network
title_full A new lightweight in-situ adversarial sample detector for edge deep neural network
title_fullStr A new lightweight in-situ adversarial sample detector for edge deep neural network
title_full_unstemmed A new lightweight in-situ adversarial sample detector for edge deep neural network
title_sort new lightweight in-situ adversarial sample detector for edge deep neural network
publishDate 2021
url https://hdl.handle.net/10356/148567
_version_ 1806059893624406016