Inconspicuous data augmentation based backdoor attack on deep neural networks

With new applications made possible by the fusion of edge computing and artificial intelligence (AI) technologies, the global market capitalization of edge AI has risen tremendously in recent years. Deployment of pre-trained deep neural network (DNN) models on edge computing platforms, however, does not alleviate the fundamental trust assurance issue arising from the lack of interpretability of end-to-end DNN solutions. The most notorious threat to DNNs is the backdoor attack. Most backdoor attacks require a relatively large injection rate (≈ 10%) to achieve a high attack success rate. The trigger patterns are not always stealthy and can be easily detected or removed by backdoor detectors. Moreover, these attacks are only tested on DNN models implemented on general-purpose computing platforms. This paper proposes to use data augmentation for backdoor attacks to increase stealthiness, attack success rate, and robustness. Different data augmentation techniques are applied independently to the three color channels to embed a composite trigger. The data augmentation strength is tuned based on the Gradient Magnitude Similarity Deviation (GMSD), which is used to objectively assess the visual imperceptibility of the poisoned samples. A rich set of composite triggers can be created for different dirty labels. The proposed attacks are evaluated on pre-activation ResNet18 trained on the CIFAR-10 and GTSRB datasets, and EfficientNet-B0 trained on an adapted 10-class ImageNet dataset. An attack success rate above 97% with only a 1% injection rate is achieved on these DNN models implemented on both general-purpose computing platforms and the Intel Neural Compute Stick 2 edge AI device. The accuracy loss of the poisoned DNNs on benign inputs is kept below 0.6%. The proposed attack is also shown to be resilient to state-of-the-art backdoor defense methods.
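The abstract's trigger-embedding idea is that each color channel receives a different, weak augmentation, so that no single channel carries an obvious pattern and the combination acts as the backdoor trigger. The paper's exact augmentations and strengths are not given in this record; the sketch below is a minimal illustration that assumes gamma correction, brightness shift, and contrast scaling as three hypothetical per-channel operations, and the function name and default strengths are likewise assumptions.

```python
import numpy as np

def embed_composite_trigger(img, gamma=1.08, brightness=0.02, contrast=1.05):
    """Hypothetical per-channel composite trigger (illustrative only).

    `img` is an H x W x 3 RGB float array in [0, 1]. Each channel gets a
    different, deliberately weak augmentation; the combination of the
    three forms the composite trigger.
    """
    out = img.astype(np.float64).copy()
    # R channel: gamma correction.
    out[..., 0] = np.clip(out[..., 0] ** gamma, 0.0, 1.0)
    # G channel: small brightness shift.
    out[..., 1] = np.clip(out[..., 1] + brightness, 0.0, 1.0)
    # B channel: contrast scaling about the channel mean.
    mean_b = out[..., 2].mean()
    out[..., 2] = np.clip((out[..., 2] - mean_b) * contrast + mean_b, 0.0, 1.0)
    return out
```

Varying which augmentation is assigned to which channel, and at what strength, is what the abstract refers to as creating "a rich set of composite triggers" for different dirty labels.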

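The abstract states that augmentation strength is tuned against the Gradient Magnitude Similarity Deviation. GMSD is a published full-reference image-quality metric (Xue et al., 2014): gradient magnitudes of the reference and distorted images are compared pixel-wise, and the standard deviation of the resulting similarity map is the score (lower means more imperceptible distortion). The sketch below is a simplified implementation that omits the original metric's 2x average-pooling pre-step and assumes grayscale inputs in [0, 1]; the constant c ≈ 0.0026 is the published value rescaled for that range.

```python
import numpy as np
from scipy.ndimage import convolve

def gmsd(ref, dist, c=0.0026):
    """Gradient Magnitude Similarity Deviation (Xue et al., 2014), simplified.

    `ref` and `dist` are grayscale float images in [0, 1]. Lower GMSD
    means the distorted image is perceptually closer to the reference.
    """
    # Prewitt operators for horizontal and vertical gradients.
    hx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0
    hy = hx.T

    def grad_mag(img):
        gx = convolve(img, hx, mode="nearest")
        gy = convolve(img, hy, mode="nearest")
        return np.sqrt(gx ** 2 + gy ** 2)

    m_ref, m_dist = grad_mag(ref), grad_mag(dist)
    # Pixel-wise gradient magnitude similarity map, then its std. deviation.
    gms = (2 * m_ref * m_dist + c) / (m_ref ** 2 + m_dist ** 2 + c)
    return gms.std()
```

A poisoning loop could then scale the augmentation strengths until each poisoned sample's GMSD stays under a chosen imperceptibility budget; the record does not give the paper's actual cut-off, so any threshold used with this sketch is an assumption.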
Bibliographic Details
Main Authors: Xu, Chaohui; Liu, Wenyu; Zheng, Yue; Wang, Si; Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2023
Subjects: Engineering::Electrical and electronic engineering; Deep Learning; Visualization
Online Access: https://hdl.handle.net/10356/165251
Institution: Nanyang Technological University
Conference: 2022 IEEE 35th International System-on-Chip Conference (SOCC)
Research Centre: Centre for Integrated Circuits and Systems
Citation: Xu, C., Liu, W., Zheng, Y., Wang, S. & Chang, C. H. (2022). Inconspicuous data augmentation based backdoor attack on deep neural networks. 2022 IEEE 35th International System-on-Chip Conference (SOCC). https://dx.doi.org/10.1109/SOCC56010.2022.9908113
ISBN: 9781665459853
DOI: 10.1109/SOCC56010.2022.9908113
Funding: This research is supported by the National Research Foundation (NRF), Singapore, under its National Cybersecurity Research & Development Programme/Cyber-Hardware Forensic & Assurance Evaluation R&D Programme (Award: CHFA-GC1-AW01). Submitted/Accepted version.
Rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/SOCC56010.2022.9908113