Stealthy and robust backdoor attack on deep neural networks based on data augmentation
This work proposes to use data augmentation for backdoor attacks to increase their stealth, attack success rate, and robustness. Different data augmentation techniques are applied independently to the three color channels to embed a composite trigger. The data augmentation strength is tuned based on the Gradient Magnitude Similarity Deviation (GMSD), which is used to objectively assess the visual imperceptibility of the poisoned samples. The proposed attacks are evaluated on pre-activation ResNet18 trained on the CIFAR-10 and GTSRB datasets, and on EfficientNet-B0 trained on an adapted 10-class ImageNet dataset. A high attack success rate of above 97% with only a 1% injection rate is achieved on these DNN models implemented on both general-purpose computing platforms and the Intel Neural Compute Stick 2 edge AI device. The accuracy loss of the poisoned DNNs on benign inputs is kept below 0.6%. The proposed attack is also shown to be resilient to state-of-the-art backdoor defense methods.
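The abstract describes embedding a composite trigger by applying a different augmentation to each color channel. The following is a minimal illustrative sketch of that general idea, not the authors' actual implementation: the specific augmentations, their strengths, and the function name are assumptions, since those details are not given in this record. In the paper, the strength parameter would be tuned against a GMSD imperceptibility budget.

```python
# Hypothetical sketch of a per-channel composite trigger: each RGB channel
# receives a different mild augmentation. The choices below (brightness,
# contrast, gamma) are illustrative assumptions only.
import numpy as np

def embed_composite_trigger(img, strength=0.1):
    """img: HxWx3 float array in [0, 1]. Returns a poisoned copy.

    `strength` stands in for the augmentation strength that the paper
    tunes using GMSD to keep the poisoned sample visually imperceptible.
    """
    out = img.astype(np.float64).copy()
    # Channel 0 (R): brightness shift
    out[..., 0] = np.clip(out[..., 0] + strength, 0.0, 1.0)
    # Channel 1 (G): contrast scaling about the channel mean
    m = out[..., 1].mean()
    out[..., 1] = np.clip(m + (1.0 + strength) * (out[..., 1] - m), 0.0, 1.0)
    # Channel 2 (B): gamma adjustment
    out[..., 2] = np.clip(out[..., 2] ** (1.0 + strength), 0.0, 1.0)
    return out
```

A poisoning pipeline would apply such a function to roughly 1% of the training set (the injection rate reported above) and relabel those samples with the attacker's target class.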
Main Authors: Xu, Chaohui; Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects: Engineering; Deep neural networks; Data augmentation
Online Access: https://hdl.handle.net/10356/174145 ; https://ieee-ceda.org/event/2022-asian-hardware-oriented-security-and-trust-symposium
Institution: Nanyang Technological University
Record id: sg-ntu-dr.10356-174145
Conference: PhD Forum, 2022 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)
Citation: Xu, C. & Chang, C. H. (2022). Stealthy and robust backdoor attack on deep neural networks based on data augmentation. PhD Forum, 2022 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). https://hdl.handle.net/10356/174145
Funding: Ministry of Education (MOE). This research is supported by the Ministry of Education, Singapore, under its AcRF Tier 2 Award No. MOET2EP50220-0003.
Rights: © 2022 IEEE. All rights reserved. Submitted/Accepted version. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder.