Stealthy and robust backdoor attack on deep neural networks based on data augmentation


Bibliographic Details
Main Authors: Xu, Chaohui, Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects:
Online Access:https://hdl.handle.net/10356/174145
https://ieee-ceda.org/event/2022-asian-hardware-oriented-security-and-trust-symposium
Institution: Nanyang Technological University
Description
Summary: This work proposes using data augmentation for backdoor attacks to increase their stealthiness, attack success rate, and robustness. Different data augmentation techniques are applied independently to the three color channels to embed a composite trigger. The data augmentation strength is tuned based on the Gradient Magnitude Similarity Deviation (GMSD), which is used to objectively assess the visual imperceptibility of the poisoned samples. The proposed attacks are evaluated on a pre-activation ResNet18 trained on the CIFAR-10 and GTSRB datasets, and on an EfficientNet-B0 trained on an adapted 10-class ImageNet dataset. A high attack success rate of above 97% with only a 1% injection rate is achieved on these DNN models implemented on both general-purpose computing platforms and the Intel Neural Compute Stick 2 edge AI device. The accuracy loss of the poisoned DNNs on benign inputs is kept below 0.6%. The proposed attack is also shown to be resilient to state-of-the-art backdoor defense methods.
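The trigger-embedding idea in the summary can be illustrated with a minimal sketch. The per-channel augmentations below (brightness shift, sinusoidal pattern, gamma adjustment) are hypothetical stand-ins, not the augmentations used in the paper, and `np.gradient` replaces the Prewitt operator of the standard GMSD definition; all function names are illustrative.

```python
import numpy as np

def gmsd(ref, dist, c=0.0026):
    """Gradient Magnitude Similarity Deviation between two grayscale
    images in [0, 1]. np.gradient is a simple stand-in for the Prewitt
    filter used in the original GMSD definition."""
    gy1, gx1 = np.gradient(ref)
    gy2, gx2 = np.gradient(dist)
    m1 = np.hypot(gx1, gy1)  # gradient magnitude of the reference
    m2 = np.hypot(gx2, gy2)  # gradient magnitude of the distorted image
    gms = (2 * m1 * m2 + c) / (m1 ** 2 + m2 ** 2 + c)
    return gms.std()

def embed_composite_trigger(img, strength=0.05):
    """Apply a different (hypothetical) augmentation to each RGB channel
    of an HxWx3 float image in [0, 1]."""
    h, w, _ = img.shape
    out = img.copy()
    # R: uniform brightness shift
    out[..., 0] = np.clip(img[..., 0] + strength, 0, 1)
    # G: low-amplitude sinusoidal pattern across columns
    _, xx = np.mgrid[0:h, 0:w]
    out[..., 1] = np.clip(img[..., 1] + strength * np.sin(xx / 2), 0, 1)
    # B: mild gamma adjustment
    out[..., 2] = np.clip(img[..., 2] ** (1 + strength), 0, 1)
    return out

def tune_strength(img, budget=0.05, strengths=(0.2, 0.1, 0.05, 0.02, 0.01)):
    """Pick the largest strength whose poisoned image keeps the GMSD
    (computed on the mean-channel luminance) under the given budget."""
    gray = img.mean(axis=2)
    for s in strengths:  # try strongest first
        poisoned = embed_composite_trigger(img, s)
        if gmsd(gray, poisoned.mean(axis=2)) <= budget:
            return s, poisoned
    # fall back to the weakest trigger if no strength fits the budget
    return strengths[-1], embed_composite_trigger(img, strengths[-1])
```

This only shows the structure of the approach: a composite trigger built from channel-wise augmentations, with the strength selected so a GMSD-based imperceptibility measure stays within a budget.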