An imperceptible data augmentation based blackbox clean-label backdoor attack on deep neural networks
Deep neural networks (DNNs) have permeated many diverse application domains, making them attractive targets of malicious attacks. DNNs are particularly susceptible to data poisoning attacks. Such attacks can be made more venomous and harder to detect by poisoning the training samples without changing their labels.
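As a rough, hypothetical illustration of the clean-label poisoning idea summarized in the abstract (not the attack proposed in this article), the sketch below blends a faint trigger pattern into target-class training images while leaving their correct labels untouched, which is what makes such poisoning hard to catch with label audits. The trigger, blend ratio, and class choice here are illustrative assumptions.

```python
# Generic clean-label poisoning sketch (illustrative only; NOT the
# authors' method). A near-imperceptible trigger is blended into
# target-class images; labels are never modified.
import numpy as np

def poison_clean_label(images, labels, target_class, trigger, alpha=0.03):
    """Blend a faint trigger into samples of `target_class`; labels unchanged."""
    poisoned = images.copy().astype(np.float32)
    for i in np.where(labels == target_class)[0]:
        # Low alpha keeps the perturbation visually imperceptible.
        poisoned[i] = np.clip((1 - alpha) * poisoned[i] + alpha * trigger, 0, 255)
    return poisoned, labels  # labels untouched: the "clean-label" property

# Toy usage: 8 random 32x32 RGB images; trigger is a fixed random pattern.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(8, 32, 32, 3))
lbls = rng.integers(0, 10, size=8)
trig = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float32)
p_imgs, p_lbls = poison_clean_label(imgs, lbls, target_class=3, trigger=trig)
```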
Main Authors: Xu, Chaohui; Liu, Wenye; Zheng, Yue; Wang, Si; Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/173118
Institution: Nanyang Technological University
Similar Items
- BadSFL: backdoor attack in scaffold federated learning
  by: Zhang, Xuanye
  Published: (2024)
- Evaluation of backdoor attacks and defenses to deep neural networks
  by: Ooi, Ying Xuan
  Published: (2024)
- Stealthy backdoor attack for code models
  by: YANG, Zhou, et al.
  Published: (2024)
- An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
  by: He, Weiyang, et al.
  Published: (2024)
- Backdoor attacks in neural networks
  by: Low, Wen Wen
  Published: (2023)