Clean-label backdoor attack and defense: an examination of language model vulnerability
Prompt-based learning, a paradigm that creates a bridge between pre-training and fine-tuning stages, has proven to be highly effective concerning various NLP tasks, particularly in few-shot scenarios. However, such a paradigm is not immune to backdoor attacks. Textual backdoor attacks aim at implant...
Saved in:
Main Authors: | Zhao, Shuai, Xu, Xiaoyu, Xiao, Luwei, Wen, Jinming, Tuan, Luu Anh |
---|---|
Other Authors: | College of Computing and Data Science |
Format: | Article |
Language: | English |
Published: |
2025
|
Subjects: | |
Online Access: | https://hdl.handle.net/10356/182201 |
Tags: |
Add Tag
No Tags, Be the first to tag this record!
|
Institution: | Nanyang Technological University |
Language: | English |
Similar Items
-
Evaluation of backdoor attacks and defenses to deep neural networks
by: Ooi, Ying Xuan
Published: (2024) -
BadSFL: backdoor attack in scaffold federated learning
by: Zhang, Xuanye
Published: (2024) -
Privacy-enhancing and robust backdoor defense for federated learning on heterogeneous data
by: CHEN, Zekai, et al.
Published: (2024) -
SampDetox : Black-box backdoor defense via perturbation-based sample detoxification
by: YANG, Yanxin, et al.
Published: (2024) -
BADFL: Backdoor attack defense in federated learning from local model perspective
by: ZHANG, Haiyan, et al.
Published: (2024)