Linkbreaker: Breaking the backdoor-trigger link in DNNs via neurons consistency check
Backdoor attacks cause models to misbehave by first implanting backdoors in deep neural networks (DNNs) during training and then activating the backdoors via trigger-carrying samples at inference time. The compromised models can pose serious security risks to artificial intelligence systems, such as misi...
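The attack mechanism the abstract describes (stamping a trigger on a fraction of training samples and relabeling them to an attacker-chosen class) can be illustrated with a minimal sketch. This is not the paper's method; the function names, patch shape, and poison rate below are illustrative assumptions.

```python
import numpy as np

def stamp_trigger(image, patch_value=1.0, patch_size=3):
    """Return a copy of `image` with a small square trigger patch
    stamped in the bottom-right corner (a common backdoor pattern).
    Illustrative only; real triggers vary in shape and placement."""
    triggered = image.copy()
    triggered[-patch_size:, -patch_size:] = patch_value
    return triggered

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Stamp the trigger on a random fraction of the training set and
    relabel those samples to the attacker's target class. Returns the
    poisoned copies and the indices that were modified."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts `target_label` whenever the trigger patch is present, which is the backdoor-trigger link the paper's defense aims to break.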
Main Authors: CHEN, Zhenzhu; WANG, Shang; FU, Anmin; GAO, Yansong; YU, Shui; DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/7250
Institution: Singapore Management University
Similar Items
- Stealthy backdoor attack for code models
  by: YANG, Zhou, et al.
  Published: (2024)
- Evaluation of backdoor attacks and defenses to deep neural networks
  by: Ooi, Ying Xuan
  Published: (2024)
- Privacy-enhancing and robust backdoor defense for federated learning on heterogeneous data
  by: CHEN, Zekai, et al.
  Published: (2024)
- BADFL: Backdoor attack defense in federated learning from local model perspective
  by: ZHANG, Haiyan, et al.
  Published: (2024)
- Efficient and secure federated learning against backdoor attacks
  by: MIAO, Yinbin, et al.
  Published: (2024)