Privacy-enhancing and robust backdoor defense for federated learning on heterogeneous data
Federated learning (FL) allows multiple clients to train deep learning models collaboratively while protecting sensitive local datasets. However, FL remains highly susceptible to security threats from federated backdoor attacks (FBA), which inject triggers into the global model, and to privacy risks from potential data leakage via upl...
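The abstract's setting assumes server-side aggregation of client model updates; a minimal sketch of federated averaging (FedAvg) alongside a simple robust alternative (coordinate-wise median, a generic backdoor-mitigation baseline, not the paper's method) might look like this. Function names and shapes are illustrative assumptions:

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of client updates by local dataset size (FedAvg)."""
    weights = np.array(client_sizes, dtype=float) / sum(client_sizes)
    stacked = np.stack(client_updates)   # shape: (num_clients, num_params)
    return weights @ stacked             # size-weighted mean update

def coordinate_median(client_updates):
    """Coordinate-wise median: a common robust aggregator that limits the
    influence of a few poisoned (e.g. backdoored) client updates."""
    return np.median(np.stack(client_updates), axis=0)
```

With equal client sizes `fedavg` reduces to the plain mean, while `coordinate_median` discards outlier coordinates contributed by a minority of malicious clients.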
Main Authors: CHEN, Zekai; YU, Shengxing; FAN, Mingyuan; LIU, Ximeng; DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Online Access: https://ink.library.smu.edu.sg/sis_research/8631
Institution: Singapore Management University
Similar Items
- Evaluation of backdoor attacks and defenses to deep neural networks
  by: Ooi, Ying Xuan
  Published: (2024)
- Efficient and secure federated learning against backdoor attacks
  by: MIAO, Yinbin, et al.
  Published: (2024)
- An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
  by: He, Weiyang, et al.
  Published: (2024)
- Privacy and robustness in federated learning: attacks and defenses
  by: Lyu, Lingjuan, et al.
  Published: (2023)
- Linkbreaker: Breaking the backdoor-trigger link in DNNs via neurons consistency check
  by: CHEN, Zhenzhu, et al.
  Published: (2022)