BadSFL: backdoor attack in scaffold federated learning
Federated learning (FL) enables training deep learning models across distributed clients while preserving data privacy. However, malicious clients can embed a backdoor functionality into the global model by uploading poisoned local models that cause target misclassificati...
Main Author: Zhang, Xuanye
Other Authors: Zhang, Tianwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/174843
Institution: Nanyang Technological University
Similar Items
- Privacy-enhancing and robust backdoor defense for federated learning on heterogeneous data
  by: CHEN, Zekai, et al.
  Published: (2024)
- BADFL: Backdoor attack defense in federated learning from local model perspective
  by: ZHANG, Haiyan, et al.
  Published: (2024)
- Efficient and secure federated learning against backdoor attacks
  by: MIAO, Yinbin, et al.
  Published: (2024)
- Evaluation of backdoor attacks and defenses to deep neural networks
  by: Ooi, Ying Xuan
  Published: (2024)
- An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
  by: He, Weiyang, et al.
  Published: (2024)