An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
While the integration of Knowledge Distillation (KD) into Federated Learning (FL) has recently emerged as a promising solution to address the challenges of heterogeneity and communication efficiency, little is known about the security of these schemes against poisoning attacks prevalent in vanilla FL.
Main Authors: He, Weiyang; Liu, Zizhen; Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/173117
Institution: Nanyang Technological University
Similar Items
- BadSFL: backdoor attack in scaffold federated learning
  by: Zhang, Xuanye
  Published: (2024)
- Privacy-enhancing and robust backdoor defense for federated learning on heterogeneous data
  by: CHEN, Zekai, et al.
  Published: (2024)
- BADFL: Backdoor attack defense in federated learning from local model perspective
  by: ZHANG, Haiyan, et al.
  Published: (2024)
- One-class knowledge distillation for face presentation attack detection
  by: Li, Zhi, et al.
  Published: (2023)
- Efficient and secure federated learning against backdoor attacks
  by: MIAO, Yinbin, et al.
  Published: (2024)