An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
While the integration of Knowledge Distillation (KD) into Federated Learning (FL) has recently emerged as a promising solution to address the challenges of heterogeneity and communication efficiency, little is known about the security of these schemes against poisoning attacks prevalent in vanilla FL…
Main Authors: He, Weiyang; Liu, Zizhen; Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/173117
Institution: Nanyang Technological University
Similar Items
- BadSFL: backdoor attack in scaffold federated learning
  by: Zhang, Xuanye
  Published: (2024)
- Towards efficient and certified recovery from poisoning attacks in federated learning
  by: Jiang, Yu, et al.
  Published: (2025)
- Privacy-enhancing and robust backdoor defense for federated learning on heterogeneous data
  by: CHEN, Zekai, et al.
  Published: (2024)
- Personalized federated learning with dynamic clustering and model distillation
  by: Bao, Junyan
  Published: (2025)
- One-class knowledge distillation for face presentation attack detection
  by: Li, Zhi, et al.
  Published: (2023)