An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks

While the integration of Knowledge Distillation (KD) into Federated Learning (FL) has recently emerged as a promising solution to address the challenges of heterogeneity and communication efficiency, little is known about the security of these schemes against poisoning attacks prevalent in vanilla FL. From recent countermeasures built around KD, we conjecture that the way knowledge is distilled from the global model to the local models and the type of knowledge transferred by KD themselves offer some resilience against targeted poisoning attacks in FL. To attest to this hypothesis, we systematize various adversary-agnostic state-of-the-art KD-based FL algorithms for the evaluation of their resistance to different targeted poisoning attacks on two vision recognition tasks. Our empirical security-utility trade-off study indicates surprisingly good inherent immunity of certain KD-based FL algorithms that are not designed to mitigate these attacks. By probing into the causes of their robustness, the KD space exploration provides further insights into the balancing of the security, privacy and efficiency triad in different FL settings.
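
The distillation mechanism the abstract refers to can be pictured as a local client update that is regularized toward the global model's soft predictions rather than fitted to hard labels alone. The sketch below is only an illustration of that generic mechanism under assumed names and hyperparameters (local_update_with_distillation, temperature, alpha); it is not one of the specific KD-based FL algorithms evaluated in the paper.

```python
# Minimal sketch of a KD-style local update in FL: the client fits its own
# data while also matching the softened predictions of the frozen global
# (teacher) model. All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def local_update_with_distillation(local_model, global_model, loader,
                                   epochs=1, lr=0.01, temperature=2.0, alpha=0.5):
    global_model.eval()  # the current global model acts as a frozen teacher
    opt = torch.optim.SGD(local_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                teacher_logits = global_model(x)
            student_logits = local_model(x)
            # Hard-label loss on the client's own (possibly non-IID) data.
            ce = F.cross_entropy(student_logits, y)
            # Soft-label loss: KL divergence to the teacher's tempered distribution.
            kd = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            loss = alpha * ce + (1.0 - alpha) * kd
            opt.zero_grad()
            loss.backward()
            opt.step()
    return local_model.state_dict()
```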

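For the threat model, a targeted poisoning (backdoor) attack can be sketched as a malicious client stamping a small trigger onto part of its training data and relabeling those samples to an attacker-chosen class. The function below is a hypothetical illustration; the trigger shape, poison ratio and target class are assumptions, not the attack configurations used in the study.

```python
# Minimal sketch of targeted (backdoor) poisoning: a malicious client stamps a
# small trigger patch onto a fraction of its training images and relabels them
# to an attacker-chosen class. Trigger, ratio and target label are assumptions.
import torch

def poison_batch(images, labels, target_class=0, poison_ratio=0.3, patch_size=3):
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_ratio * images.size(0))
    # Stamp a bright square trigger in the bottom-right corner of the chosen images.
    images[:n_poison, :, -patch_size:, -patch_size:] = images.max()
    # Flip their labels to the attacker's target class.
    labels[:n_poison] = target_class
    return images, labels
```

A malicious client would apply such a function to each batch before its local update, so that the aggregated model misclassifies triggered inputs as the target class while behaving normally on clean inputs.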

Bibliographic Details
Main Authors: He, Weiyang, Liu, Zizhen, Chang, Chip Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Federated Learning; Knowledge Distillation; Backdoor Attacks
Online Access: https://hdl.handle.net/10356/173117
Institution: Nanyang Technological University
id sg-ntu-dr.10356-173117
record_format dspace
spelling sg-ntu-dr.10356-173117 2024-01-12T15:40:44Z
An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
He, Weiyang; Liu, Zizhen; Chang, Chip Hong
School of Electrical and Electronic Engineering
2023 IEEE 32nd Asian Test Symposium (ATS)
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Federated Learning; Knowledge Distillation; Backdoor Attacks
While the integration of Knowledge Distillation (KD) into Federated Learning (FL) has recently emerged as a promising solution to address the challenges of heterogeneity and communication efficiency, little is known about the security of these schemes against poisoning attacks prevalent in vanilla FL. From recent countermeasures built around KD, we conjecture that the way knowledge is distilled from the global model to the local models and the type of knowledge transferred by KD themselves offer some resilience against targeted poisoning attacks in FL. To attest to this hypothesis, we systematize various adversary-agnostic state-of-the-art KD-based FL algorithms for the evaluation of their resistance to different targeted poisoning attacks on two vision recognition tasks. Our empirical security-utility trade-off study indicates surprisingly good inherent immunity of certain KD-based FL algorithms that are not designed to mitigate these attacks. By probing into the causes of their robustness, the KD space exploration provides further insights into the balancing of the security, privacy and efficiency triad in different FL settings.
National Research Foundation (NRF)
Submitted/Accepted version
This research is supported by the National Research Foundation, Singapore, and Cyber Security Agency of Singapore under its National Cybersecurity Research & Development Programme (Cyber-Hardware Forensic & Assurance Evaluation R&D Programme <NRF2018NCRNCR009-0001>). This work is also supported in part by the National Key Research and Development Program of China under grant No. 2020YFB1600201, National Natural Science Foundation of China (NSFC) under grant Nos. U20A20202, 62090024 and 61876173, and the Youth Innovation Promotion Association CAS.
2024-01-12T08:25:42Z 2024-01-12T08:25:42Z 2023
Conference Paper
He, W., Liu, Z. & Chang, C. H. (2023). An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks. 2023 IEEE 32nd Asian Test Symposium (ATS). https://dx.doi.org/10.1109/ATS59501.2023.10317993
9798350303100 (ISBN)
2377-5386 (ISSN)
https://hdl.handle.net/10356/173117
10.1109/ATS59501.2023.10317993 (DOI)
2-s2.0-85179180717 (Scopus)
en
NRF2018NCRNCR009-0001
© 2023 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/ATS59501.2023.10317993.
application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Federated Learning
Knowledge Distillation
Backdoor Attacks
description While the integration of Knowledge Distillation (KD) into Federated Learning (FL) has recently emerged as a promising solution to address the challenges of heterogeneity and communication efficiency, little is known about the security of these schemes against poisoning attacks prevalent in vanilla FL. From recent countermeasures built around KD, we conjecture that the way knowledge is distilled from the global model to the local models and the type of knowledge transferred by KD themselves offer some resilience against targeted poisoning attacks in FL. To attest to this hypothesis, we systematize various adversary-agnostic state-of-the-art KD-based FL algorithms for the evaluation of their resistance to different targeted poisoning attacks on two vision recognition tasks. Our empirical security-utility trade-off study indicates surprisingly good inherent immunity of certain KD-based FL algorithms that are not designed to mitigate these attacks. By probing into the causes of their robustness, the KD space exploration provides further insights into the balancing of the security, privacy and efficiency triad in different FL settings.
author2 School of Electrical and Electronic Engineering
format Conference or Workshop Item
author He, Weiyang
Liu, Zizhen
Chang, Chip Hong
title An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
publishDate 2024
url https://hdl.handle.net/10356/173117