BadSFL: backdoor attack in scaffold federated learning

Federated learning (FL) enables the training of deep learning models on distributed clients while aiming to preserve data privacy. However, malicious clients can embed a backdoor into the global model by uploading poisoned local models that cause targeted misclassification. Existing backdoor attacks primarily focus on FL scenarios with independently and identically distributed (IID) data, while real-world FL training data are typically non-IID. Current non-IID backdoor attack strategies suffer from limited effectiveness and durability. In this paper, we address this gap by proposing BadSFL, a novel backdoor attack specifically targeting FL frameworks that use the Scaffold aggregation algorithm tailored for non-IID scenarios. Our strategy leverages a Generative Adversarial Network (GAN) based on the global model and achieves high accuracy on both backdoor and benign samples. It maintains stealthiness by selecting a specific feature as the backdoor trigger, and it exploits Scaffold's control variate to predict the global model's convergence direction, ensuring the persistence of the backdoor function within the global model. Our evaluation results demonstrate the attack's stealthiness, durability, and high accuracy on both the backdoor and primary tasks.


Bibliographic Details
Main Author: Zhang, Xuanye
Other Authors: Zhang Tianwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects: Computer and Information Science; Backdoor attack; Federated learning; Scaffold
Online Access:https://hdl.handle.net/10356/174843
Record ID: sg-ntu-dr.10356-174843
Record Format: DSpace
School: School of Computer Science and Engineering
Supervisor: Zhang Tianwei (tianwei.zhang@ntu.edu.sg)
Degree: Bachelor's degree
Project Code: SCSE23-0763
File Format: application/pdf
Deposited: 2024-04-15
Last Updated: 2024-04-19
Collection: DR-NTU, NTU Library, Nanyang Technological University, Singapore
Citation: Zhang, X. (2024). BadSFL: backdoor attack in scaffold federated learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/174843