ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning

Privacy-Preserving Federated Learning (PPFL) is an emerging secure distributed learning paradigm that aggregates user-trained local gradients into a federated model through a cryptographic protocol. Unfortunately, PPFL is vulnerable to model poisoning attacks launched by a Byzantine adversary, who crafts malicious local gradients to harm the accuracy of the federated model. To resist model poisoning attacks, existing defense strategies focus on identifying suspicious local gradients over plaintexts. However, a Byzantine adversary can submit encrypted poisonous gradients to circumvent these defenses in PPFL, resulting in encrypted model poisoning. To address this issue, we design a privacy-preserving defense strategy based on two-trapdoor homomorphic encryption (referred to as ShieldFL), which resists encrypted model poisoning without compromising privacy in PPFL. Specifically, we first present a secure cosine similarity method for measuring the distance between two encrypted gradients. We then propose a Byzantine-tolerant aggregation scheme based on cosine similarity, which achieves robustness for both independent and identically distributed (IID) and non-IID data. Extensive evaluations on three benchmark datasets (MNIST, KDDCup99, and Amazon) show that ShieldFL outperforms existing defense strategies; in particular, ShieldFL achieves a 30%-80% accuracy improvement when defending against two state-of-the-art model poisoning attacks in both IID and non-IID settings.
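
The defense sketched in the abstract rests on two building blocks: a secure cosine-similarity measure between encrypted gradients, and a Byzantine-tolerant aggregation rule that weights each local gradient by that similarity. The snippet below is a minimal plaintext sketch of the aggregation idea only, written to make the weighting logic concrete; the reference gradient (here, the mean of presumed-benign updates) and the clip-negative-similarities rule are illustrative assumptions, and ShieldFL itself performs these steps over two-trapdoor homomorphic ciphertexts rather than raw vectors.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def similarity_weighted_aggregate(local_grads, reference):
    """Weight each local gradient by its cosine similarity to a reference
    gradient, so updates pointing away from the reference direction (as a
    poisoned gradient typically does) contribute little or nothing."""
    sims = np.array([cosine_similarity(g, reference) for g in local_grads])
    weights = np.clip(sims, 0.0, None)      # drop negatively correlated gradients
    if weights.sum() == 0.0:                # fall back to plain averaging if all were rejected
        weights = np.ones_like(weights)
    weights /= weights.sum()
    return np.sum([w * g for w, g in zip(weights, local_grads)], axis=0)

# Toy usage: nine benign gradients plus one sign-flipped, scaled-up (poisoned) gradient.
rng = np.random.default_rng(0)
benign = [rng.normal(0.5, 0.1, size=100) for _ in range(9)]
poisoned = -10.0 * benign[0]
reference = np.mean(benign, axis=0)         # stand-in for a trusted baseline gradient
aggregate = similarity_weighted_aggregate(benign + [poisoned], reference)
print("cosine(aggregate, reference) =", round(cosine_similarity(aggregate, reference), 4))
```

With the poisoned gradient clipped out, the aggregate stays aligned with the benign direction (cosine close to 1), whereas a plain average would be dragged toward the attacker's update; the paper's contribution is obtaining this effect while every gradient remains encrypted.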


Bibliographic Details
Main Authors: MA, Zhuoran, MA, Jianfeng, MIAO, Yinbin, LI, Yingjiu, DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Collection: Research Collection School Of Computing and Information Systems
Subjects: Cryptography; Data models; Privacy; Computational modeling; Servers; Data privacy; Homomorphic encryption; Privacy-preserving; defense strategy; model poisoning attack; federated learning; Databases and Information Systems; Information Security
Online Access:https://ink.library.smu.edu.sg/sis_research/7252
https://doi.org/10.1109/TIFS.2022.3169918
Institution: Singapore Management University