RFed: Robustness-Enhanced Privacy-Preserving Federated Learning against poisoning attack

Bibliographic Details
Main Authors: MIAO, Yinbin, YAN, Xinru, LI, Xinghua, XU, Shujiang, LIU, Ximeng, LI, Hongwei, DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8817
Institution: Singapore Management University
Description
Summary: Federated learning not only enables collaborative model training but also effectively preserves user privacy. However, with the widespread adoption of privacy-preserving federated learning, poisoning attacks threaten model utility. Existing defense schemes suffer from a series of problems, including low accuracy, low robustness, and reliance on strong assumptions, which limit the practicability of federated learning. To solve these problems, we propose RFed, a robustness-enhanced privacy-preserving federated learning scheme with scaled dot-product attention under a dual-server model. Specifically, we design a highly robust defense mechanism that uses a dual-server model instead of the traditional single-server model to significantly improve model accuracy and completely eliminate the reliance on strong assumptions. Formal security analysis proves that our scheme achieves convergence and provides privacy protection, and extensive experiments demonstrate that our scheme reduces computational overhead while guaranteeing privacy preservation and model accuracy, and ensures that the failure rate of poisoning attacks exceeds 96%.
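
The core idea named in the abstract, scoring client updates with scaled dot-product attention so that outlying (potentially poisoned) updates carry less weight in aggregation, can be sketched roughly as below. This is an illustrative approximation, not the published RFed algorithm: the trusted reference update used as the query, the flattened-update representation, and all parameters are assumptions made for the example, and the dual-server secure computation described in the paper is omitted.

import numpy as np

def attention_aggregate(client_updates, reference_update):
    # client_updates : (n_clients, d) array of flattened model updates
    # reference_update: (d,) assumed benign update used as the attention query
    d = reference_update.shape[0]
    # Scaled dot-product scores: q . k_i / sqrt(d)
    scores = client_updates @ reference_update / np.sqrt(d)
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    # Updates anti-correlated with the reference receive exponentially smaller
    # weight, so a scaled sign-flipping poisoned update contributes little.
    return weights @ client_updates, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = rng.normal(0.0, 1.0, size=(8, 100))   # 8 honest client updates
    reference = benign.mean(axis=0)                 # assumed clean reference
    poisoned = -3.0 * reference + rng.normal(0.0, 0.1, size=(2, 100))
    updates = np.vstack([benign, poisoned])
    aggregate, weights = attention_aggregate(updates, reference)
    print("attention weights:", np.round(weights, 3))

In RFed itself this kind of weighting is presumably carried out under the dual-server model so that individual updates stay protected; that secure two-server protocol is beyond the scope of this sketch.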