RFed: Robustness-Enhanced Privacy-Preserving Federated Learning against poisoning attack
Federated learning not only enables collaborative model training but also effectively preserves user privacy. However, as privacy-preserving federated learning is deployed more widely, poisoning attacks threaten model utility. Existing defense schemes suffer from a series of problems, including low accuracy, low robustness, and reliance on strong assumptions, which limit the practicality of federated learning. To solve these problems, we propose a Robustness-enhanced privacy-preserving Federated learning with scaled dot-product attention (RFed) under a dual-server model. Specifically, we design a highly robust defense mechanism that replaces the traditional single-server model with a dual-server model to significantly improve model accuracy and completely eliminate the reliance on strong assumptions. Formal security analysis proves that our scheme achieves convergence and provides privacy protection, and extensive experiments demonstrate that our scheme reduces computational overhead while guaranteeing privacy preservation and model accuracy, and keeps the failure rate of poisoning attacks above 96%.
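The abstract names scaled dot-product attention as the core of the aggregation-side defense. The sketch below is a rough illustration only, not the authors' RFed protocol (whose dual-server and privacy-preserving details are not reproduced here): it shows how scaled dot-product attention can turn client updates into aggregation weights. The choice of the coordinate-wise median as the query vector and the flattening of updates into NumPy arrays are assumptions made for this example.

```python
# Hedged sketch: attention-weighted aggregation of flattened client updates.
# This is NOT the RFed scheme itself; it only illustrates the general idea of
# scoring client updates with scaled dot-product attention so that updates
# dissimilar to a robust reference receive small softmax weights.
import numpy as np

def attention_aggregate(client_updates: np.ndarray) -> np.ndarray:
    """client_updates: (n_clients, d) array of flattened model updates."""
    d = client_updates.shape[1]
    query = np.median(client_updates, axis=0)        # assumed robust reference vector
    scores = client_updates @ query / np.sqrt(d)     # scaled dot-product scores
    scores -= scores.max()                           # numerical stability for softmax
    weights = np.exp(scores) / np.exp(scores).sum()  # one attention weight per client
    return weights @ client_updates                  # weighted global update

# Toy check: 9 benign clients plus 1 sign-flipped, amplified (poisoned) update.
rng = np.random.default_rng(0)
benign = rng.normal(0.5, 0.1, size=(9, 16))
poisoned = -10.0 * benign.mean(axis=0, keepdims=True)
aggregated = attention_aggregate(np.vstack([benign, poisoned]))
print(np.round(aggregated, 3))   # stays close to the benign mean (~0.5 per coordinate)
```

In the scheme described by the abstract, such scoring would additionally run inside a dual-server, privacy-preserving protocol; this sketch omits all of that.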
Main Authors: | MIAO, Yinbin; YAN, Xinru; LI, Xinghua; XU, Shujiang; LIU, Ximeng; LI, Hongwei; DENG, Robert H. |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2024 |
Subjects: | Computational modeling; Federated learning; poisoning attack; Privacy; privacy protection; Robustness; scaled dot-product attention mechanism; Security; Servers; Training; Information Security |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8817 (DOI: 10.1109/TIFS.2024.3402113) |
Institution: | Singapore Management University |
Language: | English |
id | sg-smu-ink.sis_research-9820 |
---|---|
record_format | dspace |
spelling | sg-smu-ink.sis_research-9820; 2024-05-30T07:06:03Z; RFed: Robustness-Enhanced Privacy-Preserving Federated Learning against poisoning attack; MIAO, Yinbin; YAN, Xinru; LI, Xinghua; XU, Shujiang; LIU, Ximeng; LI, Hongwei; DENG, Robert H.; (abstract as above); 2024-01-01T08:00:00Z; text; https://ink.library.smu.edu.sg/sis_research/8817; info:doi/10.1109/TIFS.2024.3402113; Research Collection School Of Computing and Information Systems; eng; Institutional Knowledge at Singapore Management University; Computational modeling; Federated learning; poisoning attack; Privacy; privacy protection; Robustness; scaled dot-product attention mechanism; Security; Servers; Training; Information Security |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | Computational modeling; Federated learning; poisoning attack; Privacy; privacy protection; Robustness; scaled dot-product attention mechanism; Security; Servers; Training; Information Security |
spellingShingle | (topic, author, and title fields as above) |
description | (abstract as above) |
format | text |
author | MIAO, Yinbin; YAN, Xinru; LI, Xinghua; XU, Shujiang; LIU, Ximeng; LI, Hongwei; DENG, Robert H. |
author_facet | (same as author) |
author_sort | MIAO, Yinbin |
title | RFed: Robustness-Enhanced Privacy-Preserving Federated Learning against poisoning attack |
title_short / title_full / title_fullStr / title_full_unstemmed | (same as title) |
title_sort | rfed: robustness-enhanced privacy-preserving federated learning against poisoning attack |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2024 |
url | https://ink.library.smu.edu.sg/sis_research/8817 |
_version_ | 1814047565285425152 |