ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning
Privacy-Preserving Federated Learning (PPFL) is an emerging secure distributed learning paradigm that aggregates user-trained local gradients into a federated model through a cryptographic protocol. Unfortunately, PPFL is vulnerable to model poisoning attacks launched by a Byzantine adversary, who c...
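To ground the abstract's terminology, here is a minimal Python sketch of plain (non-private) federated gradient aggregation, in which a server averages client-submitted gradients into one global update and a single Byzantine client can skew the result. All function and variable names are illustrative assumptions; ShieldFL's actual cryptographic aggregation protocol and its poisoning defense are described in the paper, not reproduced here.

```python
import numpy as np

def aggregate_gradients(client_gradients, weights=None):
    """Average client gradients into a single global update (FedAvg-style mean).

    In a privacy-preserving setting these gradients would arrive encrypted and
    be combined under a cryptographic protocol; this sketch operates on
    plaintext arrays purely for illustration.
    """
    grads = np.stack(client_gradients)  # shape: (num_clients, gradient_dim)
    if weights is None:
        weights = np.full(len(client_gradients), 1.0 / len(client_gradients))
    return np.average(grads, axis=0, weights=weights)

# Toy example: three honest clients and one Byzantine client submitting a
# scaled, sign-flipped gradient (a simple model poisoning attempt).
honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.1, 0.9])]
poisoned = [-10.0 * np.array([1.0, 1.0])]
update = aggregate_gradients(honest + poisoned)
print(update)  # the single poisoned gradient noticeably skews the average
```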
Main Authors: MA, Zhuoran; MA, Jianfeng; MIAO, Yinbin; LI, Yingjiu; DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/7252
DOI: https://doi.org/10.1109/TIFS.2022.3169918
Institution: Singapore Management University
Similar Items
- FlGan: GAN-based unbiased federated learning under non-IID settings
  by: MA, Zhuoran, et al.
  Published: (2024)
- Authenticable data analytics over encrypted data in the cloud
  by: CHEN, Lanxing, et al.
  Published: (2023)
- Design methodologies for trusted and efficient outsourcing of privacy-preserving AI
  by: MEFTAH, Souhail
  Published: (2023)
- A privacy-preserving outsourced functional computation framework across large-scale multiple encrypted domains
  by: LIU, Ximeng, et al.
  Published: (2016)
- Privacy-preserving outsourced calculation toolkit in the cloud
  by: LIU, Ximeng, et al.
  Published: (2020)