Hercules: Boosting the performance of privacy-preserving federated learning

In this paper, we address the problem of privacy-preserving federated neural network training with N users. We present Hercules, an efficient and high-precision training framework that can tolerate collusion of up to N−1 users. Hercules follows the POSEIDON framework proposed by Sav et al. (NDSS'21), but makes a qualitative leap in performance with the following contributions: (i) we design a novel parallel homomorphic computation method for matrix operations, which enables fast Single Instruction, Multiple Data (SIMD) operations over ciphertexts. For the multiplication of two h×h matrices, our method reduces the computation complexity from O(h³) to O(h). This greatly improves the training efficiency of the neural network, since the ciphertext computation is dominated by convolution operations; (ii) we present an efficient approximation of the sign function based on composite polynomial approximation. It is used to approximate non-polynomial functions (i.e., ReLU and max) with optimal asymptotic complexity. Extensive experiments on various benchmark datasets (BCW, ESR, CREDIT, MNIST, SVHN, CIFAR-10, and CIFAR-100) show that, compared with POSEIDON, Hercules obtains up to a 4% increase in model accuracy, and up to a 60× reduction in computation and communication cost.
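The complexity claim in (i) rests on packing many matrix entries into the slots of a single ciphertext, so that one homomorphic operation acts on all slots at once. The sketch below is a plaintext NumPy simulation of the diagonal packing trick commonly used for such SIMD matrix kernels: np.roll stands in for a ciphertext slot rotation, and element-wise "*" for a slot-wise product; no encryption is performed. It computes a matrix-vector product in h rotations and h multiplications rather than h² scalar operations, and is meant only to illustrate the general idea; Hercules' own packing scheme, which extends this pattern to full h×h matrix-matrix multiplication in O(h) ciphertext operations, is more involved.

import numpy as np

def diag_matvec(A: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Compute A @ v with h rotations + h slot-wise multiplications.

    Using diag_i(A)[j] = A[j, (j + i) % h] and rot(v, i)[j] = v[(j + i) % h]:
        A @ v = sum_{i=0}^{h-1} diag_i(A) * rot(v, i)
    so the slot-wise work is O(h) instead of O(h^2) scalar operations.
    """
    h = A.shape[0]
    idx = np.arange(h)
    acc = np.zeros(h)
    for i in range(h):
        diag_i = A[idx, (idx + i) % h]   # i-th generalized diagonal of A
        acc += diag_i * np.roll(v, -i)   # np.roll(v, -i)[j] == v[(j + i) % h]
    return acc

A = np.random.randn(8, 8)
v = np.random.randn(8)
assert np.allclose(diag_matvec(A, v), A @ v)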

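Contribution (ii) replaces the non-polynomial sign function with a composition of low-degree polynomials, which a homomorphic scheme can evaluate because only additions and multiplications are needed. Below is a minimal plain-Python sketch of this idea. The base polynomial f(x) = (3x − x³)/2 is a standard choice from the literature, not necessarily the optimal composite derived in the paper, and inputs are assumed pre-scaled into [−1, 1]; ReLU and max are then reduced to sign in the usual way.

def sign_approx(x: float, depth: int = 8) -> float:
    """Approximate sign(x) for x in [-1, 1] by composing f(x) = (3x - x^3)/2.

    f fixes +1 and -1 and pushes every nonzero point toward the nearer of
    the two, so repeated composition converges to sign(x); as with all
    polynomial sign approximations, the error concentrates near x = 0.
    """
    for _ in range(depth):
        x = 0.5 * (3.0 * x - x ** 3)
    return x

def relu_approx(x: float, depth: int = 8) -> float:
    """ReLU(x) = x * (1 + sign(x)) / 2, with sign replaced by its approximation."""
    return 0.5 * x * (1.0 + sign_approx(x, depth))

def max_approx(a: float, b: float, depth: int = 8) -> float:
    """max(a, b) = (a + b + |a - b|) / 2, with |t| = t * sign(t)."""
    d = a - b
    return 0.5 * (a + b + d * sign_approx(d, depth))

print(relu_approx(0.3), relu_approx(-0.3))   # ~0.3, ~0.0
print(max_approx(0.25, -0.5))                # ~0.25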

Bibliographic Details
Main Authors: XU, Guowen; HAN, Xingshuo; XU, Shengmin; ZHANG, Tianwei; LI, Hongwei; HUANG, Xinyi; DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: federated learning; polynomial approximation; privacy protection; information security
DOI: 10.1109/TDSC.2022.3218793
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/8397
https://ink.library.smu.edu.sg/context/sis_research/article/9400/viewcontent/2207.04620__1_.pdf
Institution: Singapore Management University