Lightweight privacy-preserving GAN framework for model training and image synthesis
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/7247
Institution: Singapore Management University
Summary: Generative adversarial networks (GANs) perform well at data generation and are widely used in image synthesis. Outsourcing GAN training to a cloud platform is a popular way to save local computation resources and improve efficiency, but it still raises privacy leakage concerns: (1) sensitive information in the training dataset may be disclosed in the cloud; (2) the trained model may reveal the privacy of the training samples, since it extracts characteristics from the data. In this paper, we propose a lightweight privacy-preserving GAN framework (LP-GAN) for model training and image synthesis based on a secret sharing scheme. Specifically, we design a series of efficient secure interactive protocols for the different layers (convolution, batch normalization, ReLU, Sigmoid) of the neural networks (NNs) used in GANs. Our protocols are scalable and can be used to build secure training or inference tasks for other NN-based applications. We utilize edge computing to reduce latency, and all protocols are executed collaboratively on two edge servers. Compared with existing schemes, the proposed solution greatly improves efficiency, reduces communication overhead, and guarantees privacy. We prove the correctness and security of LP-GAN by theoretical analysis. Extensive experiments on different real-world datasets demonstrate the effectiveness, accuracy, and efficiency of our scheme.
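The summary describes evaluating GAN layers collaboratively on two edge servers via secret sharing. The sketch below illustrates only the basic primitive behind such two-server designs: 2-out-of-2 additive secret sharing of fixed-point data, with a public linear layer evaluated locally on each share. The ring size, scaling factor, and helper names (encode, share, reconstruct) are illustrative assumptions; LP-GAN's actual interactive protocols for convolution, batch normalization, ReLU, and Sigmoid are not reproduced here.

```python
# Minimal sketch of 2-out-of-2 additive secret sharing over a fixed-point ring.
# Illustration only; parameters and helpers are assumptions, not the paper's protocols.
import numpy as np

RING = 2 ** 32    # shares live in Z_{2^32}
SCALE = 2 ** 12   # fixed-point scaling factor (assumed)

def encode(x):
    """Encode real-valued data as fixed-point ring elements."""
    return np.round(x * SCALE).astype(np.int64) % RING

def decode(x):
    """Decode ring elements back to real values (one SCALE factor removed)."""
    signed = np.where(x >= RING // 2, x - RING, x)
    return signed.astype(np.float64) / SCALE

def share(x):
    """Split encoded data into two additive shares, one per edge server."""
    r = np.random.randint(0, RING, size=x.shape, dtype=np.int64)
    return r, (x - r) % RING

def reconstruct(s0, s1):
    """Recombine the shares held by the two edge servers."""
    return (s0 + s1) % RING

# The data owner secret-shares an input before outsourcing it.
image = np.random.rand(4, 4)
s0, s1 = share(encode(image))

# A linear layer with public weights needs no interaction:
# W @ (s0 + s1) = W @ s0 + W @ s1, so each server works on its own share.
W_real = np.random.rand(4, 4) * 0.1
W = encode(W_real)
out0 = (W @ s0) % RING   # computed by edge server 0
out1 = (W @ s1) % RING   # computed by edge server 1

# Only the output owner reconstructs; the extra division by SCALE removes the
# second fixed-point factor introduced by the multiplication (truncation).
result = decode(reconstruct(out0, out1)) / SCALE
print(np.allclose(result, W_real @ image, atol=1e-2))
```

Nonlinear layers such as ReLU and Sigmoid cannot be evaluated share-locally in this way; they require the kind of interactive two-server protocols that the paper designs.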