Differential privacy protection over deep learning: An investigation of its impacted factors
Deep learning (DL) has been widely applied to achieve promising results in many fields, but it still raises various privacy concerns and issues. Applying differential privacy (DP) to DL models is an effective way to ensure privacy-preserving training and classification. In this paper, we revisit the...
Saved in:
Main Authors: LIN, Ying; BAO, Ling-Yan; LI, Ze-Minghui; SI, Shu-Sheng; CHU, Chao-Hsien
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Subjects: Differential privacy; Privacy preserving; Deep learning; Stochastic gradient descent (SGD); Databases and Information Systems; Information Security
Online Access: https://ink.library.smu.edu.sg/sis_research/5402
https://ink.library.smu.edu.sg/context/sis_research/article/6405/viewcontent/DifferentialPrivacy_av_2020.pdf
Institution: Singapore Management University
id |
sg-smu-ink.sis_research-6405 |
---|---|
record_format |
dspace |
spelling |
sg-smu-ink.sis_research-6405 2020-12-10T03:05:11Z
Title: Differential privacy protection over deep learning: An investigation of its impacted factors
Authors: LIN, Ying; BAO, Ling-Yan; LI, Ze-Minghui; SI, Shu-Sheng; CHU, Chao-Hsien
Abstract: Deep learning (DL) has been widely applied to achieve promising results in many fields, but it still raises various privacy concerns and issues. Applying differential privacy (DP) to DL models is an effective way to ensure privacy-preserving training and classification. In this paper, we revisit the DP stochastic gradient descent (DP-SGD) method, which has been used by several algorithms and systems and has achieved good privacy protection. However, several factors, such as the sequence of adding noise and the models used, may impact its performance to varying degrees. We empirically show that adding noise first and clipping second not only achieves significantly higher accuracy but also accelerates convergence. Rigorous experiments have been conducted on three different datasets to train two popular DL models, the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) network. For the CNN, the accuracy rate is increased by 3%, 8% and 10% on average for the respective datasets, and the loss value is reduced by 18%, 14% and 22% on average. For the LSTM, the accuracy rate is increased by 18%, 13% and 12% on average, and the loss value is reduced by 55%, 25% and 23% on average. Meanwhile, we have compared the performance of our proposed method with a state-of-the-art SGD-based technique. The results show that, given a reasonable clipping threshold, the proposed method not only performs better but also achieves the desired privacy protection. The proposed alternative can be applied to many existing privacy-preserving solutions.
Published: 2020-12-01T08:00:00Z, text, application/pdf
URL: https://ink.library.smu.edu.sg/sis_research/5402
DOI: info:doi/10.1016/j.cose.2020.102061
Full text: https://ink.library.smu.edu.sg/context/sis_research/article/6405/viewcontent/DifferentialPrivacy_av_2020.pdf
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Language: eng
Publisher: Institutional Knowledge at Singapore Management University
Subjects: Differential privacy; Privacy preserving; Deep learning; Stochastic gradient descent (SGD); Databases and Information Systems; Information Security
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
Differential privacy; Privacy preserving; Deep learning; Stochastic gradient descent (SGD); Databases and Information Systems; Information Security
description |
Deep learning (DL) has been widely applied to achieve promising results in many fields, but it still raises various privacy concerns and issues. Applying differential privacy (DP) to DL models is an effective way to ensure privacy-preserving training and classification. In this paper, we revisit the DP stochastic gradient descent (DP-SGD) method, which has been used by several algorithms and systems and has achieved good privacy protection. However, several factors, such as the sequence of adding noise and the models used, may impact its performance to varying degrees. We empirically show that adding noise first and clipping second not only achieves significantly higher accuracy but also accelerates convergence. Rigorous experiments have been conducted on three different datasets to train two popular DL models, the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) network. For the CNN, the accuracy rate is increased by 3%, 8% and 10% on average for the respective datasets, and the loss value is reduced by 18%, 14% and 22% on average. For the LSTM, the accuracy rate is increased by 18%, 13% and 12% on average, and the loss value is reduced by 55%, 25% and 23% on average. Meanwhile, we have compared the performance of our proposed method with a state-of-the-art SGD-based technique. The results show that, given a reasonable clipping threshold, the proposed method not only performs better but also achieves the desired privacy protection. The proposed alternative can be applied to many existing privacy-preserving solutions.
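The factor investigated in the abstract is the order of the two gradient transformations inside a DP-SGD step. The following minimal NumPy sketch, written for this record rather than taken from the paper, contrasts the standard clip-then-noise update with the noise-then-clip ordering the abstract advocates; the function names, the toy quadratic objective, and the choice of Gaussian noise with standard deviation sigma * C are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def clip(g, C):
    """Rescale gradient g so that its L2 norm is at most C."""
    norm = np.linalg.norm(g)
    return g * min(1.0, C / max(norm, 1e-12))

def step_clip_then_noise(g, C, sigma):
    """Standard DP-SGD ordering: clip the gradient, then add Gaussian noise."""
    return clip(g, C) + rng.normal(0.0, sigma * C, size=g.shape)

def step_noise_then_clip(g, C, sigma):
    """Reordered variant studied in the paper: add noise first, clip second,
    so the final update norm is also bounded by C."""
    return clip(g + rng.normal(0.0, sigma * C, size=g.shape), C)

# Toy objective f(w) = 0.5 * ||w||^2, whose gradient is simply w.
C, sigma, lr = 1.0, 0.5, 0.1
for step_fn in (step_clip_then_noise, step_noise_then_clip):
    w = np.full(4, 5.0)
    for _ in range(200):
        w = w - lr * step_fn(w, C, sigma)
    print(step_fn.__name__, "->", np.round(w, 3))
```

Note that in full DP-SGD the clipping is applied to each per-example gradient before averaging over the minibatch, and the privacy guarantee depends on how the noise is calibrated; the single-gradient form above is only meant to make the operation order visible.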
format |
text |
author |
LIN, Ying; BAO, Ling-Yan; LI, Ze-Minghui; SI, Shu-Sheng; CHU, Chao-Hsien
author_sort |
LIN, Ying |
title |
Differential privacy protection over deep learning: An investigation of its impacted factors |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2020 |
url |
https://ink.library.smu.edu.sg/sis_research/5402 https://ink.library.smu.edu.sg/context/sis_research/article/6405/viewcontent/DifferentialPrivacy_av_2020.pdf |