Secure and verifiable inference in deep neural networks

Outsourced inference services have enormously promoted the popularity of deep learning and helped users customize a range of personalized applications. However, they also entail a variety of security and privacy issues introduced by untrusted service providers. In particular, a malicious adversary may violate user privacy during the inference process or, worse, return incorrect results to the client by compromising the integrity of the outsourced model. To address these problems, we propose SecureDL to protect the model's integrity and the user's privacy in the Deep Neural Network (DNN) inference process. In SecureDL, we first transform the complicated non-linear activation functions of DNNs into low-degree polynomials. We then give a novel method to generate sensitive-samples, which can verify the integrity of a model's parameters outsourced to the server with high accuracy. Finally, we exploit Leveled Homomorphic Encryption (LHE) to achieve privacy-preserving inference. We show that our sensitive-samples are indeed very sensitive to model changes, such that even a small change in the parameters is reflected in the model outputs. Through experiments on real data and against different types of attacks, we demonstrate the superior performance of SecureDL in terms of detection accuracy, inference accuracy, and computation and communication overheads.
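The first step, replacing non-linear activations with low-degree polynomials, is what makes the network evaluable under homomorphic encryption, which supports only additions and multiplications. The abstract does not spell out the fitting procedure, but a minimal sketch of one standard approach, a least-squares polynomial fit to ReLU over a bounded interval, could look like this (the interval and degree are illustrative assumptions, not the paper's choices):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Least-squares fit of a degree-2 polynomial to ReLU on [-4, 4].
# Inputs must be kept (e.g., by normalization) inside this interval,
# since the polynomial diverges from ReLU outside it.
xs = np.linspace(-4.0, 4.0, 1000)
coeffs = np.polyfit(xs, relu(xs), deg=2)

def poly_act(x):
    # HE-friendly surrogate activation: only additions/multiplications.
    return np.polyval(coeffs, x)

# Worst-case approximation error on the fitting interval.
print(np.max(np.abs(poly_act(xs) - relu(xs))))
```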
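Sensitive-samples are inputs crafted so that the model's output reacts strongly to any tampering with the weights; the client precomputes the model's outputs on these inputs locally, and a mismatch with the server's answers exposes an integrity violation. As a hedged illustration (the paper's exact objective and constraints may differ), one way to find such an input is gradient ascent on the norm of the output's gradient with respect to the parameters:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the outsourced DNN.
model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4))

def generate_sensitive_sample(model, dim=8, steps=200, lr=0.1):
    """Search for an input whose output is maximally sensitive to the
    weights, by gradient ascent on ||d(output)/dW||^2."""
    x = torch.randn(1, dim, requires_grad=True)
    params = list(model.parameters())
    for _ in range(steps):
        y = model(x).sum()
        # Gradient of the (summed) output w.r.t. every weight tensor;
        # create_graph=True lets us differentiate this quantity w.r.t. x.
        grads = torch.autograd.grad(y, params, create_graph=True)
        sensitivity = sum(g.pow(2).sum() for g in grads)
        g_x, = torch.autograd.grad(sensitivity, x)
        with torch.no_grad():
            x += lr * g_x  # ascend: make the output more weight-sensitive
    return x.detach()

# Client keeps model(x_star) as the reference output for verification.
x_star = generate_sensitive_sample(model)
```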
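With the activations replaced by polynomials, the whole forward pass consists of additions and multiplications and can run directly on LHE ciphertexts: the client encrypts its input, the server evaluates the network on the ciphertext, and only the client can decrypt the result. The paper does not name a library here; as an assumed illustration, a single polynomially-activated layer over the CKKS scheme via the open-source TenSEAL library might look as follows (the encryption parameters are generic defaults, not the paper's):

```python
import tenseal as ts

# Client side: set up a leveled CKKS context (holds the secret key).
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

x = [0.5, -1.2, 3.0, 0.7]          # private input
enc_x = ts.ckks_vector(ctx, x)     # encrypted before upload

# Server side: plaintext weights, encrypted data.
W = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.4], [-0.2, 0.1]]  # 4x2 layer
b = [0.05, -0.03]

enc_h = enc_x.matmul(W) + b   # linear layer on the ciphertext
enc_h = enc_h * enc_h         # x^2: a degree-2 polynomial activation

# Client side: only the secret-key holder can read the prediction.
print(enc_h.decrypt())
```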

Bibliographic Details
Main Authors: XU, Guowen, LI, Hongwei, REN, Hao, SUN, Jianfei, XU, Shengmin, NING, Jianting, YANG, Haoming, YANG, Kan, DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Collection: Research Collection School of Computing and Information Systems
DOI: 10.1145/3427228.3427232
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Subjects: Deep learning; Privacy protection; Verifiable inference; Network security; Information security
Online Access:https://ink.library.smu.edu.sg/sis_research/5910
https://ink.library.smu.edu.sg/context/sis_research/article/6913/viewcontent/3427228.3427232.pdf
Institution: Singapore Management University