Data security issues in deep learning: attacks, countermeasures, and opportunities
Benefiting from advances in algorithms, massive data, and powerful computing resources, deep learning has been explored in a wide variety of fields and has produced unparalleled performance. It plays a vital role in daily applications and is subtly changing the rules, habits, and behaviors of society...
Main Authors: | XU, Guowen; LI, Hongwei; REN, Hao; YANG, Kan; DENG, Robert H. |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2019 |
Subjects: | Integrity attacks; Learning strategy; Potential threats; Prediction process; Privacy preserving; Information Security |
Online Access: | https://ink.library.smu.edu.sg/sis_research/4673 https://ink.library.smu.edu.sg/context/sis_research/article/5676/viewcontent/DataSecurityIssues_DeepLearning_av.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-5676 |
---|---|
record_format | dspace |
spelling | sg-smu-ink.sis_research-5676 2020-07-03T03:29:44Z Data security issues in deep learning: attacks, countermeasures, and opportunities XU, Guowen; LI, Hongwei; REN, Hao; YANG, Kan; DENG, Robert H. Benefiting from advances in algorithms, massive data, and powerful computing resources, deep learning has been explored in a wide variety of fields and has produced unparalleled performance. It plays a vital role in daily applications and is subtly changing the rules, habits, and behaviors of society. Inevitably, however, data-driven learning strategies introduce potential security and privacy threats and raise public and government concerns about deploying deep learning in the real world. In this article, we focus on data security issues in deep learning. We first investigate the potential threats in this area, then present the latest countermeasures based on various underlying technologies, and discuss the associated challenges and research opportunities on both offense and defense. We then propose SecureNet, the first verifiable and privacy-preserving prediction protocol that protects model integrity and user privacy in deep neural networks (DNNs), allowing it to resist a variety of security and privacy threats during the prediction process. We simulate SecureNet on a real dataset, and the experimental results show its superior performance in detecting various integrity attacks against DNN models. 2019-11-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/4673 info:doi/10.1109/MCOM.001.1900091 https://ink.library.smu.edu.sg/context/sis_research/article/5676/viewcontent/DataSecurityIssues_DeepLearning_av.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Integrity attacks; Learning strategy; Potential threats; Prediction process; Privacy preserving; Information Security |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | Integrity attacks; Learning strategy; Potential threats; Prediction process; Privacy preserving; Information Security |
spellingShingle | Integrity attacks; Learning strategy; Potential threats; Prediction process; Privacy preserving; Information Security; XU, Guowen; LI, Hongwei; REN, Hao; YANG, Kan; DENG, Robert H.; Data security issues in deep learning: attacks, countermeasures, and opportunities |
description | Benefiting from advances in algorithms, massive data, and powerful computing resources, deep learning has been explored in a wide variety of fields and has produced unparalleled performance. It plays a vital role in daily applications and is subtly changing the rules, habits, and behaviors of society. Inevitably, however, data-driven learning strategies introduce potential security and privacy threats and raise public and government concerns about deploying deep learning in the real world. In this article, we focus on data security issues in deep learning. We first investigate the potential threats in this area, then present the latest countermeasures based on various underlying technologies, and discuss the associated challenges and research opportunities on both offense and defense. We then propose SecureNet, the first verifiable and privacy-preserving prediction protocol that protects model integrity and user privacy in deep neural networks (DNNs), allowing it to resist a variety of security and privacy threats during the prediction process. We simulate SecureNet on a real dataset, and the experimental results show its superior performance in detecting various integrity attacks against DNN models. |
format | text |
author | XU, Guowen; LI, Hongwei; REN, Hao; YANG, Kan; DENG, Robert H. |
author_facet | XU, Guowen; LI, Hongwei; REN, Hao; YANG, Kan; DENG, Robert H. |
author_sort | XU, Guowen |
title | Data security issues in deep learning: attacks, countermeasures, and opportunities |
title_short | Data security issues in deep learning: attacks, countermeasures, and opportunities |
title_full | Data security issues in deep learning: attacks, countermeasures, and opportunities |
title_fullStr | Data security issues in deep learning: attacks, countermeasures, and opportunities |
title_full_unstemmed | Data security issues in deep learning: attacks, countermeasures, and opportunities |
title_sort | data security issues in deep learning: attacks, countermeasures, and opportunities |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2019 |
url | https://ink.library.smu.edu.sg/sis_research/4673 https://ink.library.smu.edu.sg/context/sis_research/article/5676/viewcontent/DataSecurityIssues_DeepLearning_av.pdf |
_version_ | 1770574960845127680 |