Data security issues in deep learning: attacks, countermeasures, and opportunities

Bibliographic Details
Main Authors: XU, Guowen, LI, Hongwei, REN, Hao, YANG, Kan, DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/4673
https://ink.library.smu.edu.sg/context/sis_research/article/5676/viewcontent/DataSecurityIssues_DeepLearning_av.pdf
Institution: Singapore Management University
Description
Summary: Benefiting from advances in algorithms, massive data, and powerful computing resources, deep learning has been explored in a wide variety of fields and has produced unparalleled performance. It plays a vital role in daily applications and is also subtly changing the rules, habits, and behaviors of society. Inevitably, however, data-based learning strategies raise potential security and privacy threats, arousing public and government concerns about deploying deep learning in the real world. In this article, we focus on data security issues in deep learning. We first investigate the potential threats in this area, and then present the latest countermeasures based on various underlying technologies, discussing the challenges and research opportunities on both offense and defense. We then propose SecureNet, the first verifiable and privacy-preserving prediction protocol to protect model integrity and user privacy in DNNs; it significantly resists various security and privacy threats during the prediction process. We simulate SecureNet on a real dataset, and the experimental results show its superior performance in detecting various integrity attacks against DNN models.
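
Note: this record summarizes SecureNet only at a high level and does not give its construction. As a minimal generic sketch of the kind of privacy-preserving prediction the abstract refers to, the Python fragment below additively secret-shares a client's input between two non-colluding servers so that a linear layer can be evaluated without either server seeing the input. The two-server setting, the prime P, and all function names are illustrative assumptions, not SecureNet's actual protocol.

    # A minimal sketch of one generic ingredient behind privacy-preserving
    # prediction: additive secret sharing of a client's input between two
    # non-colluding servers, which then evaluate a linear layer W @ x
    # without either server seeing x. This is a textbook illustration;
    # it is NOT SecureNet's construction, and all names are hypothetical.
    import random

    P = 2**61 - 1  # a Mersenne prime; all arithmetic is over the field Z_P

    def share(x):
        """Split a vector x into two additive shares: x = (s0 + s1) mod P."""
        s0 = [random.randrange(P) for _ in x]
        s1 = [(xi - si) % P for xi, si in zip(x, s0)]
        return s0, s1

    def linear_layer(W, v):
        """Evaluate W @ v mod P (each server runs this on its own share)."""
        return [sum(wij * vj for wij, vj in zip(row, v)) % P for row in W]

    def reconstruct(y0, y1):
        """Client recombines the servers' output shares: y = (y0 + y1) mod P."""
        return [(a + b) % P for a, b in zip(y0, y1)]

    x = [3, 1, 4, 1]                  # client's private feature vector
    W = [[2, 0, 1, 1], [1, 3, 0, 2]]  # model weights held by the servers
    x0, x1 = share(x)                 # each server sees one random-looking share

    y0 = linear_layer(W, x0)          # server 0's computation
    y1 = linear_layer(W, x1)          # server 1's computation

    # By linearity, W@(x0 + x1) = W@x0 + W@x1, so the client recovers W@x.
    assert reconstruct(y0, y1) == linear_layer(W, x)
    print(reconstruct(y0, y1))        # [11, 8]

Nonlinear activations and the integrity (verifiability) guarantees the abstract claims require additional machinery beyond this linear step; for those, consult the full paper at the Online Access links above.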