Protecting neural networks from adversarial attacks

Deep learning has become very popular in recent years, and naturally there are rising concerns about protecting the Intellectual Property (IP) rights of these models. Building and training deep learning models, such as Convolutional Neural Networks (CNNs), requires in-depth technical expertise, computational resources, large amounts of data, and time; hence the motivation to prevent the theft of such valuable models. Two robust frameworks exist for this purpose, namely watermarking and locking. Watermarking allows validation of the original ownership of a model, whereas locking aims to encrypt the model so that only authorized access produces accurate results. This report presents a workflow that applies both watermarking and locking techniques to various image classification models and shows how the two techniques can work hand in hand without compromising a model's performance.
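As a rough illustration of the trigger-set style of model watermarking mentioned in the abstract (not the specific scheme used in this report), the sketch below checks how often a model reproduces secretly embedded labels on a held-out trigger set; the model, data shapes, and all names are illustrative placeholders.

# Hedged sketch only: a trigger-set watermark check, not the report's actual scheme.
# The model, trigger data, and all names below are illustrative placeholders.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):  # stand-in image classifier
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def watermark_match_rate(model, trigger_x, trigger_y):
    # Fraction of secret trigger inputs assigned the labels embedded during training.
    model.eval()
    with torch.no_grad():
        preds = model(trigger_x).argmax(dim=1)
    return (preds == trigger_y).float().mean().item()

model = TinyCNN()
trigger_x = torch.randn(16, 3, 32, 32)   # secret trigger images (random here)
trigger_y = torch.randint(0, 10, (16,))  # watermark labels embedded during training
print(f"watermark match rate: {watermark_match_rate(model, trigger_x, trigger_y):.2f}")
# A genuinely watermarked model scores near 1.0 on its own trigger set, while an
# unrelated model scores near chance; that gap is the basis for an ownership claim.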


Bibliographic Details
Main Author: Lim, Xin Yi
Other Authors: Anupam Chattopadhyay
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Neural networks
Online Access: https://hdl.handle.net/10356/175191
School: School of Computer Science and Engineering
Supervisor Contact: anupam@ntu.edu.sg
Degree: Bachelor's degree
Project Code: SCSE23-0259
Date Deposited: 2024-04-19
Citation: Lim, X. Y. (2024). Protecting neural networks from adversarial attacks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175191