Investigating vulnerability of watermarking neural network
A neural network with great performance often incurs a high cost to train. The data used to train a neural network can be confidential or require substantial additional processing. Hence, a trained neural network is regarded as intellectual property. To protect a neural network from infringement of i...
Main Author: | Chua, Viroy Sheng Yang |
---|---|
Other Authors: | Anupam Chattopadhyay |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2020 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
Online Access: | https://hdl.handle.net/10356/138234 |
Tags: |
|
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-138234 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-1382342020-04-29T06:46:18Z Investigating vulnerability of watermarking neural network Chua, Viroy Sheng Yang Anupam Chattopadhyay School of Computer Science and Engineering anupam@ntu.edu.sg Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence A neural network with great performance often incurs a high cost to train. The data used to train a neural network can be confidential or require substantial additional processing. Hence, a trained neural network is regarded as intellectual property. To protect a neural network from intellectual property infringement, the idea of watermarking a neural network has been introduced. This project investigates the vulnerability of a state-of-the-art deep learning watermarking scheme. The project focuses on investigating the behavior of a backdoor-based watermarking scheme and then proposes two methods to remove the watermark using the concept of transfer learning. Method 1 retrains the last convolutional layer of a model, so that the newly trained layer can no longer represent the abstract features of a watermarked sample to the classifier. Method 2 uses the basic features learned by the early convolutional layers of the watermarked model to train a model with comparable performance. The given methods show that an adversary in the same domain as the owner of the watermarked model can remove the backdoor-based watermark and invalidate any potential claim on the model. The investigation and methods aim to identify the vulnerabilities of the backdoor-based watermark so that countermeasures can be developed to protect neural networks. Bachelor of Engineering (Computer Science) 2020-04-29T06:46:18Z 2020-04-29T06:46:18Z 2020 Final Year Project (FYP) https://hdl.handle.net/10356/138234 en application/pdf Nanyang Technological University |
institution |
Nanyang Technological University |
building |
NTU Library |
country |
Singapore |
collection |
DR-NTU |
language |
English |
topic |
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
spellingShingle |
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Chua, Viroy Sheng Yang Investigating vulnerability of watermarking neural network |
description |
A neural network with great performance often incurs a high cost to train. The data used to train a neural network can be confidential or require substantial additional processing. Hence, a trained neural network is regarded as intellectual property. To protect a neural network from intellectual property infringement, the idea of watermarking a neural network has been introduced. This project investigates the vulnerability of a state-of-the-art deep learning watermarking scheme. The project focuses on investigating the behavior of a backdoor-based watermarking scheme and then proposes two methods to remove the watermark using the concept of transfer learning. Method 1 retrains the last convolutional layer of a model, so that the newly trained layer can no longer represent the abstract features of a watermarked sample to the classifier. Method 2 uses the basic features learned by the early convolutional layers of the watermarked model to train a model with comparable performance. The given methods show that an adversary in the same domain as the owner of the watermarked model can remove the backdoor-based watermark and invalidate any potential claim on the model. The investigation and methods aim to identify the vulnerabilities of the backdoor-based watermark so that countermeasures can be developed to protect neural networks. |
author2 |
Anupam Chattopadhyay |
author_facet |
Anupam Chattopadhyay Chua, Viroy Sheng Yang |
format |
Final Year Project |
author |
Chua, Viroy Sheng Yang |
author_sort |
Chua, Viroy Sheng Yang |
title |
Investigating vulnerability of watermarking neural network |
title_short |
Investigating vulnerability of watermarking neural network |
title_full |
Investigating vulnerability of watermarking neural network |
title_fullStr |
Investigating vulnerability of watermarking neural network |
title_full_unstemmed |
Investigating vulnerability of watermarking neural network |
title_sort |
investigating vulnerability of watermarking neural network |
publisher |
Nanyang Technological University |
publishDate |
2020 |
url |
https://hdl.handle.net/10356/138234 |
_version_ |
1681058347857477632 |