Effects of incremental training on watermarked neural networks


Overview

Bibliographic Details
Main Author: Heng, Chuan Song
Other Authors: Anupam Chattopadhyay
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/167143
Description
Summary: Deep learning has achieved extraordinary results in many different areas, ranging from autonomous driving [1] and medical devices [2] to speech recognition and natural language processing [3]. Generating a high-performance neural network is costly in terms of time, computational resources, and expertise, making such models valuable intellectual property (IP). As a result, there has been notable growth in attention and investment in this machine learning paradigm. In recent years, watermarking methods have been developed to protect the Intellectual Property Rights (IPR) of neural networks, and many schemes have successfully prevented adversaries from stealing such models. However, little has been studied on how Incremental Training affects the persistence of watermarks in these watermarking schemes. This investigation aims to discover the effects of Incremental Training on existing watermarking schemes. Keywords: Intellectual Property Rights (IPR), Watermarking, Incremental Training