DeepDIST: a black-box anti-collusion framework for secure distribution of deep models

Due to the enormous computing and storage overhead required to obtain a well-trained Deep Neural Network (DNN) model, protecting the intellectual property of model owners is a pressing need. As the commercialization of deep models becomes increasingly popular, the pre-trained models delivered to users may be illegally copied, redistributed, or abused. In this paper, we propose DeepDIST, the first end-to-end secure DNN distribution framework in a black-box scenario. Specifically, our framework adopts a dual-level fingerprint (FP) mechanism to provide reliable ownership verification, and proposes two equivalent transformations that can resist collusion attacks, plus a newly designed similarity loss term to improve the security of the transformations. Unlike existing passive defense schemes that detect colluding participants, we introduce an active defense strategy, namely damaging the performance of the model after malicious collusion. Extensive experimental results show that DeepDIST maintains the accuracy of the host DNN after fingerprint embedding for traitor tracing, and is robust against several popular model modifications. Furthermore, the anti-collusion effect is evaluated on two typical classification tasks (10-class and 100-class), where DeepDIST drops the prediction accuracy of the colluded model to 10% and 1% (random guess), respectively.
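
The abstract describes functionally equivalent transformations that make collusion (e.g., parameter averaging across differently fingerprinted copies) self-destructive. The minimal NumPy sketch below is our own illustration of that general idea, not DeepDIST's actual transformation: permuting the hidden units of a two-layer network leaves each distributed copy functionally identical, yet averaging two differently permuted copies no longer matches the original model. The helper names (forward, permuted_copy) and the toy layer sizes are hypothetical.

# Toy illustration (not the authors' code) of an equivalent transformation
# that breaks under naive parameter-averaging collusion.
import numpy as np

rng = np.random.default_rng(0)

# A toy "pre-trained" two-layer network: y = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(64, 16))
W2 = rng.normal(size=(10, 64))

def forward(w1, w2, x):
    return w2 @ np.maximum(w1 @ x, 0.0)

def permuted_copy(w1, w2, seed):
    """Return a functionally equivalent copy by permuting hidden units."""
    p = np.random.default_rng(seed).permutation(w1.shape[0])
    return w1[p, :], w2[:, p]  # corresponds to (P @ W1, W2 @ P^T)

x = rng.normal(size=16)
copy_a = permuted_copy(W1, W2, seed=1)
copy_b = permuted_copy(W1, W2, seed=2)

# Each distributed copy behaves exactly like the original model...
assert np.allclose(forward(*copy_a, x), forward(W1, W2, x))
assert np.allclose(forward(*copy_b, x), forward(W1, W2, x))

# ...but averaging two copies mixes mismatched hidden-unit orderings,
# so the colluded model no longer reproduces the original output.
avg = ((copy_a[0] + copy_b[0]) / 2, (copy_a[1] + copy_b[1]) / 2)
print(np.linalg.norm(forward(*avg, x) - forward(W1, W2, x)))  # large deviation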

Bibliographic Details
Main Authors: Cheng, Hang, Li, Xibin, Wang, Huaxiong, Zhang, Xinpeng, Liu, Ximeng, Wang, Meiqing, Li, Fengyong
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2023
Subjects: Science::Mathematics; Deep Neural Networks; Anti-collusion
Online Access: https://hdl.handle.net/10356/171797
Institution: Nanyang Technological University
Published in: IEEE Transactions on Circuits and Systems for Video Technology (ISSN 1051-8215), 2023
DOI: 10.1109/TCSVT.2023.3284914 (https://dx.doi.org/10.1109/TCSVT.2023.3284914)
Scopus: 2-s2.0-85162685502
Citation: Cheng, H., Li, X., Wang, H., Zhang, X., Liu, X., Wang, M. & Li, F. (2023). DeepDIST: a black-box anti-collusion framework for secure distribution of deep models. IEEE Transactions on Circuits and Systems for Video Technology.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grant 62172098, Grant 62072109, and Grant 61702105; in part by the Natural Science Foundation of Fujian Province under Grant 2020J01497; and in part by the Education Research Project for Young and Middle-Aged Teachers of the Education Department of Fujian Province under Grant JAT200064.
Rights: © 2023 IEEE. All rights reserved.
Collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)