Securing edge deep neural network against input evasion and IP theft


Bibliographic Details
Main Author: Wang, Si
Other Authors: Chang Chip Hong
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/152267
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-152267
record_format dspace
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering::Integrated circuits
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
spellingShingle Engineering::Electrical and electronic engineering::Integrated circuits
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Wang, Si
Securing edge deep neural network against input evasion and IP theft
description Deep learning is a key driver that has put artificial intelligence (AI) on the radar screen for technology investment. A Deep Neural Network (DNN) automatically learns high-level features directly from raw data in a hierarchical manner, which eliminates the manual extraction of effective features required by traditional machine learning solutions. The ability to solve problems end-to-end enables a system to learn complex function mappings for ill-posed problems, with prediction accuracy often exceeding that of sophisticated statistical models and other machine learning methods. Computer vision is one specific domain in which DNNs have demonstrated this remarkable abstraction power. Unlike other mainstream classification approaches, DNNs can usually achieve better results with more data and larger models. Over the last decade, the model complexity and regularization mechanisms of DNNs have grown tremendously to overcome the performance plateau and improve generalization ability. The flourishing of the Internet of Things (IoT) has changed the way data are generated and curated. Consequently, DNN hardware accelerators, open-source AI model compilers and commercially available toolkits like Intel(R) OpenVINO(TM) have evolved to enable more user-centric deep learning applications to run on edge devices without being limited by network latency. This research is motivated by two major security threats to deep learning. One is the adversarial example, obtained by deliberately adding imperceptibly small perturbations to a benign input. Such input evasion can delude a well-trained classifier into making wrong decisions. Adversarial examples can be generated quickly at low cost. Their attack surface can also be extended beyond the software boundary and made more robust with high transferability across models. Existing countermeasures against adversarial examples are mainly designed and evaluated on software models of DNNs implemented with 32-bit floating-point arithmetic.
To support secure embedded intelligence, the defense should take the hardware optimizations and resource constraints of edge platforms into consideration. The other threat is Intellectual Property (IP) theft. As training a good DNN model requires a huge capital investment in manpower, time and physical resources, which may not be affordable or accessible to small corporations, the trained model is often a pricey proprietary asset of a business and is normally kept confidential. However, emerging model extraction attacks and reverse engineering techniques enable a DNN model to be stolen and used to build AI products of similar quality at low cost. To protect the interests and revenue of the model owner, a pragmatic solution that can detect a pirated AI chip without reverse engineering the DNN chip is required. A comprehensive review of DNN security has been conducted, highlighting the prevalent adversarial input generation methodologies and IP theft techniques, together with the corresponding countermeasures. Three original contributions are presented in this thesis, including two hardware-oriented approaches: a new lightweight in-situ adversarial input detector for edge DNNs, and a method for fingerprinting DNNs to attest model ownership.
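The "imperceptibly small perturbations" the abstract refers to are typically computed from the gradient of the loss with respect to the input, as in the fast gradient sign method (FGSM) of Goodfellow et al. The thesis's own detector is hardware-oriented and not reproduced here; the following is only a minimal pure-Python sketch on a hypothetical logistic-regression stand-in for a DNN, with illustrative weights, input and a deliberately large eps so the label flip is visible:

```python
import math

# Hypothetical toy classifier: logistic regression stands in for a trained DNN.
w = [2.0, -3.0, 1.5]   # fixed "model" weights (illustrative)
b = 0.1
x = [0.4, -0.2, 0.7]   # benign input, true label y = 1

def predict(x):
    """Probability of class 1 under the toy classifier."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def input_grad(x, y):
    """Gradient of the cross-entropy loss w.r.t. the *input*:
    for logistic regression, dL/dx_i = (p - y) * w_i."""
    p = predict(x)
    return [(p - y) * wi for wi in w]

def fgsm(x, y, eps):
    """One FGSM step: x' = x + eps * sign(dL/dx),
    the per-feature direction that increases the loss fastest."""
    return [xi + eps * math.copysign(1.0, gi)
            for xi, gi in zip(x, input_grad(x, y))]

x_adv = fgsm(x, y=1, eps=0.5)
print(predict(x))      # ~0.93: benign input classified as class 1
print(predict(x_adv))  # ~0.33: the perturbed input flips to class 0
```

Against a real DNN the same sign-of-gradient step is applied per pixel with a small eps (e.g. a few grey levels), which is why the perturbation stays imperceptible while still crossing the decision boundary.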
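The fingerprinting contribution attests ownership without reverse engineering the chip: the owner queries a suspect model as a black box with a pre-chosen fingerprint set and measures label agreement. The sketch below is a hypothetical illustration of that attestation idea only — the models, inputs and agreement values are made up, not the thesis's actual scheme:

```python
# Hypothetical black-box fingerprint check for model-ownership attestation.
def threshold_model(weights):
    """Stand-in 'DNN': a linear scorer returning a binary class label."""
    def model(x):
        return int(sum(w * v for w, v in zip(weights, x)) > 0)
    return model

owner  = threshold_model([1.0, -2.0, 0.5])
pirate = threshold_model([1.01, -1.98, 0.5])  # near-copy, e.g. fine-tuned
honest = threshold_model([-0.5, 1.0, 2.0])    # independently trained model

# In practice the fingerprint inputs would sit near the owner's decision
# boundary, so that only models derived from it reproduce the same labels.
fingerprints = [
    [0.1, 0.2, 0.3],
    [1.0, 0.4, -0.2],
    [-0.3, -0.1, 0.8],
    [0.5, 0.5, 0.5],
]

def match_rate(suspect):
    """Fraction of fingerprint inputs on which suspect agrees with owner."""
    return sum(suspect(x) == owner(x) for x in fingerprints) / len(fingerprints)

print(match_rate(pirate))  # 1.0  -> flagged as a likely derived copy
print(match_rate(honest))  # 0.25 -> consistent with independent training
```

A match rate above a chosen threshold flags the suspect chip as pirated; the design choice is that verification needs only query access, never the chip's internals.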
author2 Chang Chip Hong
author_facet Chang Chip Hong
Wang, Si
format Thesis-Doctor of Philosophy
author Wang, Si
author_sort Wang, Si
title Securing edge deep neural network against input evasion and IP theft
title_short Securing edge deep neural network against input evasion and IP theft
title_full Securing edge deep neural network against input evasion and IP theft
title_fullStr Securing edge deep neural network against input evasion and IP theft
title_full_unstemmed Securing edge deep neural network against input evasion and IP theft
title_sort securing edge deep neural network against input evasion and ip theft
publisher Nanyang Technological University
publishDate 2021
url https://hdl.handle.net/10356/152267
_version_ 1772825940087275520
spelling sg-ntu-dr.10356-1522672023-07-04T17:06:09Z Securing edge deep neural network against input evasion and IP theft Wang, Si Chang Chip Hong School of Electrical and Electronic Engineering VIRTUS, IC Design Centre of Excellence ECHChang@ntu.edu.sg Engineering::Electrical and electronic engineering::Integrated circuits Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision Doctor of Philosophy 2021-07-28T05:28:04Z 2021-07-28T05:28:04Z 2021 Thesis-Doctor of Philosophy Wang, S. (2021). Securing edge deep neural network against input evasion and IP theft. Doctoral thesis, Nanyang Technological University, Singapore.
https://hdl.handle.net/10356/152267 https://hdl.handle.net/10356/152267 10.32657/10356/152267 en MOE-2015-T2-2-013 CHFA-GC1-AW01 This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). application/pdf Nanyang Technological University