Defences and threats in safe deep learning

Deep learning systems are gaining wider adoption due to their remarkable performance on computer vision and natural language tasks. As their applications reach into high-stakes and mission-critical areas such as self-driving vehicles, the safety of these systems becomes paramount. A lapse in safety in deep learning models could result in loss of life and erode society's trust, marring the progress made by technological advances in this field. This thesis addresses current threats to the safety of deep learning models and defences to counter these threats. Two of the most pressing safety concerns are adversarial examples and data poisoning, in which malicious actors can subvert deep learning systems by targeting a model and its training dataset, respectively. In this thesis, I make several novel contributions to the fight against these threats. Firstly, I introduce a new defence paradigm against adversarial examples that can boost a model's robustness without requiring large computational resources. Secondly, I propose an approach to transfer resistance against adversarial examples from one model to other models, which may differ in architecture or task, enhancing safety in scenarios where data or computational resources are limited. Thirdly, I present a comprehensive defence pipeline to counter data poisoning by identifying and then neutralizing the poison in a trained model. Finally, I uncover a new data poisoning vulnerability in text-based deep learning models to raise the alarm on the importance and subtlety of such threats.
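For readers unfamiliar with the adversarial-example threat the abstract refers to, the following is a minimal illustrative sketch, not taken from the thesis, of how such an input can be crafted with the standard fast gradient sign method (FGSM, Goodfellow et al., 2015). It assumes a hypothetical PyTorch classifier `model` returning logits, an input batch `x` scaled to [0, 1], and integer labels `y`; the perturbation budget `epsilon` is an assumed value.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft adversarial examples with the fast gradient sign method.

        Perturbs the input in the direction that increases the classifier's
        loss, bounded by `epsilon` in the L-infinity norm.
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the sign of the input gradient, then clamp back to the
        # valid pixel range so the result is still a legal image.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

A perturbation this small is typically imperceptible to a human yet can flip the model's prediction, which is why the thesis treats adversarial examples as a first-order safety threat.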

Bibliographic Details
Main Author: Chan, Alvin Guo Wei
Other Authors: Ong, Yew Soon (School of Computer Science and Engineering; ASYSOng@ntu.edu.sg)
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2021
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access:https://hdl.handle.net/10356/152976
Citation: Chan, A. G. W. (2021). Defences and threats in safe deep learning. Doctoral thesis, Nanyang Technological University, Singapore.
DOI: 10.32657/10356/152976
License: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Institution: Nanyang Technological University