Constructing adversarial samples against deep learning-based sensing system (part II)
Deep learning models have become increasingly popular and are now crucial components of the devices we use every day. Despite their effectiveness, they are not invincible. Adversarial examples, initially discovered in and applied to computer vision systems, are now becoming a noticeable issue for speech-processing classifiers such as DeepSpeech as well.

Adversarial examples are input samples crafted by adding perturbations that are imperceptible to humans; these perturbations cause the model to misclassify the input.

In late 2017, an attack was shown to be effective against the Speech Commands classification model. Speech commands are used frequently in many applications, such as Google Assistant, Amazon Alexa and Apple's Siri, so adversarial examples produced by this attack could have real-world consequences.

While previous work on defending against these malicious attacks has investigated gradient masking to hide model information and audio pre-processing to reduce or distort adversarial noise, this project explores simple pink noise injection at different loudness levels to detect adversarial examples. This noise-injection technique does not require retraining or modifying the model, and it can also be transferred from the model used in this project to other DeepSpeech models.
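The abstract describes the defence only at a high level. The sketch below is one way such a detector could look, assuming a generic `classify` callable that maps a waveform to a label; the SNR levels, the agreement threshold, and the pink-noise generation method are illustrative placeholders rather than the project's actual implementation.

```python
# Minimal sketch of a pink-noise-injection detector (illustrative only).
# `classify`, the SNR levels, and the agreement threshold are assumptions,
# not details taken from the project itself.
import numpy as np

def pink_noise(n_samples, rng=None):
    """Generate approximately 1/f (pink) noise by shaping white noise in the frequency domain."""
    rng = np.random.default_rng(rng)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)               # 1/f power spectrum -> 1/sqrt(f) amplitude
    pink = np.fft.irfft(spectrum, n=n_samples)
    return pink / np.max(np.abs(pink))       # normalise to [-1, 1]

def inject_at_snr(audio, noise, snr_db):
    """Mix noise into the audio at the requested signal-to-noise ratio (in dB)."""
    audio_power = np.mean(audio ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(audio_power / (noise_power * 10 ** (snr_db / 10)))
    return audio + scale * noise

def looks_adversarial(audio, classify, snr_levels_db=(20, 15, 10), min_agreement=1.0):
    """Flag the input if the predicted label changes once pink noise is injected.

    `classify` is any callable mapping a 1-D waveform array to a label.
    The input is flagged when the noisy predictions disagree with the
    clean prediction more often than `min_agreement` allows.
    """
    clean_label = classify(audio)
    noise = pink_noise(len(audio))
    agreements = [
        classify(inject_at_snr(audio, noise, snr)) == clean_label
        for snr in snr_levels_db
    ]
    return float(np.mean(agreements)) < min_agreement   # True -> suspected adversarial
```

One plausible reading of the approach, consistent with the abstract, is that carefully optimised adversarial perturbations tend to be fragile, so mild broadband noise is more likely to flip an adversarial prediction than a benign one; the actual decision rule used in the project may differ.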
Saved in:
Main Author: | Lin, Beiyi |
---|---|
Other Authors: | Tan Rui |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2020 |
Subjects: | Engineering::Computer science and engineering |
Online Access: | https://hdl.handle.net/10356/137412 |
Institution: | Nanyang Technological University |
Language: | English |
id | sg-ntu-dr.10356-137412 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-1374122020-03-24T07:34:15Z Constructing adversarial samples against deep learning-based sensing system (part II) Lin, Beiyi Tan Rui School of Computer Science and Engineering tanrui@ntu.edu.sg Engineering::Computer science and engineering Deep learning models have become increasingly popular and are now crucial components of the devices we use every day. Despite their effectiveness, they are not invincible. Adversarial examples, initially discovered in and applied to computer vision systems, are now becoming a noticeable issue for speech-processing classifiers such as DeepSpeech as well. Adversarial examples are input samples crafted by adding perturbations that are imperceptible to humans; these perturbations cause the model to misclassify the input. In late 2017, an attack was shown to be effective against the Speech Commands classification model. Speech commands are used frequently in many applications, such as Google Assistant, Amazon Alexa and Apple's Siri, so adversarial examples produced by this attack could have real-world consequences. While previous work on defending against these malicious attacks has investigated gradient masking to hide model information and audio pre-processing to reduce or distort adversarial noise, this project explores simple pink noise injection at different loudness levels to detect adversarial examples. This noise-injection technique does not require retraining or modifying the model, and it can also be transferred from the model used in this project to other DeepSpeech models. Bachelor of Engineering (Computer Science) 2020-03-24T07:34:15Z 2020-03-24T07:34:15Z 2020 Final Year Project (FYP) https://hdl.handle.net/10356/137412 en application/pdf Nanyang Technological University |
institution | Nanyang Technological University |
building | NTU Library |
country | Singapore |
collection | DR-NTU |
language | English |
topic | Engineering::Computer science and engineering |
spellingShingle | Engineering::Computer science and engineering Lin, Beiyi Constructing adversarial samples against deep learning-based sensing system (part II) |
description | Deep learning models have become increasingly popular and are now crucial components of the devices we use every day. Despite their effectiveness, they are not invincible. Adversarial examples, initially discovered in and applied to computer vision systems, are now becoming a noticeable issue for speech-processing classifiers such as DeepSpeech as well. Adversarial examples are input samples crafted by adding perturbations that are imperceptible to humans; these perturbations cause the model to misclassify the input. In late 2017, an attack was shown to be effective against the Speech Commands classification model. Speech commands are used frequently in many applications, such as Google Assistant, Amazon Alexa and Apple's Siri, so adversarial examples produced by this attack could have real-world consequences. While previous work on defending against these malicious attacks has investigated gradient masking to hide model information and audio pre-processing to reduce or distort adversarial noise, this project explores simple pink noise injection at different loudness levels to detect adversarial examples. This noise-injection technique does not require retraining or modifying the model, and it can also be transferred from the model used in this project to other DeepSpeech models. |
author2 | Tan Rui |
author_facet | Tan Rui Lin, Beiyi |
format | Final Year Project |
author | Lin, Beiyi |
author_sort | Lin, Beiyi |
title | Constructing adversarial samples against deep learning-based sensing system (part II) |
title_short | Constructing adversarial samples against deep learning-based sensing system (part II) |
title_full | Constructing adversarial samples against deep learning-based sensing system (part II) |
title_fullStr | Constructing adversarial samples against deep learning-based sensing system (part II) |
title_full_unstemmed | Constructing adversarial samples against deep learning-based sensing system (part II) |
title_sort | constructing adversarial samples against deep learning-based sensing system (part ii) |
publisher | Nanyang Technological University |
publishDate | 2020 |
url | https://hdl.handle.net/10356/137412 |
_version_ | 1681048078526709760 |