Adversarial attacks on RNN-based deep learning systems

Bibliographic Details
Main Author: Loi, Chii Lek
Other Authors: Liu Yang
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2020
Subjects:
Online Access:https://hdl.handle.net/10356/137926
Institution: Nanyang Technological University
Description
Summary: Automatic Speech Recognition (ASR) systems have been growing in prevalence alongside advances in deep learning. Built into many Intelligent Voice Control (IVC) systems such as Alexa, Siri and Google Assistant, ASR has become an attractive target for adversarial attacks. The objective of this research project is to create a black-box, over-the-air (OTA) attack system that can mutate an audio sample into an adversarial form with imperceptible differences, such that it is interpreted as the targeted word by the ASR. In this paper, we demonstrate the feasibility and effectiveness of such an attack system in generating perturbations for the DeepSpeech ASR.