Adversarial attacks and defenses in natural language processing
Main Author: Dong, Xinshuai
Other Authors: Luu Anh Tuan
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/159029
Institution: Nanyang Technological University
Language: English
Record ID: sg-ntu-dr.10356-159029
Record Format: dspace
School: School of Computer Science and Engineering
Supervisor: Luu Anh Tuan (anhtuan.luu@ntu.edu.sg)
Degree: Master of Engineering
Date Issued: 2022 (deposited 2022-06-05)
Citation: Dong, X. (2022). Adversarial attacks and defenses in natural language processing. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/159029
License: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
File Format: application/pdf
Institution: Nanyang Technological University
Building: NTU Library
Continent: Asia
Country: Singapore
Content Provider: NTU Library
Collection: DR-NTU
Language: English
Topic: Engineering::Computer science and engineering
Description:
Deep neural networks (DNNs) are becoming increasingly successful in many fields. However, DNNs have been shown to be strikingly susceptible to adversarial examples. For instance, models pre-trained on very large corpora can still be easily fooled by word substitution attacks that use only synonyms. This phenomenon poses serious security challenges to modern machine learning systems in which DNNs are widely deployed, such as autonomous driving, spam filtering, and speech recognition.
In this thesis, we first give a brief introduction to adversarial attacks and defenses. Focusing on Natural Language Processing (NLP), we review recent advances in attack algorithms and defense methods in Chapter 2. We also formalize the research objective of this thesis, namely how to improve the adversarial robustness of NLP models (a standard formalization of this goal is sketched below). To this end, we propose novel and effective solutions for making NLP models more robust in the following chapters.
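As context for that objective, a common way the literature formalizes adversarial robustness against word substitutions is the min-max problem below; this is a generic illustration, not necessarily the exact notation used in Chapter 2.

```latex
% Illustrative robust-training objective (generic formulation, assumed notation):
% S(x) is the set of sentences reachable from x via allowed word substitutions,
% f_theta is the model, L is the task loss, D is the data distribution.
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \Big[ \max_{x' \in \mathcal{S}(x)} \mathcal{L}\big(f_{\theta}(x'),\, y\big) \Big]
```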
In Chapter 3, for classical NLP models such as Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), we present a novel adversarial training method, the Adversarial Sparse Convex Combination (ASCC) defense, for adversarial robustness against word substitution attacks. Specifically, we model the substitution attack space as a convex hull and employ a regularizer that encourages the modeled perturbation towards an actual substitution, thereby aligning the modeling better with the discrete textual space. In our experiments, ASCC-defense consistently surpasses the compared state-of-the-art methods under multiple attacks on prevailing NLP tasks such as sentiment analysis and natural language inference.
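To make the convex-hull idea concrete, here is a minimal PyTorch sketch of the mechanism described above. It is an illustration under assumed shapes and names (e.g., `ascc_perturbation`, `alpha_logits`), not the thesis's implementation: a word's perturbed embedding is a convex combination of the embeddings of its allowed substitutions, and an entropy-style regularizer pushes the combination weights toward a single (one-hot) substitution.

```python
import torch
import torch.nn.functional as F

def ascc_perturbation(subst_embeds, alpha_logits, reg_weight=1.0):
    """Illustrative sketch of convex-hull substitution modeling (assumed shapes/names).

    subst_embeds: (num_subst, dim) embeddings of a word and its allowed synonyms.
    alpha_logits: (num_subst,) learnable logits defining the convex-combination weights.
    Returns the perturbed embedding (a point inside the convex hull) and an entropy
    regularizer whose minimization pushes the weights toward one-hot, i.e. toward
    an actual discrete substitution.
    """
    weights = F.softmax(alpha_logits, dim=-1)             # weights >= 0 and sum to 1
    perturbed = weights @ subst_embeds                     # convex combination of embeddings
    entropy = -(weights * torch.log(weights + 1e-12)).sum()
    return perturbed, reg_weight * entropy

# Toy usage with made-up numbers: one word plus four synonyms, 300-d embeddings.
subst_embeds = torch.randn(5, 300)
alpha_logits = torch.zeros(5, requires_grad=True)
perturbed_emb, sparsity_reg = ascc_perturbation(subst_embeds, alpha_logits)
```

In an adversarial-training loop of this flavor, the combination logits would be optimized to increase the task loss while the entropy term is kept small, so the worst-case perturbation stays close to an actual discrete substitution.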
Pre-trained language models, e.g., Bidirectional Encoder Representations from Transformers (BERT), are becoming increasingly popular, and fine-tuning a pre-trained language model for downstream tasks is the new NLP paradigm. As such, how to fine-tune pre-trained language models towards adversarial robustness is of great importance. In Chapter 4, we first demonstrate that the prevalent defense technique, adversarial training, does not directly fit the conventional fine-tuning scenario. The reason is that conventional adversarial fine-tuning suffers severely from catastrophic forgetting: the fine-tuned models often fail to retain the generic and robust linguistic features captured during pre-training. To address this, we propose Robust Informative Fine-Tuning (RIFT), a novel adversarial fine-tuning method motivated from an information-theoretic perspective. In particular, RIFT encourages a model to retain the useful features learned during pre-training throughout the entire fine-tuning process, whereas a conventional fine-tuning framework uses the pre-trained weights only for initialization. In experiments, we demonstrate that RIFT consistently surpasses state-of-the-art methods under different attacks across various pre-trained language models.
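As a schematic only (the exact information-theoretic objective is given in Chapter 4), an adversarial fine-tuning loss of this flavor can combine the task loss on adversarially perturbed inputs with a term that keeps the fine-tuned features informative about those produced by a frozen copy of the pre-trained encoder. The `retention` term below is a generic feature-alignment stand-in, not RIFT's actual estimator, and all interfaces are assumptions.

```python
import torch
import torch.nn.functional as F

def robust_informative_loss(finetuned_enc, pretrained_enc, classifier,
                            adv_inputs, labels, lam=0.1):
    """Schematic adversarial fine-tuning loss (assumed interfaces, not RIFT's exact objective).

    finetuned_enc:  encoder being fine-tuned; maps inputs to feature vectors.
    pretrained_enc: frozen copy of the pre-trained encoder, used as a reference.
    adv_inputs:     adversarially perturbed inputs, e.g. from a word-substitution attack.
    """
    feats = finetuned_enc(adv_inputs)                      # features under fine-tuning
    with torch.no_grad():
        ref_feats = pretrained_enc(adv_inputs)             # frozen pre-trained features
    task_loss = F.cross_entropy(classifier(feats), labels)
    # Generic stand-in for the information-retention term: keep the fine-tuned
    # features aligned with (informative about) the pre-trained ones, which
    # counteracts catastrophic forgetting during adversarial fine-tuning.
    retention = 1.0 - F.cosine_similarity(feats, ref_feats, dim=-1).mean()
    return task_loss + lam * retention
```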
Finally, we conclude the thesis in Chapter 5 and discuss promising directions for future exploration.