Adversarial attacks and defenses in natural language processing
Deep neural networks (DNNs) have become increasingly successful in many fields. However, DNNs have been shown to be strikingly susceptible to adversarial examples. For instance, models pre-trained on very large corpora can still be easily fooled by word-substitution attacks that use only synonyms. This ph...
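The word-substitution attack mentioned in the abstract can be illustrated with a minimal sketch. This is not the thesis's actual method: the classifier below is a hypothetical toy stand-in for a pre-trained model, and the synonym table is invented for illustration. The idea is simply to swap words for synonyms, one at a time, until the predicted label flips.

```python
def toy_sentiment(text):
    # Hypothetical toy keyword classifier standing in for a pre-trained model.
    positive = {"good", "great", "excellent"}
    return "pos" if any(w in positive for w in text.lower().split()) else "neg"

# Illustrative synonym candidates (a real attack would draw these from a
# thesaurus or embedding neighborhood).
SYNONYMS = {
    "good": ["fine", "nice"],
    "great": ["fine"],
    "excellent": ["fine"],
}

def synonym_attack(text, classify):
    # Greedy single-word substitution: try each synonym in place and stop
    # as soon as the classifier's prediction changes.
    original = classify(text)
    words = text.split()
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w.lower(), []):
            candidate = " ".join(words[:i] + [syn] + words[i + 1:])
            if classify(candidate) != original:
                return candidate  # label flipped by one synonym swap
    return None  # no single substitution fooled the classifier

adv = synonym_attack("a good movie", toy_sentiment)
```

Here swapping "good" for "fine" flips the toy model's prediction while the sentence's meaning is essentially unchanged, which is the core fragility the thesis studies; a fuller attack would also search over multi-word substitutions.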
Main Author: Dong, Xinshuai
Other Authors: Luu Anh Tuan
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/159029
Institution: Nanyang Technological University
Similar Items
- Adversarial attacks and defenses for visual signals
  by: Cheng, Yupeng
  Published: (2023)
- Adversarial attack defenses for neural networks
  by: Puah, Yi Hao
  Published: (2024)
- Review of adversarial attacks and defenses on edge machine learning
  by: Chua, Jim Sean
  Published: (2024)
- Defense on unrestricted adversarial examples
  by: Sim, Chee Xian
  Published: (2023)
- Attack as defense: Characterizing adversarial examples using robustness
  by: ZHAO, Zhe, et al.
  Published: (2021)