Adversarial attacks and defenses in natural language processing

Deep neural networks (DNNs) have achieved striking success in many fields. However, they have been shown to be highly susceptible to adversarial examples. For instance, even models pre-trained on very large corpora can be easily fooled by word substitution attacks that use only synonyms. This ph...
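To illustrate the kind of synonym word-substitution attack the abstract refers to, the following is a minimal, self-contained sketch. The synonym table and the toy keyword-counting "classifier" are illustrative stand-ins, not part of the thesis; real attacks query an actual model and use learned synonym sets.

```python
# Minimal sketch of a greedy synonym word-substitution attack.
# The synonym table and the toy classifier below are illustrative
# assumptions, not taken from the thesis.

SYNONYMS = {
    "good": ["fine", "great"],
    "movie": ["film", "picture"],
    "terrible": ["awful", "dreadful"],
}

def toy_classifier(words):
    """Toy sentiment score: +1 per positive keyword, -1 per negative one."""
    positive = {"good", "great"}
    negative = {"terrible", "awful"}
    return sum((w in positive) - (w in negative) for w in words)

def greedy_substitution_attack(words, target_drop=1):
    """Greedily swap words for synonyms to lower the classifier's score,
    stopping once the score has dropped by target_drop."""
    words = list(words)
    base = toy_classifier(words)
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w, []):
            candidate = words[:i] + [s] + words[i + 1:]
            if toy_classifier(candidate) < toy_classifier(words):
                words = candidate
                break  # keep the first substitution that lowers the score
        if base - toy_classifier(words) >= target_drop:
            break
    return words
```

Because each substitution is a synonym, the sentence's meaning to a human reader is (ideally) preserved while the model's prediction changes, which is what makes such attacks hard to detect.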

Bibliographic Details
Main Author: Dong, Xinshuai
Other Authors: Luu Anh Tuan
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/159029