Adversarial training using meta-learning for BERT


Full Description

Bibliographic Details
Main Author: Low, Timothy Jing Haen
Other Authors: Joty Shafiq Rayhan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/156635
Institution: Nanyang Technological University
Physical Description
Summary: Deep learning is currently the most successful approach to semantic analysis in natural language processing. In recent years, however, carefully crafted inputs designed to cause misclassification, known as adversarial attacks, have been engineered with considerable success. One well-known, efficient way to make models robust against such attacks is adversarial training, in which a model is iteratively trained on samples produced by a specific attack algorithm. Adversarial training, however, only works when the model has access to the attack-generation algorithm or a large dataset of attack samples, so it cannot defend against attacks for which only a few samples are available. This project proposes to overcome this challenge with meta-learning, which uses a large number of similar tasks from a different domain to train a classifier to learn a new task for which only a small number of labelled samples are available. We show that by applying the Model-Agnostic Meta-Learning (MAML) algorithm to adversarial training, a model trained on a large number of different adversarial attacks can become more robust to an attack of which it has seen only a few samples. The project also explores augmenting the training set with a large number of non-adversarial perturbations, which may further mitigate adversarial attacks.
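The few-shot adaptation idea in the summary can be illustrated with a toy sketch. This is not the project's actual implementation: it uses a first-order approximation of MAML on scalar linear-regression "tasks" (each task standing in for one adversarial-attack domain, the model standing in for the BERT classifier), and every function name and hyperparameter here is a hypothetical choice for illustration. The inner loop adapts to a task from a few support samples; the outer loop updates the shared initialization using the query-set gradient at the adapted weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(a, n=20):
    # Hypothetical toy "task": fit y = a * x. Each slope `a` plays the role
    # of one attack type; the samples play the role of (possibly few)
    # adversarial examples from that attack.
    x = rng.uniform(-1.0, 1.0, n)
    return x, a * x

def inner_adapt(w, x, y, lr=0.1, steps=1):
    # Inner loop: a few gradient-descent steps on the support set,
    # starting from the meta-learned initialization w.
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - y) * x)  # d/dw of mean squared error
        w = w - lr * grad
    return w

def maml_train(meta_iters=300, meta_lr=0.05):
    # Outer loop (first-order MAML): sample a task, adapt on its support
    # set, then apply the query-set gradient taken at the *adapted*
    # weights directly to the shared initialization w0.
    w0 = 0.0
    for _ in range(meta_iters):
        a = rng.uniform(-2.0, 2.0)   # sample a task ("attack type")
        xs, ys = task_batch(a)       # support set
        xq, yq = task_batch(a)       # query set
        w_ad = inner_adapt(w0, xs, ys)
        g = 2.0 * np.mean((w_ad * xq - yq) * xq)
        w0 = w0 - meta_lr * g
    return w0
```

After meta-training, `inner_adapt` can be called with only a handful of samples from an unseen task, mirroring the scenario in the abstract where only a few samples of a new attack are available.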