Adversarial training using meta-learning for BERT

Bibliographic Details
Main Author: Low, Timothy Jing Haen
Other Authors: Joty, Shafiq Rayhan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Online Access:https://hdl.handle.net/10356/156635
Institution: Nanyang Technological University
Description
Summary: Deep learning is currently the most successful method of semantic analysis in natural language processing. In recent years, however, many variants of carefully crafted inputs designed to cause misclassification, known as adversarial attacks, have been engineered with tremendous success. One well-known, efficient method of making models robust against adversarial attacks is adversarial training, where a model is iteratively trained on samples produced by a specific attack algorithm. However, adversarial training only works when the model has access to the attack generation algorithm or a large dataset of attack samples, so it cannot defend against attacks for which only a few samples are available. This project proposes to overcome this challenge using meta-learning, which uses a large number of similar tasks from a different domain to train a classifier to learn a new task for which only a small number of labelled samples is available. We show that by using the Model-Agnostic Meta-Learning (MAML) algorithm in adversarial training, a model trained on a large number of different adversarial attacks can become more robust to an adversarial attack of which it has seen only a few samples. The project also explores augmenting the training set with a large number of non-adversarial perturbations, which may further mitigate adversarial attacks.
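
The record does not include the project's implementation, but the core idea the abstract describes, meta-learning across adversarial attack "tasks" so the model adapts quickly to a new attack from few samples, can be sketched as below. This is a minimal, illustrative first-order MAML sketch, not the author's code: a small linear classifier over fixed-size embeddings stands in for BERT, fake_task fabricates random tensors in place of real adversarial examples, and every name and hyperparameter (EMB_DIM, INNER_LR, INNER_STEPS, etc.) is a hypothetical placeholder.

# Minimal first-order MAML sketch for few-shot adversarial robustness.
# Hypothetical stand-ins: a tiny linear classifier replaces a BERT
# classifier, and fake_task fabricates random data in place of samples
# generated by a real adversarial attack algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, N_CLASSES = 32, 2
INNER_LR, META_LR, INNER_STEPS = 1e-2, 1e-3, 3

model = nn.Linear(EMB_DIM, N_CLASSES)          # stand-in for a BERT classifier
meta_opt = torch.optim.Adam(model.parameters(), lr=META_LR)

def fake_task(n=16):
    """Hypothetical 'task': samples from one adversarial attack type,
    split into a support set (for adaptation) and a query set."""
    x = torch.randn(n, EMB_DIM)
    y = torch.randint(0, N_CLASSES, (n,))
    return (x[: n // 2], y[: n // 2]), (x[n // 2 :], y[n // 2 :])

for meta_step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):                          # meta-batch of attack "tasks"
        (xs, ys), (xq, yq) = fake_task()
        # Inner loop: adapt a copy of the weights on the support set.
        fast = {k: v.clone() for k, v in model.named_parameters()}
        for _ in range(INNER_STEPS):
            logits = F.linear(xs, fast["weight"], fast["bias"])
            grads = torch.autograd.grad(F.cross_entropy(logits, ys),
                                        list(fast.values()))
            fast = {k: v - INNER_LR * g
                    for (k, v), g in zip(fast.items(), grads)}
        # Outer loop: query loss of the adapted weights drives the meta-update.
        q_loss = F.cross_entropy(F.linear(xq, fast["weight"], fast["bias"]), yq)
        q_loss.backward()                       # first-order MAML approximation
    meta_opt.step()

In a full version of this setup, each task would hold samples produced by one attack algorithm (or one non-adversarial perturbation family, per the abstract's augmentation idea), and evaluation would adapt the meta-trained weights on the few available samples of the held-out attack. The first-order approximation shown here avoids differentiating through the inner-loop updates, trading some fidelity to full MAML for simplicity.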