Prompt sensitivity of transformer variants for text classification

Bibliographic Details
Main Author: Ong, Li Han
Other Authors: Wang, Wenya
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/181519
Institution: Nanyang Technological University
Description
Summary: This study investigates the sensitivity of three Transformer architectures, encoder-only (BERT), decoder-only (GPT-2), and encoder-decoder (T5), to different types of prompt modifications on text classification tasks. Using a fine-tuning approach, the models were evaluated on selected benchmark datasets from GLUE, with modifications encompassing lexical, positioning, and syntactic changes. The findings reveal that the encoder-based models (BERT and T5) are more sensitive to prompt modifications than the decoder-only model (GPT-2), with the impact varying by task and modification type. We reason that the fully bidirectional nature of encoder self-attention causes these models to overfit to subtle linguistic artifacts in the training data, reducing their ability to generalise to unseen examples. As such, we recommend that production models handling potentially unpredictable input (i.e. client-facing applications) be trained on more diverse data to enhance robustness. Such data can be obtained through manual collection or through noise-based data augmentation, such as the prompt modification techniques covered in this study. Future research should explore additional modification categories, tasks, and scalability effects across larger models.
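
The abstract only names the three modification categories; as a rough illustration (not the thesis' actual implementation), the sketch below shows what lexical, positioning, and syntactic perturbations of a sentiment-classification prompt might look like. The template wording, function names, and example sentence are all assumptions made for illustration.

```python
# Illustrative sketch only (not the thesis' code): hypothetical examples of
# the three prompt-modification categories named in the abstract, applied to
# an SST-2-style sentiment classification input.

INSTRUCTION = "Is the sentiment of the sentence positive or negative?"

def base_prompt(text: str) -> str:
    # Unmodified prompt: instruction first, then the input sentence.
    return f"{INSTRUCTION}\nSentence: {text}"

def lexical_modification(text: str) -> str:
    # Lexical change: swap a template word for a near-synonym.
    return f"Is the feeling of the sentence positive or negative?\nSentence: {text}"

def positioning_modification(text: str) -> str:
    # Positioning change: move the instruction after the input sentence.
    return f"Sentence: {text}\n{INSTRUCTION}"

def syntactic_modification(text: str) -> str:
    # Syntactic change: rephrase the question as an imperative.
    return f"Classify the sentiment of the sentence as positive or negative.\nSentence: {text}"

if __name__ == "__main__":
    text = "The film was a delight from start to finish."
    for variant in (base_prompt, lexical_modification,
                    positioning_modification, syntactic_modification):
        print(f"--- {variant.__name__} ---")
        print(variant(text))
```

In the study's framing, perturbed prompts of this kind can serve either as evaluation probes of robustness or as noise-based augmentation data during fine-tuning.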