Prompt sensitivity of transformer variants for text classification
Saved in:

Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/181519
Institution: Nanyang Technological University
Summary: This study investigates the sensitivity of three Transformer architectures, encoder-only (BERT), decoder-only (GPT-2), and encoder-decoder (T5), to different types of prompt modifications on text classification tasks. Using a fine-tuning approach, the models were evaluated on selected benchmark datasets from GLUE, with modifications spanning lexical, positioning, and syntactic changes. The findings reveal that the encoder-based models (BERT and T5) are more sensitive to prompt modifications than the decoder-only model (GPT-2), with impacts that vary by task and modification type. We reason that the fully bidirectional encoder self-attention mechanism causes models to overfit to subtle linguistic artifacts in the training data, reducing their ability to generalise to unseen examples. We therefore recommend that production models handling potentially unpredictable input (i.e. client-facing applications) be trained on more diverse data to enhance robustness. Such data can be obtained through manual collection or through noise-based data augmentation, such as the prompt modification techniques covered in this study. We recommend that future research explore additional modification categories, tasks, and scaling effects across larger models.
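As a rough illustration of the recommendation above, the sketch below shows what noise-based data augmentation via prompt modification could look like in Python. The thesis's actual transformations are not given in this record, so the lexical, positioning, and syntactic operations here (and the `augment` helper and toy `SYNONYMS` lexicon) are hypothetical stand-ins for the three categories named in the abstract, not the study's implementation.

```python
import random

# Illustrative stand-ins for the three modification categories named in the
# abstract (lexical, positioning, syntactic), applied to a classification input.

SYNONYMS = {"movie": "film", "great": "excellent", "bad": "terrible"}  # toy lexicon


def lexical_modification(text: str) -> str:
    """Swap known words for rough synonyms (lexical change)."""
    return " ".join(SYNONYMS.get(tok.lower(), tok) for tok in text.split())


def positioning_modification(text: str, instruction: str = "Classify the sentiment:") -> str:
    """Place the task instruction before or after the input (positioning change)."""
    return f"{instruction} {text}" if random.random() < 0.5 else f"{text} {instruction}"


def syntactic_modification(text: str) -> str:
    """Insert a semantically empty clause to perturb surface syntax (syntactic change)."""
    return f"It should be noted that {text[0].lower() + text[1:]}"


def augment(example: str) -> list[str]:
    """Return the original example plus one variant per modification category."""
    return [
        example,
        lexical_modification(example),
        positioning_modification(example),
        syntactic_modification(example),
    ]


if __name__ == "__main__":
    for variant in augment("This movie was great."):
        print(variant)
```

Augmented variants produced this way would be added to the fine-tuning set alongside the originals, exposing the model to lexical, positioning, and syntactic noise before it encounters unpredictable client-facing input.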