Zero-shot text classification via self-supervised tuning
| Field | Value |
|---|---|
| Main Authors | |
| Other Authors | |
| Format | Conference or Workshop Item |
| Language | English |
| Published | 2023 |
| Subjects | |
| Online Access | https://hdl.handle.net/10356/168505 https://2023.aclweb.org/ |
| Institution | Nanyang Technological University |

Summary: Existing solutions to zero-shot text classification either conduct prompting with pre-trained language models, which is sensitive to the choice of templates, or rely on large-scale annotated data from relevant tasks for meta-tuning. In this work, we propose a new paradigm based on self-supervised learning, called self-supervised tuning, that solves zero-shot text classification tasks by tuning language models with unlabeled data. By exploiting the inherent structure of free text, we propose a new learning objective called first sentence prediction to bridge the gap between unlabeled data and text classification tasks. After the model is tuned to predict the first sentence of a paragraph from the rest, it can conduct zero-shot inference on unseen tasks such as topic classification and sentiment analysis. Experimental results show that our model outperforms state-of-the-art baselines on 7 out of 10 tasks. Moreover, the analysis reveals that our model is less sensitive to prompt design. Our code and pre-trained models are publicly available at https://github.com/DAMO-NLP-SG/SSTuning.
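
The abstract only sketches the first sentence prediction (FSP) objective at a high level. The snippet below is a rough, illustrative construction of FSP training examples from unlabeled paragraphs, not the authors' implementation: the function name, the number of options, and the naive sentence-splitting heuristic are all assumptions made for this sketch. The idea it demonstrates is that a paragraph minus its first sentence becomes the input, and the model must pick the true first sentence from candidates drawn from other paragraphs.

```python
# Illustrative sketch of building "first sentence prediction" (FSP) examples
# from unlabeled paragraphs (hypothetical code, not the SSTuning source).
import random


def make_fsp_example(paragraph, distractor_paragraphs, num_options=4):
    """Turn one paragraph into a multiple-choice FSP example.

    The correct option is the paragraph's own first sentence; distractors
    are first sentences taken from other paragraphs. Splitting on ". " is
    a simplification used only for illustration.
    """
    sentences = paragraph.split(". ")
    first, rest = sentences[0], ". ".join(sentences[1:])

    options = [first]
    for other in random.sample(distractor_paragraphs, num_options - 1):
        options.append(other.split(". ")[0])
    random.shuffle(options)

    return {
        "context": rest,               # paragraph without its first sentence
        "options": options,            # candidate first sentences
        "label": options.index(first)  # index of the true first sentence
    }


if __name__ == "__main__":
    corpus = [
        "Cats are popular pets. They are independent and quiet.",
        "The match ended in a draw. Both teams played defensively.",
        "Stock prices fell sharply. Investors reacted to the new report.",
        "The recipe needs three eggs. Mix them with flour and sugar.",
    ]
    print(make_fsp_example(corpus[0], corpus[1:], num_options=4))
```

At inference time, one would presumably place candidate class labels (rephrased as short sentences) in the option slots and classify an input by the option the tuned model prefers; the exact prompt format the authors use is documented in the linked repository.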