Virtual prompt pre-training for prototype-based few-shot relation extraction
Prompt tuning with pre-trained language models (PLMs) has exhibited outstanding performance by reducing the gap between pre-training tasks and various downstream applications, although it requires additional manual effort for label word mapping and prompt template engineering. However, in a label-intensive...
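To make concrete the manual engineering the abstract refers to, below is a minimal sketch (not the paper's virtual prompt method) of prompt-based relation extraction as masked-token prediction with a hand-written template and label word mapping; the model name, template, and label words are illustrative assumptions, using the Hugging Face transformers API.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative choice of backbone; the paper's setup may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hand-crafted label word mapping: relation label -> single verbalizer token.
label_words = {"founder_of": "founded", "employee_of": "joined"}

sentence = "Steve Jobs started Apple in 1976."
head, tail = "Steve Jobs", "Apple"
# Hand-crafted prompt template with a [MASK] slot for the relation word.
prompt = f"{sentence} {head} {tokenizer.mask_token} {tail}."

inputs = tokenizer(prompt, return_tensors="pt")
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_index]  # vocabulary logits at [MASK]

# Score each relation by its verbalizer's logit at the masked position.
scores = {rel: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for rel, word in label_words.items()}
print(max(scores, key=scores.get))
```

The paper's contribution targets exactly the two hand-crafted pieces above (the template string and the label_words table), replacing them with learned virtual prompts for the few-shot, prototype-based setting.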
Main Authors: He, Kai; Huang, Yucheng; Mao, Rui; Gong, Tieliang; Li, Chen; Cambria, Erik
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/170494
Institution: Nanyang Technological University
Similar Items
- FEW-SHOT IMAGE RECOGNITION AND OBJECT DETECTION
  by: LI YITING
  Published: (2023)
- Self-regularized prototypical network for few-shot semantic segmentation
  by: Ding, Henghui, et al.
  Published: (2023)
- Few-shot vision recognition and generation for the open-world
  by: Song, Nan
  Published: (2024)
- Few-shot learning in Wi-Fi-based indoor positioning
  by: Xie, Feng, et al.
  Published: (2024)
- Learning to Self-Train for Semi-Supervised Few-Shot Classification
  by: Xinzhe Li, et al.
  Published: (2020)