Virtual prompt pre-training for prototype-based few-shot relation extraction

Prompt tuning with pre-trained language models (PLMs) has exhibited outstanding performance by reducing the gap between pre-training tasks and various downstream applications, but it requires additional labor in label word mapping and prompt template engineering. However, in a label-intensive...


Bibliographic Details
Main Authors: He, Kai, Huang, Yucheng, Mao, Rui, Gong, Tieliang, Li, Chen, Cambria, Erik
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects:
Online Access:https://hdl.handle.net/10356/170494
Institution: Nanyang Technological University