Large language model enhanced with prompt-based vanilla distillation for sentence embeddings
In this dissertation, the prompt-based method PromptEOL is used to train the opt-2.7b model with a Parameter-Efficient Fine-Tuning (PEFT) method, reducing the number of trainable parameters and GPU memory usage. The resulting opt-2.7b-lora model is then used as the teacher model to train the student model under...
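For orientation, here is a minimal sketch of the pipeline the abstract describes, assuming the HuggingFace transformers and peft libraries. The LoRA hyperparameters, the PromptEOL-style prompt string, and the embed and distill_loss helpers are illustrative assumptions, not the thesis code.

```python
# Sketch of the described setup: LoRA-based parameter-efficient
# fine-tuning of opt-2.7b, with a PromptEOL-style prompt used to
# extract sentence embeddings, followed by a vanilla distillation loss.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and trains only small low-rank adapter
# matrices, which is what cuts trainable parameters and GPU memory.
# These hyperparameters are placeholders, not the thesis settings.
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
teacher = get_peft_model(model, lora_cfg)
teacher.print_trainable_parameters()

# PromptEOL-style extraction: wrap the sentence in a prompt and read
# the hidden state of the final token as the sentence embedding.
def embed(model, sentence: str) -> torch.Tensor:
    prompt = f'This sentence : "{sentence}" means in one word:"'
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1][0, -1]  # last-token hidden state

# Vanilla distillation: train the student to reproduce the teacher's
# embeddings, e.g. with a mean-squared-error objective.
def distill_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(student_emb, teacher_emb.detach())
```

In this reading, the LoRA-adapted teacher's last-token hidden state serves as the regression target for a smaller student model, matching the "vanilla distillation" named in the title.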
Main Author: Wang, Minghao
Other Authors: Lihui Chen
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/173839
Institution: Nanyang Technological University
Similar Items
- Composition distillation for semantic sentence embeddings
  by: Vaanavan, Sezhiyan
  Published: (2024)
- On-the-fly knowledge distillation model for sentence embedding
  by: Zhu, Xuchun
  Published: (2024)
- Mutual-reinforcement document summarization using embedded graph based sentence clustering for storytelling
  by: Zhang, Z., et al.
  Published: (2014)
- When missing NPs make double center-embedding sentences acceptable
  by: Huang, Nick, et al.
  Published: (2022)
- BlendCSE: blend contrastive learnings for sentence embeddings with rich semantics and transferability
  by: Xu, Jiahao, et al.
  Published: (2024)