Large language model enhanced with prompt-based vanilla distillation for sentence embeddings

In this dissertation, the prompt-based method PromptEOL is used to train the opt-2.7b model with a Parameter-Efficient Fine-Tuning (PEFT) method to reduce the number of trainable parameters and GPU memory usage. The resulting opt-2.7b-lora model is then used as the teacher model to train the student model under...
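The abstract combines a PromptEOL-style prompt for extracting sentence embeddings with LoRA-based parameter-efficient fine-tuning of opt-2.7b. The snippet below is a minimal sketch of such a setup using the Hugging Face transformers and peft libraries; the prompt template, LoRA hyperparameters, and last-token pooling are illustrative assumptions, not the dissertation's exact configuration.

```python
# Minimal sketch (assumed prompt template and LoRA settings, not the
# dissertation's exact configuration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# LoRA freezes the base weights and trains small low-rank adapters,
# which is what reduces trainable parameters and GPU memory.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# PromptEOL-style template: the hidden state of the final prompt token
# is taken as the sentence embedding.
template = 'This sentence : "{sentence}" means in one word:"'

def embed(sentence: str) -> torch.Tensor:
    inputs = tokenizer(template.format(sentence=sentence), return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # Last-layer hidden state of the last token serves as the embedding.
    return outputs.hidden_states[-1][0, -1]

print(embed("A quick example sentence.").shape)
```

In the distillation stage described in the abstract, the fine-tuned opt-2.7b-lora model would serve as the teacher whose embeddings supervise a smaller student model; the exact distillation objective is not given in the truncated abstract.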

Bibliographic Details
Main Author: Wang, Minghao
Other Authors: Lihui Chen
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/173839
Institution: Nanyang Technological University
