On-the-fly knowledge distillation model for sentence embedding
Saved in:
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/174236
Institution: Nanyang Technological University
Summary: In this dissertation, we conduct an experimental study of sentence embedding using an on-the-fly knowledge distillation model based on the DistillCSE framework. The model uses SimCSE as the initial teacher. After a fixed number of training steps, it caches an intermediate checkpoint of the student and promotes that checkpoint to serve as the new teacher for knowledge distillation. This process is repeated several times to obtain the final on-the-fly distilled student model. The approach offers potential advantages such as reduced training time and performance close to that of the original teacher model; in some cases, after fine-tuning, the student may even surpass the original teacher on specific tasks.
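The scheme described in the summary (train a student against a frozen teacher, periodically cache the student, and promote the cached copy to be the new teacher) can be summarized as a short training loop. The sketch below is illustrative only, not the dissertation's implementation: it assumes a PyTorch encoder that maps a batch of inputs to sentence embeddings, and the function name `distill_on_the_fly`, the `refresh_every` parameter, and the KL-divergence objective over in-batch similarities are assumptions rather than details taken from the thesis or the DistillCSE paper.

```python
import copy

import torch
import torch.nn.functional as F


def distill_on_the_fly(student, train_batches, optimizer,
                       refresh_every=1000, temperature=0.05):
    """Minimal on-the-fly distillation loop (illustrative sketch).

    `student` is assumed to map a batch of inputs to sentence
    embeddings of shape (batch, dim). The teacher starts as a frozen
    copy of the student (initialized from SimCSE weights) and is
    periodically replaced by a cached snapshot of the current student.
    """
    def snapshot(model):
        # Cache a frozen copy of the model to serve as the teacher.
        frozen = copy.deepcopy(model).eval()
        for p in frozen.parameters():
            p.requires_grad_(False)
        return frozen

    teacher = snapshot(student)  # initial teacher = SimCSE checkpoint

    for step, batch in enumerate(train_batches, start=1):
        with torch.no_grad():
            t_emb = F.normalize(teacher(batch), dim=-1)
        s_emb = F.normalize(student(batch), dim=-1)

        # Distill the teacher's in-batch similarity distribution
        # into the student via KL divergence (an assumed objective).
        t_probs = F.softmax(t_emb @ t_emb.T / temperature, dim=-1)
        s_log_probs = F.log_softmax(s_emb @ s_emb.T / temperature, dim=-1)
        loss = F.kl_div(s_log_probs, t_probs, reduction="batchmean")

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Cache the intermediate student and promote it to teacher.
        if step % refresh_every == 0:
            teacher = snapshot(student)

    return student
```

Freezing each teacher snapshot keeps the distillation target stable between refreshes, while each refresh lets the target track the student's progress without the cost of maintaining a separately trained teacher model.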