Ranking-aware contrastive learning with large language models
Generating high-quality word and sentence representations is a foundational task in natural language processing (NLP). In recent years, various embedding methodologies have been proposed, notably those leveraging the capabilities of large language models for in-context learning. Research has shown t...
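The abstract names ranking-aware contrastive learning as the core technique. Purely as an illustration, and not as the method developed in this thesis, the sketch below shows the standard InfoNCE contrastive objective over sentence embeddings that ranking-aware variants typically extend; the function name, tensor shapes, and temperature value are assumptions for the example.

```python
# Minimal sketch of an InfoNCE-style contrastive loss over sentence
# embeddings. This is the common baseline objective, not the thesis's
# own ranking-aware formulation; shapes and the temperature are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce_loss(anchors: torch.Tensor,
                  positives: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """anchors, positives: (batch, dim) embeddings of paired sentences.

    Each anchor treats its own positive as the target class and every
    other in-batch positive as a negative.
    """
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    # (batch, batch) matrix of temperature-scaled cosine similarities.
    logits = a @ p.T / temperature
    # Diagonal entries are the matching pairs.
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
loss = info_nce_loss(torch.randn(8, 768), torch.randn(8, 768))
```

Ranking-aware variants generally depart from this baseline by weighting or margin-adjusting the in-batch negatives according to a ranking over them, rather than treating all negatives uniformly as above.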
Main Author: Hu, Yuqi
Other Authors: Chen, Lihui
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/177983
Institution: Nanyang Technological University
Similar Items
- Matrix Completion Models with Fixed Basis Coefficients and Rank Regularized Problems with Hard Constraints
  by: MIAO WEIMIN
  Published: (2013)
- Minimum rank of graphs
  by: Sarawut Rattanaprayoon
  Published: (2012)
- Learning to rank tags
  by: Wang, Z., et al.
  Published: (2014)
- The First Positive Rank and Crank Moments for Overpartitions
  by: Andrews, George, et al.
  Published: (2017)
- From rank estimation to rank approximation : rank residual constraint for image restoration
  by: Zha, Zhiyuan, et al.
  Published: (2021)