Generalized logit adjustment: Calibrating fine-tuned models by removing label bias in foundation models
Foundation models such as CLIP allow zero-shot transfer to various tasks without additional training data. Yet zero-shot performance is less competitive than that of a fully supervised model. Thus, to enhance performance, fine-tuning and ensembling are commonly adopted to better fit the downstream...
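The abstract above is truncated before the method is described, so the following is only a minimal illustrative sketch of the standard logit-adjustment idea the title refers to (subtracting a log class-prior from classifier logits to remove label bias); it is not the authors' generalized logit adjustment (GLA) implementation, and all names and values in it are hypothetical.

```python
# Sketch of standard logit adjustment on dummy zero-shot logits.
# NOT the paper's GLA method; priors and logits here are made up.
import numpy as np

rng = np.random.default_rng(0)

num_classes = 5
num_samples = 8

# Stand-in for zero-shot logits from a foundation model
# (e.g. CLIP image-text similarities); random numbers here.
zero_shot_logits = rng.normal(size=(num_samples, num_classes))

# Hypothetical label prior over the classes. In practice this bias
# would have to be estimated, not assumed.
label_prior = np.array([0.4, 0.3, 0.15, 0.1, 0.05])

tau = 1.0  # adjustment strength

# Debias: subtract the (scaled) log-prior from every row of logits.
adjusted_logits = zero_shot_logits - tau * np.log(label_prior)

print("argmax before adjustment:", zero_shot_logits.argmax(axis=1))
print("argmax after adjustment: ", adjusted_logits.argmax(axis=1))
```

Under this sketch, classes with a large assumed prior have their logits pushed down, so predictions are no longer skewed toward frequent labels.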
Main authors:
Format: text
Language: English
Published in: Institutional Knowledge at Singapore Management University, 2023
Subjects:
Online access: https://ink.library.smu.edu.sg/sis_research/8473
https://ink.library.smu.edu.sg/context/sis_research/article/9476/viewcontent/Generalized_Logit_Adjustment__Calibrating_Fine_tuned_Models_by_Removing_Label_Bias_in_Foundation_Models__1_.pdf