Few-shot learner parameterization by diffusion time-steps
Even with large multi-modal foundation models, few-shot learning remains challenging: without a proper inductive bias, it is nearly impossible to keep the nuanced class attributes while removing the visually prominent attributes that spuriously correlate with the class labels. To this end, we...
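The abstract breaks off before the method is stated, so the snippet below should not be read as the authors' approach. As background only, here is a minimal sketch of the standard DDPM forward process that the "diffusion time-steps" in the title refer to: with the usual notation x_t = sqrt(ᾱ_t) x_0 + sqrt(1 - ᾱ_t) ε, a larger time-step t removes more visual detail, so nuanced attributes are corrupted before visually prominent ones. The linear noise schedule, tensor shapes, and function names are illustrative assumptions, not code from the paper.

```python
# Background sketch (not the paper's code): the standard DDPM forward process.
# Small t corrupts only fine-grained detail; large t destroys even prominent
# attributes. This monotone loss of detail over time-steps is the kind of
# inductive bias the title alludes to.
import numpy as np

def linear_beta_schedule(T: int, beta_start: float = 1e-4, beta_end: float = 0.02) -> np.ndarray:
    """Per-step noise variances beta_1..beta_T of a common linear DDPM schedule."""
    return np.linspace(beta_start, beta_end, T)

def forward_noise(x0: np.ndarray, t: int, alpha_bar: np.ndarray,
                  rng: np.random.Generator) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

T = 1000
betas = linear_beta_schedule(T)
alpha_bar = np.cumprod(1.0 - betas)      # cumulative product, i.e. \bar{alpha}_t

rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 64, 64))    # stand-in for an image tensor

for t in (50, 300, 900):                 # small t: mild noise; large t: near pure Gaussian
    xt = forward_noise(x0, t, alpha_bar, rng)
    snr = alpha_bar[t] / (1.0 - alpha_bar[t])   # signal-to-noise ratio shrinks with t
    print(f"t={t:4d}  signal kept={np.sqrt(alpha_bar[t]):.3f}  SNR={snr:.3f}")
```

Running the sketch shows the signal-to-noise ratio collapsing as t grows, which is why time-steps give a natural ordering from fine-grained to coarse attributes.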
Main Authors: YUE, Zhongqi; ZHOU, Pan; HONG, Richang; ZHANG, Hanwang; SUN, Qianru
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9019
https://ink.library.smu.edu.sg/context/sis_research/article/10022/viewcontent/2024_CVPR_few_shot.pdf
Institution: Singapore Management University
Similar Items
- Exploring diffusion time-steps for unsupervised representation learning
  by: YUE, Zhongqi, et al.
  Published: (2024)
- Diffusion time-step curriculum for one image to 3D generation
  by: YI, Xuanyu, et al.
  Published: (2024)
- Revisiting local descriptor for improved few-shot classification
  by: HE, Jun, et al.
  Published: (2022)
- Interventional few-shot learning
  by: YUE, Zhongqi, et al.
  Published: (2020)
- Self-promoted supervision for few-shot transformer
  by: DONG, Bowen, et al.
  Published: (2022)