Towards understanding why mask reconstruction pretraining helps in downstream tasks
For unsupervised pretraining, mask-reconstruction pretraining (MRP) approaches, e.g. MAE (He et al., 2021) and data2vec (Baevski et al., 2022), randomly mask input patches and then reconstruct the pixels or semantic features of these masked patches via an auto-encoder. Then for a downstream task, su...
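As a rough illustration of the MRP recipe described in the abstract, the sketch below shows one masked-autoencoder-style pretraining step in PyTorch-like Python: patchify an image, randomly mask most patches, encode only the visible ones, and reconstruct the masked patches with a decoder. All module names (`TinyEncoder`, `TinyDecoder`), dimensions, the 75% mask ratio, and the omission of positional embeddings are illustrative assumptions, not the exact architecture or hyperparameters analyzed in the paper.

```python
# Minimal sketch of mask-reconstruction pretraining (MRP), MAE-style.
# Hypothetical toy modules and hyperparameters; positional embeddings omitted for brevity.
import torch
import torch.nn as nn

def patchify(imgs, p=4):
    # (B, C, H, W) -> (B, N, C*p*p) non-overlapping patches
    B, C, H, W = imgs.shape
    x = imgs.unfold(2, p, p).unfold(3, p, p)            # (B, C, H/p, W/p, p, p)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

class TinyEncoder(nn.Module):
    def __init__(self, patch_dim, dim=64):
        super().__init__()
        self.proj = nn.Linear(patch_dim, dim)
        self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
    def forward(self, visible_patches):
        return self.block(self.proj(visible_patches))

class TinyDecoder(nn.Module):
    def __init__(self, dim, patch_dim):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, patch_dim)
    def forward(self, tokens):
        return self.head(self.block(tokens))

def mrp_step(imgs, encoder, decoder, mask_token, mask_ratio=0.75, p=4):
    patches = patchify(imgs, p)                          # (B, N, D)
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    idx = torch.rand(B, N).argsort(dim=1)                # random patch order per image
    keep, masked = idx[:, :n_keep], idx[:, n_keep:]
    visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, D))
    latent = encoder(visible)                            # encode only visible patches
    # append learned mask tokens for the masked positions, then decode
    tokens = torch.cat([latent, mask_token.expand(B, N - n_keep, -1)], dim=1)
    pred = decoder(tokens)[:, n_keep:]                   # predictions at masked slots
    target = torch.gather(patches, 1, masked.unsqueeze(-1).expand(-1, -1, D))
    return ((pred - target) ** 2).mean()                 # loss on masked patches only

# Toy usage: one pretraining step on random data.
p, dim = 4, 64
enc, dec = TinyEncoder(3 * p * p, dim), TinyDecoder(dim, 3 * p * p)
mask_token = nn.Parameter(torch.zeros(1, 1, dim))
loss = mrp_step(torch.randn(2, 3, 32, 32), enc, dec, mask_token)
loss.backward()
```

After pretraining of this kind, the downstream recipe the abstract refers to is to keep the pretrained encoder and fine-tune it (typically with a small task head) on labeled data.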
Main Authors: PAN, Jiachun; ZHOU, Pan; YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/9022
https://ink.library.smu.edu.sg/context/sis_research/article/10025/viewcontent/2023_ICLR_MAE_Theory.pdf
Institution: Singapore Management University
Similar Items
- Task relation networks
  by: LI, Jianshu, et al.
  Published: (2019)
- Towards understanding convergence and generalization of AdamW
  by: ZHOU, Pan, et al.
  Published: (2024)
- LPT: Long-tailed prompt tuning for image classification
  by: DONG, Bowen, et al.
  Published: (2023)
- Masked diffusion transformer is a strong image synthesizer
  by: GAO, Shanghua, et al.
  Published: (2023)
- Efficient meta learning via minibatch proximal update
  by: ZHOU, Pan, et al.
  Published: (2019)