Towards understanding why mask reconstruction pretraining helps in downstream tasks
For unsupervised pretraining, mask-reconstruction pretraining (MRP) approaches, e.g. MAE (He et al., 2021) and data2vec (Baevski et al., 2022), randomly mask input patches and then reconstruct the pixels or semantic features of these masked patches via an auto-encoder. Then for a downstream task, su...
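As a rough illustration of the masking-and-reconstruction scheme the abstract describes, here is a minimal sketch in PyTorch. The `ToyMaskedAutoencoder` class, its MLP encoder/decoder, and all dimensions are illustrative assumptions, not the architecture analyzed in the paper; the sketch only mirrors the MRP recipe of masking random input patches and penalizing reconstruction on the masked positions.

```python
# Minimal sketch of mask-reconstruction pretraining (MRP), assuming a toy
# MLP auto-encoder over pre-extracted patches. Names and dimensions are
# illustrative; this is not the model analyzed in the paper.
import torch
import torch.nn as nn

class ToyMaskedAutoencoder(nn.Module):
    def __init__(self, patch_dim=48, hidden_dim=64, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Learned token that stands in for masked input patches.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, patch_dim))
        self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden_dim), nn.GELU())
        self.decoder = nn.Linear(hidden_dim, patch_dim)

    def forward(self, patches):
        # patches: (batch, num_patches, patch_dim)
        b, n, _ = patches.shape
        num_masked = int(self.mask_ratio * n)
        # Pick a random subset of patches to mask in each sample.
        order = torch.rand(b, n, device=patches.device).argsort(dim=1)
        mask = torch.zeros(b, n, dtype=torch.bool, device=patches.device)
        mask.scatter_(1, order[:, :num_masked], True)
        # Replace masked patches with the mask token, then auto-encode.
        masked_in = torch.where(mask.unsqueeze(-1),
                                self.mask_token.expand(b, n, -1),
                                patches)
        recon = self.decoder(self.encoder(masked_in))
        # Reconstruction loss is measured only on the masked positions.
        return ((recon - patches) ** 2)[mask].mean()

model = ToyMaskedAutoencoder()
loss = model(torch.randn(4, 16, 48))   # pretraining loss for a toy batch
loss.backward()
```

After pretraining, the decoder would be discarded and the encoder fine-tuned on the downstream task, which is the pretrain-then-fine-tune setting the abstract refers to.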
| Field | Value |
|---|---|
| Main Authors | PAN, Jiachun; ZHOU, Pan; YAN, Shuicheng |
| Format | text |
| Language | English |
| Published | Institutional Knowledge at Singapore Management University, 2023 |
| Online Access | https://ink.library.smu.edu.sg/sis_research/9022 ; https://ink.library.smu.edu.sg/context/sis_research/article/10025/viewcontent/2023_ICLR_MAE_Theory.pdf |
| Institution | Singapore Management University |
Similar Items
- Task relation networks
  by: LI, Jianshu, et al. Published: (2019)
- Towards understanding convergence and generalization of AdamW
  by: ZHOU, Pan, et al. Published: (2024)
- InceptionNeXt: When Inception meets ConvNeXt
  by: YU, Weihao, et al. Published: (2024)
- LPT: Long-tailed prompt tuning for image classification
  by: DONG, Bowen, et al. Published: (2023)
- Masked diffusion transformer is a strong image synthesizer
  by: GAO, Shanghua, et al. Published: (2023)