Towards understanding why mask reconstruction pretraining helps in downstream tasks

For unsupervised pretraining, mask-reconstruction pretraining (MRP) approaches such as MAE (He et al., 2021) and data2vec (Baevski et al., 2022) randomly mask input patches and then reconstruct the pixels or semantic features of the masked patches via an auto-encoder. Then, for a downstream task, su...
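
As a rough illustration of the MRP recipe summarized above (not the authors' implementation), the PyTorch-style sketch below masks a random subset of patch embeddings, encodes only the visible patches, and reconstructs the masked ones with a small auto-encoder; the layer sizes, masking ratio, and module names are illustrative assumptions.

import torch
import torch.nn as nn

class MaskedReconstruction(nn.Module):
    # Minimal MAE-style sketch: mask random patches, encode only the visible
    # ones, and reconstruct the pixels of the masked ones. All sizes are
    # illustrative assumptions, not values from the paper.
    def __init__(self, patch_dim=768, hidden_dim=512, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim))
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, patch_dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, hidden_dim))

    def forward(self, patches):
        # patches: (batch, num_patches, patch_dim), already patchified pixels.
        b, n, d = patches.shape
        num_keep = int(n * (1 - self.mask_ratio))
        # Pick a random subset of patches per sample to keep visible.
        ids_keep = torch.rand(b, n, device=patches.device).argsort(dim=1)[:, :num_keep]
        visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
        # Encode only the visible patches.
        encoded = self.encoder(visible)
        # Put encoded tokens back at their positions; masked slots get the mask token.
        full = self.mask_token.expand(b, n, -1).clone()
        idx = ids_keep.unsqueeze(-1).expand(-1, -1, full.size(-1))
        full = full.scatter(1, idx, encoded)
        recon = self.decoder(full)
        # Mean-squared reconstruction error, averaged over masked patches only.
        masked = torch.ones(b, n, device=patches.device).scatter(1, ids_keep, 0.0)
        per_patch = ((recon - patches) ** 2).mean(dim=-1)
        return (per_patch * masked).sum() / masked.sum()

# One pretraining step on dummy patchified images (8 images, 14x14 patches).
model = MaskedReconstruction()
loss = model(torch.randn(8, 196, 768))
loss.backward()

In MAE-style pipelines, the decoder is discarded after pretraining and only the encoder is reused and fine-tuned on the downstream task.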


Bibliographic Details
Main Authors: PAN, Jiachun, ZHOU, Pan, YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/9022
https://ink.library.smu.edu.sg/context/sis_research/article/10025/viewcontent/2023_ICLR_MAE_Theory.pdf