MetaFormer is actually what you need for vision
Transformers have shown great potential in computer vision tasks. A common belief is that their attention-based token-mixer module contributes most to their competence. However, recent works show that the attention-based module in transformers can be replaced by spatial MLPs, and the resulting models still perf...
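The abstract points to the paper's central claim: the attention token mixer can be swapped for much simpler operators (the paper's PoolFormer uses average pooling). As a hedged illustration only, here is a minimal NumPy sketch of a pooling-based token mixer; the function name, edge-padding choice, and shapes are assumptions for the example, not the authors' implementation:

```python
import numpy as np

def pooling_token_mixer(x, pool_size=3):
    """Sketch of a pooling token mixer in the MetaFormer spirit:
    average-pool each spatial location's neighborhood, then subtract
    the input so the branch only contributes the "mixing" signal.
    x: array of shape (H, W, C). Hypothetical illustration only."""
    pad = pool_size // 2
    # Edge padding is an assumption; implementations differ here.
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    H, W, _ = x.shape
    out = np.empty_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            # Mean over the pool_size x pool_size spatial window.
            out[i, j] = xp[i:i + pool_size, j:j + pool_size].mean(axis=(0, 1))
    return out - x

# Usage: mix an 8x8 feature map with 4 channels.
features = np.random.rand(8, 8, 4)
mixed = pooling_token_mixer(features)
```

Because the input is subtracted, a spatially constant feature map is mapped to zero, which keeps the residual branch well behaved.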
Main Authors: YU, Weihao; LUO, Mi; ZHOU, Pan; SI, Chenyang; ZHOU, Yichen; WANG, Xinchao; FENG, Jiashi; YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/8983
https://ink.library.smu.edu.sg/context/sis_research/article/9986/viewcontent/2022_CVPR_MetaFormer.pdf
Institution: Singapore Management University
Similar Items
- MetaFormer baselines for vision
  by: YU, Weihao, et al.
  Published: (2023)
- InceptionNeXt: When Inception meets ConvNeXt
  by: YU, Weihao, et al.
  Published: (2024)
- DualFormer: Local-global stratified transformer for efficient video recognition
  by: LIANG, Yuxuan, et al.
  Published: (2022)
- STPrivacy: Spatio-temporal privacy-preserving action recognition
  by: LI, Ming, et al.
  Published: (2023)
- Efficient meta learning via minibatch proximal update
  by: ZHOU, Pan, et al.
  Published: (2019)