MetaFormer baselines for vision

Abstract—MetaFormer, the abstracted architecture of the Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, this time by shifting our focus away from token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and demonstrate their gratifying performance. We summarize our observations as follows. (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. Even when the token mixer is specified as a random matrix, the resulting model, RandFormer, yields an accuracy of >81%, outperforming IdentityFormer; MetaFormer's results can thus be relied upon when new token mixers are adopted. (3) MetaFormer effortlessly offers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking the common depthwise separable convolution as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model, CAFormer, sets a new record on ImageNet-1K: it achieves 85.5% accuracy at 224 × 224 resolution under normal supervised training, without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with the commonly used GELU, yet achieves better performance. Specifically, StarReLU is a variant of Squared ReLU dedicated to alleviating distribution shift. We expect StarReLU to show great potential in MetaFormer-like models as well as other neural networks. Code and models are available at https://github.com/sail-sg/metaformer.
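
To make the abstract concrete, here is a minimal, hypothetical PyTorch sketch of the two ideas it names: a MetaFormer block whose token mixer is a pluggable module, and the StarReLU activation, s * ReLU(x)^2 + b. This is an illustrative sketch, not the authors' implementation (their code is at the GitHub link below); the class and argument names are assumptions, and the default scale/bias values assume a standard-normal input so that the output is approximately zero-mean and unit-variance.

    import torch
    import torch.nn as nn

    class StarReLU(nn.Module):
        # StarReLU(x) = s * ReLU(x)^2 + b, a Squared ReLU variant meant to
        # alleviate distribution shift. Defaults assume standard-normal input:
        # s = 1/sqrt(1.25) ~= 0.8944, b = -0.5/sqrt(1.25) ~= -0.4472.
        def __init__(self, scale=0.8944, bias=-0.4472):
            super().__init__()
            self.scale = nn.Parameter(torch.tensor(scale))  # learnable scale s
            self.bias = nn.Parameter(torch.tensor(bias))    # learnable bias b

        def forward(self, x):
            return self.scale * torch.relu(x) ** 2 + self.bias

    class MetaFormerBlock(nn.Module):
        # One MetaFormer block: norm -> token mixer -> residual, then
        # norm -> channel MLP -> residual. The token mixer is pluggable.
        def __init__(self, dim, token_mixer=None, mlp_ratio=4):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.token_mixer = token_mixer if token_mixer is not None else nn.Identity()
            self.norm2 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(
                nn.Linear(dim, mlp_ratio * dim),
                StarReLU(),
                nn.Linear(mlp_ratio * dim, dim),
            )

        def forward(self, x):  # x: (batch, num_tokens, dim)
            x = x + self.token_mixer(self.norm1(x))
            x = x + self.mlp(self.norm2(x))
            return x

    # Example: an IdentityFormer-style block on a dummy token sequence.
    block = MetaFormerBlock(dim=64, token_mixer=nn.Identity())
    out = block(torch.randn(2, 196, 64))  # -> torch.Size([2, 196, 64])

Swapping token_mixer in this sketch between nn.Identity(), a frozen random mixing matrix, depthwise separable convolution, and vanilla self-attention corresponds to the IdentityFormer, RandFormer, ConvFormer, and CAFormer variants described in the abstract.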

Bibliographic Details
Main Authors: YU, Weihao, SI, Chenyang, ZHOU, Pan, LUO, Mi, ZHOU, Yichen, FENG, Jiashi, YAN, Shuicheng, WANG, Xinchao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: MetaFormer; Transformer; Neural Networks; Image Classification; Deep Learning; Graphics and Human Computer Interfaces
DOI: 10.1109/TPAMI.2023.3329173
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/9054
https://ink.library.smu.edu.sg/context/sis_research/article/10057/viewcontent/2021_TPAMI_MetaFormer.pdf
Institution: Singapore Management University