Shunted self-attention via multi-scale token aggregation

Recent Vision Transformer (ViT) models have demonstrated encouraging results across various computer vision tasks, thanks to their competence in modeling long-range dependencies of image patches or tokens via self-attention. These models, however, usually assign a similar receptive field to every token feature within each layer. Such a constraint inevitably limits the ability of each self-attention layer to capture multi-scale features, leading to performance degradation on images containing multiple objects of different scales. To address this issue, we propose a novel and generic strategy, termed shunted self-attention (SSA), that allows ViTs to model attention at hybrid scales within each attention layer. The key idea of SSA is to inject heterogeneous receptive field sizes into tokens: before computing the self-attention matrix, it selectively merges tokens to represent larger object features while keeping certain tokens to preserve fine-grained features. This merging scheme enables the self-attention to learn relationships between objects of different sizes and simultaneously reduces the token count and the computational cost. Extensive experiments across various tasks demonstrate the superiority of SSA. Specifically, the SSA-based transformer achieves 84.0% Top-1 accuracy on ImageNet, outperforming the state-of-the-art Focal Transformer with only half the model size and computation cost, and surpasses Focal Transformer by 1.3 mAP on COCO and 2.9 mIoU on ADE20K under similar parameter and computation budgets.
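For illustration, below is a minimal, self-contained sketch (in PyTorch) of the multi-scale token-aggregation idea described in the abstract: queries keep the full token resolution, while each attention head merges keys and values at a different rate before attention is computed. This is not the authors' implementation; the class name, the per-head merge ratios, and the use of average pooling as the token-merging operator are illustrative assumptions.

# Minimal sketch of multi-scale token aggregation for self-attention.
# NOT the paper's implementation: the class name, per-head merge ratios, and the
# use of average pooling as the token-merging operator are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleTokenAttention(nn.Module):
    def __init__(self, dim, num_heads=4, merge_ratios=(1, 2, 7, 14)):
        super().__init__()
        assert dim % num_heads == 0 and len(merge_ratios) == num_heads
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.merge_ratios = merge_ratios          # per-head downsampling rate for keys/values
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):
        # x: (B, N, C) token sequence with N = H * W spatial tokens
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        head_outputs = []
        for h, r in enumerate(self.merge_ratios):
            if r > 1:
                # Merge r x r neighbouring tokens into one coarse token so this head
                # attends over fewer, larger-receptive-field tokens.
                feat = x.transpose(1, 2).reshape(B, C, H, W)
                feat = F.avg_pool2d(feat, kernel_size=r, stride=r)
                feat = feat.flatten(2).transpose(1, 2)     # (B, N / r^2, C)
            else:
                feat = x                                   # keep fine-grained tokens
            kv = self.kv(feat).reshape(B, -1, 2, self.num_heads, self.head_dim)
            k, v = kv[:, :, 0, h], kv[:, :, 1, h]          # (B, M, head_dim) for head h
            attn = (q[:, h] @ k.transpose(-2, -1)) * self.scale
            head_outputs.append(attn.softmax(dim=-1) @ v)  # (B, N, head_dim)

        return self.proj(torch.cat(head_outputs, dim=-1))  # (B, N, C)

# Example: 2 images, a 14 x 14 token grid, embedding dim 64.
tokens = torch.randn(2, 14 * 14, 64)
layer = MultiScaleTokenAttention(dim=64, num_heads=4, merge_ratios=(1, 2, 7, 14))
print(layer(tokens, H=14, W=14).shape)   # torch.Size([2, 196, 64])

The design choice mirrored here is that heads with large merge ratios see a few coarse tokens (larger objects, cheaper attention), while heads with ratio 1 keep every fine-grained token, so a single layer attends at several scales at once.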

Bibliographic Details
Main Authors: REN, Sucheng, ZHOU, Daquan, HE, Shengfeng, FENG, Jiashi, WANG, Xinchao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Publication Date: 2022-06-01
DOI: 10.1109/CVPR52688.2022.01058
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Subjects: Computation costs; Deep learning architecture and technique; Efficient learning; Efficient learning and inference; Image patches; Learning architectures; Learning techniques; Multi-scales; Receptive fields; Transformer modeling; Databases and Information Systems; Information Security
Online Access: https://ink.library.smu.edu.sg/sis_research/8530
https://ink.library.smu.edu.sg/context/sis_research/article/9533/viewcontent/Shunted_self_attention_via_multi_scale_token_aggregation.pdf
Institution: Singapore Management University