Wave-ViT: Unifying wavelet and transformers for visual representation learning

The multi-scale Vision Transformer (ViT) has emerged as a powerful backbone for computer vision tasks, but the self-attention computation in the Transformer scales quadratically w.r.t. the number of input patches. Existing solutions therefore commonly employ down-sampling operations (e.g., average pooling) over keys/values to dramatically reduce the computational cost. In this work, we argue that such over-aggressive down-sampling is not invertible and inevitably causes information loss, especially for high-frequency components in objects (e.g., texture details). Motivated by wavelet theory, we construct a new Wavelet Vision Transformer (Wave-ViT) that formulates invertible down-sampling with wavelet transforms and self-attention learning in a unified way. This enables self-attention learning with lossless down-sampling over keys/values, facilitating the pursuit of a better efficiency-vs-accuracy trade-off. Furthermore, inverse wavelet transforms are leveraged to strengthen self-attention outputs by aggregating local contexts with an enlarged receptive field. We validate the superiority of Wave-ViT through extensive experiments over multiple vision tasks (e.g., image recognition, object detection and instance segmentation). Its performance surpasses state-of-the-art ViT backbones with comparable FLOPs. Source code is available at https://github.com/YehLi/ImageNetModel.

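The abstract describes replacing lossy key/value pooling with an invertible wavelet transform inside self-attention. Below is a minimal, illustrative PyTorch sketch of that idea, assuming a one-level Haar wavelet as the invertible down-sampling; the names HaarDWT and WaveletDownsampleAttention are hypothetical and this is not the authors' implementation (the official code is at https://github.com/YehLi/ImageNetModel).

import torch
import torch.nn as nn
import torch.nn.functional as F  # assumes PyTorch >= 2.0 for scaled_dot_product_attention


class HaarDWT(nn.Module):
    """One-level 2D Haar transform: (B, C, H, W) -> (B, 4C, H/2, W/2).

    All four sub-bands are kept and stacked along channels, so the 2x spatial
    down-sampling is orthonormal and invertible, unlike average pooling,
    which keeps only the low-frequency band.
    """

    def forward(self, x):
        # Split each 2x2 block into its four polyphase components.
        a = x[:, :, 0::2, 0::2]
        b = x[:, :, 0::2, 1::2]
        c = x[:, :, 1::2, 0::2]
        d = x[:, :, 1::2, 1::2]
        ll = (a + b + c + d) / 2    # low-frequency band (what pooling would keep)
        lh = (-a - b + c + d) / 2   # high-frequency bands (what pooling drops)
        hl = (-a + b - c + d) / 2
        hh = (a - b - c + d) / 2
        return torch.cat([ll, lh, hl, hh], dim=1)


class WaveletDownsampleAttention(nn.Module):
    """Self-attention whose keys/values come from wavelet-down-sampled features."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.q = nn.Linear(dim, dim)
        self.dwt = HaarDWT()
        self.kv = nn.Linear(4 * dim, 2 * dim)  # 4C sub-band channels -> keys and values
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):
        B, N, C = x.shape                       # N = H * W patch tokens
        q = self.q(x)

        # Lossless down-sampling of keys/values: N tokens -> N / 4 tokens.
        feat = x.transpose(1, 2).reshape(B, C, H, W)
        sub = self.dwt(feat)                    # (B, 4C, H/2, W/2)
        sub = sub.flatten(2).transpose(1, 2)    # (B, N/4, 4C)
        k, v = self.kv(sub).chunk(2, dim=-1)    # each (B, N/4, C)

        def heads(t):
            return t.reshape(B, -1, self.num_heads, C // self.num_heads).transpose(1, 2)

        out = F.scaled_dot_product_attention(heads(q), heads(k), heads(v))
        out = out.transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 14 * 14, 64)             # 2 images, a 14x14 patch grid, dim 64
    attn = WaveletDownsampleAttention(dim=64, num_heads=8)
    print(attn(x, 14, 14).shape)                # torch.Size([2, 196, 64])

Because the Haar mixing above is orthonormal, the 4x token reduction for keys/values discards nothing, in contrast to average pooling. The paper additionally applies inverse wavelet transforms to the attention output to aggregate local context with an enlarged receptive field; that part is omitted from this sketch for brevity.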

Bibliographic Details
Main Authors: YAO, Ting; PAN, Yingwei; LI, Yehao; NGO, Chong-wah; MEI, Tao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects: Vision transformer; Wavelet transform; Self-attention learning; Image recognition; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
DOI: 10.1007/978-3-031-19806-9_19
License: http://creativecommons.org/licenses/by-nc-nd/4.0/ (CC BY-NC-ND 4.0)
Collection: Research Collection School Of Computing and Information Systems (InK@SMU, SMU Libraries)
Online Access:https://ink.library.smu.edu.sg/sis_research/7508
https://ink.library.smu.edu.sg/context/sis_research/article/8511/viewcontent/2207.04978.pdf
Institution: Singapore Management University