Wave-ViT: Unifying wavelet and transformers for visual representation learning
Multi-scale Vision Transformer (ViT) has emerged as a powerful backbone for computer vision tasks, but the self-attention computation in Transformers scales quadratically with the number of input patches. Thus, existing solutions commonly employ down-sampling operations (e.g., average pooling) over ke...
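For intuition, the sketch below is a minimal NumPy illustration (not the authors' Wave-ViT code) of the key/value down-sampling idea the abstract refers to: pooling the key/value tokens by a factor r shrinks the attention map from N x N to N x (N / r^2). The function names `downsampled_attention` and `avg_pool_tokens` are made up for this example.

```python
# Hypothetical sketch: single-head self-attention where keys/values are
# average-pooled before the attention computation, reducing cost from
# O(N^2) to O(N * N / r^2) for a reduction ratio r. Not the Wave-ViT method,
# which replaces plain pooling with a wavelet-based transform.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def avg_pool_tokens(x, h, w, r):
    """Average-pool an (N, C) token grid laid out as h x w by a factor r."""
    c = x.shape[-1]
    grid = x.reshape(h // r, r, w // r, r, c)
    return grid.mean(axis=(1, 3)).reshape(-1, c)      # (N / r^2, C)

def downsampled_attention(x, h, w, r=2):
    """Attention with pooled keys/values; queries keep the full token length."""
    n, c = x.shape
    q = x                                             # (N, C)
    kv = avg_pool_tokens(x, h, w, r)                  # (N / r^2, C)
    attn = softmax(q @ kv.T / np.sqrt(c))             # (N, N / r^2) instead of (N, N)
    return attn @ kv                                  # (N, C)

if __name__ == "__main__":
    h = w = 16                                        # 16 x 16 = 256 patch tokens
    tokens = np.random.randn(h * w, 64)
    out = downsampled_attention(tokens, h, w, r=2)
    print(out.shape)                                  # (256, 64)
```

Under these assumptions, the pooled keys/values trade some spatial detail for a roughly r^2-fold reduction in attention cost, which is the trade-off the abstract motivates.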
Main Authors: YAO, Ting; PAN, Yingwei; LI, Yehao; NGO, Chong-wah; MEI, Tao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/7508
https://ink.library.smu.edu.sg/context/sis_research/article/8511/viewcontent/2207.04978.pdf
Institution: Singapore Management University
Similar Items
- Face recognition by applying wavelet subband representation and kernel associative memory
  by: Zhang, B.-L., et al.
  Published: (2014)
- The application of wavelet transform to analyze the rainfall data
  by: Nareemal Hilae
  Published: (2018)
- Exponential B-Splines: Scale-Space and Wavelet Representations
  by: LOR CHOON YEE
  Published: (2012)
- Unifying global-local representations in salient object detection with transformers
  by: REN, Sucheng, et al.
  Published: (2024)
- Efficient architecture for discrete wavelet transform using daubechies
  by: Zhang Xiaoyin
  Published: (2011)