MLP-3D: A MLP-like 3D architecture with grouped time mixing

Convolutional Neural Networks (CNNs) have been regarded as the go-to models for visual recognition. More recently, convolution-free networks, based on multi-head self-attention (MSA) or multi-layer perceptrons (MLPs), have become increasingly popular. Nevertheless, utilizing these newly-minted networks for video recognition is not trivial due to the large variations and complexities in video data. In this paper, we present MLP-3D networks, a novel MLP-like 3D architecture for video recognition. Specifically, the architecture consists of MLP-3D blocks, where each block contains one MLP applied across tokens (i.e., token-mixing MLP) and one MLP applied independently to each token (i.e., channel MLP). By deriving the novel grouped time mixing (GTM) operations, we equip the basic token-mixing MLP with the ability of temporal modeling. GTM divides the input tokens into several temporal groups and linearly maps the tokens in each group with a shared projection matrix. Furthermore, we devise several variants of GTM with different grouping strategies, and compose each variant into different blocks of the MLP-3D network by greedy architecture search. Without relying on convolutions or attention mechanisms, our MLP-3D networks achieve 68.5%/81.4% top-1 accuracy on the Something-Something V2 and Kinetics-400 datasets, respectively. Despite requiring fewer computations, the results are comparable to those of state-of-the-art, widely-used 3D CNNs and video transformers.
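To make the two ideas in the abstract concrete, the sketch below shows a grouped time mixing (GTM) operation and a Mixer-style block that pairs it with a spatial token-mixing MLP and a channel MLP. This is a minimal PyTorch sketch under stated assumptions: the (batch, frames, spatial tokens, channels) token layout, the names GroupedTimeMixing and MLP3DBlock, and all hyperparameters are illustrative choices, not the authors' implementation or the architecture found by their greedy search.

```python
import torch
import torch.nn as nn


class GroupedTimeMixing(nn.Module):
    """Sketch of grouped time mixing (GTM): the temporal axis is split into
    groups of `frames_per_group` frames, and the frames inside every group are
    mixed by one shared linear projection (layout and naming are assumptions)."""

    def __init__(self, frames_per_group: int):
        super().__init__()
        self.g = frames_per_group
        # One projection matrix shared by all temporal groups.
        self.proj = nn.Linear(frames_per_group, frames_per_group, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, C) = batch, frames, spatial tokens, channels; T % g == 0.
        b, t, n, c = x.shape
        x = x.reshape(b, t // self.g, self.g, n, c)   # split T into groups of g frames
        x = x.movedim(2, -1)                          # (B, T/g, N, C, g)
        x = self.proj(x)                              # mix frames within each group
        return x.movedim(-1, 2).reshape(b, t, n, c)   # restore (B, T, N, C)


class MLP3DBlock(nn.Module):
    """Minimal Mixer-style block: token mixing equipped with GTM for temporal
    modeling, then a channel MLP applied independently to each token, each with
    pre-normalization and a residual connection (hyperparameters illustrative)."""

    def __init__(self, dim: int, num_tokens: int, frames_per_group: int, hidden: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.time_mix = GroupedTimeMixing(frames_per_group)
        self.token_mix = nn.Linear(num_tokens, num_tokens)   # spatial token-mixing MLP
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(                     # per-token channel MLP
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, C)
        y = self.norm1(x)
        y = self.time_mix(y)                                    # temporal mixing via GTM
        y = self.token_mix(y.transpose(2, 3)).transpose(2, 3)   # mix across spatial tokens
        x = x + y
        return x + self.channel_mlp(self.norm2(x))


# Example: 16 frames, 14x14 = 196 spatial tokens, 256 channels, temporal groups of 4.
block = MLP3DBlock(dim=256, num_tokens=196, frames_per_group=4, hidden=512)
out = block(torch.randn(2, 16, 196, 256))   # -> torch.Size([2, 16, 196, 256])
```

Because the projection matrix is shared across all temporal groups, this grouped mixing keeps the parameter count independent of the clip length, which is one plausible reason the paper reports competitive accuracy at a lower computational cost than full 3D mixing.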


Bibliographic Details
Main Authors: QIU, Zhaofan, YAO, Ting, NGO, Chong-wah, MEI, Tao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects: Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/7505
https://ink.library.smu.edu.sg/context/sis_research/article/8508/viewcontent/Qiu_MLP_3D_A_MLP_Like_3D_Architecture_With_Grouped_Time_Mixing_CVPR_2022_paper.pdf
Institution: Singapore Management University
DOI: 10.1109/CVPR52688.2022.00307
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Date: 2022-06-01
Collection: Research Collection School Of Computing and Information Systems, InK@SMU (SMU Libraries)