Temporal feature extraction for video-based activity recognition

Saved in:
Bibliographic Details
Main Author: Chen, Zhiyang
Other Authors: Mao Kezhi
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/164378
Institution: Nanyang Technological University
Description
Summary: With the growth of modern media, video understanding has become a prominent research topic. Convolutional Neural Networks (CNNs) have proven very effective in image classification, but directly applying a traditional CNN to video action recognition is not feasible because it cannot learn motion information. In this dissertation, we study the two current mainstream temporal feature extraction methods, the two-stream CNN and the 3D CNN, together with their variants. Our work supports the following conclusions: (i) 3D CNN models are more prone to overfitting, and a small video dataset is not sufficient to train a deep 3D CNN from scratch; transferring and fine-tuning a pre-trained model helps address this problem. (ii) The performance of the two-stream CNN can be improved by building interaction features between the two streams after a late convolutional layer. (iii) Factorizing a 3D convolution into separate 2D (spatial) and 1D (temporal) convolutions can boost the performance of the 3D CNN. (iv) Using optical flow input in a 3D CNN can also improve prediction accuracy.
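The factorization in conclusion (iii) replaces each t x k x k 3D kernel with a 1 x k x k spatial convolution followed by a t x 1 x 1 temporal convolution. A minimal sketch of the parameter accounting is below; the channel sizes and the choice of intermediate width are illustrative assumptions, not values taken from the thesis:

```python
# Sketch: parameter budget of a full 3D convolution vs. a (2+1)D
# factorization (2D spatial conv followed by 1D temporal conv).
# Channel sizes here are illustrative assumptions, not from the thesis.

def conv3d_params(c_in, c_out, t, k):
    # one full t x k x k 3D kernel per (input, output) channel pair
    return c_in * c_out * t * k * k

def conv2plus1d_params(c_in, c_out, c_mid, t, k):
    # 1 x k x k spatial conv into c_mid channels,
    # then t x 1 x 1 temporal conv to c_out channels
    return c_in * c_mid * k * k + c_mid * c_out * t

c_in, c_out, t, k = 64, 64, 3, 3
full = conv3d_params(c_in, c_out, t, k)

# Pick the intermediate width so the factorized block has roughly the
# same parameter budget as the full 3D convolution.
c_mid = (t * k * k * c_in * c_out) // (k * k * c_in + t * c_out)
factored = conv2plus1d_params(c_in, c_out, c_mid, t, k)

print(full, c_mid, factored)  # 110592 144 110592
```

With a matched parameter budget, the factorized block inserts an extra nonlinearity between the spatial and temporal convolutions, which is one reason such decompositions can outperform plain 3D convolutions.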