Visual event recognition in videos

Bibliographic Details
Main Author: Chan, Kerlina Pei Min.
Other Authors: Xu Dong
Format: Final Year Project
Language: English
Published: 2012
Subjects:
Online Access:http://hdl.handle.net/10356/48724
Institution: Nanyang Technological University
Description
Summary: The report documents the methods implemented and the evaluations carried out in this project. The project aims to create a framework with an efficient classifier for visual event recognition in videos. First, a dataset of videos covering six event classes was obtained from the Kodak database. Next, the videos were divided manually into training and testing sets. The space-time interest points (STIP) feature extraction method was then used to extract interest points from all videos, and K-means clustering was applied to the resulting descriptors to determine the visual-word clusters. For classification, each video was represented as a histogram over these clusters. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) were the two classification methods implemented in this project. Finally, the performance of the classifiers was evaluated, and the best-performing classifier was selected for the framework. A user-friendly graphical user interface (GUI) was created on top of the framework for visual event recognition in videos.
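
The bag-of-visual-words pipeline described in the summary (STIP descriptors, K-means vocabulary, histograms, then KNN/SVM classification) can be sketched roughly as follows. This is a minimal illustrative sketch using scikit-learn, not the project's actual implementation: the descriptor arrays, label lists, vocabulary size, neighbor count and kernel choice are hypothetical placeholders, and STIP descriptor extraction is assumed to have been performed separately.

```python
# Minimal sketch of a bag-of-visual-words video classification pipeline.
# Assumes STIP descriptors (e.g. HOG/HOF) have already been extracted per video;
# `train_descriptors`, `train_labels`, etc. are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC


def build_vocabulary(descriptor_sets, n_words=500):
    """Cluster all training descriptors into a visual vocabulary."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)


def to_histogram(descriptors, vocabulary):
    """Quantise one video's descriptors and build a normalised word histogram."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


def train_and_evaluate(train_descriptors, train_labels,
                       test_descriptors, test_labels):
    vocab = build_vocabulary(train_descriptors)
    X_train = np.array([to_histogram(d, vocab) for d in train_descriptors])
    X_test = np.array([to_histogram(d, vocab) for d in test_descriptors])

    # The two classifiers compared in the project: KNN and SVM.
    for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM", SVC(kernel="rbf", C=1.0))]:
        clf.fit(X_train, train_labels)
        acc = clf.score(X_test, test_labels)
        print(f"{name} accuracy: {acc:.3f}")
```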