Rushes video summarization by object and event understanding

Bibliographic Details
Main Authors: WANG, Feng, NGO, Chong-wah
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2007
Online Access: https://ink.library.smu.edu.sg/sis_research/6537
https://ink.library.smu.edu.sg/context/sis_research/article/7540/viewcontent/1290031.1290035.pdf
Institution: Singapore Management University
Description
Summary: This paper explores a variety of visual and audio analysis techniques for selecting the most representative video clips in rushes summarization at TRECVID 2007. These techniques include object detection, camera motion estimation, keypoint matching and tracking, audio classification, and speech recognition. Our system consists of two major steps. First, based on video structuring, we filter undesirable shots and minimize inter-shot redundancy through repetitive shot detection. Second, a representability measure is proposed to model, within a video clip, the presence of objects and four audio-visual events: motion activity of objects, camera motion, scene changes, and speech content. The video clips with the highest representability scores are selected for the summary. The evaluation at TRECVID shows highly encouraging results: our system ranks first in EA (easy to understand), second in RE (little redundancy), and third in IN (inclusion of objects and events).
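The record describes the representability measure only at a high level. The sketch below is a rough illustration, not the authors' actual formulation: it assumes the score is a weighted linear combination of five per-clip component scores (object presence, object motion, camera motion, scene change, speech content), each normalized to [0, 1], and then selects the highest-scoring clips. All names, weights, and the linear form are assumptions for illustration only.

```python
# Minimal sketch of clip selection by a representability-style score.
# The exact measure is not given in this record; a weighted linear
# combination of five hypothetical per-clip component scores is assumed.

from dataclasses import dataclass

@dataclass
class ClipScores:
    clip_id: str
    object_presence: float   # e.g. fraction of frames with detected objects
    object_motion: float     # motion activity of tracked objects/keypoints
    camera_motion: float     # estimated camera-motion magnitude
    scene_change: float      # evidence of a scene change within the clip
    speech: float            # amount of recognized speech content

# Hypothetical weights; in practice these would be tuned or learned.
WEIGHTS = {
    "object_presence": 0.30,
    "object_motion": 0.25,
    "camera_motion": 0.15,
    "scene_change": 0.15,
    "speech": 0.15,
}

def representability(c: ClipScores) -> float:
    """Weighted sum of the normalized component scores for one clip."""
    return (WEIGHTS["object_presence"] * c.object_presence
            + WEIGHTS["object_motion"] * c.object_motion
            + WEIGHTS["camera_motion"] * c.camera_motion
            + WEIGHTS["scene_change"] * c.scene_change
            + WEIGHTS["speech"] * c.speech)

def select_clips(clips: list[ClipScores], k: int) -> list[ClipScores]:
    """Pick the k clips with the highest representability scores."""
    return sorted(clips, key=representability, reverse=True)[:k]

if __name__ == "__main__":
    clips = [
        ClipScores("shot_01", 0.9, 0.6, 0.2, 0.1, 0.8),
        ClipScores("shot_02", 0.4, 0.3, 0.7, 0.9, 0.1),
        ClipScores("shot_03", 0.7, 0.8, 0.4, 0.2, 0.5),
    ]
    for c in select_clips(clips, k=2):
        print(c.clip_id, round(representability(c), 3))
```

In this toy setup, clips whose detected objects, motion, and speech contribute most to the assumed weighted score are kept; the actual system additionally filters undesirable shots and removes repetitive shots before scoring, as described in the summary above.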