Multimodal knowledge-based analysis in multimedia event detection

Bibliographic Details
Main Authors: Younessian, Ehsan; Mitamura, Teruko; Hauptmann, Alexander
Other Authors: School of Computer Engineering
Format: Conference or Workshop Item
Language: English
Published: 2013
Online Access:https://hdl.handle.net/10356/84248
http://hdl.handle.net/10220/12649
Institution: Nanyang Technological University
Description
Summary: Multimedia Event Detection (MED) is a multimedia retrieval task with the goal of finding videos of a particular event in a large-scale Internet video archive, given example videos and text descriptions. We focus on multimodal knowledge-based analysis in MED, where we utilize meaningful, semantic features such as Automatic Speech Recognition (ASR) transcripts, acoustic concept indexing (i.e. 42 acoustic concepts), and visual semantic indexing (i.e. 346 visual concepts) to characterize videos in the archive. We study two scenarios in which we either do or do not use the provided example videos. In the former, we propose a novel Adaptive Semantic Similarity (ASS) measure to compute textual similarity between the ASR transcripts of videos. We also incorporate acoustic concept indexing and classification to retrieve test videos, especially those with few spoken words. In the latter, 'ad-hoc' scenario, where no example videos are available, we use only the event-kit description to retrieve test videos through their ASR transcripts and visual semantics. We also propose an event-specific fusion scheme to combine the textual and visual retrieval outputs. Our results show the effectiveness of the proposed ASS and acoustic concept indexing methods and their complementary roles. We also conduct a set of experiments to assess the proposed framework in the 'ad-hoc' scenario.
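
Note: The event-specific fusion scheme is described only at a high level in the summary. The Python sketch below shows one plausible reading, a weighted late fusion of normalized textual and visual retrieval scores with a per-event text weight. The function names, the min-max normalization, and the weighting scheme are assumptions made for illustration, not the authors' actual implementation.

# Hypothetical sketch of event-specific late fusion of textual and visual
# retrieval scores. All names and the normalization scheme are assumptions.

from typing import Dict


def min_max_normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Rescale retrieval scores to [0, 1] so the two modalities are comparable."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {vid: 0.0 for vid in scores}
    return {vid: (s - lo) / (hi - lo) for vid, s in scores.items()}


def fuse_event_scores(
    text_scores: Dict[str, float],
    visual_scores: Dict[str, float],
    text_weight: float,
) -> Dict[str, float]:
    """Weighted late fusion; text_weight would be chosen per event
    (e.g. on validation data), giving the 'event-specific' behaviour."""
    text_n = min_max_normalize(text_scores)
    visual_n = min_max_normalize(visual_scores)
    videos = set(text_n) | set(visual_n)
    return {
        vid: text_weight * text_n.get(vid, 0.0)
        + (1.0 - text_weight) * visual_n.get(vid, 0.0)
        for vid in videos
    }


if __name__ == "__main__":
    # Toy example: textual and visual retrieval scores for three test videos.
    text = {"vid_a": 2.1, "vid_b": 0.4, "vid_c": 1.3}
    visual = {"vid_a": 0.2, "vid_b": 0.9, "vid_c": 0.5}
    fused = fuse_event_scores(text, visual, text_weight=0.7)
    for vid, score in sorted(fused.items(), key=lambda kv: -kv[1]):
        print(vid, round(score, 3))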