Video concept detection by learning from web images: A case study on cross domain learning

Bibliographic Details
Main Authors: ZHU, Shiai, YAO, Ting, NGO, Chong-wah
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2013
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/6597
https://ink.library.smu.edu.sg/context/sis_research/article/7600/viewcontent/icme2013zhu.pdf
Institution: Singapore Management University
Description
Summary: Concept detection is arguably the most important research problem in the area of multimedia. The need to build models from sufficient and diverse training instances, however, makes the task computationally expensive and resource-intensive. Meanwhile, the popularity of social media has generated massive amounts of weakly tagged images that could be leveraged for learning concept models. In this paper, we therefore explore weakly tagged Web images to shed light on video concept detection. Specifically, two sets of Web images downloaded from Flickr are used as training data for concept detection on two real-world large-scale video datasets released by TRECVID. Our experiments are conducted under different settings, with and without transfer learning. The results indicate that Web images are helpful when few training instances are available in the video domain, which is a common situation in many real-world applications.
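To make the cross-domain setting concrete, below is a minimal sketch of training a concept classifier on source-domain (Web image) features and evaluating it on target-domain (video keyframe) features with only a few labeled video examples. The synthetic features, the linear SVM, and the simple sample-reweighting adaptation are illustrative assumptions, not the authors' exact method or the TRECVID/Flickr data.

```python
# Illustrative sketch only: features, dataset sizes, and the reweighting-based
# adaptation are assumptions standing in for the paper's actual pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Stand-ins for pre-computed visual features (e.g., bag-of-visual-words vectors).
X_web, y_web = rng.normal(size=(2000, 500)), rng.integers(0, 2, 2000)      # Flickr images (source domain)
X_vid, y_vid = rng.normal(size=(300, 500)), rng.integers(0, 2, 300)        # video keyframes (target domain)

# Mimic the "few available training instances in the video domain" scenario:
# only a handful of labeled keyframes are used for training.
n_target = 20
X_train = np.vstack([X_web, X_vid[:n_target]])
y_train = np.concatenate([y_web, y_vid[:n_target]])
# Upweight target-domain samples so the classifier leans toward the video domain.
weights = np.concatenate([np.full(len(X_web), 1.0), np.full(n_target, 10.0)])

clf = LinearSVC(C=1.0).fit(X_train, y_train, sample_weight=weights)
scores = clf.decision_function(X_vid[n_target:])
print("AP on held-out keyframes:", average_precision_score(y_vid[n_target:], scores))
```

With real features, the same skeleton supports comparing the "without transfer" baseline (train on Web images only) against the adapted model by simply dropping or keeping the target-domain samples and their weights.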