Sampling and ontologically pooling web images for visual concept learning

Bibliographic Details
Main Authors: ZHU, Shiai, NGO, Chong-wah, JIANG, Yu-Gang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2012
Online Access:https://ink.library.smu.edu.sg/sis_research/6340
https://ink.library.smu.edu.sg/context/sis_research/article/7343/viewcontent/tmm12_trainingset.pdf
Institution: Singapore Management University
Description
Summary: Sufficient training examples are essential for effective learning of semantic visual concepts. In practice, however, acquiring noise-free training examples has always been expensive. Recently, the rapid popularization of social media websites, such as Flickr, has made it possible to collect training exemplars without human assistance. This paper proposes a novel and efficient approach to collecting training samples from noisily tagged Web images for visual concept learning, in which we aim to maximize two criteria of the automatically generated training sets: relevancy and coverage. For the former, a simple method named semantic field is introduced to handle imprecise and incomplete image tags. Specifically, the relevancy of an image to a target concept is predicted by collectively analyzing the image's associated tag list using two knowledge sources: the WordNet corpus and tag statistics from Flickr.com. To boost the coverage, or diversity, of the training sets, we further propose an ontology-based hierarchical pooling method that collects samples not only for the target concept itself but also from ontologically neighboring concepts. Extensive experiments on three different datasets (NUS-WIDE, PASCAL VOC, and ImageNet) demonstrate the effectiveness of our approach, which produces competitive performance even when compared with concept classifiers learned from expert-labeled training examples.
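
The summary names two components: a semantic-field relevancy score computed from an image's tag list using WordNet and Flickr tag statistics, and an ontology-based hierarchical pooling step over neighboring concepts. The Python sketch below is illustrative only and is not the authors' implementation: the co-occurrence table, neighbor list, discount factor, and threshold are assumptions made for this example, and NLTK's WordNet interface stands in for the paper's knowledge-based term.

    # Illustrative sketch (not the paper's implementation): score the relevancy of a
    # noisily tagged image to a target concept by combining a WordNet similarity term
    # with a hypothetical pre-computed Flickr tag co-occurrence table, then pool
    # candidates for the target concept and, with a discount, its ontological neighbors.
    from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

    def wordnet_similarity(tag: str, concept: str) -> float:
        """Maximum path similarity over all synset pairs of tag and concept."""
        scores = [
            s1.path_similarity(s2) or 0.0   # path_similarity may return None
            for s1 in wn.synsets(tag)
            for s2 in wn.synsets(concept)
        ]
        return max(scores, default=0.0)

    def relevancy(tags, concept, cooccurrence, alpha=0.5):
        """Average a WordNet term and a co-occurrence term over the image's tag list.

        cooccurrence[(a, b)] is an assumed normalized co-occurrence frequency of
        tags a and b mined offline from Flickr, in [0, 1].
        """
        if not tags:
            return 0.0
        total = 0.0
        for tag in tags:
            wn_score = wordnet_similarity(tag, concept)
            flickr_score = cooccurrence.get((tag, concept), 0.0)
            total += alpha * wn_score + (1 - alpha) * flickr_score
        return total / len(tags)

    def pool_training_set(images, concept, neighbours, cooccurrence,
                          threshold=0.3, neighbour_discount=0.8):
        """Keep images relevant to the concept or (discounted) to its neighbors."""
        pooled = []
        for image_id, tags in images.items():
            score = relevancy(tags, concept, cooccurrence)
            for n in neighbours:
                score = max(score, neighbour_discount * relevancy(tags, n, cooccurrence))
            if score >= threshold:
                pooled.append((image_id, score))
        return sorted(pooled, key=lambda x: -x[1])

    if __name__ == "__main__":
        images = {"img1": ["puppy", "grass", "park"], "img2": ["skyline", "night"]}
        cooc = {("puppy", "dog"): 0.9}  # toy co-occurrence statistics
        print(pool_training_set(images, "dog", neighbours=["animal"], cooccurrence=cooc))

In a full pipeline of this kind, all downloaded Flickr images would be ranked per concept and the top-scoring ones kept as pseudo-positive training examples; the threshold and neighbor discount above are placeholders chosen for the toy example.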