Sampling and ontologically pooling web images for visual concept learning

Sufficient training examples are essential for effective learning of semantic visual concepts. In practice, however, acquiring noise-free training examples has always been expensive. Recently the rapid popularization of social media websites, such as Flickr, has made it possible to collect training exemplars without human assistance. This paper proposes a novel and efficient approach to collect training samples from the noisily tagged Web images for visual concept learning, where we try to maximize two important criteria, relevancy and coverage, of the automatically generated training sets. For the former, a simple method named semantic field is introduced to handle the imprecise and incomplete image tags. Specifically, the relevancy of an image to a target concept is predicted by collectively analyzing the associated tag list of the image using two knowledge sources: WordNet corpus and statistics from Flickr.com. To boost the coverage or diversity of the training sets, we further propose an ontology-based hierarchical pooling method to collect samples not only based on the target concept alone, but also from ontologically neighboring concepts. Extensive experiments on three different datasets (NUS-WIDE, PASCAL VOC, and ImageNet) demonstrate the effectiveness of our proposed approach, producing competitive performance even when comparing with concept classifiers learned using expert-labeled training examples.
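The relevancy criterion described in the abstract — scoring an image against a target concept by collectively analyzing its tag list with corpus statistics — can be sketched as follows. This is a minimal illustration only, not the paper's actual semantic field method: the co-occurrence counts are made-up toy numbers standing in for real WordNet and Flickr.com statistics, and the similarity measure is a simple co-occurrence-based distance chosen for brevity.

```python
from math import log

# Toy tag-frequency and co-occurrence statistics. In the paper these come
# from WordNet and Flickr.com; here they are illustrative made-up numbers.
TAG_COUNT = {"dog": 500, "pet": 300, "grass": 400, "car": 450, "road": 350}
PAIR_COUNT = {
    frozenset({"dog", "pet"}): 200,
    frozenset({"dog", "grass"}): 90,
    frozenset({"car", "road"}): 180,
}
TOTAL = 10_000  # total number of tagged images in the toy corpus


def similarity(a: str, b: str) -> float:
    """Co-occurrence-based similarity between two tags, in [0, 1]."""
    if a == b:
        return 1.0
    pair = PAIR_COUNT.get(frozenset({a, b}), 0)
    if pair == 0:
        return 0.0
    fa, fb = TAG_COUNT[a], TAG_COUNT[b]
    # Normalized-distance form: small when the tags co-occur often.
    dist = (max(log(fa), log(fb)) - log(pair)) / (log(TOTAL) - min(log(fa), log(fb)))
    return max(0.0, 1.0 - dist)


def relevancy(target: str, tags: list[str]) -> float:
    """Score one image for a target concept from its whole tag list."""
    if not tags:
        return 0.0
    return sum(similarity(target, t) for t in tags) / len(tags)


# An image tagged {dog, pet, grass} should score higher for "dog"
# than one tagged {car, road}.
score_dog = relevancy("dog", ["dog", "pet", "grass"])
score_car = relevancy("dog", ["car", "road"])
```

Images ranked by such a score can then be sampled as pseudo-positive training examples, which is the role the relevancy criterion plays in the proposed pipeline.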


Bibliographic Details
Main Authors: ZHU, Shiai, NGO, Chong-wah, JIANG, Yu-Gang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2012
Subjects: Training set construction; visual concept learning; web images; Computer Sciences; Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/6340
https://ink.library.smu.edu.sg/context/sis_research/article/7343/viewcontent/tmm12_trainingset.pdf
Institution: Singapore Management University
DOI: 10.1109/TMM.2012.2190387
Published Online: 2012-08-01
Collection: Research Collection School Of Computing and Information Systems (InK@SMU, SMU Libraries)
License: http://creativecommons.org/licenses/by-nc-nd/4.0/