On the pooling of positive examples with ontology for visual concept learning


Full Description

Bibliographic Details
Main Authors: ZHU, Shiai, NGO, Chong-wah, JIANG, Yu-Gang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2011
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/6520
https://ink.library.smu.edu.sg/context/sis_research/article/7523/viewcontent/2072298.2071934.pdf
Institution: Singapore Management University
Description
Summary: A common obstacle in effective learning of visual concept classifiers is the scarcity of positive training examples due to the high cost of labeling. This paper explores the sampling of weakly tagged web images for concept learning without human assistance. In particular, ontology knowledge is incorporated for semantic pooling of positive examples from ontologically neighboring concepts. This effectively widens the coverage of the positive samples with visually more diversified content, which is important for learning a good concept classifier. We experiment with two learning strategies: aggregate and incremental. The former strategy re-trains a new classifier by combining existing and newly collected examples, while the latter updates the existing model using the new samples incrementally. Extensive experiments on the NUS-WIDE and VOC 2010 datasets show very encouraging results, even when compared with classifiers learned from expert-labeled training examples.
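The two learning strategies summarized above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual pipeline: scikit-learn's `SGDClassifier` is a hypothetical stand-in for the concept classifier, and the arrays `X_old`/`X_new` stand in for the original training set and the web examples pooled from neighboring concepts.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: (X_old, y_old) for the existing training examples,
# (X_new, y_new) for examples pooled from ontologically neighboring concepts.
X_old = rng.normal(size=(100, 10))
y_old = (X_old[:, 0] > 0).astype(int)
X_new = rng.normal(size=(50, 10))
y_new = (X_new[:, 0] > 0).astype(int)

# Aggregate strategy: re-train a fresh classifier on the combined pool
# of existing and newly collected examples.
aggregate = SGDClassifier(random_state=0)
aggregate.fit(np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))

# Incremental strategy: keep the existing model and update it
# with the new samples only.
incremental = SGDClassifier(random_state=0)
incremental.fit(X_old, y_old)          # existing classifier
incremental.partial_fit(X_new, y_new)  # incremental update with pooled examples
```

The trade-off mirrors the paper's comparison: the aggregate strategy revisits all data at every update, while the incremental strategy is cheaper per update since only the new samples are seen.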