Semantic context modeling with maximal margin conditional random fields for automatic image annotation

Bibliographic Details
Main Authors: XIANG, Yu, ZHOU, Xiangdong, LIU, Zuotao, CHUA, Tat-Seng, NGO, Chong-wah
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2010
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/6601
https://ink.library.smu.edu.sg/context/sis_research/article/7604/viewcontent/xiang_cvpr10.pdf
Item Description
Summary: Context modeling for vision recognition and Automatic Image Annotation (AIA) has attracted increasing attention in recent years. Among the various types of contextual information and resources, semantic context has been exploited in AIA with promising results. However, previous works either cast the problem as structural classification or adopted multi-layer modeling, and thus suffer from limited scalability or model efficiency. In this paper, we propose a novel discriminative Conditional Random Field (CRF) model for semantic context modeling in AIA, which is built over semantic concepts and treats an image as a whole observation without segmentation. Our model captures the interactions between semantic concepts at both the semantic and visual levels in an integrated manner. Specifically, we employ a graph structure to model the contextual relationships between semantic concepts. The potential functions are designed based on linear discriminative models, which enables us to propose a novel decoupled hinge loss function for maximal-margin parameter estimation. We train the model by solving a set of independent quadratic programming problems with our derived contextual kernel. Experiments are conducted on commonly used benchmarks, the Corel and TRECVID data sets. The results show that, compared with state-of-the-art methods, our method achieves a significant improvement in annotation performance.
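The paper's exact potential functions, decoupled hinge loss, and contextual kernel are not reproduced in this record. As a rough, hedged illustration of the general idea in the summary, the sketch below shows a pairwise CRF-style score over binary concept labels with linear unary terms on a global image feature (no segmentation), plus a per-concept hinge loss that treats each concept's training as an independent margin problem. All names (`score`, `decoupled_hinge_losses`, `W`, `V`, `edges`) and the precise form of the loss are assumptions made for illustration, not the authors' definitions.

import numpy as np

def neighbours(i, edges):
    """Neighbours of concept i in the (undirected) concept graph."""
    return [b for a, b in edges if a == i] + [a for a, b in edges if b == i]

def score(x, y, W, V, edges):
    """Joint score of a label vector y (entries in {-1, +1}) for a global
    image feature x: linear unary terms plus pairwise co-occurrence terms."""
    unary = sum(y[i] * (W[i] @ x) for i in range(len(y)))
    pairwise = sum(V[a, b] * y[a] * y[b] for a, b in edges)
    return unary + pairwise

def decoupled_hinge_losses(x, y, W, V, edges):
    """One hinge loss per concept (an assumed 'decoupled' form): concept i is
    penalised when its local margin (its unary score plus contributions from
    its neighbours' labels) falls below 1, so each concept could in principle
    be fit by an independent SVM-style quadratic program."""
    losses = []
    for i in range(len(y)):
        local = W[i] @ x + sum(V[i, j] * y[j] for j in neighbours(i, edges))
        losses.append(max(0.0, 1.0 - y[i] * local))
    return losses

# Toy usage: 3 concepts over a 5-dimensional global image feature.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
W = rng.normal(size=(3, 5))                       # one linear unary model per concept
V = np.zeros((3, 3)); V[0, 1] = V[1, 0] = 0.5     # a single 'co-occurring concepts' edge
edges = [(0, 1)]
y = np.array([1, 1, -1])
print(score(x, y, W, V, edges), decoupled_hinge_losses(x, y, W, V, edges))

In the actual method the margin constraints are optimised via quadratic programming with a derived contextual kernel; the sketch only conveys how pairwise concept interactions can enter per-concept margin terms.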