Exploring inter-concept relationship with context space for semantic video indexing

Bibliographic Details
Main Authors: WEI, Xiao-Yong; JIANG, Yu-Gang; NGO, Chong-wah
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2009
Online Access: https://ink.library.smu.edu.sg/sis_research/6525
https://ink.library.smu.edu.sg/context/sis_research/article/7528/viewcontent/1646396.1646416.pdf
Description
Abstract: Semantic concept detectors are often developed individually and independently. Leveraging peripherally related concepts for joint detection, an approach referred to as context-based concept fusion (CBCF), has been a focus of study in recent years. This paper proposes the construction of a context space and its exploration for CBCF. The context space considers the global consistency of inter-concept relationships, addresses the problem of missing annotations, and is extensible to cross-domain contextual fusion. The space is linear and can be built by modeling inter-concept relationships from annotations provided by either manual labeling or machine tagging. With the context space, CBCF becomes a problem of concept selection and detector fusion, under which the significance of a concept/detector can be adapted when applied to a target domain different from the one in which the detector was developed. Experiments on the TRECVID datasets of years 2005 to 2008 confirm the usefulness of the context space for CBCF. We observe a consistent improvement of 2.8% to 38.8% in concept detection when the context space is used and, more importantly, a significant speed-up over existing approaches.
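
The abstract describes the approach only at a high level: a linear space built from inter-concept relationships in annotation data, within which CBCF reduces to concept selection and detector fusion. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the paper's actual formulation; the function names (build_context_space, fuse_with_context), the use of Pearson correlation as the relationship measure, and the top-k selection with linear weighting are all assumptions made for illustration.

import numpy as np

def build_context_space(annotations):
    """Build an inter-concept relationship matrix from binary annotations.

    annotations: (n_samples, n_concepts) 0/1 matrix obtained from manual
    labeling or machine tagging. Returns an (n_concepts, n_concepts)
    matrix of pairwise concept correlations. (The paper also handles
    missing annotations; that treatment is not reproduced here.)
    """
    # Pearson correlation between the annotation columns of each concept.
    return np.corrcoef(annotations, rowvar=False)

def fuse_with_context(scores, context, target, k=5):
    """Refine one concept's detector scores via context-based fusion.

    scores:  (n_samples, n_concepts) raw detector outputs in [0, 1]
    context: (n_concepts, n_concepts) relationship matrix
    target:  index of the concept being refined
    k:       number of peripherally related concepts to select
    """
    rel = context[target].copy()
    rel[target] = 0.0                        # exclude the self-relation
    selected = np.argsort(-np.abs(rel))[:k]  # concept selection step
    weights = rel[selected]
    # Linear fusion: the target's own score plus a normalized, weighted
    # vote from the selected related concepts.
    vote = scores[:, selected] @ weights / (np.abs(weights).sum() + 1e-12)
    return scores[:, target] + vote

In this toy form, detectors trained in one domain could be re-weighted for another simply by rebuilding the relationship matrix from target-domain tags, which mirrors the cross-domain adaptability the abstract claims for the context space.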