Content is still king: the effect of neighbor voting schemes on tag relevance for social image retrieval
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2013
Online Access: https://hdl.handle.net/10356/98906 ; http://hdl.handle.net/10220/12650
Institution: Nanyang Technological University
Summary: Tags associated with social images are a valuable information source for superior tag-based image retrieval (TagIR) experiences. One of the key issues in TagIR is to learn the effectiveness of a tag in describing the visual content of its annotated image, also known as tag relevance. One of the most effective approaches in the literature for tag relevance learning is neighbor voting. In this approach, a tag is considered more relevant to its annotated image (also known as the seed image) if the tag is also used to annotate the image's neighbors (nearest neighbors by visual similarity). However, the state-of-the-art approach that realizes the neighbor voting scheme does not explore the possibility of exploiting the content-based (e.g., degree of visual similarity between the seed and neighbor images) and contextual (e.g., tag association by co-occurrence) features of social images to further boost the accuracy of TagIR. In this paper, we identify and explore the viability of four content- and context-based dimensions, namely image similarity, tag matching, tag influence, and refined tag relevance, in the context of tag relevance learning for TagIR. With alternative formulations under each dimension, the paper empirically evaluates 20 neighbor voting schemes with 81 single-tag queries on the NUS-WIDE dataset. Despite the potential benefits that contextual information related to tags brings to image search, our experimental results surprisingly reveal that the content-based (image similarity) dimension is still king, as it significantly improves the accuracy of tag relevance learning for TagIR. On the other hand, tag relevance learning does not benefit from the context-based dimensions in the voting schemes.
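To make the neighbor-voting idea summarized above concrete, the minimal sketch below scores a tag for a seed image by counting how many of its visually nearest neighbors also carry that tag, with a frequency correction so common tags are not over-rewarded. The data layout (`features` dictionary, `tags_of` lists), the Euclidean distance measure, and the prior correction are assumptions for illustration only, not the paper's 20 voting schemes or its content/context dimensions.

```python
# Minimal sketch of baseline neighbor voting for tag relevance (illustrative only).
import numpy as np

def neighbor_vote_relevance(seed_id, tag, features, tags_of, k=100):
    """Score how relevant `tag` is to the seed image: count how many of its
    k visually nearest neighbors are also annotated with `tag`.

    features: dict mapping image id -> visual feature vector (np.ndarray)
    tags_of:  dict mapping image id -> set of tags annotating that image
    """
    seed_vec = features[seed_id]
    # Rank all other images by visual similarity to the seed (assumed Euclidean distance).
    others = [i for i in features if i != seed_id]
    neighbors = sorted(others, key=lambda i: np.linalg.norm(features[i] - seed_vec))[:k]
    # Raw votes: neighbors that share the tag with the seed image.
    votes = sum(1 for i in neighbors if tag in tags_of[i])
    # Subtract the tag's expected votes under its overall frequency, so very
    # common tags do not dominate (an assumed correction, common in neighbor voting).
    prior = sum(1 for i in features if tag in tags_of[i]) / len(features)
    return votes - k * prior
```

The paper's contribution lies in how such votes are weighted and combined, e.g., weighting by image similarity or by tag co-occurrence; the sketch above shows only the unweighted starting point.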