Mining visual collocation patterns via self-supervised subspace learning

Bibliographic Details
Main Authors: Yuan, Junsong, Wu, Ying
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2013
Online Access: https://hdl.handle.net/10356/96325
http://hdl.handle.net/10220/11425
Description
Summary: Traditional text data mining techniques are not directly applicable to image data, which contain spatial information and are characterized by high-dimensional visual features. Discovering meaningful visual patterns from images is not a trivial task, because content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties for mining visual collocation patterns. Specifically, the novelty of this work lies in the following new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining, and 2) a self-supervised subspace learning method that refines the visual codebook by feeding the discovered patterns back via subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.
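The abstract describes two components: mining frequent itemsets of spatially co-occurring visual words, and a subspace-learning feedback loop that refines the visual codebook. As a rough illustration of the first component only, the following is a minimal Python sketch, not the authors' implementation; the neighborhood radius, the way transactions are built from local features, and the names build_transactions and frequent_itemsets are assumptions made for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sketch: treat each local feature's spatial neighborhood as a
# "transaction" of quantized visual words, then mine frequent itemsets
# (visual collocations) that recur across many such neighborhoods.

def build_transactions(features, radius=30.0):
    """features: list of (x, y, word_id) tuples. Each feature's transaction is
    the set of visual words found within `radius` of it, including its own."""
    transactions = []
    for i, (xi, yi, wi) in enumerate(features):
        items = {wi}
        for j, (xj, yj, wj) in enumerate(features):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                items.add(wj)
        transactions.append(frozenset(items))
    return transactions

def frequent_itemsets(transactions, min_support=2, max_size=3):
    """Count itemsets of size 2..max_size and keep those occurring in at
    least min_support transactions (a brute-force Apriori-style pass)."""
    counts = defaultdict(int)
    for t in transactions:
        for k in range(2, max_size + 1):
            for combo in combinations(sorted(t), k):
                counts[combo] += 1
    return {itemset: c for itemset, c in counts.items() if c >= min_support}

if __name__ == "__main__":
    # Toy input: (x, y, visual word id) for a handful of local features.
    feats = [(10, 10, 5), (15, 12, 8), (12, 18, 5),
             (80, 80, 5), (84, 83, 8), (90, 85, 2)]
    txns = build_transactions(feats, radius=20.0)
    print(frequent_itemsets(txns, min_support=2))
```

In the paper's setting the transactions would come from quantized local descriptors in real images rather than toy coordinates, and the discovered collocation patterns would then be fed back to refine the visual codebook through subspace learning, which this sketch does not attempt to reproduce.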