Mining visual collocation patterns via self-supervised subspace learning

Traditional text data mining techniques are not directly applicable to image data, which contain spatial information and are characterized by high-dimensional visual features. It is not a trivial task to discover meaningful visual patterns from images because the content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties for mining visual collocation patterns. Specifically, the novelty of this work lies in the following new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method to refine the visual codebook by feeding back discovered patterns via subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.
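
As a concrete illustration of the first contribution, the sketch below mines spatially co-occurring visual words ("collocations") by counting support over neighborhood transactions. This is a minimal Python sketch under stated assumptions, not the authors' implementation: the (word, x, y) input format, the neighborhood radius, and the support threshold are all illustrative choices.

# Hypothetical sketch: mining spatially co-occurring visual-word pairs
# ("visual collocations") with a simple support-counting pass.
# Input format, RADIUS, and MIN_SUPPORT are illustrative assumptions.
from collections import Counter
from itertools import combinations

# Each image is a list of (visual_word_id, x, y) tuples, e.g. from
# quantized local descriptors (assumed input representation).
images = [
    [(3, 10, 12), (7, 14, 15), (3, 80, 90), (9, 82, 88)],
    [(3, 20, 22), (7, 25, 26), (5, 60, 61)],
    [(3, 5, 5), (7, 9, 8), (9, 40, 42)],
]

RADIUS = 10.0      # spatial neighborhood defining "collocation" (assumed)
MIN_SUPPORT = 2    # minimum number of images containing the pattern

def neighborhood_itemsets(points, radius):
    """Transactions = sets of visual words co-occurring within `radius`."""
    transactions = []
    for i, (w1, x1, y1) in enumerate(points):
        items = {w1}
        for w2, x2, y2 in points[i + 1:]:
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2:
                items.add(w2)
        if len(items) > 1:
            transactions.append(frozenset(items))
    return transactions

# Count, per image, which word pairs co-occur spatially at least once.
pair_support = Counter()
for img in images:
    seen = set()
    for t in neighborhood_itemsets(img, RADIUS):
        for pair in combinations(sorted(t), 2):
            seen.add(pair)
    pair_support.update(seen)

collocations = {p: s for p, s in pair_support.items() if s >= MIN_SUPPORT}
print(collocations)  # e.g. {(3, 7): 3} -> words 3 and 7 form a collocation

In the paper's pipeline, patterns discovered this way are then fed back to refine the visual codebook via subspace learning; that step is beyond the scope of this sketch.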


Bibliographic Details
Main Authors: Yuan, Junsong; Wu, Ying
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2013
Subjects: DRNTU::Engineering::Electrical and electronic engineering
Citation: Yuan, J., & Wu, Y. (2012). Mining visual collocation patterns via self-supervised subspace learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(2), 334-346.
ISSN: 1083-4419
DOI: 10.1109/TSMCB.2011.2172605
Version: Accepted version
Online Access: https://hdl.handle.net/10356/96325
http://hdl.handle.net/10220/11425
Institution: Nanyang Technological University
Rights: © 2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: http://dx.doi.org/10.1109/TSMCB.2011.2172605.