Improved low rank representation: kernelization, efficient optimization and applications
| Main Author: | |
|---|---|
| Other Authors: | |
| Format: | Theses and Dissertations |
| Language: | English |
| Published: | 2016 |
| Subjects: | |
| Online Access: | http://hdl.handle.net/10356/66234 |
| Institution: | Nanyang Technological University |
Summary: Given data sampled from multiple subspaces, the goal of subspace clustering is to partition the data into several clusters so that each cluster corresponds exactly to one subspace. Initially proposed for subspace clustering, the low rank representation (LRR) approach has shown promising results in various applications, e.g., motion segmentation and face clustering. In this thesis, we present our work on improving LRR in three aspects: kernelization, efficient optimization, and applications.
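For reference, the widely used LRR objective that this line of work builds on seeks the lowest-rank self-representation of the data while allowing for sample-specific corruptions; the summary does not spell out the exact variant adopted in the thesis, so the following is only the standard reference formulation:

```latex
% Standard LRR formulation (an assumed reference point, not necessarily
% the exact objective used in the thesis): X is the d x n data matrix,
% Z the n x n coefficient matrix, E the error term, and lambda > 0.
\min_{Z,\,E} \; \|Z\|_{*} + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E
```

Here the nuclear norm is the convex surrogate of rank and the 2,1-norm encourages column-sparse (sample-specific) errors; for clean data (E = 0) the constraint reduces to X = XZ.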
LRR performs well when handling data drawn from linear subspaces, but it may not achieve satisfactory results on nonlinear data. In kernel-based methods, by contrast, nonlinear data is mapped to a new feature space via a kernel-induced mapping, where it becomes easier to handle. We therefore propose a kernelized version of LRR. For the clean-data case, we present the corresponding kernelized formulation of LRR together with a closed-form solution. For handling corrupted data, we propose the robust kernel low rank representation (RKLRR) approach and develop an efficient algorithm to solve it.
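As an illustration of the clean-data case, a minimal sketch follows. Assuming the kernelized problem is the natural one (minimize the nuclear norm of Z subject to phi(X) = phi(X) Z), the standard LRR closed form Z* = V_r V_r^T carries over, and V_r can be obtained from an eigendecomposition of the kernel matrix K alone, with no explicit feature map. This is only an illustration; the closed form derived in the thesis and the RKLRR algorithm may differ.

```python
import numpy as np

def kernel_lrr_clean(K, tol=1e-8):
    """Closed-form coefficient matrix for noiseless kernel LRR (illustrative sketch).

    Assumes the kernelized clean-data problem
        min_Z ||Z||_*   s.t.   phi(X) = phi(X) Z,
    whose minimizer is Z* = V_r V_r^T, where the columns of V_r are the
    eigenvectors of the kernel (Gram) matrix K = phi(X)^T phi(X) belonging
    to its numerically nonzero eigenvalues.
    """
    eigvals, eigvecs = np.linalg.eigh(K)      # K is symmetric PSD
    V_r = eigvecs[:, eigvals > tol]           # basis of the nonzero spectrum
    return V_r @ V_r.T

# Toy usage with an RBF kernel: 100 samples in R^5.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))
sq_dists = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)
K = np.exp(-sq_dists / 2.0)                   # RBF kernel matrix (bandwidth 1)
Z = kernel_lrr_clean(K)
W = 0.5 * (np.abs(Z) + np.abs(Z.T))           # symmetric affinity for clustering
```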
Regarding the optimization of LRR, unlike existing solvers that work with the formulation containing the original data matrix, we develop a fast LRR solver (called FaLRR) by reformulating LRR in terms of the factorized data matrix. To solve the resulting optimization problem, we propose an algorithm that is efficient and theoretically guaranteed to reach a global optimum. Moreover, the proposed algorithm can be readily incorporated into an existing distributed framework for LRR to achieve further acceleration.
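The summary does not give FaLRR's reformulation; the following is only a sketch, under the standard LRR objective above, of why working with a factorized data matrix helps. With the skinny SVD of X, the data enter the constraint only through the small factors:

```latex
% Sketch only (not FaLRR's actual derivation): skinny SVD X = U \Sigma V^T,
% with U in R^{d x r}, V in R^{n x r}, and r = rank(X).
X = XZ + E
\;\Longleftrightarrow\;
U\Sigma V^{\top} = U\Sigma\,\bigl(V^{\top} Z\bigr) + E
```

In the noiseless case this yields the closed-form solution Z* = V V^T directly, and more generally it suggests reformulations whose per-iteration cost is governed by r rather than by SVDs of full n-by-n matrices; FaLRR's actual reformulation, its globally optimal solver, and its integration into a distributed LRR framework are developed in the thesis.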
In terms of applications, we propose new approaches based on LRR for real-world problems where weak supervision is available and potentially beneficial. Specifically, we focus on two such problems: face clustering in videos, and face naming with weakly labeled images. For face clustering in videos, given automatically extracted faces from videos and two kinds of prior knowledge (the face track that each face belongs to, and the pairs of faces that appear in the same frame), the task is to partition the faces into a given number of disjoint groups such that each group corresponds to one subject. To address this problem, we propose the weighted block-sparse low rank representation (WBSLRR) method, which incorporates the available prior knowledge while learning a low rank data representation, followed by a simple but effective approach to obtain the clustering result of the faces.

For face naming with weakly labeled images, given a collection of images where each image contains several faces and is associated with a caption containing several names, the goal is to infer the correct name for each face. We propose to solve this problem by learning discriminative affinity matrices. Specifically, we first propose the regularized low-rank representation (rLRR) approach, which effectively utilizes the weak supervision (via a specially designed regularizer) to learn a low-rank reconstruction coefficient matrix, thereby leading to a discriminative affinity matrix. Moreover, we develop a new distance metric learning approach, from which another affinity matrix can be obtained. Face naming based on a combination of these two affinity matrices achieves satisfactory results.
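For context, the generic route from a learned coefficient matrix to a clustering in LRR-style methods is to symmetrize the coefficients into an affinity matrix and apply spectral clustering; a minimal sketch is given below. This is only the standard pipeline, not necessarily the exact post-processing used with WBSLRR or the affinity-combination scheme used for face naming with rLRR.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_coefficients(Z, n_clusters):
    """Generic LRR-style post-processing (illustration only, not the thesis's
    exact WBSLRR/rLRR pipeline): build a symmetric nonnegative affinity matrix
    from the learned coefficients and run spectral clustering on it."""
    W = 0.5 * (np.abs(Z) + np.abs(Z).T)
    sc = SpectralClustering(n_clusters=n_clusters,
                            affinity="precomputed",
                            assign_labels="kmeans",
                            random_state=0)
    return sc.fit_predict(W)

# Usage: Z is the n x n coefficient matrix produced by (kernel) LRR, WBSLRR, etc.
# labels = cluster_from_coefficients(Z, n_clusters=10)
```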