A fast correction approach to tensor robust principal component analysis

Bibliographic Details
Main Authors: Zhang, Zhechen, Liu, Sanyang, Lin, Zhiping, Xue, Jize, Liu, Lixia
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Online Access:https://hdl.handle.net/10356/180347
Institution: Nanyang Technological University
Description
Summary: Tensor robust principal component analysis (TRPCA) is a useful approach for recovering low-rank data corrupted by noise or outliers. However, existing TRPCA methods struggle to estimate the tensor rank and the sparsity accurately. The commonly used tensor nuclear norm (TNN) may lead to sub-optimal solutions due to the gap between TNN and the tensor rank. Additionally, the ℓ1-norm is not an ideal approximation of the ℓ0-norm, and solving TNN minimization can be computationally intensive because of the tensor singular value thresholding (t-SVT) scheme. To address these issues, a method called fast correction TNN (FC-TNN) is proposed for TRPCA. In contrast to existing methods, FC-TNN introduces a correction term to bridge the gap between TNN and the tensor rank. Furthermore, a new correction term is applied to the ℓ1-norm to obtain the desired sparse solution. To improve computational efficiency, a Chebyshev polynomial approximation (CPA) technique is presented for computing the t-SVT without requiring the tensor singular value decomposition (t-SVD). The CPA technique is incorporated into the alternating direction method of multipliers (ADMM) algorithm to solve the proposed model effectively. Theoretical analysis demonstrates that FC-TNN offers a lower error bound than TNN under certain conditions. Extensive experiments on various tensor-based datasets show that the proposed method outperforms several state-of-the-art methods.
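
The t-SVT operator mentioned above is the proximal operator of the TNN: it applies an FFT along the third mode, soft-thresholds the singular values of each frontal slice in the Fourier domain, and transforms back. As a point of reference for the cost the paper's CPA technique is designed to avoid, a minimal NumPy sketch of this standard t-SVD-based baseline follows; the function name and interface are illustrative, not the authors' code.

import numpy as np

def t_svt(X, tau):
    """Tensor singular value thresholding (t-SVT) via the t-SVD:
    the proximal operator of the tensor nuclear norm (TNN)."""
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)           # frontal slices in the Fourier domain
    Yf = np.zeros_like(Xf)
    # Conjugate symmetry would let roughly half the slices be skipped;
    # the plain loop keeps this sketch simple.
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)     # soft-threshold the singular values
        Yf[:, :, k] = (U * s) @ Vh       # rebuild the thresholded slice
    return np.real(np.fft.ifft(Yf, axis=2))

Each call requires a full SVD per frontal slice, which is exactly the expense that motivates replacing the t-SVT with a Chebyshev polynomial approximation.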
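
For context, the classical TNN-based TRPCA model, minimizing ||L||_TNN + λ||S||_1 subject to L + S = M, is typically solved by ADMM, alternating the t-SVT step above with elementwise ℓ1 soft-thresholding and a multiplier update. The sketch below shows only that baseline loop, with assumed default parameters; FC-TNN's correction terms and the CPA acceleration would modify the two proximal steps and are not reproduced here.

def soft_threshold(X, tau):
    """Elementwise soft-thresholding: the proximal operator of the ℓ1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def trpca_admm(M, lam=None, mu=1e-3, rho=1.1, mu_max=1e10, iters=200, tol=1e-7):
    """Baseline ADMM for  min ||L||_TNN + lam * ||S||_1  s.t.  L + S = M."""
    n1, n2, n3 = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(n1, n2) * n3)       # common default for TNN-based TRPCA
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        L = t_svt(M - S + Y / mu, 1.0 / mu)          # low-rank update via t-SVT
        S = soft_threshold(M - L + Y / mu, lam / mu) # sparse update via ℓ1 shrinkage
        R = M - L - S                                # primal residual
        Y = Y + mu * R                               # dual (multiplier) ascent
        mu = min(rho * mu, mu_max)                   # standard penalty schedule
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

With the exact t-SVT, each iteration is dominated by the per-slice SVDs, so approximating the thresholding step with polynomials, as the abstract describes, directly reduces the per-iteration cost.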