Generalised Scalable Robust Principal Component Analysis
In Proceedings of the British Machine Vision Conference 2014
Abstract

The robust estimation of the low-dimensional subspace that spans a set of high-dimensional observations, possibly corrupted by gross errors and outliers, is fundamental to many computer vision problems. State-of-the-art robust principal component analysis (PCA) methods adopt convex relaxations of L0 quasi-norm-regularised rank minimisation problems; that is, the nuclear norm and the L1-norm are employed. However, this convex relaxation may cause the solutions to deviate from the original ones. To this end, the Generalised Scalable Robust PCA (GSRPCA) is proposed: the robust PCA problem is reformulated using the Schatten p-norm and the Lq-norm subject to orthonormality constraints, yielding a better non-convex approximation of the original sparsity-regularised rank minimisation problem. It is worth noting that the common robust PCA variants are special cases of the GSRPCA when p = q = 1 and the upper bound on the number of principal components is chosen appropriately. An efficient algorithm for the GSRPCA is developed. The performance of the GSRPCA is assessed in experiments on both synthetic and real data. The experimental results indicate that the GSRPCA outperforms the common state-of-the-art robust PCA methods without introducing much extra computational cost.
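The convex special case the abstract refers to (p = q = 1, i.e. nuclear norm plus L1-norm) can be sketched as follows. This is the standard convex robust PCA baseline solved with the well-known inexact augmented Lagrange multiplier (iALM) scheme, not the authors' GSRPCA algorithm; the function name `rpca_ialm` and the parameter defaults are illustrative assumptions.

```python
import numpy as np

def soft_threshold(M, tau):
    """Entrywise soft-thresholding: the proximal operator of the L1-norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_ialm(X, lam=None, tol=1e-7, max_iter=500):
    """Convex robust PCA:  min ||L||_* + lam * ||S||_1  s.t.  X = L + S,
    solved by inexact ALM (a sketch; parameter defaults are assumptions)."""
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # common default weight
    norm_X = np.linalg.norm(X, 'fro')
    mu = 1.25 / np.linalg.norm(X, 2)            # initial penalty parameter
    rho, mu_max = 1.5, mu * 1e7                 # continuation schedule
    Y = X / max(np.linalg.norm(X, 2), np.abs(X).max() / lam)  # dual init
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(max_iter):
        # Singular value thresholding: proximal operator of the nuclear norm.
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft-thresholding recovers the sparse error component.
        S = soft_threshold(X - L + Y / mu, lam / mu)
        R = X - L - S                           # primal residual
        Y = Y + mu * R                          # dual ascent step
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(R, 'fro') / norm_X < tol:
            break
    return L, S
```

Replacing the singular value thresholding step with a Schatten p-norm proximal step (and the soft-thresholding with an Lq analogue) is, conceptually, where a non-convex variant such as GSRPCA departs from this convex baseline.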
Files

Extended Abstract (PDF, 1 page, 162K)
Paper (PDF, 11 pages, 263K)