Results 1 – 10 of 76
A Singular Value Thresholding Algorithm for Matrix Completion
2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Abstract

Cited by 204 (12 self)
 Add to MetaCart
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low.
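To make the iteration concrete, here is a minimal NumPy sketch of the soft-thresholding recursion the abstract describes. The function name and parameter defaults are illustrative, and a dense SVD stands in for the sparse/partial SVD the paper relies on at scale:

```python
import numpy as np

def svt_complete(M, mask, tau=None, delta=1.2, n_iter=200, tol=1e-4):
    """Sketch of singular value thresholding (SVT) for matrix completion.
    M holds the observed entries; mask is a boolean array marking which
    entries of M were observed."""
    m, n = M.shape
    if tau is None:
        tau = 5.0 * np.sqrt(m * n)   # rough heuristic in the spirit of the paper
    Y = np.zeros_like(M)
    norm_obs = np.linalg.norm(M[mask])
    for _ in range(n_iter):
        # Soft-threshold the singular values of Y (the X^k update).
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Gradient step on the observed entries only (the Y^k update).
        residual = np.where(mask, M - X, 0.0)
        if np.linalg.norm(residual[mask]) / norm_obs < tol:
            break
        Y += delta * residual
    return X
```

Note that Y is zero off the observed set plus a low-rank correction, which is what makes the per-iteration SVD cheap in the paper's sparse implementation.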
Robust Principal Component Analysis?
2009
"... This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a lowrank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the lowrank and the sparse co ..."
Abstract

Cited by 142 (6 self)
 Add to MetaCart
This paper is about a curious phenomenon. Suppose we have a data matrix which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit: among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
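A compact augmented-Lagrangian sketch of Principal Component Pursuit in NumPy, minimizing ‖L‖_* + λ‖S‖₁ subject to L + S = M. The defaults for λ and μ follow commonly cited heuristics; treat this as an illustration, not the authors' reference implementation:

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding (the prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Soft-thresholding of singular values (the prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # weight suggested in the paper
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())    # common step-size heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    normM = np.linalg.norm(M)
    for _ in range(n_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)   # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)       # sparse update
        Z = M - L - S                              # constraint residual
        Y += mu * Z                                # dual ascent
        if np.linalg.norm(Z) / normM < tol:
            break
    return L, S
```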
A simpler approach to matrix completion
The Journal of Machine Learning Research
"... This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candès and Recht [4], Candès and Tao [7], and Keshavan, Montanari, and Oh [18]. The reconstruction is accomplished by minim ..."
Abstract

Cited by 55 (3 self)
 Add to MetaCart
This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candès and Recht [4], Candès and Tao [7], and Keshavan, Montanari, and Oh [18]. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self-contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.
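To unpack the counting in the abstract: the SVD of a rank-r matrix of size n₁ × n₂ has r(n₁ + n₂ − r) free parameters, so the claimed sample complexity for exact recovery by nuclear-norm minimization has, schematically, the shape below, where μ is the incoherence parameter and C an absolute constant. This paraphrases the form of the bound rather than quoting the paper's exact statement:

```latex
m \;\gtrsim\; C\,\mu^{2}\, r\,(n_1 + n_2)\,\log^{2}(n_1 + n_2)
```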
Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization
Advances in Neural Information Processing Systems 22, 2009
"... The supplementary material to the NIPS version of this paper [4] contains a critical error, which was discovered several days before the conference. Unfortunately, it was too late to withdraw the paper from the proceedings. Fortunately, since that time, a correct analysis of the proposed convex prog ..."
Abstract

Cited by 45 (3 self)
 Add to MetaCart
The supplementary material to the NIPS version of this paper [4] contains a critical error, which was discovered several days before the conference. Unfortunately, it was too late to withdraw the paper from the proceedings. Fortunately, since that time, a correct analysis of the proposed convex programming relaxation has been developed by Emmanuel Candès of Stanford University. That analysis is reported in a joint paper, “Robust Principal Component Analysis?”, by Emmanuel Candès, Xiaodong Li, Yi Ma, and John Wright.
Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
"... Nonparametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this nonparametric method naturally infers ..."
Abstract

Cited by 37 (23 self)
 Add to MetaCart
Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be non-stationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.
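To give a feel for how a beta process prior lets the data choose the dictionary size, here is a toy NumPy draw from the finite beta-Bernoulli approximation to that prior. All names and hyperparameter values are illustrative; this shows only the prior over atom-usage indicators, not the paper's full Gibbs or variational inference:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 100, 50        # candidate dictionary atoms, image patches
a, b = 1.0, 1.0       # beta-process hyperparameters (toy values)

# Atom usage probabilities: most pi_k are tiny, so most atoms go unused.
pi = rng.beta(a / K, b * (K - 1) / K, size=K)

# Binary indicators z[i, k]: does patch i use atom k?
Z = rng.random((N, K)) < pi

active = Z.any(axis=0).sum()
print(f"{active} of {K} candidate atoms are used")  # typically far fewer than K
```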
Robust Subspace Segmentation by Low-Rank Representation
"... We propose lowrank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowestrank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlik ..."
Abstract

Cited by 37 (3 self)
 Add to MetaCart
We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowest-rank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlike the well-known sparse representation (SR), which computes the sparsest representation of each data vector individually, LRR aims at finding the lowest-rank representation of a collection of vectors jointly. LRR better captures the global structure of data, giving a more effective tool for robust subspace segmentation from corrupted data. Both theoretical and experimental results show that LRR is a promising tool for subspace segmentation.
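For clean data, the LRR problem min ‖Z‖_* subject to X = XZ admits a closed-form minimizer built from the skinny SVD of X, a result shown in the paper. A minimal NumPy sketch follows (the function name is illustrative; the corrupted-data formulation and the spectral-clustering step are only indicated):

```python
import numpy as np

def lrr_noiseless(X, rank=None):
    """Closed-form LRR for clean data: the minimizer of ||Z||_* subject
    to X = XZ is V @ V.T, where X = U S V.T is the skinny SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = rank if rank is not None else int((s > 1e-10 * s[0]).sum())
    V = Vt[:r].T
    return V @ V.T

# The symmetric affinity |Z| + |Z.T| would then be fed to spectral
# clustering to obtain the final subspace segmentation.
```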
Restricted strong convexity and (weighted) matrix completion: Optimal bounds with noise
2010
"... We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong ..."
Abstract

Cited by 33 (6 self)
 Add to MetaCart
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the “spikiness” and “low-rankness” of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an M-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in the weighted Frobenius norm for recovering matrices lying in ℓq-“balls” of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal.
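Schematically, the M-estimator described here combines a nuclear-norm penalty (controlling rank) with an entrywise sup-norm constraint (controlling spikiness). Up to the paper's row/column weighting it has the form below, where Ω is the set of observed entries; this paraphrases the construction rather than reproducing the paper's exact weighted statement:

```latex
\widehat{M} \;\in\; \operatorname*{arg\,min}_{\|M\|_{\infty} \le \alpha}
  \;\frac{1}{2} \sum_{(i,j) \in \Omega} \bigl(y_{ij} - M_{ij}\bigr)^{2}
  \;+\; \lambda \,\|M\|_{*}
```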
Tight Oracle Bounds for Low-Rank Matrix Recovery from a Minimal Number of Random Measurements
2009
"... This paper presents several novel theoretical results regarding the recovery of a lowrank matrix from just a few measurements consisting of linear combinations of the matrix entries. We showthatproperlyconstrainednuclearnormminimizationstablyrecoversalowrankmatrix from a constant number of noisy ..."
Abstract

Cited by 25 (1 self)
 Add to MetaCart
This paper presents several novel theoretical results regarding the recovery of a low-rank matrix from just a few measurements consisting of linear combinations of the matrix entries. We show that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature. Further, the recovery error from noisy data is within a constant of three targets: 1) the minimax risk, 2) an ‘oracle’ error that would be available if the column space of the matrix were known, and 3) a more adaptive ‘oracle’ error which would be available with the knowledge of the column space corresponding to the part of the matrix that stands above the noise. Lastly, the error bounds regarding low-rank matrices are extended to provide an error bound when the matrix has full rank with decaying singular values. The analysis in this paper is based on the restricted isometry property (RIP) introduced in [6] for vectors, and in [22] for matrices.
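For reference, one standard formulation of the matrix restricted isometry property invoked at the end: the linear measurement map 𝒜 should act as a near-isometry on all matrices of rank at most r, for some small constant δ_r (normalization conventions vary slightly across papers):

```latex
(1 - \delta_r)\,\|X\|_F^{2} \;\le\; \|\mathcal{A}(X)\|_2^{2} \;\le\; (1 + \delta_r)\,\|X\|_F^{2}
\qquad \text{for all } X \text{ with } \operatorname{rank}(X) \le r .
```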
SOLVING A LOW-RANK FACTORIZATION MODEL FOR MATRIX COMPLETION BY A NONLINEAR SUCCESSIVE OVERRELAXATION ALGORITHM
"... Abstract. The matrix completion problem is to recover a lowrank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclearnorm minimization which requires computing singular value decompositions – a task that is increasingly costly as matrix sizes an ..."
Abstract

Cited by 23 (6 self)
 Add to MetaCart
The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization, which requires computing singular value decompositions – a task that is increasingly costly as matrix sizes and ranks increase. To improve the capacity of solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive overrelaxation (SOR) algorithm that only requires solving a linear least squares problem per iteration. Convergence of this nonlinear SOR algorithm is analyzed. Numerical results show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than many nuclear-norm minimization algorithms.
Key words: matrix completion, alternating minimization, nonlinear GS method, nonlinear SOR method.
AMS subject classifications: 65K05, 90C06, 93C41, 68Q32
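The factorization model replaces the nuclear norm with an explicit product X·Y, which is what makes each iteration an ordinary least squares solve. A plain alternating-least-squares NumPy sketch of that model follows; the paper's nonlinear SOR weighting that accelerates these sweeps is omitted, and all names are illustrative:

```python
import numpy as np

def lowrank_factor_complete(M, mask, r, n_iter=200):
    """Sketch of the low-rank factorization model for matrix completion:
    minimize ||X @ Y - Z||_F^2 over X (m x r), Y (r x n), and Z agreeing
    with the observed entries of M on the boolean mask."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((m, r))
    Z = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        Y = np.linalg.lstsq(X, Z, rcond=None)[0]        # update Y given X
        X = np.linalg.lstsq(Y.T, Z.T, rcond=None)[0].T  # update X given Y
        Z = X @ Y
        Z[mask] = M[mask]            # re-impose the observed entries
    return X @ Y
```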
Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. Preprint available at arxiv.org/abs/1002.2780
2010
"... We show that matrix completion with tracenorm regularization can be significantly hurt when entries of the matrix are sampled nonuniformly, but that a properly weighted version of the tracenorm regularizer works well with nonuniform sampling. We show that the weighted tracenorm regularization i ..."
Abstract

Cited by 19 (4 self)
 Add to MetaCart
We show that matrix completion with trace-norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly, but that a properly weighted version of the trace-norm regularizer works well with non-uniform sampling. We show that the weighted trace-norm regularization indeed yields significant gains on the highly non-uniformly sampled Netflix dataset.
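For context, the weighted trace norm rescales the matrix by the row and column marginals p and q of the sampling distribution before applying the ordinary trace (nuclear) norm; up to notation it reads:

```latex
\|X\|_{\mathrm{tr}(p,q)} \;=\; \bigl\|\operatorname{diag}(\sqrt{p})\; X \;\operatorname{diag}(\sqrt{q})\bigr\|_{\mathrm{tr}}
```

Under uniform sampling this reduces to the usual trace norm, which is why the unweighted regularizer behaves well only in the uniform regime.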