Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 2009

by Emmanuel J. Candès, Benjamin Recht
Results 1 - 10 of 875 citing documents, sorted by number of citations.

Robust principal component analysis?

by Emmanuel J. Candès, Xiaodong Li, Yi Ma, John Wright - Journal of the ACM, 2011
"... Abstract This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the ..."
Abstract - Cited by 569 (26 self) - Add to MetaCart
Abstract This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
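Below is a minimal sketch of the Principal Component Pursuit program the abstract describes, written against the generic convex solver cvxpy rather than the scalable algorithm the paper discusses; the helper name pcp is mine, and the default weight follows the paper's suggested λ = 1/√max(n1, n2). Practical only for small matrices.

```python
# Sketch: Principal Component Pursuit via a generic convex solver.
import numpy as np
import cvxpy as cp

def pcp(M, lam=None):
    """Split M into low-rank L and sparse S by minimizing
    ||L||_* + lam * ||S||_1 subject to L + S = M."""
    n1, n2 = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(n1, n2))  # weight suggested in the paper
    L = cp.Variable((n1, n2))
    S = cp.Variable((n1, n2))
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S)))
    cp.Problem(objective, [L + S == M]).solve()
    return L.value, S.value
```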

Citation Context

...Indeed, there seems to not be enough information to perfectly disentangle the low-rank and the sparse components. And indeed, there is some truth to this, since there obviously is an identifiability issue. For instance, suppose the matrix M is equal to $e_1 e_1^*$ (this matrix has a one in the top left corner and zeros everywhere else). Then since M is both sparse and low-rank, how can we decide whether it is low-rank or sparse? To make the problem meaningful, we need to impose that the low-rank component $L_0$ is not sparse. In this paper, we will borrow the general notion of incoherence introduced in [8] for the matrix completion problem; this is an assumption concerning the singular vectors of the low-rank component. Write the singular value decomposition of $L_0 \in \mathbb{R}^{n_1 \times n_2}$ as $L_0 = U \Sigma V^* = \sum_{i=1}^{r} \sigma_i u_i v_i^*$, where $r$ is the rank of the matrix, $\sigma_1, \dots, \sigma_r$ are the positive singular values, and $U = [u_1, \dots, u_r]$, $V = [v_1, \dots, v_r]$ are the matrices of left- and right-singular vectors. Then the incoherence... (Footnote: Although the name naturally suggests an emphasis on the recovery of the low-rank component, we reiterate that in some applications, the sparse component truly is the object of interest.)
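To make the incoherence assumption in the excerpt above concrete, here is a small numpy sketch that computes the standard coherence parameter μ from the singular vectors of $L_0$; the function name and the explicit rank argument are mine.

```python
import numpy as np

def coherence(L0, r):
    """Smallest mu with max_i ||U* e_i||^2 <= mu*r/n1 and
    max_j ||V* e_j||^2 <= mu*r/n2, for the rank-r SVD of L0."""
    n1, n2 = L0.shape
    U, _, Vt = np.linalg.svd(L0, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T                    # leading singular vectors
    mu_U = (n1 / r) * np.max(np.sum(U**2, axis=1))  # largest row norm of U
    mu_V = (n2 / r) * np.max(np.sum(V**2, axis=1))  # largest row norm of V
    return max(mu_U, mu_V)
```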

A Singular Value Thresholding Algorithm for Matrix Completion

by Jian-Feng Cai, Emmanuel J. Candès, Zuowei Shen, 2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Abstract - Cited by 555 (22 self) - Add to MetaCart
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices {X^k, Y^k} and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low.
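The iteration the abstract outlines is short enough to sketch directly in numpy. The dense SVD and the stopping rule below are simplifications (the paper exploits the sparsity of Y^k with a partial SVD), and the parameter defaults are illustrative.

```python
import numpy as np

def svt(M, mask, tau, delta, max_iters=500, tol=1e-4):
    """Singular value thresholding for matrix completion (sketch).
    mask: boolean array of observed entries of M."""
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(max_iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold singular values
        Y += delta * mask * (M - X)               # step only on observed entries
        if np.linalg.norm(mask * (X - M)) <= tol * np.linalg.norm(mask * M):
            break
    return X
```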

The Power of Convex Relaxation: Near-Optimal Matrix Completion

by Emmanuel J. Candès, Terence Tao, 2009
"... This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In ..."
Abstract - Cited by 359 (7 self) - Add to MetaCart
This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n × n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).
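The convex program in question — minimum nuclear norm among all matrices consistent with the observed entries — takes only a few lines with cvxpy; a sketch for small instances, with names of my choosing (M_obs is assumed zero off the observation set).

```python
import cvxpy as cp

def complete(M_obs, mask):
    """Nuclear norm minimization subject to agreeing with the observations.
    mask: 0/1 array; M_obs: observed values, zeros elsewhere."""
    X = cp.Variable(M_obs.shape)
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                         [cp.multiply(mask, X) == M_obs])
    problem.solve()
    return X.value
```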

The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices

by Zhouchen Lin, Minming Chen, Leqin Wu, Yi Ma, 2009
"... ..."
Abstract - Cited by 329 (26 self) - Add to MetaCart
Abstract not found

Citation Context

...in [7] that the same techniques can be used to minimize the nuclear norm for the matrix completion (MC) problem, namely recovering a low-rank matrix from an incomplete but clean subset of its entries [21, 9]. As the matrix recovery (Robust PCA) problem (2) involves minimizing a combination of both the ℓ1-norm and the nuclear norm, in the original paper [26], the authors have also adopted the iterative th...
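A numpy sketch of the kind of augmented Lagrange multiplier iteration the title refers to, applied to the Robust PCA problem min ‖L‖∗ + λ‖S‖₁ subject to L + S = M. The update pattern is the standard inexact ALM; the initial penalty, growth factor, and stopping rule here are illustrative rather than the paper's exact settings.

```python
import numpy as np

def shrink(X, t):
    """Entrywise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def ialm_rpca(M, lam, rho=1.5, max_iters=200, tol=1e-7):
    """Inexact augmented Lagrange multiplier method (sketch)."""
    mu = 1.25 / np.linalg.norm(M, 2)   # heuristic initial penalty (assumption)
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(max_iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt  # singular value shrinkage
        S = shrink(M - L + Y / mu, lam / mu)          # sparse-component update
        Y += mu * (M - L - S)                         # dual ascent on the constraint
        mu *= rho                                     # tighten the penalty
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```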

Matrix Completion with Noise

by Emmanuel J. Candès, Yaniv Plan
"... On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest ..."
Abstract - Cited by 255 (13 self) - Add to MetaCart
On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n × n matrix of low rank r from just about nr log² n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.
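For the noisy setting the abstract studies, the equality constraints are relaxed to a Frobenius-norm tolerance on the observed entries. A cvxpy sketch, with the noise budget delta supplied by the caller:

```python
import cvxpy as cp

def complete_noisy(M_obs, mask, delta):
    """min ||X||_* subject to ||P_Omega(X - M)||_F <= delta (sketch).
    mask: 0/1 array; M_obs: observed values, zeros elsewhere."""
    X = cp.Variable(M_obs.shape)
    residual = cp.multiply(mask, X) - M_obs
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                         [cp.norm(residual, 'fro') <= delta])
    problem.solve()
    return X.value
```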

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

by N. Halko, P. G. Martinsson, J. A. Tropp
"... Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for ..."
Abstract - Cited by 253 (6 self) - Add to MetaCart
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition

Citation Context

... compressive sampling for matrices. In 2007, Recht–Fazel–Parrilo demonstrated that it is possible to reconstruct a rank-deficient matrix from Gaussian measurements [111]. More recently, Candès–Recht [22] and Candès–Tao [23] considered the problem of completing a low-rank matrix from a random sample of its entries. The usual goals of compressive sampling are (i) to design a method for collecting info...
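The two-stage proto-algorithm the abstract outlines — randomly sample the range of the matrix, then decompose the reduced matrix deterministically — fits in a few lines of numpy. The oversampling value p = 10 is a typical choice, and the power iterations the paper recommends for slowly decaying spectra are omitted here.

```python
import numpy as np

def randomized_svd(A, k, p=10):
    """Approximate rank-k SVD via randomized range finding (sketch)."""
    m, n = A.shape
    Omega = np.random.randn(n, k + p)   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)      # orthonormal basis for the sampled range
    B = Q.T @ A                         # small (k+p) x n reduced matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k, :]  # lift back to the original space
```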

Rank-sparsity incoherence for matrix decomposition

by Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, Alan S. Willsky, 2010
"... Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is intractable ..."
Abstract - Cited by 230 (21 self) - Add to MetaCart
Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is intractable to solve in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components, by minimizing a linear combination of the ℓ1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.

Citation Context

...t was used to recover low-rank positive semidefinite matrices [22]. Indeed, several papers demonstrate that the nuclear norm heuristic recovers low-rank matrices in various rank minimization problems [24, 4]. Based on these results, we propose the following optimization formulation to recover $A^\star$ and $B^\star$ given $C = A^\star + B^\star$: $(\hat{A}, \hat{B}) = \arg\min_{A,B}\ \gamma \|A\|_1 + \|B\|_*$ subject to $A + B = C$ (1.3). Here $\gamma$ is a parameter...
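A small synthetic experiment in the spirit of the paper's simulations, solving formulation (1.3) with cvxpy; the dimensions, sparsity level, and the value γ = 0.2 are illustrative choices of mine and generally need tuning.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, k = 30, 2, 60  # matrix size, rank of B*, number of sparse corruptions

B_star = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low rank
A_star = np.zeros((n, n))                                           # sparse
A_star.flat[rng.choice(n * n, size=k, replace=False)] = 10 * rng.standard_normal(k)
C = A_star + B_star

A, B = cp.Variable((n, n)), cp.Variable((n, n))
gamma = 0.2  # trade-off weight in (1.3); an illustrative value
cp.Problem(cp.Minimize(gamma * cp.sum(cp.abs(A)) + cp.normNuc(B)),
           [A + B == C]).solve()

print("relative error in A:", np.linalg.norm(A.value - A_star) / np.linalg.norm(A_star))
print("relative error in B:", np.linalg.norm(B.value - B_star) / np.linalg.norm(B_star))
```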

A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers

by Sahand Negahban, Pradeep Ravikumar, Martin J. Wainwright, Bin Yu
"... ..."
Abstract - Cited by 218 (32 self) - Add to MetaCart
Abstract not found

Fixed point and Bregman iterative methods for matrix rank minimization

by Shiqian Ma, Donald Goldfarb, Lifeng Chen - Math. Program., Ser. A, 2008
"... ..."
Abstract - Cited by 196 (12 self) - Add to MetaCart
Abstract not found

Matrix completion from a few entries

by Raghunandan H. Keshavan, Andrea Montanari, Sewoong Oh
"... Let M be a random nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E | = O(r n) observed entries with relative root mean square error RMSE ≤ C(α) ..."
Abstract - Cited by 196 (9 self) - Add to MetaCart
Let M be a random nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE ≤ C(α)...

Citation Context

...sive size of actual datasets, we shall focus on the limit m, n → ∞ with m/n = α fixed. We further assume that the factors U, V are unstructured. This notion is formalized by the incoherence condition [3] as defined in Section II. In particular the incoherence condition... (Footnote: Indeed, in 2006, NETFLIX made public such a dataset with m ≈ 5·10⁵, n ≈ 2·10⁴ and |E| ≈ 10⁸ and challenged the research community...)
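The reconstruction the abstract describes begins with a spectral estimate: rescale the zero-filled observed matrix so its expectation matches M, then project to rank r. A numpy sketch of that first step (the paper's trimming of over-represented rows and columns and the subsequent refinement are omitted):

```python
import numpy as np

def spectral_estimate(M_obs, mask, r):
    """Rank-r projection of the rescaled observed matrix (sketch).
    mask: 0/1 array of observed entries; M_obs: observed values, zeros elsewhere."""
    m, n = M_obs.shape
    scale = (m * n) / mask.sum()   # undo the expected downweighting from sampling
    U, s, Vt = np.linalg.svd(scale * M_obs, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]  # best rank-r approximation
```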
