The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices (2010)

by Z. Lin, M. Chen, Y. Ma
Venue: Mathematical Programming

Results 1 - 10 of 329

Robust principal component analysis?

by Emmanuel J. Candès, Xiaodong Li, Yi Ma, John Wright - Journal of the ACM, 2011
"... Abstract This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the ..."
Abstract - Cited by 569 (26 self) - Add to MetaCart
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.

Citation Context

...ponent Pursuit’s ability to correctly recover matrices of various rank from errors of various density. We then sketch applications in background modeling from video and removing shadows and specularities from face images. While the exact recovery guarantee provided by Theorem 1.1 is independent of the particular algorithm used to solve Principal Component Pursuit, its applicability to large scale problems depends on the availability of scalable algorithms for nonsmooth convex optimization. For the experiments in this section, we use an augmented Lagrange multiplier algorithm introduced in [33, 51]. In Section 5, we describe this algorithm in more detail, and explain why it is our algorithm of choice for sparse and low-rank separation. One important implementation detail in our approach is the choice of λ. Our analysis identifies one choice, λ = 1/√max(n1, n2), which works well for incoherent matrices. In order to illustrate the theory, throughout this section we will always choose λ = 1/√max(n1, n2). For practical problems, however, it is often possible to improve performance by choosing λ according to prior knowledge about the solution. For example, if we know that S is very spar...
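A minimal numpy sketch of the inexact augmented Lagrange multiplier iteration for Principal Component Pursuit described above, with λ = 1/√max(n1, n2) as the default trade-off. The initial value of mu, the growth factor rho, and the stopping rule are illustrative assumptions rather than the exact settings of the algorithm in [33, 51].

import numpy as np

def soft_threshold(X, tau):
    # Entrywise shrinkage: proximal operator of tau*||.||_1.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau*||.||_*.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp_inexact_alm(D, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
    # Sketch of inexact ALM for min ||L||_* + lam*||S||_1 s.t. D = L + S.
    D = np.asarray(D, dtype=float)
    n1, n2 = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(n1, n2))               # the choice identified by the theory above
    if mu is None:
        mu = 0.25 * n1 * n2 / (np.abs(D).sum() + 1e-12)  # heuristic start (assumption)
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)              # low-rank update
        S = soft_threshold(D - L + Y / mu, lam / mu)   # sparse update
        R = D - L - S                                  # primal residual
        Y = Y + mu * R                                 # dual ascent
        mu = rho * mu
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return L, S

For a background-modeling experiment of the kind described above, D would hold one vectorized video frame per column; L then collects the recovered background and S the moving objects.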

RASL: Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images

by Yigang Peng, Arvind Ganesh, John Wright, Wenli Xu, Yi Ma, 2010
"... This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of ..."
Abstract - Cited by 161 (6 self) - Add to MetaCart
This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.

Citation Context

...al for its practical use. Fortunately, a recent flurry of work on high-dimensional nuclear norm minimization has shown that such problems are well within the capabilities of a standard PC [27], [28], [29]. In this section, we show how one such fast first-order method, the Augmented Lagrange Multiplier (ALM) algorithm [29], [30], [16], can be adapted to efficiently solve (7). The basic idea of the ALM ...
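The snippet breaks off just as it begins to explain the basic idea of ALM. As a generic reminder (a sketch of the general scheme for min_x f(x) s.t. h(x) = 0, not the specific problem (7) solved in RASL), the method augments the objective with a Lagrange multiplier term and a quadratic penalty on the equality constraint, then alternates primal minimization with a dual ascent step:

\[
\begin{aligned}
\mathcal{L}_\mu(x, Y) &= f(x) + \langle Y, h(x)\rangle + \tfrac{\mu}{2}\,\|h(x)\|_F^2,\\
x_{k+1} &= \arg\min_x\; \mathcal{L}_{\mu_k}(x, Y_k),\\
Y_{k+1} &= Y_k + \mu_k\, h(x_{k+1}), \qquad \mu_{k+1} = \rho\,\mu_k \ (\rho \ge 1).
\end{aligned}
\]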

Robust Subspace Segmentation by Low-Rank Representation

by Guangcan Liu, Zhouchen Lin, Yong Yu
"... We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowestrank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlik ..."
Abstract - Cited by 145 (25 self) - Add to MetaCart
We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowest-rank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlike the well-known sparse representation (SR), which computes the sparsest representation of each data vector individually, LRR aims at finding the lowest-rank representation of a collection of vectors jointly. LRR better captures the global structure of data, giving a more effective tool for robust subspace segmentation from corrupted data. Both theoretical and experimental results show that LRR is a promising tool for subspace segmentation.

Citation Context

...segment the vertices of the graph into k clusters, where Y1 and Y2 are Lagrange multipliers and µ > 0 is a penalty parameter. The above problem can be solved by either exact or inexact ALM algorithms (Lin et al., 2009). For efficiency, we choose the inexact ALM, which we outline in Algorithm 1. Its convergence properties can be proved similarly to those in (Lin et al., 2009). Notice that although steps 1 and 3 o...

Robust Recovery of Subspace Structures by Low-Rank Representation

by Guangcan Liu, et al.
"... In this work we address the subspace recovery problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to segment the samples into their respective subspaces and correct the possible errors as well. To this end, we propose a novel method ter ..."
Abstract - Cited by 128 (24 self) - Add to MetaCart
In this work we address the subspace recovery problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to segment the samples into their respective subspaces and correct the possible errors as well. To this end, we propose a novel method termed Low-Rank Representation (LRR), which seeks the lowest-rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that LRR well solves the subspace recovery problem: when the data is clean, we prove that LRR exactly captures the true subspace structures; for the data contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for the data corrupted by arbitrary errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace segmentation and error correction, in an efficient way.

Citation Context

... low-rank recovery to the original data X0. The optimization problem (7) is convex and can be solved by various methods. For efficiency, we adopt in this paper the Augmented Lagrange Multiplier (ALM) [36], [37] method. We first convert (7) to the following equivalent problem: min_{Z,E,J} ‖J‖∗ + λ‖E‖2,1, s.t. X = AZ + E, Z = J. This problem can be solved by the ALM method, which minimizes the following a...
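In ALM schemes for this kind of problem, the E-subproblem separates over columns and has a closed-form column-shrinkage solution (the proximal operator of the ‖·‖2,1 norm). Below is a small numpy sketch of that step; the variable names and the illustrative call inside the iteration are assumptions for exposition, not code from the paper.

import numpy as np

def prox_l21(Q, tau):
    # argmin_E tau*||E||_{2,1} + 0.5*||E - Q||_F^2:
    # shrink each column of Q toward zero by tau in Euclidean length.
    norms = np.linalg.norm(Q, axis=0)
    scale = np.maximum(norms - tau, 0.0) / (norms + 1e-12)
    return Q * scale  # per-column factor broadcasts over rows

# Illustrative use inside one ALM sweep (X, A, Z, Y1, mu, lam assumed given):
#   E = prox_l21(X - A @ Z + Y1 / mu, lam / mu)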

Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation

by Zhouchen Lin, Risheng Liu, Zhixun Su
"... Many machine learning and signal processing problems can be formulated as linearly constrained convex programs, which could be efficiently solved by the alternating direction method (ADM). However, usually the subproblems in ADM are easily solvable only when the linear mappings in the constraints ar ..."
Abstract - Cited by 55 (8 self) - Add to MetaCart
Many machine learning and signal processing problems can be formulated as linearly constrained convex programs, which could be efficiently solved by the alternating direction method (ADM). However, usually the subproblems in ADM are easily solvable only when the linear mappings in the constraints are identities. To address this issue, we propose a linearized ADM (LADM) method by linearizing the quadratic penalty term and adding a proximal term when solving the subproblems. For fast convergence, we also allow the penalty to change adaptively according to a novel update rule. We prove the global convergence of LADM with adaptive penalty (LADMAP). As an example, we apply LADMAP to solve low-rank representation (LRR), which is an important subspace clustering technique yet suffers from high computation cost. By combining LADMAP with a skinny SVD representation technique, we are able to reduce the complexity O(n³) of the original ADM based method to O(rn²), where r and n are the rank and size of the representation matrix, respectively, hence making LRR possible for large scale applications. Numerical experiments verify that for LRR our LADMAP based methods are much faster than state-of-the-art algorithms.

Citation Context

...PG) algorithm [16] is a popular technique due to its guaranteed O(k⁻²) convergence rate, where k is the iteration number. The alternating direction method (ADM) has also regained a lot of attention [11, 15]. It updates the variables alternately by minimizing the augmented Lagrangian function with respect to the variables in a Gauss-Seidel manner. While APG has to convert (1) into an approximate unconstr...
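To illustrate the linearization idea described in the abstract, here is a numpy sketch of a linearized ADM update for a toy problem of the form min ‖Z‖∗ s.t. AZ = B, where the general linear map A prevents a closed-form subproblem. The safeguard factor on η and the simple geometric penalty growth are assumptions; LADMAP's actual adaptive penalty rule is more elaborate.

import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau*||.||_*.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def ladm_nuclear(A, B, mu=1e-2, rho=1.1, mu_max=1e6, max_iter=300, tol=1e-6):
    # Sketch of linearized ADM for min ||Z||_* s.t. A @ Z = B.
    # The quadratic penalty is linearized at the current Z and a proximal
    # term with step 1/eta is added; eta must exceed ||A||_2^2.
    Z = np.zeros((A.shape[1], B.shape[1]))
    Y = np.zeros_like(B, dtype=float)
    eta = 1.02 * np.linalg.norm(A, 2) ** 2   # safeguard factor (assumption)
    for _ in range(max_iter):
        G = A.T @ (A @ Z - B + Y / mu)        # gradient of the smooth penalty part at Z
        Z = svt(Z - G / eta, 1.0 / (mu * eta))
        R = A @ Z - B
        Y = Y + mu * R                        # dual ascent
        mu = min(rho * mu, mu_max)            # simplified penalty update
        if np.linalg.norm(R, 'fro') <= tol * max(1.0, np.linalg.norm(B, 'fro')):
            break
    return Z

The only per-iteration cost beyond matrix products is one SVD of Z, which is what the skinny-SVD representation mentioned in the abstract is designed to keep cheap.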

Two proposals for robust PCA using semidefinite programming

by Michael McCoy, Joel A. Tropp, 2010
"... The performance of principal component analysis (PCA) suffers badly in the presence of outliers. This paper proposes two novel approaches for robust PCA based on semidefinite programming. The first method, maximum mean absolute deviation rounding (MDR), seeks directions of large spread in the data ..."
Abstract - Cited by 47 (2 self) - Add to MetaCart
The performance of principal component analysis (PCA) suffers badly in the presence of outliers. This paper proposes two novel approaches for robust PCA based on semidefinite programming. The first method, maximum mean absolute deviation rounding (MDR), seeks directions of large spread in the data while damping the effect of outliers. The second method produces a low-leverage decomposition (LLD) of the data that attempts to form a low-rank model for the data by separating out corrupted observations. This paper also presents efficient computational methods for solving these SDPs. Numerical experiments confirm the value of these new techniques.

Clustering partially observed graphs via convex optimization.

by Yudong Chen, Ali Jalali, Sujay Sanghavi, Huan Xu - Journal of Machine Learning Research, 2014
"... Abstract This paper considers the problem of clustering a partially observed unweighted graph-i.e., one where for some node pairs we know there is an edge between them, for some others we know there is no edge, and for the remaining we do not know whether or not there is an edge. We want to organiz ..."
Abstract - Cited by 47 (13 self) - Add to MetaCart
This paper considers the problem of clustering a partially observed unweighted graph, i.e., one where for some node pairs we know there is an edge between them, for some others we know there is no edge, and for the remaining we do not know whether or not there is an edge. We want to organize the nodes into disjoint clusters so that there is relatively dense (observed) connectivity within clusters, and sparse across clusters. We take a novel yet natural approach to this problem, by focusing on finding the clustering that minimizes the number of "disagreements", i.e., the sum of the number of (observed) missing edges within clusters, and (observed) present edges across clusters. Our algorithm uses convex optimization; its basis is a reduction of disagreement minimization to the problem of recovering an (unknown) low-rank matrix and an (unknown) sparse matrix from their partially observed sum. We evaluate the performance of our algorithm on the classical Planted Partition/Stochastic Block Model. Our main theorem provides sufficient conditions for the success of our algorithm as a function of the minimum cluster size, edge density and observation probability; in particular, the results characterize the tradeoff between the observation probability and the edge density gap. When there are a constant number of clusters of equal size, our results are optimal up to logarithmic factors.

Citation Context

...(A): for η ∈ (0, 1) do: solve (1); if the solution K is valid, then output the clustering w.r.t. K and exit; end for; declare failure. We recommend using the fast implementation algorithms developed in (Lin et al., 2009), which is specially tailored ... [Footnotes: 1. In particular, it is the ℓ1 norm of the singular value vector, while rank is the ℓ0 norm of the same. 2. An SVD of a valid K will yield singular vectors with disjoint sup...]

Robust Photometric Stereo via Low-Rank Matrix Completion and Recovery

by Lun Wu, Arvind Ganesh, Boxin Shi, Yasuyuki Matsushita, Yongtian Wang, Yi Ma
"... Abstract. We present a new approach to robustly solve photometric stereo problems. We cast the problem of recovering surface normals from multiple lighting conditions as a problem of recovering a low-rank matrix with both missing entries and corrupted entries, which model all types of non-Lambertian ..."
Abstract - Cited by 47 (12 self) - Add to MetaCart
We present a new approach to robustly solve photometric stereo problems. We cast the problem of recovering surface normals from multiple lighting conditions as a problem of recovering a low-rank matrix with both missing entries and corrupted entries, which model all types of non-Lambertian effects such as shadows and specularities. Unlike previous approaches that use least squares or heuristic robust techniques, our method uses advanced convex optimization techniques that are guaranteed to find the correct low-rank matrix by simultaneously fixing its missing and erroneous entries. Extensive experimental results demonstrate that our method achieves unprecedentedly accurate estimates of surface normals in the presence of a significant amount of shadows and specularities. The new technique can be used to improve virtually any photometric stereo method, including uncalibrated photometric stereo.

Citation Context

...gence properties, they are not very scalable for large problems. Fortunately, there has been a flurry of work recently on developing scalable algorithms for high-dimensional nuclear-norm minimization [16, 20, 21]. In this section, we show how one such algorithm, the Augmented Lagrange Multiplier (ALM) method [16, 22], can be adapted to efficiently solve Eq. (10). The basic idea of the ALM method is to minimiz...
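As a rough illustration of how an ALM iteration can accommodate both corrupted and missing entries, the following numpy sketch addresses min ‖A‖∗ + λ‖P_Ω(E)‖1 s.t. D = A + E, where Ω is the set of observed entries. This is a simplified stand-in for Eq. (10), whose exact form is not shown in the snippet, and the parameter defaults are assumptions.

import numpy as np

def svt(X, tau):
    # Proximal operator of tau*||.||_* (singular value shrinkage).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def alm_with_missing(D, mask, lam, mu=1e-3, rho=1.5, max_iter=500, tol=1e-7):
    # mask is boolean, True on observed entries; unobserved entries of D
    # should be zero-filled. Off the mask, E carries no l1 cost, so its
    # update simply absorbs the residual there.
    D = np.asarray(D, dtype=float)
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)                     # low-rank part
        Q = D - A + Y / mu
        E = np.where(mask, soft_threshold(Q, lam / mu), Q)    # corruption / fill-in
        R = D - A - E
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R, 'fro') <= tol * max(1.0, np.linalg.norm(D, 'fro')):
            break
    return A, E

In the photometric-stereo setting described above, D would stack one image per column, the mask would exclude pixels discarded as shadows, and the sparse term would absorb specularities and other non-Lambertian corruptions.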

SpaRCS: Recovering low-rank and sparse matrices from compressive measurements

by Andrew E. Waters, Aswin C. Sankaranarayanan, Richard G. Baraniuk, 2011
"... We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) =A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization ..."
Abstract - Cited by 46 (4 self) - Add to MetaCart
We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) = A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. Empirically, SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.

Citation Context

...ails sorting the signal proxy magnitudes and choosing the largest 2K elements. Figure 2 compares the performance of SpaRCS with two alternate recovery algorithms. We implement CS versions of the IT [18] and APG [19] algorithms, which solve the problems min τ(‖L‖∗ + ‖vec(S)‖1) + ½‖L‖²F + ½‖S‖²F s.t. y = A(L + S) and min ‖L‖∗ + ‖vec(S)‖1 s.t. y = A(L + S), respectively. We endeavor to tune the...

A Closed Form Solution to Robust Subspace Estimation and Clustering

by Paolo Favaro, René Vidal, Avinash Ravichandran
"... We consider the problem of fitting one or more subspaces to a collection of data points drawn from the subspaces and corrupted by noise/outliers. We pose this problem as a rank minimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean, self-expressive, low- ..."
Abstract - Cited by 43 (4 self) - Add to MetaCart
We consider the problem of fitting one or more subspaces to a collection of data points drawn from the subspaces and corrupted by noise/outliers. We pose this problem as a rank minimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean, self-expressive, low-rank dictionary plus a matrix of noise/outliers. Our key contribution is to show that, for noisy data, this non-convex problem can be solved very efficiently and in closed form from the SVD of the noisy data matrix. Remarkably, this is true both for a single subspace and for multiple subspaces. An important difference with respect to existing methods is that our framework results in a polynomial thresholding of the singular values with minimal shrinkage. Indeed, a particular case of our framework in the case of a single subspace leads to classical PCA, which requires no shrinkage. In the case of multiple subspaces, our framework provides an affinity matrix that can be used to cluster the data according to the subspaces. In the case of data corrupted by outliers, a closed-form solution appears elusive. We thus use an augmented Lagrangian optimization framework, which requires a combination of our proposed polynomial thresholding operator with the more traditional shrinkage-thresholding operator.

Citation Context

...ex problem min_{A,E} ‖A‖∗ + γ‖E‖1 s.t. D = A + E. (6) While a closed form solution to this problem is not known, convex optimization techniques can be used to find the minimizer. We refer the reader to [11] for a review of numerous approaches. One such approach is the Augmented Lagrange Multiplier (ALM) method, which minimizes ‖A‖∗ + γ‖E‖1 + 〈Y, D−A−E〉 + (α/2)‖D−A−E‖²F. (7) The third term enforces the ...
