Rank-sparsity incoherence for matrix decomposition (2010)
Citations: 229 (21 self)
Citations
7710 | Matrix Analysis
- Horn, Johnson
- 1985
Citation Context ... Here, we used the fact that ξ(B⋆)/(1 − 4ξ(B⋆)µ(A⋆)) < γ in the second inequality. □ Proof of Proposition 3. Based on the Perron-Frobenius theorem [18], one can conclude that ‖P‖ ≥ ‖Q‖ if Pi,j ≥ |Qi,j|, ∀ i, j. Thus, we need only consider the matrix that has 1 in every location in the support set Ω(A) and 0 everywhere else. Based on the definition ...
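The norm-domination fact quoted in this context (entrywise domination implies spectral-norm domination) can be sanity-checked numerically. This sketch is my own illustration, not code from the cited work:

```python
import numpy as np

# Quoted fact: if P[i, j] >= |Q[i, j]| for all i, j, then ||P|| >= ||Q||
# in spectral norm.
rng = np.random.default_rng(0)
for _ in range(100):
    Q = rng.standard_normal((8, 8))
    # P dominates Q entrywise in absolute value
    P = np.abs(Q) + rng.uniform(0.0, 1.0, size=(8, 8))
    assert np.linalg.norm(P, 2) >= np.linalg.norm(Q, 2) - 1e-12
```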
5395 | Convex Analysis
- Rockafellar
- 1970
Citation Context ... n-space orthogonal to the column space of B⋆. We have that P_T(B⋆)⊥(M) = (I_n×n − P_U)M(I_n×n − P_V), (4.2) where I_n×n is the n × n identity matrix. Following standard notation in convex analysis [26], we denote the subdifferential of a convex function f at a point x̂ in its domain by ∂f(x̂). The subdifferential ∂f(x̂) consists of all y such that f(x) ≥ f(x̂) + 〈y, x − x̂〉, ∀x. From the optimality ...
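The projection formula (4.2) quoted here is easy to illustrate with NumPy; U and V below are the singular-vector factors of a sample low-rank B (my sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 2
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 matrix
U, _, Vt = np.linalg.svd(B)
U, V = U[:, :r], Vt[:r, :].T
PU, PV = U @ U.T, V @ V.T      # projectors onto col(B) and row(B)
I = np.eye(n)

M = rng.standard_normal((n, n))
M_perp = (I - PU) @ M @ (I - PV)   # P_{T(B*)^perp}(M), as in eq. (4.2)

# result is orthogonal to both the column space and the row space of B
assert np.allclose(PU @ M_perp, 0)
assert np.allclose(M_perp @ PV, 0)
```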
3606 | Compressed sensing
- Donoho
- 2006
Citation Context ... recovering sparse solutions [9]. Incoherence is also a concept that is used in recent work under the title of compressed sensing, which aims to recover “low-dimensional” objects such as sparse vectors [3, 12] and low-rank matrices [24, 4] given incomplete observations. Our work is closer in spirit to that in [10], and can be viewed as a method to recover the “simplest explanation” of a matrix given an “ov...
2620 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context ... recovering sparse solutions [9]. Incoherence is also a concept that is used in recent work under the title of compressed sensing, which aims to recover “low-dimensional” objects such as sparse vectors [3, 12] and low-rank matrices [24, 4] given incomplete observations. Our work is closer in spirit to that in [10], and can be viewed as a method to recover the “simplest explanation” of a matrix given an “ov...
2385 | Random Graphs
- Bollobás
- 2001
Citation Context ... of A⋆. We have that degmax(A⋆) ≤ (m/n) log(n), with high probability. The proof of this lemma follows from a standard balls-and-bins argument, and can be found in several references (see, for example, [2]). Next we consider low-rank matrices in which the singular vectors are chosen uniformly at random from the set of all partial isometries. Such a model was considered in recent work on the matrix comp...
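The balls-and-bins bound quoted in this context can be checked by simulation: place m support entries uniformly at random among n rows and compare the maximum row degree against (m/n) log(n). This is my illustration, under the assumption that row indices are drawn uniformly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
m = 10 * n                       # m support entries placed at random
rows = rng.integers(0, n, size=m)
deg_max = np.bincount(rows, minlength=n).max()
bound = (m / n) * np.log(n)      # the lemma's high-probability bound
assert deg_max <= bound          # holds with high probability
```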
1589 |
Graphical Models
- Lauritzen
- 1996
Citation Context ... the sparse and low-rank matrices having different interpretations depending on the application. In a statistical model selection setting, the sparse matrix can correspond to a Gaussian graphical model [19] and the low-rank matrix can summarize the effect of latent, unobserved variables. Decomposing a given model into these simpler components is useful for developing efficient estimation and inference a...
1100 | Semidefinite Programming
- Vandenberghe, Boyd
- 1994
Citation Context ... C. (1.3) Here γ is a parameter that provides a trade-off between the low-rank and sparse components. This optimization problem is convex, and can in fact be rewritten as a semidefinite program (SDP) [31] (see Appendix A). We prove that (Â, B̂) = (A⋆, B⋆) is the unique optimum of (1.3) for a range of γ if µ(A⋆)ξ(B⋆) < 1/6 (see Theorem 2 in Section 4.2). Thus, the conditions for ...
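The convex program (1.3) described in this context, min γ‖A‖₁ + ‖B‖∗ subject to A + B = C, can be solved with standard proximal splitting. The paper itself recasts it as an SDP; the ADMM sketch below is my own substitute, using only NumPy, with hypothetical helper names `svt`, `shrink`, and `decompose`:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Entrywise soft threshold: prox operator of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose(C, gamma, rho=1.0, iters=500):
    """ADMM sketch for: min gamma*||A||_1 + ||B||_*  s.t.  A + B = C."""
    A = np.zeros_like(C)
    B = np.zeros_like(C)
    Y = np.zeros_like(C)  # dual variable for the constraint A + B = C
    for _ in range(iters):
        B = svt(C - A + Y / rho, 1.0 / rho)
        A = shrink(C - B + Y / rho, gamma / rho)
        Y += rho * (C - A - B)
    return A, B
```

Usage: build C as a sparse-plus-low-rank sum and call `decompose(C, gamma)`; the returned pair approximately satisfies the constraint, with γ trading off sparsity against rank as in (1.3).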
1069 | Introduction to Fourier Optics
- Goodman
- 1996
Citation Context ... partially coherent decomposition in optical systems. We outline an optics application that is described in greater detail in [13]. Optical imaging systems are commonly modeled using the Hopkins integral [16], which gives the output intensity at a point as a function of the input transmission via a quadratic form. In many applications the operator in this quadratic form can be well-approximated by a (fini...
871 | Exact matrix completion via convex optimization
- Candès, Recht
- 2009
Citation Context ... that was used to recover low-rank positive semidefinite matrices [22]. Indeed, several papers demonstrate that the nuclear norm heuristic recovers low-rank matrices in various rank minimization problems [24, 4]. Based on these results, we propose the following optimization formulation to recover A⋆ and B⋆ given C = A⋆ + B⋆: (Â, B̂) = arg min_{A,B} γ‖A‖1 + ‖B‖∗ s.t. A + B = C. (1.3) Here γ is a paramete...
630 | Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization
- Donoho, Elad
- 2003
Citation Context ... for such matrices. 1.2. Previous work using incoherence. The concept of incoherence was studied in the context of recovering sparse representations of vectors from a so-called “overcomplete dictionary” [10]. More concretely, consider a situation in which one is given a vector formed by a sparse linear combination of a few elements from a combined time-frequency dictionary, i.e., a vector formed by adding...
580 | Uncertainty principles and ideal atomic decomposition
- Donoho, Huo
- 2001
Citation Context ... and sinusoids that compose the vector from the infinitely many possible solutions. Based on a notion of time-frequency incoherence, the ℓ1 heuristic was shown to succeed in recovering sparse solutions [9]. Incoherence is also a concept that is used in recent work under the title of compressed sensing, which aims to recover “low-dimensional” objects such as sparse vectors [3, 12] and low-rank matrices ...
566 | For most large underdetermined systems of linear equations the minimal ℓ1 solution is also the sparsest solution
- Donoho
Citation Context ... used as an effective surrogate for the number of nonzero entries of a vector, and a number of results provide conditions under which this heuristic recovers sparse solutions to ill-posed inverse problems [11]. More recently, the nuclear norm has been shown to be an effective surrogate for the rank of a matrix [14]. This relaxation is a generalization of the previously studied trace heuristic that was used...
561 | Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
- Recht, Fazel, et al.
Citation Context ... that was used to recover low-rank positive semidefinite matrices [22]. Indeed, several papers demonstrate that the nuclear norm heuristic recovers low-rank matrices in various rank minimization problems [24, 4]. Based on these results, we propose the following optimization formulation to recover A⋆ and B⋆ given C = A⋆ + B⋆: (Â, B̂) = arg min_{A,B} γ‖A‖1 + ‖B‖∗ s.t. A + B = C. (1.3) Here γ is a paramete...
382 | YALMIP: A toolbox for modeling and optimization in MATLAB
- Lofberg
- 2004
Citation Context ... confirm the theoretical predictions in this paper with some simple experimental results. We also present a heuristic to choose the trade-off parameter γ. All our simulations were performed using YALMIP [20] and the SDPT3 software [30] for solving SDPs. Copyright © by SIAM. Unauthorized reproduction of this article is prohibited. ...
361 | SDPT3 - a MATLAB software package for semidefinite programming
- Toh, Todd, et al.
- 1999
Citation Context ... predictions in this paper with some simple experimental results. We also present a heuristic to choose the trade-off parameter γ. All our simulations were performed using YALMIP [20] and the SDPT3 software [30] for solving SDPs. ...
286 | Matrix Rank Minimization with Applications
- Fazel
- 2001
Citation Context ... conditions under which this heuristic recovers sparse solutions to ill-posed inverse problems [11]. More recently, the nuclear norm has been shown to be an effective surrogate for the rank of a matrix [14]. This relaxation is a generalization of the previously studied trace heuristic that was used to recover low-rank positive semidefinite matrices [22]. Indeed, several papers demonstrate that the nucle...
240 | Algebraic Geometry: A First Course
- Harris
- 1995
Citation Context ... pattern of a matrix and its row/column spaces. This condition is based on quantities involving the tangent spaces to the algebraic variety of sparse matrices and the algebraic variety of low-rank matrices [17]. Another point of ambiguity in the problem statement is that one could subtract a nonzero entry from A and add it to B; the sparsity level of A is strictly improved, while the rank of B is increased...
183 | Mathematical Control Theory.
- Sontag
- 1998
Citation Context ... also be posed in the system identification setting. Linear time-invariant (LTI) systems can be represented by Hankel matrices, where the matrix represents the input-output relationship of the system [28]. Thus, a sparse Hankel matrix corresponds to an LTI system with a sparse impulse response. A low-rank Hankel matrix corresponds to a system with small model order, and provides a minimal realization ...
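The rank-equals-model-order correspondence for Hankel matrices quoted here can be illustrated quickly; the one-pole system below is my own example, not from the cited book:

```python
import numpy as np

# Impulse response of the one-pole LTI system y[k] = a*y[k-1] + u[k]:
# h[k] = a^k, a first-order (model order 1) system.
a, N = 0.5, 8
h = a ** np.arange(2 * N - 1)

# Hankel matrix H[i, j] = h[i + j] built from the impulse response
H = h[np.add.outer(np.arange(N), np.arange(N))]

# model order 1 implies the Hankel matrix has rank 1
assert np.linalg.matrix_rank(H) == 1
```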
147 | Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization
- Wright, Ganesh, et al.
Citation Context ... (1.3) yields exact recovery with high probability even when the size of the support of A⋆ is super-linear in n. During final preparation of this manuscript we learned of related contemporaneous work [30] that specifically studies the problem of decomposing random sparse and low-rank matrices. In addition to the assumptions of our random sparsity and random orthogonal models, [30] also requires that t...
131 | Graph-theoretic arguments in low-level complexity
- Valiant
- 1977
Citation Context ... unobserved variables. Decomposing a given model into these simpler components is useful for developing efficient estimation and inference algorithms. In computational complexity, the notion of matrix rigidity [30] captures the smallest number of entries of a matrix that must be changed in order to reduce the rank of the matrix below a specified level (the changes can be of arbitrary magnitude). Bounds on the r...
123 | Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices.
- Fazel, Hindi, et al.
- 2003
Citation Context ... a sparse Hankel matrix corresponds to an LTI system with a sparse impulse response. A low-rank Hankel matrix corresponds to a system with small model order, and provides a minimal realization for a system [15]. Given an LTI system H as follows, H = Hs + Hlr, where Hs is sparse and Hlr is low-rank, obtaining a simple description of H requires...
90 | Characterization of the subdifferential of some matrix norms, Linear Algebra and its Applications
- Watson
- 1992
Citation Context ... ‖1 if and only if P_Ω(A⋆)(Q) = γ sign(A⋆), ‖P_Ω(A⋆)c(Q)‖∞ ≤ γ. (4.4) Here sign(A⋆i,j) equals +1 if A⋆i,j > 0, −1 if A⋆i,j < 0, and 0 if A⋆i,j = 0. We also have that Q ∈ ∂‖B⋆‖∗ if and only if [32] P_T(B⋆)(Q) = UV′, ‖P_T(B⋆)⊥(Q)‖ ≤ 1. (4.5) Note that these are necessary and sufficient conditions for (A⋆, B⋆) to be an optimum of (1.3). The following proposition provides sufficient c...
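The nuclear-norm subgradient condition (4.5) quoted in this context can be sanity-checked numerically: Q = UV′ (with zero component on the orthogonal part) should satisfy the subgradient inequality ‖X‖∗ ≥ ‖B‖∗ + 〈Q, X − B〉 for all X. This check is my own sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 6, 2
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2
U, s, Vt = np.linalg.svd(B)
Q = U[:, :r] @ Vt[:r, :]   # Q = U V', satisfying (4.5)

nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()  # nuclear norm
for _ in range(200):
    X = rng.standard_normal((n, n))
    # subgradient inequality: ||X||_* >= ||B||_* + <Q, X - B>
    assert nuc(X) >= nuc(B) + np.sum(Q * (X - B)) - 1e-9
```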
67 | Bemerkungen zur Theorie der beschränkten Bilinearformen mit unendlich vielen Veränderlichen. Journal für die reine und angewandte Mathematik 140
- Schur
- 1911
Citation Context ... rewrite µ(A) as follows: µ(A) = max_{‖x‖2=1, ‖y‖2=1} Σ_{(i,j)∈Ω(A)} xi yj. (B.15) Upper bound. For any matrix M, we have from the results in [27] that ‖M‖² ≤ max_{i,j} ri cj, (B.16) where ri = Σk |Mi,k| denotes the absolute row sum of row i and cj = Σk |Mk,j| denotes the absolute column sum of column j. Let M_Ω(A) be a matrix defined as follow...
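Schur's bound (B.16) quoted in this context, ‖M‖² ≤ max over i, j of r_i·c_j, is easy to verify numerically on random matrices; this is my own check, not code from the cited source:

```python
import numpy as np

rng = np.random.default_rng(5)
for _ in range(100):
    M = rng.standard_normal((7, 7))
    r = np.abs(M).sum(axis=1)   # absolute row sums r_i
    c = np.abs(M).sum(axis=0)   # absolute column sums c_j
    bound = np.sqrt(np.max(np.outer(r, c)))   # sqrt(max_{i,j} r_i c_j)
    assert np.linalg.norm(M, 2) <= bound + 1e-12
```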
63 | Introduction to Fourier Optics (Roberts,
- Goodman
- 2005
Citation Context ... partially coherent decomposition in optical systems. We outline an optics application that is described in greater detail in [13]. Optical imaging systems are commonly modeled using the Hopkins integral [16], which gives the output intensity at a point as a function of the input transmission via a quadratic form. In many applications the operator in this quadratic form can be well-approximated by a (fini...
61 | On the rank minimization problem over a positive semidefinite linear matrix inequality.
- Mesbahi, Papavassilopoulos
- 1997
Citation Context ... shown to be an effective surrogate for the rank of a matrix [14]. This relaxation is a generalization of the previously studied trace heuristic that was used to recover low-rank positive semidefinite matrices [22]. Indeed, several papers demonstrate that the nuclear norm heuristic recovers low-rank matrices in various rank minimization problems [24, 4]. Based on these results, we propose the following optimiza...
52 | SDPT3 - a MATLAB software package for semidefinite programming
- Toh, Todd, Tutuncu
- 1996
Citation Context ... predictions in this paper with some simple experimental results. We also present a heuristic to choose the trade-off parameter γ. All our simulations were performed using YALMIP [33] and the SDPT3 software [29] for solving SDPs. In the first experiment we generate random 25 × 25 matrices according to the random sparsity and random orthogonal models described in Section 4.4. To generate a random rank-k matri...
51 | Spectral methods for matrix rigidity with applications to size-depth tradeoffs and communication complexity.
- Lokam
- 2001
Citation Context ... identification [6]. Submitted on June 11, 2009; revised on October 4, 2010. ... a matrix have several implications in complexity theory [20]. Similarly, in a system identification setting the low-rank matrix represents a system with a small model order while the sparse matrix represents a system with a sparse impulse response. Decomposing...
33 | Algebraic Geometry. A First
- Harris
- 1992
Citation Context ... pattern of a matrix and its row/column spaces. This condition is based on quantities involving the tangent spaces to the algebraic variety of sparse matrices and the algebraic variety of low-rank matrices [17]. Another point of ambiguity in the problem statement is that one could subtract a nonzero entry from A⋆ and add it to B⋆; the sparsity level of A⋆ is strictly improved while the rank of B⋆ is in...
31 | Sparse and low-rank matrix decompositions - Chandrasekaran, Sanghavi, et al. - 2009 |
27 | Robust principal component analysis?, preprint
- Candès, Li, et al.
- 2009
Citation Context ... that certain tangent spaces have a transverse intersection. The implications of our results for the matrix rigidity problem are also demonstrated. We would like to mention here a related piece of work [5] that appeared subsequent to the submission of our paper. In [5] the authors analyze the convex program (1.3) for the sparse-plus-low-rank decomposition problem ...
24 | Phase-shifting masks for microlithography: Automated design and mask requirements
- Pati, Kailath
- 1994
Citation Context ... coherent imaging systems, and the corresponding system matrices have small rank. For systems that are not perfectly coherent, various methods have been proposed to find an optimal coherent decomposition [23], and these essentially identify the best approximation of the system matrix by a matrix of lower rank. At the other end are incoherent optical systems that allow some high frequencies, and are charac...
14 | Robust principal component analysis? Submitted
- Candès, Li, et al.
- 2009
Citation Context ... intersection. The implications of our results for the matrix rigidity problem are also demonstrated. We would like to mention here a related piece of work [5] that appeared subsequent to the submission of our paper. In [5] the authors analyze the convex program (1.3) for the sparse-plus-low-rank decomposition problem, and provide results for exact recovery ...
4 | Approximations for partially coherent optical imaging systems
- Fazel, Goodman
- 1998
Citation Context ... constraint that the sparse and low-rank matrices have Hankel structure. 2.4. Partially coherent decomposition in optical systems. We outline an optics application that is described in greater detail in [13]. Optical imaging systems are commonly modeled using the Hopkins integral [16], which gives the output intensity at a point as a function of the input transmission via a quadratic form. In many applic...
3 | Matrix rigidity. Linear Algebra and its Applications 304(1–3
- Codenotti
- 2000
Citation Context ... rigidity has a number of implications in complexity theory [20], such as the trade-offs between size and depth in arithmetic circuits. However, computing the rigidity of a matrix is intractable in general [21, 8]. For any M ∈ Rn×n one can check that RM(k) ≤ (n − k)² (this follows directly from a Schur complement argument). Generically every M ∈ Rn×n is very rigid, i.e., RM(k) = (n − k)² [30], although s...
3 | Personal communication
- Recht
- 2011
Citation Context ... span(U). We show in Appendix B that matrices with incoherent row/column spaces have small ξ; the proof technique for the lower bound here was suggested by Recht [26]. PROPOSITION 4. Let B ∈ Rn×n be any matrix with inc(B) defined as in (4.7) and ξ(B) defined as in (1.1). We have that inc(B) ≤ ξ(B) ≤ 2 inc(B). If B ∈ Rn×n is a full-rank matrix or a matrix such as ...
2 | On the complexity of matrix rank and rigidity
- Mahajan, Sarma
- 2007
Citation Context ... rigidity has a number of implications in complexity theory [20], such as the trade-offs between size and depth in arithmetic circuits. However, computing the rigidity of a matrix is intractable in general [21, 8]. For any M ∈ Rn×n one can check that RM(k) ≤ (n − k)² (this follows directly from a Schur complement argument). Generically every M ∈ Rn×n is very rigid, i.e., RM(k) = (n − k)² [30], although s...
1 | Latent-variable graphical model selection via convex optimization. Preprint available at www.arxiv.org/abs/1008.1290v1
- Chandrasekaran, Parrilo, et al.
- 2010
Citation Context ... reveals the graphical structure in the observed variables as well as the effect due to (and the number of) the unobserved latent variables. We discuss this application in more detail in a separate report [7]. 2.2. Matrix rigidity. The rigidity of a matrix M, denoted by RM(k), is the smallest number of entries that need to be changed in order to reduce the rank of M below k. Obtaining bounds on rigidity ...
1 | Matrix rigidity
- Codenotti
- 2000
Citation Context ... has a number of implications in complexity theory [21], such as the trade-offs between size and depth in arithmetic circuits. However, computing the rigidity of a matrix is intractable in general [22], [8]. For any M ∈ Rn×n one can check that RM(k) ≤ (n − k)² (this follows directly from a Schur complement argument). Generically every M ∈ Rn×n is very rigid, i.e., RM(k) = (n − k)² [31], although special...
1 | On the complexity of matrix rank and rigidity, Theory Comput
- Mahajan, Sarma
Citation Context ... rigidity has a number of implications in complexity theory [21], such as the trade-offs between size and depth in arithmetic circuits. However, computing the rigidity of a matrix is intractable in general [22], [8]. For any M ∈ Rn×n one can check that RM(k) ≤ (n − k)² (this follows directly from a Schur complement argument). Generically every M ∈ Rn×n is very rigid, i.e., RM(k) = (n − k)² [31], although sp...