Results 1 - 10 of 630
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
- SIAM Review
, 2010
"... Abstract The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and col ..."
Cited by 562 (20 self)
Abstract The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
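The convex relaxation at the heart of this paper is easy to state concretely. Below is a minimal sketch of nuclear norm minimization over an affine space, using cvxpy as the solver front end (a tooling assumption; the paper itself discusses interior-point and other algorithmic approaches). The dimensions and the Gaussian measurement ensemble are illustrative choices, not taken from the paper's experiments.

```python
# Minimal sketch: recover a low-rank matrix by minimizing the nuclear norm
# over the affine space defined by random Gaussian measurements.
# Dimensions, the measurement ensemble, and the use of cvxpy are assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, r, p = 10, 8, 2, 70          # matrix size, true rank, number of measurements

X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target
A_ops = rng.standard_normal((p, m, n))                              # linear maps A_i
b = np.tensordot(A_ops, X_true, axes=([1, 2], [0, 1]))              # b_i = <A_i, X>

X = cp.Variable((m, n))
constraints = [cp.sum(cp.multiply(A_ops[i], X)) == b[i] for i in range(p)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```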
Single-pixel imaging via compressive sampling
- IEEE Signal Processing Magazine
"... Humans are visual animals, and imaging sensors that extend our reach – cameras – have improved dramatically in recent times thanks to the introduction of CCD and CMOS digital technology. Consumer digital cameras in the mega-pixel range are now ubiquitous thanks to the happy coincidence that the semi ..."
Cited by 296 (19 self)
Humans are visual animals, and imaging sensors that extend our reach – cameras – have improved dramatically in recent times thanks to the introduction of CCD and CMOS digital technology. Consumer digital cameras in the mega-pixel range are now ubiquitous thanks to the happy coincidence that the semiconductor material of choice for large-scale electronics integration (silicon) also happens to readily convert photons at visual wavelengths into electrons. In contrast, imaging at wavelengths where silicon is blind is considerably more complicated, bulky, and expensive. Thus, for comparable resolution, a $500 digital camera for the visible becomes a $50,000 camera for the infrared. In this paper, we present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a much broader spectral range than conventional silicon-based cameras. Our approach fuses a new camera architecture based on a digital micromirror device (DMD – see Sidebar: Spatial Light Modulators) with the new mathematical theory and algorithms of compressive sampling (CS – see Sidebar: Compressive Sampling in a Nutshell). CS combines sampling and compression into a single nonadaptive linear measurement process [1–4]. Rather than measuring pixel samples of the scene under view, we measure inner products between the scene and a set of test functions.
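The measurement model behind the camera is just a sequence of inner products, which is easy to simulate. The sketch below is a toy software model only, with random ±1 patterns standing in for DMD configurations and illustrative sizes; it is not a description of the actual hardware.

```python
# Toy model of single-pixel acquisition: each measurement is the inner
# product of the scene with a pseudorandom +/-1 pattern (all sizes illustrative).
import numpy as np

rng = np.random.default_rng(1)
n = 64 * 64              # number of "pixels" in the scene
m = 1200                 # number of sequential single-pixel measurements

scene = np.zeros(n)
idx = rng.choice(n, size=20, replace=False)
scene[idx] = rng.standard_normal(20)              # a sparse test scene

patterns = rng.choice([-1.0, 1.0], size=(m, n))   # mirror patterns, one per measurement
y = patterns @ scene                              # photodiode readings: inner products
# Recovering `scene` from (patterns, y) is then a standard CS reconstruction.
```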
Compressed sensing and best k-term approximation
- J. Amer. Math. Soc
, 2009
"... Compressed sensing is a new concept in signal processing where one seeks to minimize the number of measurements to be taken from signals while still retaining the information necessary to approximate them well. The ideas have their origins in certain abstract results from functional analysis and app ..."
Cited by 282 (10 self)
Compressed sensing is a new concept in signal processing where one seeks to minimize the number of measurements to be taken from signals while still retaining the information necessary to approximate them well. The ideas have their origins in certain abstract results from functional analysis and approximation theory by Kashin [23] but were recently brought into the forefront by the work of Candès, Romberg and Tao [7, 5, 6] and Donoho [9], who constructed concrete algorithms and showed their promise in application. There remain several fundamental questions on both the theoretical and practical sides of compressed sensing. This paper is primarily concerned with one of these theoretical issues, revolving around just how well compressed sensing can approximate a given signal from a given budget of fixed linear measurements, as compared to adaptive linear measurements. More precisely, we consider discrete signals x ∈ ℝ^N, allocate n < N linear measurements of x, and describe the range of k for which these measurements encode enough information to recover x in the sense of ℓp to the accuracy of best k-term approximation. We also consider the problem of having such accuracy only with high probability.
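The benchmark in these results is the best k-term approximation error, which has a one-line definition: the ℓ1 mass left over after keeping the k largest-magnitude entries. A small sketch (the notation σ_k is standard in this literature; the function name is ours):

```python
# Best k-term approximation error sigma_k(x)_{l1}: the l1 tail left after
# keeping the k largest-magnitude entries of x.
import numpy as np

def sigma_k(x: np.ndarray, k: int) -> float:
    """l1 distance from x to its best k-sparse approximation."""
    tail = np.sort(np.abs(x))[:-k] if k > 0 else np.abs(x)
    return float(tail.sum())

x = np.array([5.0, -3.0, 0.5, 0.2, -0.1])
print(sigma_k(x, 2))   # 0.8: the sum of the three smallest magnitudes
```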
Sparse subspace clustering
- In CVPR
, 2009
"... We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all oth ..."
Cited by 241 (14 self)
We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has an SR with respect to a dictionary formed by all the other data points. In general, finding such an SR is NP-hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained 'exactly' by using ℓ1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers, and missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video. Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods.
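As a rough illustration of the pipeline described above, here is a compact sketch: an ℓ1-penalized self-expression step per point (a lasso-style variant that tolerates noise; the noiseless formulation in the paper uses an equality constraint Xc = x_i instead), followed by spectral clustering of the symmetrized coefficient matrix. The regularization weight and library choices are assumptions.

```python
# Sketch of the SSC pipeline: sparse self-expression per point, then
# spectral clustering of the resulting similarity matrix.
import numpy as np
import cvxpy as cp
from sklearn.cluster import SpectralClustering

def ssc(X: np.ndarray, n_clusters: int, lam: float = 1e-2) -> np.ndarray:
    """X: (d, N) data matrix whose columns are the points to cluster."""
    d, N = X.shape
    C = np.zeros((N, N))
    for i in range(N):
        c = cp.Variable(N)
        # l1-regularized self-expression, excluding the point itself (c_i = 0)
        obj = cp.Minimize(lam * cp.norm1(c) + cp.sum_squares(X @ c - X[:, i]))
        cp.Problem(obj, [c[i] == 0]).solve()
        C[:, i] = c.value
    W = np.abs(C) + np.abs(C).T          # symmetrized similarity from the SR
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```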
Robust Recovery of Signals From a Structured Union of Subspaces
, 2008
"... Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structu ..."
Cited by 221 (47 self)
Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. We then propose a mixed ℓ2/ℓ1 program for block-sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of the block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on the block RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
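The mixed ℓ2/ℓ1 program has a direct convex-programming transcription: minimize the sum of per-block ℓ2 norms subject to the measurements. The sketch below fixes equal-length blocks and a Gaussian sampling matrix purely for illustration.

```python
# Sketch of the mixed l2/l1 program for block-sparse recovery:
# minimize the sum of per-block l2 norms subject to the samples.
# Block layout, sizes, and the sampling ensemble are assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
m_blocks, block_len = 20, 4
N = m_blocks * block_len
n_samples = 40

x_true = np.zeros(N)
for b in rng.choice(m_blocks, size=2, replace=False):   # k = 2 active blocks
    x_true[b * block_len:(b + 1) * block_len] = rng.standard_normal(block_len)

A = rng.standard_normal((n_samples, N)) / np.sqrt(n_samples)
y = A @ x_true

x = cp.Variable(N)
blocks = [x[b * block_len:(b + 1) * block_len] for b in range(m_blocks)]
objective = cp.Minimize(sum(cp.norm(blk, 2) for blk in blocks))
cp.Problem(objective, [A @ x == y]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```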
Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1
"... We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal ℓq-quasinorm is also the sparsest one. This generalizes, and sightly improves, a similar result for the ℓ1-norm. We then introduce a simple numerical scheme to compu ..."
Cited by 192 (11 self)
We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal ℓq-quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the ℓ1-norm. We then introduce a simple numerical scheme to compute solutions with minimal ℓq-quasinorm, and we study its convergence. Finally, we display the results of some experiments which indicate that the ℓq-method performs better than other available methods.
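The paper's own scheme aside, a generic way to chase a minimal ℓq-quasinorm solution is iteratively reweighted least squares with a smoothing parameter driven to zero. The sketch below is that generic recipe, not necessarily the paper's exact algorithm; the choice of q, the ε schedule, and the iteration count are assumptions.

```python
# Generic IRLS for min ||x||_q^q subject to Ax = y, via the smoothed
# surrogate sum_i x_i^2 / w_i with w_i ~ (x_i^2 + eps^2)^{1 - q/2}.
import numpy as np

def irls_lq(A, y, q=0.5, eps=1.0, iters=50):
    x = A.T @ np.linalg.solve(A @ A.T, y)       # minimal-l2 feasible start
    for _ in range(iters):
        w = (x**2 + eps**2) ** (1 - q / 2)       # weights so x_i^2 / w_i ~ |x_i|^q
        Aw = A * w                               # A @ diag(w)
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, y))   # weighted LS on Ax = y
        eps = max(eps / 10, 1e-9)                # gradually remove the smoothing
    return x
```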
Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals
, 2009
"... Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, alt ..."
Cited by 158 (18 self)
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system’s performance that supports the empirical observations.
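The acquisition chain, demodulate with a random ±1 sequence, integrate, and sample slowly, can be mocked up in a few lines. This is a discrete-time toy: the chipping rate, the integrate-and-dump window, and the signal model are illustrative simplifications of the analog system described in the paper.

```python
# Toy random demodulator: multiply the input by a pseudorandom +/-1 chipping
# sequence on the Nyquist grid, integrate over windows, sample at a low rate.
import numpy as np

rng = np.random.default_rng(3)
W = 1024            # Nyquist-rate grid (bandlimit in samples)
R = 64              # back-end sampling rate; W must be divisible by R
K = 5               # number of active tones

freqs = rng.choice(W, size=K, replace=False)
t = np.arange(W)
x = sum(np.cos(2 * np.pi * f * t / W) for f in freqs)   # K-sparse multitone signal

chips = rng.choice([-1.0, 1.0], size=W)                 # random demodulation sequence
y = (chips * x).reshape(R, W // R).sum(axis=1)          # integrate-and-dump sampler
# y holds R low-rate measurements; recovery would use a sparse solver (e.g. l1).
```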
Compressive Sensing and Structured Random Matrices
- Radon Series Comp. Appl. Math.
, 2011
"... These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to ..."
Cited by 157 (18 self)
These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
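A prototypical structured random matrix in this setting is the partial random Fourier matrix: a random subset of rows of the unitary DFT. Constructing one takes two lines; the sizes below are illustrative, and sparse recovery from such measurements would then proceed via ℓ1-minimization as the notes describe.

```python
# Partial random Fourier matrix: randomly subsampled rows of the unitary DFT.
import numpy as np

rng = np.random.default_rng(4)
N, m = 256, 60
rows = rng.choice(N, size=m, replace=False)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
A = F[rows, :]                            # structured random measurement matrix
# For a sparse x, the measurements are y = A @ x; recovery uses l1-minimization.
```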
Iteratively reweighted least squares minimization for sparse recovery
- Comm. Pure Appl. Math
"... Under certain conditions (known as the Restricted Isometry Property or RIP) on the m ×Nmatrix Φ (where m < N), vectors x ∈ RN that are sparse (i.e. have most of their entries equal to zero) can be recovered exactly from y: = Φx even though Φ−1 (y) is typically an (N − m)dimensional hyperplane; in ..."
Cited by 156 (5 self)
Under certain conditions (known as the Restricted Isometry Property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ ℝ^N that are sparse (i.e., have most of their entries equal to zero) can be recovered exactly from y := Φx, even though Φ⁻¹(y) is typically an (N − m)-dimensional hyperplane; in addition, x is then equal to the element of Φ⁻¹(y) of minimal ℓ1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an Iteratively Reweighted Least Squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ⁻¹(y) with smallest ℓ2(w)-norm. If x^(n) is the solution at iteration step n, then the new weight w^(n) is defined by w_i^(n) := (|x_i^(n)|^2 + ε_n^2)^(−1/2), i = 1, …, N.
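The iteration is short enough to write out. The sketch below follows the weight update quoted above, with the ε_n rule based on the (K + 1)-st largest magnitude taken from the paper; the sparsity level K is assumed known, and the dense linear solves are a simplification.

```python
# Sketch of the IRLS iteration: each step solves a weighted least squares
# problem over the affine space Phi^{-1}(y), then shrinks eps.
import numpy as np

def irls_l1(Phi, y, K, iters=100):
    """Approach the minimal-l1 element of Phi^{-1}(y) by reweighted least squares."""
    N = Phi.shape[1]
    x = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)   # minimal-l2 feasible start
    eps = 1.0
    for _ in range(iters):
        w = 1.0 / np.sqrt(x**2 + eps**2)          # w_i^(n) = (|x_i^(n)|^2 + eps_n^2)^(-1/2)
        w_inv = 1.0 / w
        # Minimizer of the l2(w)-norm over Phi x = y: x = W^{-1} Phi^T (Phi W^{-1} Phi^T)^{-1} y
        x = w_inv * (Phi.T @ np.linalg.solve((Phi * w_inv) @ Phi.T, y))
        r = np.sort(np.abs(x))[::-1]              # magnitudes in decreasing order
        eps = min(eps, r[K] / N)                  # eps_n update from the paper
        if eps == 0:                              # exact K-sparse point reached
            break
    return x
```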