Results 1–10 of 61
The Dantzig Selector: Statistical Estimation When p Is Much Larger Than n
, 2007
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Xβ + z, where β ∈ Rp is a parameter vector of interest, X is a data matrix with possibly far fewer rows than columns, n ≪ p ..."
Abstract

Cited by 426 (12 self)
In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Xβ + z, where β ∈ R^p is a parameter vector of interest, X is a data matrix with possibly far fewer rows than columns, n ≪ p, and the z_i's are i.i.d. N(0, σ²). Is it possible to estimate β reliably based on the noisy data y? To estimate β, we introduce a new estimator, the Dantzig selector, which is a solution to the ℓ1-regularization problem

  min_{β̃ ∈ R^p} ‖β̃‖_{ℓ1}  subject to  ‖X*r‖_{ℓ∞} ≤ (1 + t⁻¹) √(2 log p) · σ,

where r is the residual vector y − Xβ̃ and t is a positive scalar. We show that if X obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector β is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability the Dantzig selector achieves a squared error within a factor of order log p of the ideal mean squared error one could attain with an oracle that knows the support of β.
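Because both the objective and the constraint are piecewise linear, the Dantzig selector can be computed by off-the-shelf linear programming. A minimal sketch, not the authors' implementation: `scipy.optimize.linprog`, the toy problem sizes, and the simplified threshold constant are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam, as a linear program.
    Split b = bp - bm with bp, bm >= 0; the objective sum(bp) + sum(bm)
    equals ||b||_1 at the optimum."""
    n, p = X.shape
    G = X.T @ X
    Xty = X.T @ y
    c = np.ones(2 * p)
    # |X^T y - G(bp - bm)| <= lam, written as two stacks of inequalities.
    A_ub = np.block([[-G, G], [G, -G]])
    b_ub = np.concatenate([lam - Xty, lam + Xty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    return z[:p] - z[p:]

# Toy instance: n << p, a 3-sparse beta, small Gaussian noise.
rng = np.random.default_rng(0)
n, p, sigma = 40, 80, 0.01
X = rng.standard_normal((n, p)) / np.sqrt(n)   # roughly unit-normed columns
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + sigma * rng.standard_normal(n)
lam = 2 * sigma * np.sqrt(2 * np.log(p))       # threshold of the same form as in the abstract
beta_hat = dantzig_selector(X, y, lam)
```

The returned vector is guaranteed feasible for the residual constraint, which is the defining property of the estimator.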
Single-pixel imaging via compressive sampling
 IEEE Signal Processing Magazine
"... Humans are visual animals, and imaging sensors that extend our reach – cameras – have improved dramatically in recent times thanks to the introduction of CCD and CMOS digital technology. Consumer digital cameras in the megapixel range are now ubiquitous thanks to the happy coincidence that the semi ..."
Abstract

Cited by 147 (11 self)
Humans are visual animals, and imaging sensors that extend our reach (cameras) have improved dramatically in recent times thanks to the introduction of CCD and CMOS digital technology. Consumer digital cameras in the megapixel range are now ubiquitous thanks to the happy coincidence that the semiconductor material of choice for large-scale electronics integration (silicon) also happens to readily convert photons at visual wavelengths into electrons. In contrast, imaging at wavelengths where silicon is blind is considerably more complicated, bulky, and expensive. Thus, for comparable resolution, a $500 digital camera for the visible becomes a $50,000 camera for the infrared. In this paper, we present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a much broader spectral range than conventional silicon-based cameras. Our approach fuses a new camera architecture based on a digital micromirror device (DMD; see Sidebar: Spatial Light Modulators) with the new mathematical theory and algorithms of compressive sampling (CS; see Sidebar: Compressive Sampling in a Nutshell). CS combines sampling and compression into a single nonadaptive linear measurement process [1–4]. Rather than measuring pixel samples of the scene under view, we measure inner products between the scene and a set of test functions.
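The measurement process just described can be simulated in a few lines; the scene, the mask distribution, and the pattern count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32 * 32        # pixels in the scene
M = 300            # single-pixel measurements, M << N
x = np.zeros(N)    # toy scene: a few bright pixels on a dark background
x[[10, 200, 777]] = [1.0, 0.5, 0.8]

# Each DMD configuration acts as a random +/-1 mask; the photodiode records
# the inner product <phi_i, x> for that mask -- one nonadaptive measurement.
Phi = rng.choice([-1.0, 1.0], size=(M, N))
y = Phi @ x        # M numbers stand in for the N-pixel scene
```

A CS reconstruction algorithm would then recover an approximation of x from the pair (Phi, y).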
Sparsity and Incoherence in Compressive Sampling
, 2006
"... We consider the problem of reconstructing a sparse signal x 0 ∈ R n from a limited number of linear measurements. Given m randomly selected samples of Ux 0, where U is an orthonormal matrix, we show that ℓ1 minimization recovers x 0 exactly when the number of measurements exceeds m ≥ Const · µ 2 (U) ..."
Abstract

Cited by 126 (10 self)
We consider the problem of reconstructing a sparse signal x0 ∈ R^n from a limited number of linear measurements. Given m randomly selected samples of U x0, where U is an orthonormal matrix, we show that ℓ1 minimization recovers x0 exactly when the number of measurements exceeds

  m ≥ Const · µ²(U) · S · log n,

where S is the number of nonzero components in x0, and µ(U) is the largest entry in U properly normalized: µ(U) = √n · max_{k,j} |U_{k,j}|. The smaller µ, the fewer samples needed. The result holds for “most” sparse signals x0 supported on a fixed (but arbitrary) set T. Given T, if the sign of x0 on each nonzero entry of T and the observed values of U x0 are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
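The quantity µ(U) is straightforward to compute. A small sketch contrasting a maximally coherent basis with a generic one; the sizes are illustrative assumptions.

```python
import numpy as np

def coherence(U):
    """mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for an n x n orthonormal U.
    It ranges from 1 (maximally incoherent) up to sqrt(n) (e.g. the identity)."""
    n = U.shape[0]
    return np.sqrt(n) * np.max(np.abs(U))

n, S = 64, 4
mu_id = coherence(np.eye(n))    # identity basis: worst case, mu = sqrt(n)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
mu_q = coherence(Q)             # random orthonormal basis: much smaller mu

m_needed = mu_q**2 * S * np.log(n)   # sample count from the bound, up to Const
```

Per the bound above, the smaller µ is, the fewer random samples of U x0 are needed for exact recovery.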
COMBINING GEOMETRY AND COMBINATORICS: A UNIFIED APPROACH TO SPARSE SIGNAL RECOVERY
"... Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constru ..."
Abstract

Cited by 77 (12 self)
There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
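In this combinatorial setting, the measurement matrix is the 0/1 adjacency matrix of a bipartite graph with small left degree. A random construction can be sketched as follows; the degree and sizes are illustrative assumptions, and a random graph is only a good expander with high probability, not deterministically.

```python
import numpy as np

def sparse_binary_matrix(m, n, d, seed=0):
    """Adjacency matrix of a random bipartite graph with left degree d:
    each of the n columns has exactly d ones among the m rows. For good
    expanders, such 0/1 matrices behave like RIP-p matrices with p near 1."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        # Connect left vertex j to d distinct right vertices (rows).
        Phi[rng.choice(m, size=d, replace=False), j] = 1.0
    return Phi

Phi = sparse_binary_matrix(m=50, n=200, d=8)
```

Each measurement Φx then touches only a few coordinates of x, which is what makes fast combinatorial decoding possible.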
Iteratively reweighted algorithms for compressive sensing
 in 33rd International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
, 2008
"... The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. In [1], it was shown empirically that using ℓ p minimization with p < 1 can do so with fewer measurements than with p = 1. In this paper we ..."
Abstract

Cited by 76 (6 self)
The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. In [1], it was shown empirically that ℓp minimization with p < 1 can do so with fewer measurements than p = 1. In this paper we consider the use of iteratively reweighted algorithms for computing local minima of the nonconvex problem. In particular, a certain regularization strategy is found to greatly improve the ability of a reweighted least-squares algorithm to recover sparse signals, with exact recovery being observed for signals that are much less sparse than required by an unregularized version (such as FOCUSS [2]). Improvements are also observed for the reweighted-ℓ1 approach of [3]. Index Terms: Compressive sensing, signal reconstruction, nonconvex optimization, iteratively reweighted least squares, ℓ1 minimization.
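A sketch of an ε-regularized reweighted least-squares scheme in this spirit: at each step, weights w_i = (x_i² + ε)^(p/2 − 1) are formed and a weighted least-squares problem with Ax = b is solved in closed form, with ε annealed toward zero. The annealing schedule and problem sizes are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def irls_lp(A, b, p=0.5, iters=60):
    """Iteratively reweighted least squares for min ||x||_p^p s.t. Ax = b,
    using epsilon-regularized weights to avoid division by zero."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from the min-energy solution
    eps = 1.0
    for _ in range(iters):
        w = (x**2 + eps) ** (p / 2.0 - 1.0)
        Q = (1.0 / w)[:, None] * A.T          # Q = diag(1/w) @ A.T
        x = Q @ np.linalg.solve(A @ Q, b)     # weighted LS solution with Ax = b
        eps = max(eps / 10.0, 1e-8)           # shrink the regularizer
    return x

# Toy instance: 5-sparse signal, 30 measurements of a length-100 vector.
rng = np.random.default_rng(2)
m, n, S = 30, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=S, replace=False)] = [1.5, -1.0, 2.0, -1.2, 0.8]
b = A @ x_true
x_hat = irls_lp(A, b)
```

Each iterate satisfies the constraint Ax = b exactly; the reweighting drives the iterates toward a sparse solution.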
One sketch for all: Fast algorithms for compressed sensing
 In Proc. 39th ACM Symp. Theory of Computing
, 2007
"... Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extr ..."
Abstract

Cited by 62 (11 self)
Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extract the information. The new approach acquires a small number of nonadaptive linear measurements of the signal and uses sophisticated algorithms to determine its information content. Emerging technologies can compute these general linear measurements of a signal at unit cost per measurement. This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements:
1. The measurement ensemble succeeds for all signals, with high probability over the random choices in its construction.
2. The number of measurements of the signal is optimal, except for a factor polylogarithmic in the signal length.
3. The running time of the algorithm is polynomial in the amount of information in the signal and polylogarithmic in the signal length.
4. The recovery algorithm offers the strongest possible type of error guarantee; moreover, it is a fully polynomial approximation scheme with respect to this type of error bound.
Emerging applications demand this level of performance, yet no other algorithm in the literature simultaneously achieves all four of these desiderata.
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
 SIAM J. Imaging Sci
, 2008
"... Abstract. We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖1: Au = f,u ∈ R n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number o ..."
Abstract

Cited by 62 (14 self)
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem

  min_{u ∈ R^n} µ‖u‖1 + (1/2)‖Au − f^k‖²₂

for a given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results demonstrating that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and Aᵀ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
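The "add the residual back into the data" structure of the iteration can be sketched as follows. The plain ISTA subproblem solver, problem sizes, and µ value are illustrative assumptions; the paper uses a much faster fixed-point continuation solver for the subproblem.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def solve_subproblem(A, f, mu, x0, iters=300):
    """Approximately solve min_u mu*||u||_1 + 0.5*||Au - f||^2 by
    proximal-gradient (ISTA) steps from a warm start."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of A^T(Au - f)
    x = x0.copy()
    for _ in range(iters):
        x = soft(x - A.T @ (A @ x - f) / L, mu / L)
    return x

def bregman_l1(A, f, mu, outer=6):
    """Bregman iteration for min ||u||_1 s.t. Au = f: feed the residual
    back into the data and re-solve the unconstrained subproblem."""
    fk = np.zeros_like(f)
    u = np.zeros(A.shape[1])
    for _ in range(outer):
        fk = f + (fk - A @ u)            # f^{k+1} = f + (f^k - A u^k)
        u = solve_subproblem(A, fk, mu, u)
    return u

# Toy noiseless instance: 5-sparse signal, 40 measurements, length 100.
rng = np.random.default_rng(4)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = np.zeros(n)
u_true[rng.choice(n, size=5, replace=False)] = [1.5, -1.0, 2.0, -1.2, 0.8]
f = A @ u_true
u_hat = bregman_l1(A, f, mu=0.1)
```

Each outer iteration removes the bias introduced by the ℓ1 penalty, driving Au toward f, which is the mechanism behind the finite-step exactness result.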
The smashed filter for compressive classification and target recognition
 in Proc. IS&T/SPIE Symposium on Electronic Imaging: Computational Imaging
, 2007
"... The theory of compressive sensing (CS) enables the reconstruction of a sparse or compressible image or signal from a small set of linear, nonadaptive (even random) projections. However, in many applications, including object and target recognition, we are ultimately interested in making a decision ..."
Abstract

Cited by 46 (17 self)
The theory of compressive sensing (CS) enables the reconstruction of a sparse or compressible image or signal from a small set of linear, nonadaptive (even random) projections. However, in many applications, including object and target recognition, we are ultimately interested in making a decision about an image rather than computing a reconstruction. We propose here a framework for compressive classification that operates directly on the compressive measurements without first reconstructing the image. We dub the resulting dimensionally reduced matched filter the smashed filter. The first part of the theory maps traditional maximum likelihood hypothesis testing into the compressive domain; we find that the number of measurements required for a given classification performance level does not depend on the sparsity or compressibility of the images but only on the noise level. The second part of the theory applies the generalized maximum likelihood method to deal with unknown transformations such as the translation, scale, or viewing angle of a target object. We exploit the fact that the set of transformed images forms a low-dimensional, nonlinear manifold in the high-dimensional image space. We find that the number of measurements required for a given classification performance level grows linearly in the dimensionality of the manifold but only logarithmically in the number of pixels/samples and image classes. Using both simulations and measurements from a new single-pixel compressive camera, we demonstrate the effectiveness of the smashed filter for target classification using very few measurements.
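The core idea (matched filtering directly on compressive measurements) can be sketched in a few lines; the random templates, noise level, and sizes below are toy assumptions standing in for real target images.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, C = 256, 20, 3            # pixels, measurements (M << N), classes

# Class templates: three distinct "images" (here, fixed random patterns).
templates = rng.standard_normal((C, N))

# One random measurement matrix is shared by sensing and classification.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Observe a noisy instance of class 1 through the compressive sensor.
x = templates[1] + 0.05 * rng.standard_normal(N)
y = Phi @ x

# Smashed filter: match directly in the measurement domain -- pick the
# class whose projected template is nearest to the measurements.
label = int(np.argmin([np.linalg.norm(y - Phi @ t) for t in templates]))
```

No image is ever reconstructed; the decision uses only the M-dimensional measurement vector.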
FIXED-POINT CONTINUATION FOR ℓ1-MINIMIZATION: METHODOLOGY AND CONVERGENCE
"... We present a framework for solving largescale ℓ1regularized convex minimization problem: min �x�1 + µf(x). Our approach is based on two powerful algorithmic ideas: operatorsplitting and continuation. Operatorsplitting results in a fixedpoint algorithm for any given scalar µ; continuation refers ..."
Abstract

Cited by 45 (9 self)
We present a framework for solving the large-scale ℓ1-regularized convex minimization problem min ‖x‖1 + µf(x). Our approach is based on two powerful algorithmic ideas: operator splitting and continuation. Operator splitting results in a fixed-point algorithm for any given scalar µ; continuation refers to approximately following the path traced by the optimal value of x as µ increases. In this paper, we study the structure of optimal solution sets, prove finite convergence for important quantities, and establish q-linear convergence rates for the fixed-point algorithm applied to problems with f(x) convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.
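A minimal sketch of the splitting-plus-continuation idea, assuming f(x) = (1/2)‖Ax − b‖² and an illustrative step-size and µ schedule; the paper's actual parameter selection rules differ.

```python
import numpy as np

def shrink(v, t):
    """Componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fpc(A, b, mu_target, inner=200):
    """Fixed-point continuation sketch for min ||x||_1 + mu * 0.5*||Ax-b||^2.
    Operator splitting gives the fixed-point map
        x -> shrink(x - tau*mu*A^T(Ax - b), tau),
    and continuation warm-starts along an increasing sequence of mu values."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of A^T(Ax - b)
    x = np.zeros(A.shape[1])
    mu = mu_target / 64.0
    while True:
        tau = 1.0 / (mu * L)             # stable step for the current mu
        for _ in range(inner):
            x = shrink(x - tau * mu * (A.T @ (A @ x - b)), tau)
        if mu >= mu_target:
            return x
        mu = min(mu * 4.0, mu_target)

# Toy noiseless instance: 5-sparse signal, 40 measurements, length 100.
rng = np.random.default_rng(5)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = [1.5, -1.0, 2.0, -1.2, 0.8]
b = A @ x_true
x_hat = fpc(A, b, mu_target=1000.0)
```

Early stages (small µ) shrink aggressively and identify a sparse support cheaply; later stages refine the values on that support, which is the practical payoff of continuation.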