Results 1–10 of 26
Robust Uncertainty Principles: Exact Signal Reconstruction From Highly Incomplete Frequency Information
, 2006
Abstract

Cited by 1304 (42 self)
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ) δ(t − τ) obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem min_g Σ_t |g(t)| s.t. ĝ(ω) = f̂(ω) for all ω ∈ Ω.
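The ℓ1 recovery principle in the abstract above can be sketched numerically. The following is an illustrative toy, not the paper's code: it substitutes a real-valued partial DCT for the random Fourier ensemble, and the sizes (n, m, k), the solver, and the LP reformulation x = u − v are all assumptions made here for the sake of a runnable example.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 32, 16, 2                  # signal length, samples kept, spikes

# Unknown spike train: k nonzeros at random locations
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Partial real transform: m random rows of an orthonormal DCT matrix
A = dct(np.eye(n), norm="ortho", axis=0)[rng.choice(n, m, replace=False)]
y = A @ x_true

# l1 minimization as a linear program: split x = u - v with u, v >= 0
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
```

With these sizes exact recovery is typical but not certain for every draw; what is guaranteed is that x_hat satisfies the measurements and its ℓ1 norm is no larger than that of the true spike train, since x_true is itself feasible.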
Uncertainty principles and ideal atomic decomposition
 IEEE Transactions on Information Theory
, 2001
Abstract

Cited by 361 (19 self)
Suppose a discrete-time signal S(t), 0 ≤ t < N, is a superposition of atoms taken from a combined time/frequency dictionary made of spike sequences 1_{t=τ} and sinusoids exp(2πiwt/N)/√N. Can one recover, from knowledge of S alone, the precise collection of atoms going to make up S? Because every discrete-time signal can be represented as a superposition of spikes alone, or as a superposition of sinusoids alone, there is no unique way of writing S as a sum of spikes and sinusoids in general. We prove that if S is representable as a highly sparse superposition of atoms from this time/frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the ℓ1 norm of the coefficients among all decompositions. Here "highly sparse" means that N_t + N_w < √N/2, where N_t is the number of time atoms, N_w is the number of frequency atoms, and N is the length of the discrete-time signal.
Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit
, 2006
Abstract

Cited by 172 (20 self)
Finding the sparsest solution to underdetermined systems of linear equations y = Φx is NP-hard in general. We show here that for systems with ‘typical’/‘random’ Φ, a good approximation to the sparsest solution is obtained by applying a fixed number of standard operations from linear algebra. Our proposal, Stagewise Orthogonal Matching Pursuit (StOMP), successively transforms the signal into a negligible residual. Starting with initial residual r_0 = y, at the s-th stage it forms the ‘matched filter’ Φ^T r_{s−1}, identifies all coordinates with amplitudes exceeding a specially chosen threshold, solves a least-squares problem using the selected coordinates, and subtracts the least-squares fit, producing a new residual. After a fixed number of stages (e.g. 10), it stops. In contrast to Orthogonal Matching Pursuit (OMP), many coefficients can enter the model at each stage in StOMP while only one enters per stage in OMP; and StOMP takes a fixed number of stages (e.g. 10), while OMP can take many (e.g. n). StOMP runs much faster than competing proposals for sparse solutions, such as ℓ1 minimization and OMP, and so is attractive for solving large-scale problems. We use phase diagrams to compare algorithm performance. The problem of recovering a k-sparse vector x_0 from (y, Φ), where Φ is random n × N and y = Φx_0, is represented by a point (n/N, k/n).
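The staged loop described above (matched filter, threshold, least-squares refit, new residual) can be sketched in a few lines. This is a simplified illustration, not the authors' implementation; the threshold rule t·σ with σ = ‖r‖/√n and t = 2, and all demo sizes, are assumptions made here.

```python
import numpy as np

def stomp(Phi, y, n_stages=10, t=2.0):
    """Simplified StOMP sketch: threshold the matched filter at t * sigma
    (sigma = formal noise level ||r|| / sqrt(n)), grow the support, refit
    by least squares, and repeat for a fixed number of stages."""
    n, N = Phi.shape
    support = np.zeros(N, dtype=bool)
    x = np.zeros(N)
    r = y.copy()
    for _ in range(n_stages):
        c = Phi.T @ r                             # matched filter
        sigma = np.linalg.norm(r) / np.sqrt(n)
        new = np.abs(c) > t * sigma               # hard-threshold selection
        if not new.any():
            break
        support |= new                            # many coords enter at once
        x = np.zeros(N)
        x[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        r = y - Phi @ x                           # least-squares residual
    return x, r

# Demo: random underdetermined system with a sparse target
rng = np.random.default_rng(1)
n, N = 40, 80
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, 4, replace=False)] = rng.standard_normal(4)
y = Phi @ x0
x_hat, r = stomp(Phi, y)
```

Because the support only grows and each refit is a least-squares projection onto a larger span, the residual norm can never increase from one stage to the next.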
Basis Pursuit
, 1994
Abstract

Cited by 119 (15 self)
The Time-Frequency and Time-Scale communities have recently developed an enormous number of overcomplete signal dictionaries: wavelets, wavelet packets, cosine packets, Wilson bases, chirplets, warped bases, and hyperbolic cross bases being a few examples. Basis Pursuit is a technique for decomposing a signal into an "optimal" superposition of dictionary elements. The optimization criterion is the ℓ1 norm of the coefficients. The method has several advantages over Matching Pursuit and Best Orthogonal Basis, including super-resolution and stability.

1 Introduction

Over the last five years or so, there has been an explosion of awareness of alternatives to traditional signal representations. Instead of just representing objects as superpositions of sinusoids (the traditional Fourier representation) we now have available alternate dictionaries (signal representation schemes) of which the Wavelets dictionary is only the most well-known. Wavelet dictionaries, Gabor dictionaries, Multiscale...
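As a toy instance of Basis Pursuit on an overcomplete dictionary, one can merge spikes with an orthonormal DCT basis (a real-valued stand-in for the sinusoids of the time/frequency dictionary above) and minimize the ℓ1 norm of the coefficients by linear programming. A sketch under assumed sizes and atoms, not code from the paper:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

n = 32
# Two-fold overcomplete dictionary: spikes (identity) plus cosine atoms
D = np.hstack([np.eye(n), idct(np.eye(n), norm="ortho", axis=0)])

# Signal built from one spike atom and one cosine atom
alpha_true = np.zeros(2 * n)
alpha_true[5] = 1.0        # spike at t = 5
alpha_true[n + 3] = -2.0   # cosine atom of index 3
s = D @ alpha_true

# Basis Pursuit: min ||alpha||_1  s.t.  D alpha = s  (LP via alpha = u - v)
res = linprog(np.ones(4 * n), A_eq=np.hstack([D, -D]), b_eq=s,
              bounds=(0, None), method="highs")
alpha_hat = res.x[:2 * n] - res.x[2 * n:]
```

With only two active atoms the sparsity lies well inside the √N/2-type uncertainty regime, so Basis Pursuit should return exactly this spike-plus-cosine decomposition rather than, say, writing the signal as spikes alone.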
Neighborly Polytopes and Sparse Solutions of Underdetermined Linear Equations
, 2005
Abstract

Cited by 85 (12 self)
Consider a d × n matrix A, with d < n. The problem of solving for x in y = Ax is underdetermined, and has many possible solutions (if there are any). In several fields it is of interest to find the sparsest solution – the one with fewest nonzeros – but in general this involves combinatorial optimization. Let a_i denote the i-th column of A, 1 ≤ i ≤ n. Associate to A the quotient polytope P formed by taking the convex hull of the 2n points ±a_i in R^d. P is centrosymmetric and is called (centrally) k-neighborly if every subset of k + 1 elements (±a_{i_l})_{l=1}^{k+1} are the vertices of a face of P. We show that if P is k-neighborly, then if a system y = Ax has a solution with at most k nonzeros, that solution is also the unique solution of the convex optimization problem min ‖x‖1 subject to y = Ax; the converse holds as well. This complete equivalence between the study of sparse solutions by ℓ1 minimization and neighborliness of convex polytopes immediately gives new results in each field. On the one ...
Enhancing Sparsity by Reweighted ℓ1 Minimization
, 2007
Abstract

Cited by 76 (5 self)
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
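The sequence of weighted ℓ1 problems described above can be sketched by wrapping a weighted LP in a loop. A common weight update is w_i = 1/(|x_i| + ε); this specific update, ε, the iteration count, and the demo system are assumptions of this sketch, not necessarily the paper's exact parameters.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

def reweighted_l1(A, y, n_iter=4, eps=0.1):
    """Sequence of weighted l1 problems; weights for the next round are
    1/(|x_i| + eps) computed from the current solution."""
    n = A.shape[1]
    w = np.ones(n)
    A_eq = np.hstack([A, -A])          # x = u - v with u, v >= 0
    x = np.zeros(n)
    for _ in range(n_iter):
        res = linprog(np.concatenate([w, w]), A_eq=A_eq, b_eq=y,
                      bounds=(0, None), method="highs")
        x = res.x[:n] - res.x[n:]
        w = 1.0 / (np.abs(x) + eps)    # small entries get large weights
    return x

# Demo on a partial-DCT system with a 3-sparse target
rng = np.random.default_rng(2)
n, m = 32, 16
A = dct(np.eye(n), norm="ortho", axis=0)[rng.choice(n, m, replace=False)]
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)
y = A @ x_true
x_hat = reweighted_l1(A, y)
```

Each round keeps the same equality constraints, so every iterate is measurement-consistent; the reweighting only redistributes the penalty toward entries that look small.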
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
 THE JOURNAL OF FOURIER ANALYSIS AND APPLICATIONS
, 2004
Abstract

Cited by 58 (10 self)
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation to ℓ1 constraints, using a gradient method, with projection on ℓ1-balls. The corresponding algorithm uses again iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, without and with acceleration.
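The projection on ℓ1-balls that drives such methods reduces to soft-thresholding with a data-dependent threshold, exactly as the abstract notes. Below is the standard sort-based projection together with a plain, unaccelerated projected gradient loop, as a sketch; step size, iteration count, and demo data are assumptions here, and this is not the paper's accelerated scheme.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius} via the standard
    sort-and-threshold method; amounts to soft-thresholding with a
    data-dependent threshold theta."""
    a = np.abs(v)
    if a.sum() <= radius:
        return v.copy()
    u = np.sort(a)[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    j = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * j > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)
    return np.sign(v) * np.maximum(a - theta, 0.0)

def projected_gradient(A, y, radius, n_iter=200):
    """Plain projected gradient for min ||Ax - y||^2 / 2 over the l1 ball,
    fixed step 1 / ||A||_2^2 (no acceleration)."""
    x = np.zeros(A.shape[1])
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        x = project_l1_ball(x - tau * (A.T @ (A @ x - y)), radius)
    return x
```

The variable threshold θ is recomputed at every step from the current iterate, which is precisely the "iterative soft-thresholding with a variable thresholding parameter" view of the algorithm.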
Fast slant stack: A notion of Radon transform for data in a Cartesian grid which is rapidly computible, algebraically exact, geometrically faithful and invertible
 SIAM J. Sci. Comput
, 2001
Abstract

Cited by 48 (11 self)
Abstract. We define a notion of Radon Transform for data in an n by n grid. It is based on summation along lines of absolute slope less than 1 (as a function either of x or of y), with values at non-Cartesian locations defined using trigonometric interpolation on a zero-padded grid. The definition is geometrically faithful: the lines exhibit no ‘wraparound effects’. For a special set of lines equispaced in slope (rather than angle), we describe an exact algorithm which uses O(N log N) flops, where N = n^2 is the number of pixels. This relies on a discrete projection-slice theorem relating this Radon transform and what we call the Pseudopolar Fourier transform. The Pseudopolar FT evaluates the 2D Fourier transform on a non-Cartesian point set, which we call the pseudopolar grid. Fast Pseudopolar FT – the process of rapid exact evaluation of the 2D Fourier transform at these non-Cartesian grid points – is possible using chirp-Z transforms. This Radon transform is one-to-one and hence invertible on its range; it is rapidly invertible to any degree of desired accuracy using a preconditioned conjugate gradient solver. Empirically, the numerical conditioning is superb; the singular value spread of the preconditioned Radon transform turns out numerically to be less than 10%, and three iterations of the conjugate gradient solver typically suffice for 6-digit accuracy. We also describe a 3D version of the transform.
Fast solution of ℓ1norm minimization problems when the solution may be sparse
, 2006
Abstract

Cited by 47 (1 self)
The minimum ℓ1-norm solution to an underdetermined system of linear equations y = Ax is often, remarkably, also the sparsest solution to that system. This sparsity-seeking property is of interest in signal processing and information transmission. However, general-purpose optimizers are much too slow for ℓ1 minimization in many large-scale applications. The Homotopy method was originally proposed by Osborne et al. for solving noisy overdetermined ℓ1-penalized least squares problems. We here apply it to solve the noiseless underdetermined ℓ1-minimization problem min ‖x‖1 subject to y = Ax. We show that Homotopy runs much more rapidly than general-purpose LP solvers when sufficient sparsity is present. Indeed, the method often has the following k-step solution property: if the underlying solution has only k nonzeros, the Homotopy method reaches that solution in only k iterative steps. When this property holds and k is small compared to the problem size, this means that ℓ1 minimization problems with k-sparse solutions can be solved in a fraction of the cost of solving one full-sized linear system. We demonstrate this k-step solution property for two kinds of problem suites. First, ...
Enhancing sparsity by reweighted ℓ1 minimization
 Journal of Fourier Analysis and Applications
, 2008
Abstract

Cited by 34 (1 self)
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.