Results 1–10 of 201
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 832 (16 self)
 Add to MetaCart
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible, in the sense that the reordered entries of a signal f ∈ F decay like a power law (or the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ … ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements ⟨f, Xk⟩, k = 1, …, K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries.
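The recovery procedure the abstract alludes to can be tried end to end in a few lines. Below is a minimal sketch, not the paper's implementation: it builds a compressible signal whose sorted entries obey a power decay law, takes K random Gaussian measurements, and recovers the signal by ℓ1 minimization (basis pursuit) recast as a linear program. All dimensions and the decay exponent are illustrative choices.

```python
# Minimal compressed-sensing sketch: l1 recovery from random Gaussian
# measurements, min ||x||_1 s.t. Ax = b, recast as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K = 256, 96                       # ambient dimension, number of measurements

# Compressible signal: sorted magnitudes decay like n^(-1/p) with 1/p = 2.
f = rng.standard_normal(N) * np.arange(1, N + 1) ** (-2.0)
rng.shuffle(f)

A = rng.standard_normal((K, N))      # the measurement vectors X_k as rows
b = A @ f                            # measurements <f, X_k>

# Basis pursuit as an LP: write x = u - v with u, v >= 0, minimize sum(u + v).
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
f_hat = res.x[:N] - res.x[N:]

print("relative l2 error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))
```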
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
"... A fullrank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinato ..."
Abstract

Cited by 202 (31 self)
 Add to MetaCart
A full-rank matrix A ∈ ℝ^(n×m) with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena, in particular the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical …
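As one concrete instance of the "easily verifiable conditions" the abstract mentions, the mutual coherence μ(A) gives a classical uniqueness test: any solution of Ax = b with fewer than (1 + 1/μ(A))/2 nonzeros is necessarily the unique sparsest solution. A minimal sketch (the matrix here is a random placeholder):

```python
# Mutual coherence: the largest absolute inner product between distinct
# normalized columns of A, and the sparsity level it certifies as unique.
import numpy as np

def mutual_coherence(A: np.ndarray) -> float:
    """Max |<a_i, a_j>| over distinct normalized columns a_i, a_j."""
    cols = A / np.linalg.norm(A, axis=0)      # normalize each column
    G = np.abs(cols.T @ cols)                 # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                  # ignore i == j
    return G.max()

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))             # n = 20 < m = 50
mu = mutual_coherence(A)
print(f"mu(A) = {mu:.3f}; uniqueness guaranteed below "
      f"{(1 + 1 / mu) / 2:.2f} nonzeros")
```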
Fast Discrete Curvelet Transforms
, 2005
"... This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform [12, 10] in two and three dimensions. The first digital transformation is based on unequallyspaced fast Fourier transforms (USFFT) while the second is based on the wrap ..."
Abstract

Cited by 114 (9 self)
 Add to MetaCart
This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform [12, 10] in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms (USFFT), while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n² log n) flops for n-by-n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations, based upon the first generation of curvelets, in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at …
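The "wrapping" construction can be illustrated schematically. The sketch below is not CurveLab; it only shows the core step, assuming a placeholder indicator window in place of the smooth wedge windows the paper uses: window a block of Fourier samples, periodize (wrap) it onto a smaller rectangle, and apply an inverse FFT.

```python
# Schematic "wrapping" step: window the spectrum, fold it onto a small grid
# by periodization, then inverse-FFT to get coefficients at one scale/angle.
import numpy as np

def wrap_block(F_windowed: np.ndarray, K1: int, K2: int) -> np.ndarray:
    """Periodize an (M1, M2) windowed spectrum onto a (K1, K2) grid."""
    M1, M2 = F_windowed.shape
    out = np.zeros((K1, K2), dtype=complex)
    for i in range(0, M1, K1):
        for j in range(0, M2, K2):
            block = F_windowed[i:i + K1, j:j + K2]
            out[:block.shape[0], :block.shape[1]] += block
    return out

rng = np.random.default_rng(2)
n = 64
image = rng.standard_normal((n, n))
F = np.fft.fft2(image)

window = np.zeros((n, n))
window[8:24, 8:40] = 1.0             # placeholder for a smooth wedge window
coeffs = np.fft.ifft2(wrap_block(window * F, 16, 32))
print(coeffs.shape)                   # (16, 32) curvelet-like coefficient grid
```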
Recovery algorithms for vector-valued data with joint sparsity constraints
, 2006
"... Vector valued data appearing in concrete applications often possess sparse expansions with respect to a preassigned frame for each vector component individually. Additionally, different components may also exhibit common sparsity patterns. Recently, there were introduced sparsity measures that take ..."
Abstract

Cited by 71 (21 self)
 Add to MetaCart
Vector-valued data appearing in concrete applications often possess sparse expansions with respect to a preassigned frame for each vector component individually. Additionally, different components may also exhibit common sparsity patterns. Recently, sparsity measures were introduced that take into account such joint sparsity patterns, promoting coupling of non-vanishing components. These measures are typically constructed as weighted ℓ1 norms of componentwise ℓq norms of frame coefficients. We show how to compute solutions of linear inverse problems with such joint sparsity regularization constraints by fast thresholded Landweber algorithms. Next we discuss the adaptive choice of suitable weights appearing in the definition of the sparsity measures. The weights are interpreted as indicators of the sparsity pattern and are iteratively updated after each new application of the thresholded Landweber algorithm. The resulting two-step algorithm is interpreted as a double-minimization scheme for a suitable target functional. We show its ℓ2-norm convergence. An implementable version of the algorithm is also formulated, and its norm convergence is proven. Numerical experiments in color image restoration are presented.
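A minimal sketch of the kind of iteration the abstract describes, assuming q = 2, a single fixed weight, and placeholder operator and data: each Landweber step is followed by a joint (block) soft-thresholding that keeps or kills a coefficient group across all vector components together.

```python
# Thresholded Landweber iteration with a joint-sparsity (l1-of-l2) penalty:
# rows of X are coefficient groups shared across all vector components.
import numpy as np

def block_soft_threshold(X: np.ndarray, tau: float) -> np.ndarray:
    """Shrink each row of X jointly toward zero by tau in l2 norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

rng = np.random.default_rng(3)
m, n, c = 40, 80, 3                       # measurements, coefficients, components
A = rng.standard_normal((m, n))
X_true = np.zeros((n, c))
X_true[rng.choice(n, 5, replace=False)] = rng.standard_normal((5, c))
B = A @ X_true                            # each column: data for one component

step = 1.0 / np.linalg.norm(A, 2) ** 2    # Landweber step size 1/||A||^2
tau = 5.0                                 # fixed regularization weight
X = np.zeros((n, c))
for _ in range(500):                      # gradient step, then joint shrinkage
    X = block_soft_threshold(X + step * (A.T @ (B - A @ X)), tau * step)

print("recovered groups:", np.flatnonzero(np.linalg.norm(X, axis=1) > 0.1))
```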
The Curvelet Representation of Wave Propagators is Optimally Sparse
, 2004
"... This paper argues that curvelets provide a powerful tool for representing very general linear symmetric systems of hyperbolic differential equations. Curvelets are a recently developed multiscale system [10, 7] in which the elements are highly anisotropic at fine scales, with effective support shape ..."
Abstract

Cited by 60 (13 self)
 Add to MetaCart
This paper argues that curvelets provide a powerful tool for representing very general linear symmetric systems of hyperbolic differential equations. Curvelets are a recently developed multiscale system [10, 7] in which the elements are highly anisotropic at fine scales, with effective support shaped according to the parabolic scaling principle width ≈ length² at fine scales. We prove that for a wide class of linear hyperbolic differential equations, the curvelet representation of the solution operator is both optimally sparse and well organized:
• it is sparse in the sense that the matrix entries decay nearly exponentially fast (i.e., faster than any negative polynomial);
• it is well organized in the sense that the very few nonnegligible entries occur near a few shifted diagonals.
Indeed, we show that the wave group maps each curvelet onto a sum of curvelet-like waveforms whose locations and orientations are obtained by following the different Hamiltonian flows, hence the diagonal shifts in the curvelet representation. A physical interpretation of this result is that curvelets may be viewed as coherent waveforms with enough frequency localization that they behave like waves, but at the same time with enough spatial localization that they simultaneously behave like particles.
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
 The Journal of Fourier Analysis and Applications
, 2004
"... Regularization of illposed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1 penalized functional is via an iterative softthresholding algorithm. We propose an alternative implem ..."
Abstract

Cited by 58 (10 self)
 Add to MetaCart
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation of ℓ1 constraints, using a gradient method with projection onto ℓ1-balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, with and without acceleration.
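A minimal sketch of the proposed scheme, with placeholder problem data: a gradient step on ‖Ax − b‖² followed by Euclidean projection onto an ℓ1-ball. The standard sort-based projection used below is itself a soft-thresholding with a data-dependent (variable) threshold, matching the abstract's description.

```python
# Projected gradient for min ||Ax - b||^2 subject to ||x||_1 <= R.
import numpy as np

def project_l1_ball(v: np.ndarray, R: float) -> np.ndarray:
    """Euclidean projection of v onto the l1 ball of radius R."""
    if np.abs(v).sum() <= R:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes
    css = np.cumsum(u)
    rho = np.nonzero(u > (css - R) / np.arange(1, v.size + 1))[0][-1]
    theta = (css[rho] - R) / (rho + 1)                # variable threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[:5] = 3.0
b = A @ x_true

x = np.zeros(200)
step = 1.0 / np.linalg.norm(A, 2) ** 2                # step size 1/||A||^2
for _ in range(300):                                  # gradient step + projection
    x = project_l1_ball(x - step * A.T @ (A @ x - b), R=15.0)

print("support found:", np.flatnonzero(np.abs(x) > 0.1))
```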
Optimally sparse multidimensional representations using shearlets, preprint
, 2006
"... Abstract. Recent advances in applied mathematics and signal processing have shown that, in order to obtain sparse representations of multidimensional functions and signals, one has to use representation elements distributed not only at various scales and locations – as in classical wavelet theory – ..."
Abstract

Cited by 56 (24 self)
 Add to MetaCart
Recent advances in applied mathematics and signal processing have shown that, in order to obtain sparse representations of multidimensional functions and signals, one has to use representation elements distributed not only at various scales and locations, as in classical wavelet theory, but also at various directions. In this paper, we show that a construction having exactly these properties is obtained by using the framework of affine systems. The representation elements that we obtain are generated by translations, dilations, and shear transformations of a single mother function, and are called shearlets. The shearlets provide optimally sparse representations for 2D functions that are smooth away from discontinuities along curves. Another benefit of this approach is that, thanks to their mathematical structure, these systems provide a multiresolution analysis similar to the one associated with classical wavelets, which is very useful for the development of fast algorithmic implementations.
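The affine structure is easy to make concrete. A purely illustrative sketch, with a placeholder mother function (the actual shearlet generator is defined in the frequency domain): each element is obtained from one function by a parabolic dilation, a shear, and a translation.

```python
# One element of a shearlet-like affine system: psi evaluated on
# A^{-1} S^{-1} (x - t), with parabolic dilation A and shear S.
import numpy as np

def shearlet_element(grid_x, grid_y, a=4.0, s=1.0, t=(0.0, 0.0)):
    """Evaluate a^{-3/4} * psi(A^{-1} S^{-1} (x - t)) on a grid."""
    S_inv = np.array([[1.0, -s], [0.0, 1.0]])         # inverse shear
    A_inv = np.array([[1.0 / a, 0.0], [0.0, a ** -0.5]])  # parabolic dilation
    pts = np.stack([grid_x - t[0], grid_y - t[1]])    # shape (2, H, W)
    u = np.einsum("ij,jhw->ihw", A_inv @ S_inv, pts)
    # placeholder mother function: an oscillating, well-localized bump
    return a ** -0.75 * np.cos(6 * u[0]) * np.exp(-(u[0] ** 2 + u[1] ** 2))

xs = np.linspace(-4, 4, 256)
X, Y = np.meshgrid(xs, xs)
psi = shearlet_element(X, Y, a=4.0, s=0.5)
print(psi.shape)    # one anisotropic, sheared element of the system
```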
NONSUBSAMPLED CONTOURLET TRANSFORM: FILTER DESIGN AND APPLICATIONS IN DENOISING
"... In this paper we study the nonsubsampled contourlet transform. We address the corresponding filter design problem using the McClellan transformation. We show how zeroes can be imposed in the filters so that the iterated structure produces regular basis functions. The proposed design framework yields ..."
Abstract

Cited by 53 (4 self)
 Add to MetaCart
In this paper we study the nonsubsampled contourlet transform. We address the corresponding filter design problem using the McClellan transformation. We show how zeros can be imposed in the filters so that the iterated structure produces regular basis functions. The proposed design framework yields filters that can be implemented efficiently through a lifting factorization. We apply the constructed transform to image noise removal, where the results obtained are comparable to the state of the art and superior in some cases.
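A minimal sketch of the McClellan transformation itself (the general technique, not the authors' specific filter designs): a zero-phase 1-D prototype written as a Chebyshev polynomial in cos ω is mapped to a 2-D filter by substituting a small 2-D kernel for cos ω. The kernel and prototype below are standard textbook choices.

```python
# McClellan transformation: build sum_k a[k] * T_k(F) with the Chebyshev
# recursion T_0 = 1, T_1 = F, T_k = 2 F * T_{k-1} - T_{k-2}, where products
# are 2-D convolutions of centered zero-phase kernels.
import numpy as np
from scipy.signal import convolve2d

def pad_add(A, B):
    """Add two centered 2-D kernels of different (odd) sizes."""
    H, W = max(A.shape[0], B.shape[0]), max(A.shape[1], B.shape[1])
    out = np.zeros((H, W))
    for M in (A, B):
        r, c = (H - M.shape[0]) // 2, (W - M.shape[1]) // 2
        out[r:r + M.shape[0], c:c + M.shape[1]] += M
    return out

def mcclellan_transform(a: np.ndarray, F: np.ndarray) -> np.ndarray:
    h = a[0] * np.ones((1, 1))            # T_0 term (discrete delta)
    if len(a) == 1:
        return h
    T_prev, T_cur = np.ones((1, 1)), F    # T_0 and T_1
    h = pad_add(h, a[1] * T_cur)
    for k in range(2, len(a)):
        T_next = pad_add(2 * convolve2d(F, T_cur), -T_prev)
        T_prev, T_cur = T_cur, T_next
        h = pad_add(h, a[k] * T_cur)
    return h

# classic diamond-shaped transformation kernel: replaces cos(w) by
# 0.5 * (cos w1 + cos w2) in the frequency response
F = np.array([[0.0, 0.25, 0.0], [0.25, 0.0, 0.25], [0.0, 0.25, 0.0]])
a = np.array([0.5, 0.5])                  # 1-D halfband prototype (1 + cos w)/2
print(mcclellan_transform(a, F))          # a small 2-D diamond lowpass kernel
```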
Wave atoms and sparsity of oscillatory patterns
 Appl. Comput. Harmon. Anal
, 2006
"... We introduce “wave atoms ” as a variant of 2D wavelet packets obeying the parabolic scaling wavelength ∼ (diameter) 2. We prove that warped oscillatory functions, a toy model for texture, have a significantly sparser expansion in wave atoms than in other fixed standard representations like wavelets, ..."
Abstract

Cited by 45 (5 self)
 Add to MetaCart
We introduce “wave atoms” as a variant of 2D wavelet packets obeying the parabolic scaling wavelength ∼ (diameter)². We prove that warped oscillatory functions, a toy model for texture, have a significantly sparser expansion in wave atoms than in other fixed standard representations like wavelets, Gabor atoms, or curvelets. We propose a novel algorithm for a tight frame of wave atoms with redundancy two, constructed directly in the frequency plane by the “wrapping” technique. We also propose variants of the basic transform for applications in image processing, including an orthonormal basis and a shift-invariant tight frame with redundancy four. Sparsity and denoising experiments on both seismic and fingerprint images demonstrate the potential of the tool introduced.
Near-ideal model selection by ℓ1 minimization
, 2008
"... We consider the fundamental problem of estimating the mean of a vector y = Xβ + z, where X is an n × p design matrix in which one can have far more variables than observations and z is a stochastic error term—the socalled ‘p> n ’ setup. When β is sparse, or more generally, when there is a sparse su ..."
Abstract

Cited by 45 (2 self)
 Add to MetaCart
We consider the fundamental problem of estimating the mean of a vector y = Xβ + z, where X is an n × p design matrix in which one can have far more variables than observations, and z is a stochastic error term: the so-called ‘p > n’ setup. When β is sparse, or more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally tractable algorithm. We show that in a surprisingly wide range of situations, the lasso happens to nearly select the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic program achieves a squared error within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which variables should be included in the model and which should not. Interestingly, our results describe the average performance of the lasso; that is, the performance one can expect in a vast majority of cases where Xβ is a sparse or nearly sparse superposition of variables, but not in all cases. Our results are non-asymptotic and widely applicable, since they simply require that pairs of predictor variables not be too collinear.
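The estimator under study is the lasso, a simple quadratic program. A minimal sketch in the p > n regime, with illustrative dimensions and penalty level, using scikit-learn's Lasso as a stand-in solver:

```python
# The lasso: min ||y - Xb||^2 / (2n) + alpha * ||b||_1, here with p >> n.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p, s = 100, 400, 5                        # far more variables than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 2.0           # sparse coefficient vector
y = X @ beta + rng.standard_normal(n)        # stochastic error term z

fit = Lasso(alpha=0.2).fit(X, y)             # alpha plays the role of lambda
print("variables selected by the lasso:", np.flatnonzero(fit.coef_))
```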