NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
, 2009
Cited by 168 (2 self)
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov’s work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems, as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, and ...
A review of curvelets and recent applications
 IEEE Signal Processing Magazine
, 2009
Cited by 126 (10 self)
Multiresolution methods are deeply related to image processing, biological and computer vision, scientific computing, etc. The curvelet transform is a multiscale directional transform which allows an almost optimal nonadaptive sparse representation of objects with edges. It has generated increasing interest in the applied mathematics and signal processing communities in recent years. In this paper, we present a review of the curvelet transform, including its history beginning from wavelets, its logical relationship to other multiresolution multidirectional methods like contourlets and shearlets, and its basic theory and discrete algorithm. Further, we consider recent applications in image/video processing, seismic exploration, fluid mechanics, simulation of partial differential equations, and compressed sensing.
Recovery algorithms for vector valued data with joint sparsity constraints
, 2006
Cited by 110 (23 self)
Vector valued data appearing in concrete applications often possess sparse expansions with respect to a preassigned frame for each vector component individually. Additionally, different components may also exhibit common sparsity patterns. Recently, sparsity measures that take such joint sparsity patterns into account, promoting coupling of non-vanishing components, have been introduced. These measures are typically constructed as weighted ℓ1 norms of componentwise ℓq norms of frame coefficients. We show how to compute solutions of linear inverse problems with such joint sparsity regularization constraints by fast thresholded Landweber algorithms. Next we discuss the adaptive choice of suitable weights appearing in the definition of the sparsity measures. The weights are interpreted as indicators of the sparsity pattern and are iteratively updated after each application of the thresholded Landweber algorithm. The resulting two-step algorithm is interpreted as a double-minimization scheme for a suitable target functional. We show its ℓ2-norm convergence. An implementable version of the algorithm is also formulated, and its norm convergence is proven. Numerical experiments in color image restoration are presented.
Dictionaries for Sparse Representation Modeling
Cited by 103 (3 self)
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be made in one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data, or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems
 IEEE Trans. Image Process
, 2011
Cited by 88 (9 self)
We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly non-smooth) regularizer is minimized under the constraint that the solution explain the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, non-smoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which sufficient conditions for convergence are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state of the art. Index Terms—Convex optimization, frames, image reconstruction, image restoration, inpainting, total variation.
Sparse Directional Image Representations using the Discrete Shearlet Transform
 Appl. Comput. Harmon. Anal
Cited by 80 (44 self)
It is now widely acknowledged that traditional wavelets are not very effective in dealing with multidimensional signals containing distributed discontinuities. To achieve a more efficient representation one has to use basis elements with much higher directional sensitivity. This paper introduces a new discrete multiscale directional representation called the Discrete Shearlet Transform. This approach, which is based on the shearlet transform, combines the power of multiscale methods with a unique ability to capture the geometry of multidimensional data and is optimally efficient in representing images containing edges. We describe two different methods of implementing the shearlet transform. The numerical experiments presented in this paper demonstrate that the Discrete Shearlet Transform is very competitive in denoising applications both in terms of performance and computational efficiency.
Wave atoms and sparsity of oscillatory patterns
 Appl. Comput. Harmon. Anal
, 2006
Cited by 75 (11 self)
We introduce “wave atoms” as a variant of 2D wavelet packets obeying the parabolic scaling wavelength ∼ (diameter)². We prove that warped oscillatory functions, a toy model for texture, have a significantly sparser expansion in wave atoms than in other fixed standard representations like wavelets, Gabor atoms, or curvelets. We propose a novel algorithm for a tight frame of wave atoms with redundancy two, built directly in the frequency plane by the “wrapping” technique. We also propose variants of the basic transform for applications in image processing, including an orthonormal basis and a shift-invariant tight frame with redundancy four. Sparsity and denoising experiments on both seismic and fingerprint images demonstrate the potential of the tool introduced.
Inpainting and zooming using sparse representations
 The Computer Journal
Cited by 55 (8 self)
Representing the image to be inpainted in an appropriate sparse representation dictionary, and combining elements from Bayesian statistics and modern harmonic analysis, we introduce an expectation–maximization (EM) algorithm for image inpainting and interpolation. From a statistical point of view, inpainting/interpolation can be viewed as an estimation problem with missing data. Toward this goal, we propose the idea of using the EM mechanism in a Bayesian framework, where a sparsity-promoting prior penalty is imposed on the reconstructed coefficients. The EM framework gives a principled way to establish formally the idea that missing samples can be recovered/interpolated based on sparse representations. We first introduce an easy and efficient sparse-representation-based iterative algorithm for image inpainting. Additionally, we derive its theoretical convergence properties. Compared to its competitors, this algorithm allows a high degree of flexibility to recover different structural components in the image (piecewise smooth, curvilinear, texture, etc.). We also suggest some guidelines to automatically tune the regularization parameter.
Restoration of Poissonian images using alternating direction optimization
 IEEE Trans. Image Process
, 2010
Cited by 53 (5 self)
Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based upon multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images which is based upon an alternating direction optimization method. The standard regularization [or maximum a posteriori (MAP)] restoration criterion, which combines the Poisson log-likelihood with a (non-smooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is non-quadratic and non-separable, the regularizer is non-smooth, and there is a non-negativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total-variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy. Index Terms—Alternating direction methods, augmented Lagrangian, convex optimization, image deconvolution, image restoration, Poisson images.
Nonparametric seismic data recovery with curvelet frames
 Geophysical Journal International
, 2008
Cited by 50 (15 self)
Seismic data recovery from data with missing traces on otherwise regular acquisition grids forms a crucial step in the seismic processing flow. For instance, unsuccessful recovery leads to imaging artifacts and to erroneous predictions for the multiples, adversely affecting the performance of multiple elimination. A nonparametric transform-based recovery method is presented that exploits the compression of seismic data volumes by multidimensional expansions with respect to recently developed curvelet frames. The frame elements of these transforms locally resemble wavefronts present in the data, and this leads to a compressible signal representation. This compression enables us to formulate a new seismic data recovery algorithm through sparsity-promoting inversion. The concept of sparsity-promoting inversion is in itself not new to the geosciences. However, the recent insights from the field of ‘compressed sensing’ are new, since they identify the conditions that determine successful recovery. These conditions are carefully examined by means of examples geared towards the seismic recovery problem for data with large percentages (>70%) of traces missing. We show that as long as there is sufficient ‘randomness’ in the acquisition pattern, recovery to within an acceptable error is possible. We also show that our approach compares favorably ...