Results 1–10 of 46
An EM Algorithm for Wavelet-Based Image Restoration
, 2002
Abstract
Cited by 233 (21 self)
This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in terms of the wavelet coefficients, taking advantage of the well-known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require very demanding optimization methods. The EM algorithm herein proposed combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring O(N log N) operations per iteration. Thus, it is the first image restoration algorithm that optimizes a wavelet-based penalized likelihood criterion and has computational complexity comparable to that of standard wavelet denoising or frequency-domain deconvolution methods. The convergence behavior of the algorithm is investigated, and it is shown that under mild conditions the algorithm converges to a globally optimal restoration. Moreover, our new approach outperforms several of the best existing methods in benchmark tests, and in some cases is also much less computationally demanding.
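The FFT/DWT alternation described above can be sketched in one dimension. The following is a hedged illustration, not the authors' code: the blur is a circulant moving average applied via the FFT, the M-step soft-thresholds one-level orthonormal Haar coefficients, and the signal, blur length, threshold, and iteration count are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_fwd(x):
    """One-level orthonormal Haar analysis: averages then differences."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def haar_inv(c):
    """Inverse of haar_fwd."""
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

n = 64
x_true = np.zeros(n)
x_true[20:30] = 1.0                      # piecewise-constant test signal
h = np.zeros(n)
h[:5] = 1.0 / 5                          # length-5 moving-average blur
H = np.fft.fft(h)                        # the FFT diagonalizes the circulant blur
y = np.real(np.fft.ifft(H * np.fft.fft(x_true))) + 0.01 * rng.standard_normal(n)

lam = 0.02                               # wavelet-domain threshold (illustrative)
x = np.zeros(n)
for _ in range(200):
    # E-step (sketch): Landweber-type update computed entirely with FFTs
    resid = y - np.real(np.fft.ifft(H * np.fft.fft(x)))
    z = x + np.real(np.fft.ifft(np.conj(H) * np.fft.fft(resid)))
    # M-step (sketch): soft-threshold the DWT coefficients of z
    c = haar_fwd(z)
    x = haar_inv(np.sign(c) * np.maximum(np.abs(c) - lam, 0.0))

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)  # relative error
```

Each half-step costs O(N log N) (FFT) or O(N) (Haar), matching the per-iteration complexity quoted in the abstract.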
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
Abstract
Cited by 202 (31 self)
A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems – to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical ...
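As a concrete, hedged illustration of one such "concrete, effective computational method", the greedy orthogonal matching pursuit below recovers a sparse vector from an underdetermined Gaussian system; the dimensions, sparsity level, and coefficient magnitudes are all chosen for the example, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 25, 50, 3                          # n equations, m unknowns, k nonzeros
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)               # unit-norm columns
support = rng.choice(m, size=k, replace=False)
x_true = np.zeros(m)
x_true[support] = rng.uniform(1.0, 2.0, k)   # entries well separated from zero
b = A @ x_true

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen columns.
S, r = [], b.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
    r = b - A[:, S] @ coef
x_hat = np.zeros(m)
x_hat[S] = coef
```

If the k selected columns are the true support, the least-squares re-fit drives the residual to (numerically) zero, which is how recovery can be checked without knowing x_true.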
A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration
 IEEE Transactions on Image Processing
, 2007
Abstract
Cited by 96 (19 self)
Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
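The two-step recursion can be sketched as follows. This is a hedged toy, not the paper's experiments: a small random, mildly ill-conditioned operator with an ℓ1 regularizer, with the mixing weights alpha and beta set from the eigenvalue-based recipe suggested by the linear theory.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
A = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # mildly ill-conditioned
A /= np.linalg.norm(A, 2)                 # normalize so a unit gradient step is stable
x_true = np.zeros(n)
x_true[::8] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n)
lam = 0.01

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist_step(x):                          # one plain IST step Gamma(x)
    return soft(x + A.T @ (y - A @ x), lam)

# Two-step weights from the extreme eigenvalues of A^T A (linear-case recipe).
xi = np.linalg.eigvalsh(A.T @ A)
rho = (1 - np.sqrt(xi[0] / xi[-1])) / (1 + np.sqrt(xi[0] / xi[-1]))
alpha = rho ** 2 + 1
beta = 2 * alpha / (xi[0] + xi[-1])

x_prev = np.zeros(n)
x = ist_step(x_prev)
for _ in range(300):
    # TwIST: mix the previous two iterates with one IST step of the current one
    x_prev, x = x, (1 - alpha) * x_prev + (alpha - beta) * x + beta * ist_step(x)

obj = 0.5 * np.linalg.norm(y - A @ x) ** 2 + lam * np.abs(x).sum()
obj0 = 0.5 * np.linalg.norm(y) ** 2       # objective at the zero initialization
```

Setting alpha = beta = 1 recovers plain IST, which makes the "two-step" structure easy to see: the speed-up comes entirely from reusing the previous iterate.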
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
 SIAM J. Imaging Sci
, 2008
Abstract
Cited by 59 (13 self)
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖1 : Au = f, u ∈ ℝⁿ}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u ∈ ℝⁿ} μ‖u‖1 + (1/2)‖Au − f^k‖² for given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and Aᵀ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
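The "add back the residual" structure of the outer loop can be sketched as below. This is a hedged toy: the inner unconstrained subproblem is solved only approximately by plain iterative soft-thresholding rather than the fixed-point continuation solver mentioned in the abstract, and all sizes, μ, and iteration counts are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 30, 60
A = rng.standard_normal((n, m)) / np.sqrt(n)
u_true = np.zeros(m)
u_true[rng.choice(m, 4, replace=False)] = [1.5, -2.0, 1.0, 2.5]
f = A @ u_true                            # exact measurements: we seek Au = f

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the data-term gradient
mu = 0.1
u = np.zeros(m)
fk = f.copy()
for _ in range(8):                        # outer Bregman iterations
    for _ in range(400):                  # inner solve of min mu*||u||_1 + 0.5*||Au - fk||^2
        u = soft(u + A.T @ (fk - A @ u) / L, mu / L)
    fk = fk + (f - A @ u)                 # add the constraint residual back

feas = np.linalg.norm(A @ u - f) / np.linalg.norm(f)   # constraint violation
```

The point of the outer update is that although each subproblem is regularized (μ > 0), the sequence is driven toward exact feasibility Au = f, consistent with the finite-step exactness claimed above.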
A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery
 IEEE Journal of Selected Topics in Signal Processing
, 2007
Abstract
Cited by 49 (15 self)
Under consideration is the large body of signal recovery problems that can be formulated as the problem of minimizing the sum of two (not necessarily smooth) lower semicontinuous convex functions in a real Hilbert space. This generic problem is analyzed and a decomposition method is proposed to solve it. The convergence of the method, which is based on the Douglas-Rachford algorithm for monotone operator splitting, is obtained under general conditions. Applications to non-Gaussian image denoising in a tight frame are also demonstrated. Index Terms: convex optimization, denoising, Douglas-Rachford, frame, nondifferentiable optimization, Poisson noise.
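A hedged sketch of the splitting on a toy instance whose minimizer is known in closed form: f1 = λ‖·‖1 and f2 = (1/2)‖· − y‖². Each function enters only through its proximity operator, which is the point of the decomposition; λ, γ, and the iteration count are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.standard_normal(50)
lam, gamma = 0.3, 1.0

def prox_f1(v):                           # prox of gamma * lam * ||.||_1 (soft threshold)
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

def prox_f2(v):                           # prox of gamma * 0.5 * ||. - y||^2
    return (v + gamma * y) / (1 + gamma)

z = np.zeros_like(y)
for _ in range(200):                      # Douglas-Rachford iteration
    x = prox_f2(z)
    z = z + prox_f1(2 * x - z) - x

closed_form = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)  # known minimizer
```

Neither function's smoothness is ever used; only the two prox maps are evaluated, which is why the framework covers nondifferentiable terms such as the ℓ1 penalty here.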
A variational formulation for frame-based inverse problems
 Inverse Problems
, 2007
Abstract
Cited by 42 (19 self)
A convex variational framework is proposed for solving inverse problems in Hilbert spaces with a priori information on the representation of the target solution in a frame. The objective function to be minimized consists of a separable term penalizing each frame coefficient individually and of a smooth term modeling the data formation model as well as other constraints. Sparsity-constrained and Bayesian formulations are examined as special cases. A splitting algorithm is presented to solve this problem and its convergence is established in infinite-dimensional spaces under mild conditions on the penalization functions, which need not be differentiable. Numerical simulations demonstrate applications to frame-based image restoration.
An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems
 IEEE Trans. Image Process
, 2011
Abstract
Cited by 40 (4 self)
We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which sufficient conditions for convergence are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state of the art. Index Terms: convex optimization, frames, image reconstruction, image restoration, inpainting, total variation.
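A hedged sketch of the alternating direction method of multipliers, the family named above, applied to the generic textbook split min (1/2)‖Ax − b‖² + λ‖z‖1 subject to x = z; this is not the paper's constrained formulation, and all sizes, λ, and the penalty ρ are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 40, 80
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)
b = A @ x_true + 0.01 * rng.standard_normal(n)
lam, rho = 0.05, 1.0

M = np.linalg.inv(A.T @ A + rho * np.eye(m))   # factored once, reused every iteration
Atb = A.T @ b
z = np.zeros(m)
u = np.zeros(m)
for _ in range(200):
    x = M @ (Atb + rho * (z - u))              # quadratic x-subproblem
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrinkage z-step
    u = u + x - z                              # multiplier (dual) update

gap = np.linalg.norm(x - z)                    # primal feasibility of the split x = z
obj = 0.5 * np.linalg.norm(A @ z - b) ** 2 + lam * np.abs(z).sum()
```

The appeal for imaging problems is visible even in this toy: the nonsmooth term is handled by a cheap componentwise shrinkage, and the quadratic step reduces to a linear solve that is fast whenever A has exploitable structure (FFT-diagonalizable blur, frames, etc.).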
Generalizing the nonlocal-means to super-resolution reconstruction
 IEEE Transactions on Image Processing
, 2009
Abstract
Cited by 35 (4 self)
Super-resolution reconstruction proposes a fusion of several low-quality images into one higher-quality result with better optical resolution. Classic super-resolution techniques strongly rely on the availability of accurate motion estimation for this fusion task. When the motion is estimated inaccurately, as often happens for non-global motion fields, annoying artifacts appear in the super-resolved outcome. Encouraged by recent developments on the video denoising problem, where state-of-the-art algorithms are formed with no explicit motion estimation, we seek a super-resolution algorithm of similar nature that will allow processing sequences with general motion patterns. In this paper, we base our solution on the Nonlocal-Means (NLM) algorithm. We show how this denoising method is generalized to become a relatively simple super-resolution algorithm with no explicit motion estimation. Results on several test movies show that the proposed method is very successful in providing super-resolution on general sequences.
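A hedged 1-D sketch of the Nonlocal-Means averaging being generalized: each sample is replaced by a weighted mean of all samples whose surrounding patches look similar, with no motion (or shift) estimation anywhere. The patch size, decay parameter h, and test signal are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
clean = np.sin(np.linspace(0.0, 4 * np.pi, n))
noisy = clean + 0.2 * rng.standard_normal(n)

half, h = 3, 1.0                              # patch half-width and weight decay
pad = np.pad(noisy, half, mode="reflect")
patches = np.array([pad[i:i + 2 * half + 1] for i in range(n)])  # one patch per sample

out = np.empty(n)
for i in range(n):
    d2 = np.sum((patches - patches[i]) ** 2, axis=1)  # patch-to-patch distances
    w = np.exp(-d2 / h ** 2)                          # similarity weights
    out[i] = np.dot(w, noisy) / w.sum()               # weighted average of samples

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((out - clean) ** 2)
```

Because the weights are computed from patch similarity alone, repeating structure anywhere in the signal contributes to the average; this is the property the paper exploits in place of explicit motion compensation.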
A wide-angle view at iterated shrinkage algorithms
 in SPIE (Wavelet XII)
, 2007
Abstract
Cited by 25 (1 self)
Sparse and redundant representations – an emerging and powerful model for signals – suggest that a data source could be described as a linear combination of few atoms from a pre-specified and overcomplete dictionary. This model has drawn considerable attention in the past decade, due to its appealing theoretical foundations and the promising practical results it leads to. Many of the applications that use this model are formulated as a mixture of ℓ2–ℓp (p ≤ 1) optimization expressions. Iterated shrinkage algorithms are a new family of highly effective numerical techniques for handling these optimization tasks, surpassing traditional optimization techniques. In this paper we aim to give a broad view of this group of methods, motivate their need, present their derivation, show their comparative performance, and, most important of all, discuss their potential in various applications.
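A hedged comparison making the "surpassing traditional techniques" point concrete on a small ℓ2–ℓ1 instance: one iterated-shrinkage step is a gradient step on the quadratic followed by a scalar soft threshold, set against a plain subgradient step with a diminishing step size. All sizes and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 30, 60
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_star = np.zeros(m)
x_star[rng.choice(m, 4, replace=False)] = 1.0
b = A @ x_star
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part

def obj(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()

x_ist = np.zeros(m)
x_sub = np.zeros(m)
for k in range(300):
    # iterated shrinkage: gradient step on the quadratic, then soft threshold
    v = x_ist - A.T @ (A @ x_ist - b) / L
    x_ist = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    # "traditional" subgradient step with a diminishing step size
    g = A.T @ (A @ x_sub - b) + lam * np.sign(x_sub)
    x_sub = x_sub - g / (L * np.sqrt(k + 1))
```

A side benefit visible here: the shrinkage iterates are exactly sparse at every step (the threshold zeroes small entries), whereas subgradient iterates are not.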
Sparse Representation-based Image Deconvolution by Iterative Thresholding
, 2007
Abstract
Cited by 22 (3 self)
Image deconvolution algorithms with overcomplete sparse representations and fast iterative thresholding methods are presented. The image to be recovered is assumed to be sparsely represented in a redundant dictionary of transforms. These transforms are chosen to offer a wider range of generating atoms, allowing more flexibility in image representation and adaptivity to its morphological content. The deconvolution inverse problem is formulated as the minimization of an energy functional with a sparsity-promoting regularization (e.g., the ℓ1 norm of the image representation coefficients). As opposed to quadratic programming solvers based on the interior-point method, here, recent advances in fast solution algorithms for such problems, i.e., stagewise iterative thresholding, are exploited to solve the optimization problem and provide fast and good image recovery results. Some theoretical aspects as well as computational and practical issues are investigated. Illustrations are provided for potential applicability of the method to astronomical data.