Results 1–10 of 53
An EM Algorithm for Wavelet-Based Image Restoration
, 2002
Cited by 233 (21 self)
This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in terms of the wavelet coefficients, taking advantage of the well-known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require very demanding optimization methods. The EM algorithm herein proposed combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring O(N log N) operations per iteration. Thus, it is the first image restoration algorithm that optimizes a wavelet-based penalized likelihood criterion and has computational complexity comparable to that of standard wavelet denoising or frequency domain deconvolution methods. The convergence behavior of the algorithm is investigated, and it is shown that under mild conditions the algorithm converges to a globally optimal restoration. Moreover, our new approach outperforms several of the best existing methods in benchmark tests, and in some cases is also much less computationally demanding.
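The alternating structure this abstract describes can be sketched in a toy 1-D setting. Below, the E-step is a plain Landweber update with a symmetric circular blur standing in for the FFT-diagonalized convolution, and the M-step soft-thresholds one level of Haar detail coefficients. The blur, the single-level Haar transform, and all parameter values are illustrative assumptions, not the paper's actual setup.

```python
def blur(x):
    """Toy symmetric circular blur H (self-adjoint, so H^T = H)."""
    n = len(x)
    return [(x[(i - 1) % n] + 2 * x[i] + x[(i + 1) % n]) / 4.0 for i in range(n)]

def haar(x):
    """One-level orthonormal Haar DWT: (approximation, detail) coefficients."""
    s = 2 ** 0.5
    a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return a, d

def ihaar(a, d):
    """Inverse of the one-level Haar transform above."""
    s = 2 ** 0.5
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / s, (ai - di) / s]
    return out

def soft(v, t):
    """Soft-threshold each coefficient by t."""
    return [max(abs(u) - t, 0.0) * (1.0 if u >= 0 else -1.0) for u in v]

def em_restore(y, tau=0.05, iters=100):
    """Alternate a Landweber-style E-step with a DWT soft-threshold M-step."""
    x = list(y)
    for _ in range(iters):
        r = [yi - hi for yi, hi in zip(y, blur(x))]   # residual y - Hx
        z = [xi + gi for xi, gi in zip(x, blur(r))]   # E-step: x + H^T(y - Hx)
        a, d = haar(z)                                # M-step: shrink details only
        x = ihaar(a, soft(d, tau))
    return x
```

Starting from the blurred observation, the iterates move toward the sharp signal because the penalty only shrinks detail coefficients, which the piecewise-constant truth does not need.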
Spatial Resolution Enhancement of Low-Resolution . . .
, 1998
Cited by 61 (0 self)
Recent years have seen growing interest in the problem of super-resolution restoration of video sequences. Whereas in the traditional single image restoration problem only a single input image is available for processing, the task of reconstructing super-resolution images from multiple undersampled and degraded images can take advantage of the additional spatiotemporal data available in the image sequence. In particular, camera and scene motion lead to frames in the source video sequence containing similar, but not identical information. The additional information available in these frames makes possible the reconstruction of visually superior frames at higher resolution than that of the original data. In this paper we review the current state of the art and identify promising directions for future research.
Multiresolution Support Applied to Image Filtering and Restoration
, 1995
Cited by 39 (21 self)
The notion of a multiresolution support is introduced. This is a sequence of Boolean images, related to significant pixels at each of a number of resolution levels. The multiresolution support is then used for noise suppression, in the context of image filtering or iterative image restoration. Algorithmic details, and a range of practical examples, illustrate this approach.
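A minimal sketch of the support construction, under the assumption of a 1-D signal, a Haar transform, and a known noise level sigma (the paper works with images and its own significance test):

```python
def haar_step(x):
    """One Haar analysis step: (coarser approximation, detail coefficients)."""
    s = 2 ** 0.5
    a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return a, d

def multiresolution_support(x, k=3.0, sigma=0.1, levels=2):
    """One Boolean array per resolution level: True where the wavelet
    coefficient is significant, i.e. exceeds k*sigma in magnitude."""
    support, a = [], list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        support.append([abs(c) > k * sigma for c in d])
    return support
```

Filtering then amounts to zeroing the coefficients whose support entry is False before reconstructing, so only statistically significant structure survives at every scale.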
A fast thresholded Landweber algorithm for wavelet-regularized multidimensional deconvolution
 IEEE Trans. Image Process
, 2008
Cited by 20 (4 self)
Abstract—We present a fast variational deconvolution algorithm that minimizes a quadratic data term subject to a regularization on the ℓ1-norm of the wavelet coefficients of the solution. Previously available methods have essentially consisted in alternating between a Landweber iteration and a wavelet-domain soft-thresholding operation. While having the advantage of simplicity, they are known to converge slowly. By expressing the cost functional in a Shannon wavelet basis, we are able to decompose the problem into a series of subband-dependent minimizations. In particular, this allows for larger (subband-dependent) step sizes and threshold levels than the previous method. This improves the convergence properties of the algorithm significantly. We demonstrate a speedup of one order of magnitude in practical situations. This makes wavelet-regularized deconvolution more widely accessible, even for applications with a strong limitation on computational complexity. We present promising results in 3D deconvolution microscopy, where the size of typical data sets does not permit more than a few tens of iterations. Index Terms—Deconvolution, fast, fluorescence microscopy, iterative, nonlinear, sparsity, 3D, thresholding, wavelets.
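The benefit of subband-dependent steps can be seen in a toy model where, within one subband, the blur acts as a single scalar gain (a crude stand-in for the Shannon-basis decomposition; the gain, penalty, and values below are illustrative assumptions). With the subband-adapted step 1/gain², the thresholded Landweber update reaches the exact scalar minimizer of 0.5·(y − gain·x)² + lam·|x| in one iteration, whereas a conservative global step would need many.

```python
def soft(u, t):
    """Scalar soft-threshold."""
    return max(abs(u) - t, 0.0) * (1.0 if u >= 0 else -1.0)

def tl_subband(y, gain, lam, iters=10):
    """Thresholded Landweber on one subband where the operator acts as a
    scalar 'gain'; step size and threshold are adapted to that gain."""
    step = 1.0 / (gain * gain)          # largest admissible subband step
    x = [0.0] * len(y)
    for _ in range(iters):
        x = [soft(xi + step * gain * (yi - gain * xi), lam * step)
             for xi, yi in zip(x, y)]
    return x
```

For a subband with gain 0.5, the fixed point soft(y/gain, lam/gain²) is hit immediately and further iterations leave it unchanged.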
Hierarchical Bayesian Sparse Image Reconstruction With Application to MRFM
Cited by 17 (8 self)
Abstract—This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications, as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument. Index Terms—Bayesian inference, deconvolution, Markov chain Monte Carlo (MCMC) methods, magnetic resonance force microscopy.
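The "weighted mixture of a positive exponential distribution and a mass at zero" prior is easy to sample directly; a sketch (the weight w, rate a, and seed are illustrative assumptions, and the paper's Gibbs sampler targets the posterior, not this prior):

```python
import random

def sample_sparse_prior(n, w=0.3, a=1.0, seed=0):
    """Draw n pixel intensities from the mixture prior: with probability
    1-w a pixel is exactly zero (the mass at zero), otherwise it is drawn
    from a positive exponential distribution with rate a."""
    rng = random.Random(seed)
    return [rng.expovariate(a) if rng.random() < w else 0.0 for _ in range(n)]
```

Samples are nonnegative by construction and exactly zero for roughly a fraction 1-w of the pixels, which is what makes the prior match naturally sparse, positive images.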
Dictionary Learning for Sparse Approximations with the Majorization Method
Cited by 15 (4 self)
Abstract—In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation (e.g. Expectation Maximization (EM)) problems. This paper shows that the majorization method can be used for the dictionary learning problem too. The proposed method is compared with other methods on both synthetic and real data and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods not only in terms of average performance but also in terms of computation time.
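A hedged sketch of one majorization step for the dictionary update (coefficients S held fixed, dictionary constraints omitted): replacing the coupled least-squares objective by a separable surrogate with curvature c ≥ ||S·Sᵀ|| yields the update D ← D + (1/c)(X − D·S)·Sᵀ, which cannot increase the fit error. The matrices below are toy data, and the trace is used as a cheap upper bound on the spectral norm.

```python
def matmul(A, B):
    """Dense matrix product on nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def frob2(A, B):
    """Squared Frobenius distance between two matrices."""
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def mm_dictionary_step(X, D, S):
    """One majorization-minimization update of D for fixed coefficients S:
    D <- D + (1/c) (X - D S) S^T with c >= ||S S^T||, so the surrogate
    (and hence ||X - D S||_F^2) does not increase."""
    St = [list(col) for col in zip(*S)]
    SSt = matmul(S, St)
    c = sum(SSt[i][i] for i in range(len(SSt)))   # trace bounds ||S S^T||
    R = [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, matmul(D, S))]
    G = matmul(R, St)
    return [[d + g / c for d, g in zip(rd, rg)] for rd, rg in zip(D, G)]
```

In the paper the surrogate is combined with the dictionary constraints (e.g. unit-norm atoms); this sketch shows only the unconstrained monotone step.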
Tomographic inversion using ℓ1-norm regularization of wavelet coefficients
 Geophysical Journal International
A Fast Multilevel Algorithm for Wavelet-Regularized Image Restoration
 IEEE Trans. Image Processing
Cited by 13 (5 self)
Abstract—We present a multilevel extension of the popular “thresholded Landweber” algorithm for wavelet-regularized image restoration that yields an order of magnitude speed improvement over the standard fixed-scale implementation. The method is generic and targeted towards large-scale linear inverse problems, such as 3D deconvolution microscopy. The algorithm is derived within the framework of bound optimization. The key idea is to successively update the coefficients in the various wavelet channels using fixed, subband-adapted iteration parameters (step sizes and threshold levels). The optimization problem is solved efficiently via a proper chaining of basic iteration modules. The higher-level description of the algorithm is similar to that of a multigrid solver for PDEs, but there is one fundamental difference: the latter iterates through a sequence of multiresolution versions of the original problem, while, in our case, we cycle through the wavelet subspaces corresponding to the difference between successive approximations. This strategy is motivated by the special structure of the problem and the preconditioning properties of the wavelet representation. We establish that the solution of the restoration problem corresponds to a fixed point of our multilevel optimizer. We also provide experimental evidence that the improvement in convergence rate is essentially determined by the (unconstrained) linear part of the algorithm, irrespective of the type of wavelet. Finally, we illustrate the technique with some image deconvolution examples, including some real 3D fluorescence microscopy data. Index Terms—Bound optimization, confocal, convergence acceleration, deconvolution, fast, fluorescence, inverse problems, regularization, majorize-minimize, microscopy, multigrid, multilevel, multiresolution, multiscale, nonlinear, optimization transfer, preconditioning, reconstruction, restoration, sparsity.
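One way to picture the "cycling through wavelet subspaces" is as a visiting schedule over the detail subbands, analogous to a multigrid V-cycle. The ordering below is only an illustrative assumption (the paper's actual chaining of iteration modules may differ); each visit would apply one update with that subband's adapted step size and threshold.

```python
def subband_cycle(levels):
    """Visit detail subbands from coarsest (level `levels`) down to the
    finest (level 1) and back up -- a V-cycle-like schedule, ordering only."""
    down = list(range(levels, 0, -1))   # coarse -> fine
    return down + down[-2::-1]          # ...and back to coarse
```

For a three-level decomposition this produces the sequence 3, 2, 1, 2, 3, so coarse subbands, which are cheap to update, are refreshed most often.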
Iterative Solution Methods for Large Linear Discrete Ill-Posed Problems
, 1998
Cited by 8 (5 self)
This paper discusses iterative methods for the solution of very large, severely ill-conditioned linear systems of equations that arise from the discretization of linear ill-posed problems. The right-hand side vector represents the given data and is assumed to be contaminated by errors. Solution methods proposed in the literature employ some form of filtering to reduce the influence of the error in the right-hand side on the computed approximate solution. The amount of filtering is determined by a parameter, often referred to as the regularization parameter. We discuss how the filtering affects the computed approximate solution and consider the selection of the regularization parameter. Methods in which a suitable value of the regularization parameter is determined during the computation, without user intervention, are emphasized. New iterative solution methods based on expanding explicitly chosen filter functions in terms of Chebyshev polynomials are presented. The properties of these methods are illustrated with applications to image restoration.
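How a filter and its regularization parameter damp the error can be shown with the classical Tikhonov filter factor (a standard textbook choice, not the paper's Chebyshev-expanded filters): the component of the solution along a singular value sigma is multiplied by a factor near 1 when sigma is large relative to the parameter mu, and near 0 when sigma is small and noise-dominated.

```python
def tikhonov_filter(sigma, mu):
    """Tikhonov filter factor sigma^2 / (sigma^2 + mu^2) applied to the
    solution component along singular value sigma; mu is the
    regularization parameter controlling the amount of filtering."""
    return sigma * sigma / (sigma * sigma + mu * mu)
```

Increasing mu filters more aggressively, trading resolution of fine detail for stability against errors in the right-hand side.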
Wavelet Accelerated Regularization Methods for Hyperthermia Treatment Planning
, 1996
Cited by 4 (2 self)
Cancer therapy by hyperthermia treatment aims at heating up the region of the tumor while keeping the surrounding body below a prespecified temperature. The heating is achieved by an electromagnetic field generated by several antennae which are placed around the patient. The hyperthermia problem is to determine the parameters of the antennae such that the resulting electromagnetic field is optimal with respect to some prescribed quality criterion. Iterative optimization algorithms require the solution of large, dense linear systems in each iteration step. We investigate modifications of standard regularization methods for inverse problems where the system matrix A is replaced by a family of sparse approximations {A_k}. An adaptation strategy for choosing the approximation level, which leads to the same convergence rates as iteration schemes with the full matrix A, is proved. Wavelet compression techniques originally designed for applications in image processing are used to compute ...
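A toy version of the compression idea behind the sparse approximations {A_k}: transform the dense matrix with one 2-D Haar step and discard coefficients below a tolerance, so a smooth operator is represented by a few large coefficients. The transform depth, the tolerance, and the tiny matrix are illustrative assumptions; the paper's adaptive level-selection strategy is not shown.

```python
def haar_rows(M):
    """One Haar analysis step applied to every row of M."""
    s = 2 ** 0.5
    h = len(M[0]) // 2
    return [[(r[2 * i] + r[2 * i + 1]) / s for i in range(h)] +
            [(r[2 * i] - r[2 * i + 1]) / s for i in range(h)] for r in M]

def wavelet_compress(A, tau):
    """One 2-D Haar step on the dense matrix A (rows, then columns),
    followed by discarding coefficients of magnitude <= tau: a toy
    sparse approximation of a dense operator."""
    W = haar_rows(A)
    W = [list(r) for r in zip(*haar_rows([list(c) for c in zip(*W)]))]
    return [[v if abs(v) > tau else 0.0 for v in r] for r in W]
```

For the constant 2×2 matrix of ones, all energy collapses into a single coefficient, so the compressed representation has one nonzero entry instead of four.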