Results 1–10 of 64
Monte-Carlo SURE: A black-box optimization of regularization parameters for general denoising algorithms
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2008
Abstract

Cited by 49 (5 self)
We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein’s unbiased risk estimate (SURE), which provides a means of assessing the true mean-squared error (MSE) purely from the measured data, without need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases.
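The black-box recipe above can be sketched in a few lines: probe the denoiser with a small amount of extra noise and read off the divergence term. Below is a minimal illustration in which the Gaussian smoother, the synthetic test image, and the noise level are all illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def monte_carlo_sure(y, denoise, sigma, eps=None, rng=None):
    """Black-box SURE estimate of the per-pixel MSE of denoise(y).

    The divergence term is estimated with a single random probe b,
    as in the Monte-Carlo approach described in the abstract:
    div ~ b . (f(y + eps*b) - f(y)) / eps.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    eps = 1e-3 * sigma if eps is None else eps
    n = y.size
    f_y = denoise(y)
    b = rng.standard_normal(y.shape)                  # random probe
    div = np.sum(b * (denoise(y + eps * b) - f_y)) / eps
    return np.sum((f_y - y) ** 2) / n - sigma ** 2 + 2 * sigma ** 2 * div / n

# Hypothetical usage: pick the smoothing width of a Gaussian filter
# without access to the clean image.
rng = np.random.default_rng(1)
xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                     np.linspace(0, 4 * np.pi, 128))
clean = np.sin(xx) * np.cos(yy)
sigma = 0.5
noisy = clean + sigma * rng.standard_normal(clean.shape)

widths = (0.5, 1.0, 4.0)
sure_vals = [monte_carlo_sure(noisy, lambda z, s=s: gaussian_filter(z, s), sigma)
             for s in widths]
mse_vals = [np.mean((gaussian_filter(noisy, s) - clean) ** 2) for s in widths]
```

Because the probe only requires evaluating the denoiser twice, the same code works unchanged for any denoising routine, which is the point of the black-box formulation.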
Image Denoising in Mixed Poisson–Gaussian Noise
, 2011
Abstract

Cited by 32 (2 self)
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson–Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson–Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
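The LET principle, i.e. writing the denoiser as a linear combination of fixed basis processings and solving a small linear system for the weights, can be illustrated with plain Gaussian-noise SURE standing in for PURE. The two basis denoisers, the test image, and the noise level below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# LET sketch (Gaussian noise instead of the paper's Poisson-Gaussian case):
# denoiser f(y) = a1*F1(y) + a2*F2(y); minimizing the SURE estimate of the
# MSE over the weights reduces to the linear system  M a = c  with
# M_kl = <F_k, F_l>  and  c_k = <F_k, y> - sigma^2 * div(F_k).
rng = np.random.default_rng(2)
xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                     np.linspace(0, 4 * np.pi, 128))
clean = np.sin(xx) * np.cos(yy)
sigma = 0.5
y = clean + sigma * rng.standard_normal(clean.shape)
n = y.size

F = [y, gaussian_filter(y, 2.0, mode='wrap')]          # two basis denoisers
# Both are linear; with periodic boundaries their divergence is exactly
# N times the center tap of the impulse response.
delta = np.zeros(y.shape)
delta[64, 64] = 1.0
div = [n * 1.0, n * gaussian_filter(delta, 2.0, mode='wrap')[64, 64]]

M = np.array([[np.sum(Fi * Fj) for Fj in F] for Fi in F])
c = np.array([np.sum(Fi * y) - sigma ** 2 * d for Fi, d in zip(F, div)])
a = np.linalg.solve(M, c)                              # SURE-optimal weights
f_let = a[0] * F[0] + a[1] * F[1]

mse_let = np.mean((f_let - clean) ** 2)
mse_smooth = np.mean((F[1] - clean) ** 2)
```

The optimized combination should perform at least as well as the better of the two basis denoisers, up to the small estimation error of the risk estimate.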
A SURE Approach for Digital Signal/Image Deconvolution Problems
, 2009
Abstract

Cited by 22 (3 self)
In this paper, we are interested in the classical problem of restoring data degraded by a convolution and the addition of white Gaussian noise. The originality of the proposed approach is twofold. Firstly, we formulate the restoration problem as a nonlinear estimation problem, leading to the minimization of a criterion derived from Stein’s unbiased quadratic risk estimate. Secondly, the deconvolution procedure is performed using arbitrary analysis and synthesis frames, which may or may not be overcomplete. New theoretical results concerning the calculation of the variance of Stein’s risk estimate are also provided in this work. Simulations carried out on natural images show the good performance of our method w.r.t. conventional wavelet-based restoration methods.
Recursive risk estimation for nonlinear image deconvolution with a wavelet-domain sparsity constraint
 In IEEE International Conference on Image Processing, ICIP’08
, 2008
Abstract

Cited by 22 (1 self)
We propose a recursive data-driven risk-estimation method for nonlinear iterative deconvolution. Our two main contributions are 1) a solution-domain risk-estimation approach that is applicable to nonlinear restoration algorithms for ill-conditioned inverse problems; and 2) a risk estimate for a state-of-the-art iterative procedure, the thresholded Landweber iteration, which enforces a wavelet-domain sparsity constraint. Our method can be used to estimate the SNR improvement at every step of the algorithm; e.g., for stopping the iteration after the highest value is reached. It can also be applied to estimate the optimal threshold level for a given number of iterations. Index Terms—Risk estimation, parameter adjustment, deconvolution, nonlinear, iterative, wavelets, sparsity.
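A bare-bones version of the thresholded Landweber iteration can be written down directly. In this sketch an orthonormal DCT stands in for the wavelet transform of the paper, and the blur, noise level, and threshold are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Thresholded-Landweber sketch: x <- T_soft( x + H^T (y - H x) ) in the
# transform domain. H is a (self-adjoint) Gaussian blur with periodic
# boundaries; all parameters are hypothetical.
H = lambda x: gaussian_filter(x, 1.5, mode='wrap')
rng = np.random.default_rng(3)
xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                     np.linspace(0, 4 * np.pi, 128))
clean = np.sin(xx) * np.cos(yy)
y = H(clean) + 0.02 * rng.standard_normal(clean.shape)

lam = 0.06                                 # sparsity threshold (assumption)
x = np.zeros_like(y)
for _ in range(50):
    grad_step = x + H(y - H(x))            # Landweber update (H = H^T here)
    w = soft(dctn(grad_step, norm='ortho'), lam)   # shrink transform coeffs
    x = idctn(w, norm='ortho')

mse_in = np.mean((y - clean) ** 2)
mse_out = np.mean((x - clean) ** 2)
```

The risk estimate proposed in the paper would be used to monitor the SNR of `x` across iterations; here we simply verify that the iteration improves on the degraded input.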
A Fast Multilevel Algorithm for Wavelet-Regularized Image Restoration
 IEEE Trans. Image Processing
Abstract

Cited by 18 (8 self)
We present a multilevel extension of the popular “thresholded Landweber” algorithm for wavelet-regularized image restoration that yields an order of magnitude speed improvement over the standard fixed-scale implementation. The method is generic and targeted towards large-scale linear inverse problems, such as 3D deconvolution microscopy. The algorithm is derived within the framework of bound optimization. The key idea is to successively update the coefficients in the various wavelet channels using fixed, subband-adapted iteration parameters (step sizes and threshold levels). The optimization problem is solved efficiently via a proper chaining of basic iteration modules. The higher-level description of the algorithm is similar to that of a multigrid solver for PDEs, but there is one fundamental difference: the latter iterates through a sequence of multiresolution versions of the original problem, while, in our case, we cycle through the wavelet subspaces corresponding to the difference between successive approximations. This strategy is motivated by the special structure of the problem and the preconditioning properties of the wavelet representation. We establish that the solution of the restoration problem corresponds to a fixed point of our multilevel optimizer. We also provide experimental evidence that the improvement in convergence rate is essentially determined by the (unconstrained) linear part of the algorithm, irrespective of the type of wavelet. Finally, we illustrate the technique with some image deconvolution examples, including some real 3D fluorescence microscopy data. Index Terms—Bound optimization, confocal, convergence acceleration, deconvolution, fast, fluorescence, inverse problems, regularization, majorize-minimize, microscopy, multigrid, multilevel, multiresolution, multiscale, nonlinear, optimization transfer, preconditioning, reconstruction, restoration, sparsity.
SURE-LET Multichannel Image Denoising: Interscale Orthonormal Wavelet Thresholding
, 2008
Abstract

Cited by 17 (3 self)
We propose a vector/matrix extension of our denoising algorithm initially developed for grayscale images, in order to efficiently process multichannel (e.g., color) images. This work follows our recently published SURE-LET approach, where the denoising algorithm is parameterized as a linear expansion of thresholds (LET) and optimized using Stein’s unbiased risk estimate (SURE). The proposed wavelet thresholding function is pointwise and depends on the coefficients of the same location in the other channels, as well as on their parents in the coarser wavelet subband. A non-redundant, orthonormal wavelet transform is first applied to the noisy data, followed by the (subband-dependent) vector-valued thresholding of individual multichannel wavelet coefficients, which are finally brought back to the image domain by inverse wavelet transform. Extensive comparisons with the state-of-the-art multiresolution image denoising algorithms indicate that, despite being non-redundant, our algorithm matches the quality of the best redundant approaches, while maintaining a high computational efficiency and a low CPU/memory consumption. An online Java demo illustrates these assertions.
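The idea of vector-valued thresholding, shrinking the coefficients of all channels at a given location jointly rather than independently, can be sketched as follows. The per-channel DCT, the synthetic correlated channels, and the 3-sigma threshold are stand-ins for the paper's interscale wavelet machinery:

```python
import numpy as np
from scipy.fft import dctn, idctn

def vector_soft(V, t):
    """Joint soft-threshold: shrink each coefficient vector across the
    channel axis by its Euclidean norm, so that strongly correlated
    channels reinforce each other's significant coefficients."""
    norm = np.sqrt(np.sum(V ** 2, axis=0, keepdims=True))
    return V * np.maximum(1.0 - t / np.maximum(norm, 1e-12), 0.0)

rng = np.random.default_rng(5)
xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                     np.linspace(0, 4 * np.pi, 128))
base = np.sin(xx) * np.cos(yy)
clean = np.stack([base, 0.8 * base, 0.6 * base])   # correlated "color" channels
sigma = 0.3
y = clean + sigma * rng.standard_normal(clean.shape)

W = dctn(y, axes=(1, 2), norm='ortho')             # per-channel orthonormal DCT
den = idctn(vector_soft(W, 3 * sigma), axes=(1, 2), norm='ortho')
mse_noisy = np.mean((y - clean) ** 2)
mse_joint = np.mean((den - clean) ** 2)
```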
SURE-LET for Orthonormal Wavelet-Domain Video Denoising
Abstract

Cited by 11 (0 self)
We propose an efficient orthonormal wavelet-domain video denoising algorithm based on an appropriate integration of motion compensation into an adapted version of our recently devised Stein’s unbiased risk estimator–linear expansion of thresholds (SURE-LET) approach. To take full advantage of the strong spatiotemporal correlations of neighboring frames, a global motion compensation followed by a selective block-matching is first applied to adjacent frames, which increases their temporal correlations without distorting the inter-frame noise statistics. Then, a multiframe inter-scale wavelet thresholding is performed to denoise the current central frame. Our simulations on standard grayscale video sequences at various noise levels demonstrate the efficiency of the proposed solution in reducing additive white Gaussian noise. Obtained at a lighter computational load, our results are competitive with most state-of-the-art redundant wavelet-based techniques. By using a cycle-spinning strategy, our algorithm is in fact able to outperform these methods. Index Terms—Block-matching, Stein’s unbiased risk estimator–linear expansion of thresholds (SURE-LET), video denoising, wavelet.
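The block-matching step can be illustrated with a minimal exhaustive search; the block size, search radius, and synthetic shifted frames are illustrative assumptions:

```python
import numpy as np

def match_block(ref, frame, top, left, bsize=8, search=4):
    """Exhaustive block matching: find the offset (dy, dx) within
    +/- `search` pixels that minimizes the sum of squared differences
    between the reference block and the candidate block in `frame`."""
    block = ref[top:top + bsize, left:left + bsize]
    best, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + bsize > frame.shape[0] or l + bsize > frame.shape[1]:
                continue                      # candidate falls off the frame
            cand = frame[t:t + bsize, l:l + bsize]
            ssd = np.sum((block - cand) ** 2)
            if ssd < best:
                best, best_off = ssd, (dy, dx)
    return best_off

# Synthetic check: frame2 is frame1 circularly shifted by (2, 3), so the
# best match for a frame2 block lies at offset (-2, -3) in frame1.
rng = np.random.default_rng(4)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, (2, 3), axis=(0, 1))
off = match_block(frame2, frame1, 20, 20)
```

In the video-denoising pipeline described above, the matched blocks from adjacent frames are what the multiframe wavelet thresholding operates on.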
Wavelet-based image denoising technique
 IJACSA, Vol
, 2011
Abstract

Cited by 11 (1 self)
This paper surveys different approaches to wavelet-based image denoising. The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. Wavelet algorithms are a useful tool for signal-processing tasks such as image compression and denoising. Multiwavelets can be considered an extension of scalar wavelets. The main aim is to modify the wavelet coefficients in the new basis so that the noise can be removed from the data. In this paper, we extend the existing technique and provide a comprehensive evaluation of the proposed method. Results for different noise types, such as Gaussian, Poisson, salt-and-pepper, and speckle, are presented. The signal-to-noise ratio is used as the measure of denoising quality.
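The basic pipeline alluded to here (transform, shrink the detail coefficients, invert) can be made concrete with a single-level 2D Haar transform and soft thresholding; the 3-sigma threshold is an assumption in the spirit of the universal threshold, not the paper's rule:

```python
import numpy as np

def haar2_forward(x):
    """One level of the orthonormal 2D Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # rows: average
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # rows: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar2_inverse(ll, lh, hl, hh):
    """Exact inverse of haar2_forward."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(6)
xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                     np.linspace(0, 4 * np.pi, 128))
clean = np.sin(xx) * np.cos(yy)
sigma = 0.3
noisy = clean + sigma * rng.standard_normal(clean.shape)

ll, lh, hl, hh = haar2_forward(noisy)
t = 3 * sigma                                   # threshold choice is an assumption
denoised = haar2_inverse(ll, soft(lh, t), soft(hl, t), soft(hh, t))
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```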
SURE-based optimization for adaptive sampling and reconstruction
 ACM Transactions on Graphics (SIGGRAPH Asia)
, 2011
Abstract

Cited by 9 (0 self)
Figure 1: Comparisons between greedy error minimization (GEM) [Rousselle et al. 2011] and our SURE-based filtering. With SURE, we are able to use kernels (cross bilateral filters in this case) that are more effective than GEM’s isotropic Gaussians. Thus, our approach better adapts to anisotropic features (such as the motion-blur pattern due to the motion of the airplane) and preserves scene details (such as the textures on the floor and curtains). The kernels of both methods are visualized for comparison. We apply Stein’s Unbiased Risk Estimator (SURE) to adaptive sampling and reconstruction to reduce noise in Monte Carlo rendering. SURE is a general unbiased estimator for mean-squared error (MSE) in statistics. With SURE, we are able to estimate error for an arbitrary reconstruction kernel, enabling us to use more effective kernels rather than being restricted to the symmetric ones used in previous work. It also allows us to allocate more samples to areas with higher estimated MSE. Adaptive sampling and reconstruction can therefore be processed within an optimization framework. We also propose an efficient and memory-friendly approach to reduce the impact of noisy geometry features where there is depth of field or motion blur. Experiments show that our method produces images with less noise and crisper details than previous methods.
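The core use of SURE in this work, scoring candidate reconstruction kernels from the noisy data alone and choosing per pixel, can be sketched with isotropic Gaussian candidates (the paper uses cross bilateral filters; the candidate widths and the smoothing of the SURE maps are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                     np.linspace(0, 4 * np.pi, 128))
clean = np.sin(xx) * np.cos(yy)
sigma = 0.4
y = clean + sigma * rng.standard_normal(clean.shape)

# For a linear filter with periodic boundaries, the per-pixel divergence
# is the center tap of the impulse response, so the pointwise SURE map is
#   (f - y)^2 - sigma^2 + 2*sigma^2*h0.
delta = np.zeros(y.shape)
delta[64, 64] = 1.0
cands, sure_maps = [], []
for s in (1.0, 2.0, 4.0):                       # candidate kernel widths
    f = gaussian_filter(y, s, mode='wrap')
    h0 = gaussian_filter(delta, s, mode='wrap')[64, 64]
    sure = (f - y) ** 2 - sigma ** 2 + 2 * sigma ** 2 * h0
    cands.append(f)
    sure_maps.append(gaussian_filter(sure, 8.0))  # smooth the noisy SURE map

choice = np.argmin(np.stack(sure_maps), axis=0)   # per-pixel kernel choice
result = np.choose(choice, np.stack(cands))
mse_noisy = np.mean((y - clean) ** 2)
mse_result = np.mean((result - clean) ** 2)
```

Smoothing the pointwise SURE maps before taking the per-pixel minimum mirrors the need, noted in the paper's setting, to tame the variance of pointwise error estimates before using them for selection.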
Optimal denoising in redundant representations
 IEEE TRANS. IMAGE PROCESS
, 2008
Abstract

Cited by 9 (2 self)
Image denoising methods are often designed to minimize mean-squared error (MSE) within the subbands of a multiscale decomposition. However, most high-quality denoising results have been obtained with overcomplete representations, for which minimization of MSE in the subband domain does not guarantee optimal MSE performance in the image domain. We prove that, despite this suboptimality, the expected image-domain MSE resulting from applying estimators to subbands that are made redundant through spatial replication of basis functions (e.g., cycle spinning) is always less than or equal to that resulting from applying the same estimators to the original non-redundant representation. In addition, we show that it is possible to further exploit overcompleteness by jointly optimizing the subband estimators for image-domain MSE. We develop an extended version of Stein’s unbiased risk estimate (SURE) that allows us to perform this optimization adaptively, for each observed noisy image. We demonstrate this methodology using a new class of estimator formed from linear combinations of localized “bump” functions that are applied either pointwise or on local neighborhoods of subband coefficients. We show through simulations that the performance of these estimators applied to overcomplete subbands and optimized for image-domain MSE is substantially better than that obtained when they are optimized within each subband. This performance is, in turn, substantially better than that obtained when they are optimized for use on a non-redundant representation.