Results 1-10 of 26
Clustering-Based Denoising With Locally Learned Dictionaries
, 2009
Abstract
Cited by 25 (9 self)
In this paper, we propose K-LLD: a patch-based, locally adaptive denoising method based on clustering the given noisy image into regions of similar geometric structure. In order to effectively perform such clustering, we employ as features the local weight functions derived from our earlier work on steering kernel regression [1]. These weights are exceedingly informative and robust in conveying reliable local structural information about the image even in the presence of significant amounts of noise. Next, we model each region (or cluster), which may not be spatially contiguous, by "learning" a best basis describing the patches within that cluster using principal components analysis. This learned basis (or "dictionary") is then employed to optimally estimate the underlying pixel values using a kernel regression framework. An iterated version of the proposed algorithm is also presented which leads to further performance enhancements. We also introduce a novel mechanism for optimally choosing the local patch size for each cluster using Stein's unbiased risk estimator (SURE). We illustrate the overall algorithm's capabilities with several examples. These indicate that the proposed method appears to be competitive with some of the most recently published state-of-the-art denoising methods.
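The cluster-then-learn pipeline described above can be sketched in a few lines. This is a minimal illustration under simplifying assumptions, not the paper's algorithm: it clusters raw patch intensities with plain k-means (the paper clusters on steering-kernel weight features) and denoises by straight PCA projection rather than the kernel regression estimate; all names and parameters are invented for the sketch.

```python
import numpy as np

def denoise_patches(patches, n_clusters=4, n_components=8, iters=10, seed=0):
    """Cluster patches, learn a PCA basis per cluster, and denoise each
    patch by projecting it onto its cluster's low-dimensional basis."""
    rng = np.random.default_rng(seed)
    # Simple k-means on raw patches (an assumption: the paper clusters on
    # steering-kernel weight features instead of raw intensities).
    centers = patches[rng.choice(len(patches), n_clusters, replace=False)]
    for _ in range(iters):
        d = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = patches[labels == k].mean(0)
    # Per-cluster PCA "dictionary" and projection-based denoising.
    out = np.empty_like(patches)
    for k in range(n_clusters):
        idx = labels == k
        if not idx.any():
            continue
        X = patches[idx]
        mu = X.mean(0)
        # Principal components of this cluster's patches: the learned basis.
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        B = Vt[:n_components]
        out[idx] = mu + (X - mu) @ B.T @ B
    return out, labels
```

Because each cluster gets its own basis, patches with similar geometry (edges of one orientation, smooth regions) are represented compactly, and truncating the basis discards mostly noise.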
Automatic Parameter Selection for Denoising Algorithms Using a No-Reference Measure of Image Content
Abstract
Cited by 17 (8 self)
Across the field of inverse problems in image and video processing, nearly all algorithms have various parameters which need to be set in order to yield good results. In practice, the choice of such parameters is usually made empirically, with trial and error if no "ground-truth" reference is available. Some analytical methods such as cross-validation and Stein's unbiased risk estimate (SURE) have been successfully used to set such parameters. However, these methods tend to be strongly reliant on restrictive assumptions on the noise, and are also computationally heavy. In this paper, we propose a metric Q which is based on the singular value decomposition of local image gradients, and provides a quantitative measure of true image content (e.g., visually salient geometric structures such as edges) in the presence of noise and other disturbances. This measure (1) is easy to compute, (2) does not require a reference image, (3) reacts reasonably to both blur and random noise, and (4) works well even when the noise is not Gaussian. To illustrate its use in the selection of algorithmic parameters, the proposed measure is used to automatically and effectively set the parameters of two leading image denoising algorithms. While the experimental focus of this paper is on optimizing denoising filters, the proposed metric can also be used for a large variety of other image and video restoration algorithms such as deblurring, super-resolution, and more. Ample simulated and real-data experiments illustrate the effectiveness of the proposed approach for denoising applications. For the sake of completeness, the statistical properties of the proposed metric Q in some special cases are also provided.
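A gradient-SVD content measure in this spirit can be sketched as follows. The exact normalization of Q is an assumption here (see the paper for the precise definition); the idea is that the two singular values of the stacked gradient matrix encode gradient strength and directional coherence.

```python
import numpy as np

def q_metric(patch):
    """No-reference content measure from the SVD of local image gradients,
    in the spirit of the proposed metric Q.  The exact combination of the
    singular values is an illustrative assumption."""
    gy, gx = np.gradient(patch.astype(float))
    G = np.column_stack([gx.ravel(), gy.ravel()])  # n x 2 gradient matrix
    s = np.linalg.svd(G, compute_uv=False)         # s[0] >= s[1] >= 0
    s1, s2 = s[0], s[1]
    if s1 + s2 == 0:                               # flat patch: no content
        return 0.0
    coherence = (s1 - s2) / (s1 + s2)              # anisotropy of gradients
    return s1 * coherence                          # strength times coherence
```

A sharp edge yields one dominant singular value (high coherence, large Q); isotropic noise yields two comparable singular values, driving the coherence term toward zero.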
Optimal inversion of the Anscombe transformation in low-count Poisson image denoising
 IEEE TRANSACTIONS
Abstract
Cited by 12 (4 self)
The removal of Poisson noise is often performed through the following three-step procedure. First, the noise variance is stabilized by applying the Anscombe root transformation to the data, producing a signal in which the noise can be treated as additive Gaussian with unitary variance. Second, the noise is removed using a conventional denoising algorithm for additive white Gaussian noise. Third, an inverse transformation is applied to the denoised signal, obtaining the estimate of the signal of interest. The choice of the proper inverse transformation is crucial in order to minimize the bias error which arises when the nonlinear forward transformation is applied. We introduce optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse. We then present an experimental analysis using a few state-of-the-art denoising algorithms and show that the estimation can be consistently improved by applying the exact unbiased inverse, particularly in the low-count regime. This results in a very efficient filtering solution that is competitive with some of the best existing methods for Poisson image denoising.
Parallel proximal algorithm for image restoration using hybrid regularization
 IEEE Transactions on Image Processing
, 2011
Abstract
Cited by 9 (3 self)
Regularization approaches have demonstrated their effectiveness for solving ill-posed problems. However, in the context of variational restoration methods, a challenging question remains, namely how to find a good regularizer. While total variation introduces staircase effects, wavelet-domain regularization brings other artefacts, e.g., ringing. However, a tradeoff can be made by introducing a hybrid regularization including several terms not necessarily acting in the same domain (e.g., spatial and wavelet transform domains). While this approach was shown to provide good results for solving deconvolution problems in the presence of additive Gaussian noise, an important issue is to efficiently deal with this hybrid regularization for more general noise models. To solve this problem, we adopt a convex optimization framework where the criterion to be minimized is split into the sum of more than two terms. For spatial-domain regularization, isotropic or anisotropic total variation definitions using various gradient filters are considered. An accelerated version of the Parallel Proximal Algorithm is proposed to perform the minimization. Some difficulties in the computation of the proximity operators involved in this algorithm are also addressed in this paper. Numerical experiments performed in the context of Poisson data recovery show the good behaviour of the algorithm, as well as promising results concerning the use of hybrid regularization techniques.
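For intuition about proximal splitting, here is the two-term special case (Douglas-Rachford) on a toy ℓ1-regularized denoising problem whose exact solution is known in closed form. The Parallel Proximal Algorithm used in the paper generalizes this pattern to sums of more than two, possibly transform-domain, terms; the problem instance and step size below are illustrative choices only.

```python
import numpy as np

def soft(v, t):
    """Proximity operator of t*||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(y, lam, gamma=1.0, iters=200):
    """Minimize 0.5*||x - y||^2 + lam*||x||_1 by two-term proximal
    splitting; each term enters only through its proximity operator."""
    z = np.zeros_like(y)
    for _ in range(iters):
        x = soft(z, gamma * lam)              # prox of the l1 term
        v = 2.0 * x - z
        # prox of the quadratic fidelity: argmin 0.5||u-v||^2 + (gamma/2)||u-y||^2
        p = (v + gamma * y) / (1.0 + gamma)
        z = z + p - x
    return soft(z, gamma * lam)
```

Each criterion term is handled only through its proximity operator, which is what makes the scheme extend naturally to hybrid (multi-domain) regularization.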
A SURE Approach for Digital Signal/Image Deconvolution Problems
, 2009
Abstract
Cited by 7 (2 self)
In this paper, we are interested in the classical problem of restoring data degraded by a convolution and the addition of white Gaussian noise. The originality of the proposed approach is twofold. Firstly, we formulate the restoration problem as a nonlinear estimation problem, leading to the minimization of a criterion derived from Stein's unbiased quadratic risk estimate. Secondly, the deconvolution procedure is performed using any analysis and synthesis frames, overcomplete or not. New theoretical results concerning the calculation of the variance of Stein's risk estimate are also provided in this work. Simulations carried out on natural images show the good performance of our method w.r.t. conventional wavelet-based restoration methods.
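As background, Stein's unbiased risk estimate has a simple closed form for plain soft-thresholding denoising (the SureShrink setting); the paper extends this machinery to deconvolution with general analysis/synthesis frames. The sketch below covers only the elementary case.

```python
import numpy as np

def sure_soft(y, t, sigma):
    """SURE for soft thresholding: an unbiased estimate of
    E||soft(y, t) - theta||^2 when y ~ N(theta, sigma^2 I)."""
    n = y.size
    return (n * sigma**2
            - 2.0 * sigma**2 * np.count_nonzero(np.abs(y) <= t)
            + np.sum(np.minimum(np.abs(y), t) ** 2))

def sure_best_threshold(y, sigma, candidates):
    """Pick the threshold minimizing the SURE curve: a data-driven
    parameter choice needing no ground truth."""
    risks = [sure_soft(y, t, sigma) for t in candidates]
    return candidates[int(np.argmin(risks))]
```

At t = 0 the estimate reduces to n*sigma^2 (the risk of the identity estimator), and as t grows it tends to the risk of estimating by zero, so minimizing SURE interpolates sensibly between the two.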
Local behavior of sparse analysis regularization: Applications to risk estimation
 Applied and Computational Harmonic Analysis
, 2013
Abstract
Cited by 5 (2 self)
In this paper, we aim at recovering an unknown signal x0 from noisy measurements y = Φx0 + w, where Φ is an ill-conditioned or singular linear operator and w accounts for some noise. To regularize such an ill-posed inverse problem, we impose an analysis sparsity prior. More precisely, the recovery is cast as a convex optimization program where the objective is the sum of a quadratic data fidelity term and a regularization term formed of the ℓ1-norm of the correlations between the sought-after signal and atoms in a given (generally overcomplete) dictionary. The ℓ1 sparsity analysis prior is weighted by a regularization parameter λ > 0. In this paper, we prove that any minimizer of this problem is a piecewise-affine function of the observations y and the regularization parameter λ. As a by-product, we exploit these properties to get an objectively guided choice of λ. In particular, we develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE) and show that it is an unbiased and reliable estimator of an appropriately defined risk. The latter encompasses special cases.
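The piecewise-affine dependence on y and λ is easiest to see in the orthonormal special case, where the minimizer reduces to componentwise soft thresholding; this toy case is an illustration, not the paper's general analysis setting.

```python
import numpy as np

def lasso_orthonormal(y, lam):
    """Minimizer of 0.5*||x - y||^2 + lam*||x||_1, the orthonormal
    special case of the l1 analysis prior: componentwise soft
    thresholding, visibly piecewise-affine in both y and lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

On any λ-interval where the support and sign pattern stay fixed, the solution is literally affine in λ (e.g. x_i(λ) = y_i - λ for y_i > λ), which is what makes a risk-based, objectively guided choice of λ tractable.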
Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR
Abstract
Cited by 4 (2 self)
We introduce a novel algorithm to reconstruct dynamic MRI data from undersampled k-t space data. In contrast to classical model-based cine MRI schemes that rely on the sparsity or banded structure in Fourier space, we use the compact representation of the data in the Karhunen-Loève transform (KLT) domain to exploit the correlations in the dataset. The use of the data-dependent KL transform makes our approach ideally suited to a range of dynamic imaging problems, even when the motion is not periodic. In comparison to current KLT-based methods that rely on a two-step approach to first estimate the basis functions and then use them for reconstruction, we pose the problem as a spectrally regularized matrix recovery problem. By simultaneously determining the temporal basis functions and their spatial weights from the entire measured data, the proposed scheme is capable of providing high-quality reconstructions at a range of accelerations. In addition to using the compact representation in the KLT domain, we also exploit the sparsity of the data to further improve the recovery rate. Validations using numerical phantoms and in-vivo cardiac perfusion MRI data demonstrate the significant improvement in performance offered by the proposed scheme over existing methods.
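The basic step behind spectral regularization is singular value thresholding, the proximity operator of the nuclear norm. This is a sketch of just that building block; k-t SLR additionally enforces a sparsity penalty inside an iterative recovery scheme, which is not reproduced here.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximity operator of
    tau*||.||_* (nuclear norm).  Shrinking the spectrum promotes
    low-rank structure in the recovered space-time matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_thr) @ Vt
```

Arranging the dynamic series as a (space x time) matrix, the dominant left/right singular vectors play the role of the spatial weights and temporal basis functions estimated jointly by the scheme.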
On Regularized Reconstruction of Vector Fields
Abstract
Cited by 2 (0 self)
In this paper, we give a general characterization of regularization functionals for vector field reconstruction, based on the requirement that the said functionals satisfy certain geometric invariance properties with respect to transformations of the coordinate system. In preparation for our general result, we also address some commonalities of invariant regularization in scalar and vector settings, and give a complete account of invariant regularization for scalar fields, before focusing on their main points of difference, which lead to a distinct class of regularization operators in the vector case. Finally, as an illustration of potential, we formulate and compare quadratic (ℓ2) and total-variation-type (ℓ1) regularized denoising of vector fields in the proposed framework. Index Terms—Curl and divergence in higher dimensions, fractional Laplacian, fractional vector calculus, regularization, rotation invariance, scale invariance, total variation (TV), vector fields, vector spaces.
Epigraphical projection and proximal tools for solving constrained convex optimization problems
Part I, pp. x+24, 2012, submitted. Preprint: http://arxiv.org/pdf/1210.5844
Abstract
Cited by 2 (2 self)
We propose a proximal approach to deal with convex optimization problems involving nonlinear constraints. A large family of such constraints, proven to be effective in the solution of inverse problems, can be expressed as the lower level set of a sum of convex functions evaluated over different, but possibly overlapping, blocks of the signal. For this class of constraints, the associated projection operator generally does not have a closed form. We circumvent this difficulty by splitting the lower level set into as many epigraphs as functions involved in the sum. A closed half-space constraint is also enforced, in order to limit the sum of the introduced epigraphical variables to the upper bound of the original lower level set. In this paper, we focus on a family of constraints involving linear transforms of ℓ1,p balls. Our main theoretical contribution is to provide closed-form expressions of the epigraphical projections associated with the Euclidean norm (p = 2) and the sup-norm (p = +∞). The proposed approach is validated in the context of image restoration with missing samples, by making use of TV-like constraints. Experiments show that our method leads to significant improvements in terms of convergence speed over existing algorithms for solving similar constrained problems.
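For p = 2, the epigraphical projection reduces to the classical Euclidean projection onto the second-order cone {(v, t) : ||v|| <= t}, which has a well-known closed form. This is a sketch of that single building block, not of the full splitting algorithm.

```python
import numpy as np

def proj_epi_norm(x, zeta):
    """Project the pair (x, zeta) onto the epigraph of the Euclidean
    norm, {(v, t) : ||v||_2 <= t} (the second-order cone)."""
    nx = np.linalg.norm(x)
    if nx <= zeta:
        return x.copy(), zeta          # already inside the epigraph
    if nx <= -zeta:
        return np.zeros_like(x), 0.0   # projects onto the cone's apex
    a = 0.5 * (1.0 + zeta / nx)        # shrink toward the cone boundary
    return a * x, a * nx
```

Having such closed forms for each epigraph is what makes the epigraphical-splitting reformulation cheap per iteration.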
Regularized Interpolation for Noisy Images
Abstract
Cited by 1 (1 self)
Interpolation is the means by which a continuously defined model is fit to discrete data samples. When the data samples are exempt of noise, it seems desirable to build the model by fitting them exactly. In medical imaging, where quality is of paramount importance, this ideal situation unfortunately does not occur. In this paper, we propose a scheme that improves on the quality by specifying a tradeoff between fidelity to the data and robustness to the noise. We resort to variational principles, which allow us to impose smoothness constraints on the model for tackling noisy data. Based on shift-, rotation-, and scale-invariant requirements on the model, we show that the norm of an appropriate vector derivative is the most suitable choice of regularization for this purpose. In addition to Tikhonov-like quadratic regularization, this includes edge-preserving total-variation-like (TV) regularization. We give algorithms to recover the continuously defined model from noisy samples and also provide a data-driven scheme to determine the optimal amount of regularization. We validate our method with numerical examples where we demonstrate its superiority over an exact fit, as well as the benefit of TV-like non-quadratic regularization over Tikhonov-like quadratic regularization. Index Terms—Interpolation, regularization, regularization parameter, splines, Tikhonov functional, total-variation functional.