Results 1–9 of 9
Image Denoising in Mixed Poisson–Gaussian Noise
, 2011
Abstract

Cited by 28 (2 self)
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson–Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson–Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
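As a loose illustration of the LET principle in the simpler Gaussian-only setting (where the unbiased risk estimate reduces to the classical SURE rather than PURE), the following 1-D sketch combines two elementary estimators and obtains their weights from a 2×2 linear system. The two elementary functions and the choice T = 3σ are illustrative assumptions, not the paper's actual transform-domain construction:

```python
import numpy as np

def sure_let_denoise(y, sigma, T=None):
    """Denoise a 1-D signal y = x + N(0, sigma^2) with a linear
    expansion of thresholds (LET): the weights of two elementary
    estimators are chosen by minimizing SURE, which is quadratic in
    the weights and so reduces to a small linear system."""
    if T is None:
        T = 3.0 * sigma                     # illustrative threshold scale
    # Elementary estimators: the identity and a smooth shrinkage curve.
    f1, d1 = y, np.ones_like(y)             # f1(y) = y
    g = np.exp(-y**2 / (2 * T**2))
    f2 = y * g                              # f2(y) = y exp(-y^2 / 2T^2)
    d2 = g * (1 - y**2 / T**2)              # analytic derivative f2'(y)
    F = np.stack([f1, f2], axis=1)
    # SURE(a) = ||F a - y||^2 + 2 sigma^2 a.div - n sigma^2  (quadratic)
    div = np.array([d1.sum(), d2.sum()])
    a = np.linalg.solve(F.T @ F, F.T @ y - sigma**2 * div)
    return F @ a
```

Because the risk estimate is quadratic in the weights, the fit is exact and non-iterative; this is the same property the paper exploits for arbitrary transform-domain thresholding.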
A SURE Approach for Digital Signal/Image Deconvolution Problems
, 2009
Abstract

Cited by 17 (2 self)
In this paper, we are interested in the classical problem of restoring data degraded by a convolution and the addition of white Gaussian noise. The originality of the proposed approach is twofold. Firstly, we formulate the restoration problem as a nonlinear estimation problem leading to the minimization of a criterion derived from Stein’s unbiased quadratic risk estimate. Secondly, the deconvolution procedure is performed using any analysis and synthesis frames, whether overcomplete or not. New theoretical results concerning the calculation of the variance of Stein’s risk estimate are also provided in this work. Simulations carried out on natural images show the good performance of our method w.r.t. conventional wavelet-based restoration methods.
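A minimal sketch of tuning a deconvolution rule by an unbiased risk estimate, here for a single Fourier-domain Wiener filter rather than the paper's frame-based nonlinear estimator. The predicted-domain risk E||H(x̂ − x)||² and the grid search over λ are simplifying assumptions made for the illustration:

```python
import numpy as np

def sure_wiener_deconv(y, h, sigma, lambdas):
    """Deconvolve y = h * x + N(0, sigma^2) (circular convolution) with
    a Wiener filter whose regularization weight is chosen by minimizing
    a Stein-type unbiased estimate of the predicted risk
    E||H(xhat - x)||^2, computable from the noisy data alone."""
    n = y.size
    Y, H = np.fft.fft(y), np.fft.fft(h, n)
    best = None
    for lam in lambdas:
        W = np.conj(H) / (np.abs(H)**2 + lam)      # Wiener filter
        HW = np.abs(H)**2 / (np.abs(H)**2 + lam)   # Fourier diagonal of H*A
        resid = np.sum(np.abs(HW * Y - Y)**2) / n  # ||HAy - y||^2 (Parseval)
        sure = resid + 2 * sigma**2 * np.sum(HW) - n * sigma**2
        if best is None or sure < best[0]:
            best = (sure, lam, W)
    _, lam, W = best
    return np.real(np.fft.ifft(W * Y)), lam
```

No clean reference signal enters the selection of λ; only the data and the known noise variance do, which is the essence of the SURE approach.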
Stochastic Models for Sparse and Piecewise-Smooth Signals
Abstract

Cited by 14 (13 self)
Abstract—We introduce an extended family of continuous-domain stochastic models for sparse, piecewise-smooth signals. These are specified as solutions of stochastic differential equations, or, equivalently, in terms of a suitable innovation model; the latter is analogous conceptually to the classical interpretation of a Gaussian stationary process as filtered white noise. The two specific features of our approach are 1) signal generation is driven by a random stream of Dirac impulses (Poisson noise) instead of Gaussian white noise, and 2) the class of admissible whitening operators is considerably larger than what is allowed in the conventional theory of stationary processes. We provide a complete characterization of these finite-rate-of-innovation signals within Gelfand’s framework of generalized stochastic processes. We then focus on the class of scale-invariant whitening operators, which correspond to unstable systems. We show that these can be solved by introducing proper boundary conditions, which leads to the specification of random, spline-type signals that are piecewise-smooth. These processes are the Poisson counterpart of fractional Brownian motion; they are nonstationary and have the same type of spectral signature. We prove that the generalized Poisson processes have a sparse representation in a wavelet-like basis subject to some mild matching condition. We also present a limit example of a sparse process that yields a MAP signal estimator equivalent to the popular TV-denoising algorithm. Index Terms—Fractals, innovation models, Poisson processes, sparsity, splines, stochastic differential equations, stochastic processes.
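The innovation model described above can be simulated in a few lines. The following is a hedged 1-D sketch, with a plain integrator standing in for the inverse whitening operator and Gaussian jump amplitudes as one admissible choice among many; it also checks the claimed sparsity in a (first-level, decimated) Haar wavelet basis:

```python
import numpy as np

def generalized_poisson_signal(n, rate, rng):
    """Sample a piecewise-constant signal from the innovation model:
    a sparse stream of Dirac impulses (compound Poisson 'noise' with
    Gaussian amplitudes) driven through an integrator, the inverse of
    the whitening operator D (here a plain running sum)."""
    innovation = np.zeros(n)
    k = rng.poisson(rate * n)                 # random number of jumps
    locs = rng.integers(0, n, size=k)
    innovation[locs] += rng.normal(0, 1, size=k)
    return np.cumsum(innovation)              # D^{-1}: running sum

def haar_detail(x):
    """First-level decimated Haar wavelet detail coefficients."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)
```

Because the signal is constant between jumps, the Haar detail coefficients vanish everywhere except at pairs straddling a jump, which is the discrete shadow of the sparse wavelet representation the abstract proves.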
Least squares estimation without priors or supervision
 Neural Computation
, 2011
Abstract

Cited by 4 (2 self)
Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true values. Here, we consider the problem of obtaining a least-squares estimator given a measurement process with known statistics (i.e., a likelihood function) and a set of unsupervised measurements, each arising from a corresponding true value drawn randomly from an unknown distribution. We develop a general expression for a nonparametric empirical Bayes least-squares (NEBLS) estimator, which expresses the optimal least-squares estimator in terms of the measurement density, with no explicit reference to the unknown (prior) density. We study the conditions under which such estimators exist and derive specific forms for a variety of different measurement processes. We further show that each of these NEBLS estimators may be used to express the mean squared estimation error as an expectation over the measurement density alone, thus generalizing Stein’s unbiased risk estimator (SURE), which provides such an expression for the additive Gaussian noise case. This error expression may then be optimized over noisy measurement samples, in the absence of supervised training data, yielding a generalized SURE-optimized parametric
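For additive Gaussian noise, the estimator of this kind takes a simple closed form, x̂(y) = y + σ² (log p(y))′ with p the measurement density (the Miyasawa/Tweedie identity). Here is a sketch that replaces p with a Gaussian kernel density estimate built from the unsupervised measurements alone; the kernel choice and bandwidth are assumptions of this illustration, not prescriptions of the paper:

```python
import numpy as np

def nebls_gaussian(y, sigma, bandwidth=0.5):
    """Prior-free least-squares estimate for y = x + N(0, sigma^2):
    xhat(y_i) = y_i + sigma^2 * d/dy log p(y_i), with the measurement
    density p estimated by a Gaussian KDE over the samples y."""
    diffs = y[:, None] - y[None, :]
    K = np.exp(-diffs**2 / (2.0 * bandwidth**2))
    p = K.sum(axis=1)                                 # unnormalized KDE p(y_i)
    dp = (-(diffs / bandwidth**2) * K).sum(axis=1)    # its derivative p'(y_i)
    return y + sigma**2 * dp / p                      # normalizer cancels in dp/p
```

Note that no prior and no clean training values appear anywhere: the shrinkage toward high-density regions emerges entirely from the measurement density.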
Learning least squares estimators without assumed priors or supervision
, 2009
Abstract

Cited by 3 (2 self)
The two standard methods of obtaining a least-squares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model of the measurement process to obtain an optimal estimator, and (2) supervised regression, in which one optimizes a parametric estimator over a training set containing pairs of corrupted measurements and their associated true values. But many real-world systems do not have access to either supervised training examples or a prior model. Here, we study the problem of obtaining an optimal estimator given a measurement process with known statistics, and a set of corrupted measurements of random values drawn from an unknown prior. We develop a general form of nonparametric empirical Bayesian estimator that is written as a direct function of the measurement density, with no explicit reference to the prior. We study the observation conditions under which such “prior-free” estimators may be obtained, and we derive specific forms for a variety of different corruption processes. Each of these prior-free estimators may also be used to express the mean squared estimation error as an expectation over the measurement density, thus generalizing Stein’s unbiased risk estimator (SURE), which provides such an expression for the additive Gaussian noise case. Minimizing this expression over measurement samples provides an “unsupervised
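Minimizing the risk expression over measurement samples is, in the Gaussian special case, exactly the classical SURE program; for soft thresholding it reduces to the well-known SureShrink rule, sketched here as a concrete, fully unsupervised instance:

```python
import numpy as np

def sure_soft_threshold(y, sigma):
    """Fit the soft-threshold parameter t without supervision by
    minimizing Stein's unbiased risk estimate over the noisy samples
    themselves (the SureShrink rule of Donoho and Johnstone):
    SURE(t) = n*sigma^2 + sum(min(y_i^2, t^2)) - 2*sigma^2*#{|y_i| <= t}."""
    n = y.size
    candidates = np.abs(y)                  # the SURE minimizer lies at some |y_i|
    risks = [n * sigma**2 + np.minimum(y**2, t**2).sum()
             - 2.0 * sigma**2 * np.count_nonzero(np.abs(y) <= t)
             for t in candidates]
    t = candidates[int(np.argmin(risks))]
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0), t
```

The threshold is selected from corrupted data only, with no clean examples and no prior on the signal, which is precisely the "unsupervised" regime the abstract describes.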
Multi-Wiener SURE-LET deconvolution
 IEEE Trans. Image Process
, 2013
Abstract

Cited by 1 (1 self)
Abstract — In this paper, we propose a novel deconvolution algorithm based on the minimization of a regularized Stein’s unbiased risk estimate (SURE), which is a good estimate of the mean squared error. We linearly parametrize the deconvolution process by using multiple Wiener filters as elementary functions, followed by undecimated Haar-wavelet thresholding. Due to the quadratic nature of SURE and the linear parametrization, the deconvolution problem finally boils down to solving a linear system of equations, which is very fast and exact. The linear coefficients, i.e., the solution of the linear system of equations, constitute the best approximation of the optimal processing on the Wiener–Haar-threshold basis that we consider. In addition, the proposed multi-Wiener SURE-LET approach is applicable for both periodic and symmetric boundary conditions, and can thus be used in various practical scenarios. The very competitive results (both in computation time and quality) show that the proposed algorithm, which can be interpreted as a kind of nonlinear Wiener processing, can be used as a basic tool for building more sophisticated deconvolution algorithms. Index Terms — Deconvolution, multi-Wiener filtering, Stein’s unbiased risk estimate (SURE) minimization, undecimated Haar
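A rough sketch of the linear part of this construction: several Wiener filters serve as elementary deconvolvers, and their combination weights are found exactly by solving the small linear system that minimizes a SURE-type criterion. The wavelet-thresholding stage and the paper's symmetric boundary handling are omitted, and the predicted-domain risk used here is a simplifying assumption:

```python
import numpy as np

def multi_wiener_sure_deconv(y, h, sigma, lambdas):
    """Combine K Wiener filters (one per lambda) with weights a_k that
    minimize ||sum_k a_k H x_k - y||^2 + 2 sigma^2 sum_k a_k tr(H A_k),
    an unbiased estimate (up to a constant) of the predicted risk.
    Quadratic in a, so the fit is an exact K x K linear solve."""
    n = y.size
    Y, H = np.fft.fft(y), np.fft.fft(h, n)
    Xk = np.array([np.conj(H) * Y / (np.abs(H)**2 + lam) for lam in lambdas])
    HXk = H * Xk                                   # re-blurred elementary outputs
    M = np.real(HXk @ np.conj(HXk).T) / n          # Gram matrix <H x_j, H x_k>
    trHA = np.array([np.sum(np.abs(H)**2 / (np.abs(H)**2 + lam))
                     for lam in lambdas])          # trace of each operator H A_k
    b = np.real(HXk @ np.conj(Y)) / n - sigma**2 * trHA
    a, *_ = np.linalg.lstsq(M, b, rcond=None)      # lstsq: filters can be correlated
    return np.real(np.fft.ifft(a @ Xk))
```

Because the criterion is quadratic and the parametrization linear, the whole fit is one small, exact linear solve, mirroring the speed argument made in the abstract.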
Least Squares Estimation Without Priors or Supervision (Communicated by Konrad Paul Kording)
Abstract
Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true values. Here, we consider the problem of obtaining a least squares estimator given a measurement process with known statistics (i.e., a likelihood function) and a set of unsupervised measurements, each arising from a corresponding true value drawn randomly from an unknown distribution. We develop a general expression for a nonparametric empirical Bayes least squares (NEBLS) estimator, which expresses the optimal least squares estimator in terms of the measurement density, with no explicit reference to the unknown (prior) density. We study the conditions under which such estimators exist and derive specific forms for a variety of different measurement processes. We further show that each of these NEBLS estimators may be used to express the mean squared estimation error as an expectation over the measurement density alone, thus generalizing Stein’s unbiased