Results 1–10 of 19
Image Denoising in Mixed Poisson–Gaussian Noise
, 2011
"... We propose a general methodology (PURELET) to design and optimize a wide class of transformdomain thresholding algorithms for denoising images corrupted by mixed Poisson–Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a pur ..."
Abstract

Cited by 10 (1 self)
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson–Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson–Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
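The linear-expansion-of-thresholds idea is easiest to see in the simpler pure-Gaussian setting, where PURE reduces to Stein's unbiased risk estimate (SURE). The sketch below is not the paper's PURE-LET but a minimal point-wise SURE-LET analogue: the estimator is a linear combination of two basis "thresholds", and the weights minimizing the unbiased risk estimate are found by solving a small linear system (the basis functions and test signal are illustrative choices, not the paper's):

```python
import numpy as np

def sure_let_denoise(y, sigma):
    """Point-wise SURE-LET sketch: estimator f(y) = a1*y + a2*y*exp(-y^2/(12 sigma^2)),
    with weights a chosen to minimize Stein's unbiased risk estimate (SURE)."""
    g = np.exp(-y**2 / (12 * sigma**2))
    F = np.stack([y, y * g])                      # basis functions, shape (2, N)
    dF = np.stack([np.ones_like(y),               # their derivatives w.r.t. y
                   g * (1 - y**2 / (6 * sigma**2))])
    # SURE is quadratic in the weights a: minimize a'Ma - 2a'c  =>  solve Ma = c
    M = F @ F.T
    c = F @ y - sigma**2 * dF.sum(axis=1)
    a = np.linalg.solve(M, c)
    return a @ F

# sparse test signal: mostly zeros, a few large coefficients
rng = np.random.default_rng(0)
x = np.zeros(10_000)
x[:500] = rng.normal(0, 10, 500)
y = x + rng.normal(0, 1.0, x.size)
x_hat = sure_let_denoise(y, 1.0)
print(np.mean((y - x)**2), np.mean((x_hat - x)**2))  # denoised MSE should be well below the noisy MSE
```

Because the identity estimator is in the span of the basis, the SURE-optimal combination can only improve (in estimated risk) on leaving the data untouched; this is what makes the LET parametrization attractive to optimize.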
Optimal denoising in redundant representations
 IEEE Trans. Image Process.
, 2008
"... Image denoising methods are often designed to minimize meansquared error (MSE) within the subbands of a multiscale decomposition. However, most highquality denoising results have been obtained with overcomplete representations, for which minimization of MSE in the subband domain does not guarante ..."
Abstract

Cited by 7 (2 self)
Image denoising methods are often designed to minimize mean-squared error (MSE) within the subbands of a multiscale decomposition. However, most high-quality denoising results have been obtained with overcomplete representations, for which minimization of MSE in the subband domain does not guarantee optimal MSE performance in the image domain. We prove that, despite this suboptimality, the expected image-domain MSE resulting from applying estimators to subbands that are made redundant through spatial replication of basis functions (e.g., cycle spinning) is always less than or equal to that resulting from applying the same estimators to the original nonredundant representation. In addition, we show that it is possible to further exploit overcompleteness by jointly optimizing the subband estimators for image-domain MSE. We develop an extended version of Stein’s unbiased risk estimate (SURE) that allows us to perform this optimization adaptively, for each observed noisy image. We demonstrate this methodology using a new class of estimator formed from linear combinations of localized “bump” functions that are applied either pointwise or on local neighborhoods of subband coefficients. We show through simulations that the performance of these estimators applied to overcomplete subbands and optimized for image-domain MSE is substantially better than that obtained when they are optimized within each subband. This performance is, in turn, substantially better than that obtained when they are optimized for use on a nonredundant representation.
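The "always less than or equal to" claim has a short convexity argument behind it (a sketch, not the paper's full proof): cycle spinning averages K estimates x̂_k, one per spatial shift of the basis, and each x̂_k has the same expected MSE as the non-redundant estimator. Since the squared norm is convex, Jensen's inequality gives

```latex
\mathbb{E}\,\Bigl\| \tfrac{1}{K}\sum_{k=1}^{K} \hat{x}_k - x \Bigr\|^2
\;\le\; \frac{1}{K}\sum_{k=1}^{K} \mathbb{E}\,\bigl\| \hat{x}_k - x \bigr\|^2 ,
```

with equality only when all shifted estimates agree — so averaging over shifts can never hurt, and typically helps whenever the per-shift estimates disagree.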
Optimal Estimation in Sensory Systems
, 2009
"... Abstract: A variety of experimental studies suggest that sensory systems are capable of performing estimation or decision tasks at nearoptimal levels. In this chapter, I explore the use of optimal estimation in describing sensory computations in the brain. I define what is meant by optimality and p ..."
Abstract

Cited by 6 (5 self)
A variety of experimental studies suggest that sensory systems are capable of performing estimation or decision tasks at near-optimal levels. In this chapter, I explore the use of optimal estimation in describing sensory computations in the brain. I define what is meant by optimality and provide three quite different methods of obtaining an optimal estimator, each based on different assumptions about the nature of the information that is available to constrain the problem. I then discuss how biological systems might go about computing (and learning to compute) optimal estimates. The brain is awash in sensory signals. How does it interpret these signals, so as to extract meaningful and consistent information about the environment? Many tasks require estimation of environmental parameters, and there is substantial evidence that the system is capable of representing and extracting very precise estimates of these parameters. This is particularly impressive when one considers the fact that the brain is built from a large number of low-energy unreliable components, whose responses are affected by many extraneous factors (e.g., temperature, hydration, blood glucose and oxygen levels). The problem of optimal estimation is well studied in the statistics and engineering communities, where a plethora of tools have been developed for designing, implementing, calibrating and testing such systems. In recent years, many of these tools have been used to provide benchmarks or models for biological perception. Specifically, the development of signal detection theory led to widespread use of statistical decision theory as a framework for assessing performance in perceptual experiments. More recently, optimal estimation theory (in particular, Bayesian estimation) has been used as a framework for describing human performance in perceptual tasks.
Optimal denoising in redundant bases
 in Proc. 14th IEEE Int'l Conf. on Image Processing. San Antonio, TX: IEEE Computer Society
"... Accepted for publication, currently scheduled for the July issue. Content may change prior to final publication. AbstractImage denoising methods are often designed to minimize mean squared error (MSE) within the subbands of a multiscale decomposition. But most high quality denoising results have be ..."
Abstract

Cited by 6 (2 self)
Image denoising methods are often designed to minimize mean-squared error (MSE) within the subbands of a multiscale decomposition. But most high-quality denoising results have been obtained with overcomplete representations, for which minimization of MSE in the subband domain does not guarantee optimal MSE performance in the image domain. We prove that despite this suboptimality, the expected image-domain MSE resulting from applying estimators to subbands that are made redundant through spatial replication of basis functions (e.g., cycle spinning) is always less than or equal to that resulting from applying the same estimators to the original nonredundant representation. In addition, we show that it is possible to further exploit overcompleteness by jointly optimizing the subband estimators for image-domain MSE. We develop an extended version of Stein’s unbiased risk estimate (SURE) that allows us to perform this optimization adaptively, for each observed noisy image. We demonstrate this methodology using a new class of estimator formed from linear combinations of localized “bump” functions that are applied either pointwise or on local neighborhoods of subband coefficients. We show through simulations that the performance of these estimators applied to overcomplete subbands and optimized for image-domain MSE is substantially better than that obtained when they are optimized within each subband. This performance is, in turn, substantially better than that obtained when they are optimized for use on a nonredundant representation.
Empirical Bayes least squares estimation without an explicit prior, manuscript in preparation
, 2006
"... ..."
Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data
"... The ubiquity of integrating detectors in imaging and other applications implies that a variety of realworld data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vectorvalued signal of interest. In this article, we first show how the socalled Skel ..."
Abstract

Cited by 2 (0 self)
The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this article, we first show how the so-called Skellam distribution arises from the fact that Haar wavelet and filterbank transform coefficients corresponding to measurements of this type are distributed as sums and differences of Poisson counts. We then provide two main theorems on Skellam shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting and the other providing for unbiased risk estimation in a frequentist context. These results serve to yield new estimators in the Haar transform domain, including an unbiased risk estimate for shrinkage of Haar–Fisz variance-stabilized data, along with accompanying low-complexity algorithms for inference. We conclude with a simulation study demonstrating the efficacy of our Skellam shrinkage estimators.
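The starting observation — that an (unnormalized) Haar detail coefficient of Poisson counts is the difference of two independent Poissons, hence Skellam-distributed with mean λ1 − λ2 and variance λ1 + λ2 — is easy to check numerically. A minimal sketch; the rates lam1 and lam2 are arbitrary illustrative values, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 7.0, 3.0               # illustrative Poisson rates of two adjacent bins
n = 200_000
x1 = rng.poisson(lam1, n)
x2 = rng.poisson(lam2, n)
d = x1 - x2                         # unnormalized Haar detail coefficient: Skellam(lam1, lam2)
# Skellam moments: mean = lam1 - lam2, variance = lam1 + lam2
print(d.mean(), d.var())            # ≈ 4.0 and ≈ 10.0
```

The corresponding scaling coefficient x1 + x2 is itself Poisson with rate λ1 + λ2, which is what makes exact (rather than variance-stabilized) risk calculations tractable in the Haar domain.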
Learning least squares estimators without assumed priors or supervision
, 2009
"... The two standard methods of obtaining a leastsquares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model of the measurement process to obtain an optimal estimator, and (2) supervised regression, in which one opti ..."
Abstract

Cited by 2 (1 self)
The two standard methods of obtaining a least-squares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model of the measurement process to obtain an optimal estimator, and (2) supervised regression, in which one optimizes a parametric estimator over a training set containing pairs of corrupted measurements and their associated true values. But many real-world systems do not have access to either supervised training examples or a prior model. Here, we study the problem of obtaining an optimal estimator given a measurement process with known statistics, and a set of corrupted measurements of random values drawn from an unknown prior. We develop a general form of nonparametric empirical Bayesian estimator that is written as a direct function of the measurement density, with no explicit reference to the prior. We study the observation conditions under which such “prior-free” estimators may be obtained, and we derive specific forms for a variety of different corruption processes. Each of these prior-free estimators may also be used to express the mean-squared estimation error as an expectation over the measurement density, thus generalizing Stein’s unbiased risk estimator (SURE), which provides such an expression for the additive Gaussian noise case. Minimizing this expression over measurement samples provides an “unsupervised regression” method.
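For additive Gaussian noise, the best-known estimator of this prior-free form is Miyasawa's (Tweedie's) formula, x̂(y) = y + σ²·d/dy log p(y), which references only the measurement density p(y). The sketch below is an illustration under assumed settings, not the paper's construction: a Gaussian prior is used only to simulate data (the estimator never sees it), p(y) is estimated with a hand-rolled Gaussian kernel density estimate, and the result is compared against the known Bayes answer for this Gaussian–Gaussian case, the linear shrinkage y·τ²/(τ² + σ²):

```python
import numpy as np

rng = np.random.default_rng(1)
tau, sigma = 2.0, 1.0
x = rng.normal(0.0, tau, 50_000)            # draws from the (unknown) prior
y = x + rng.normal(0.0, sigma, x.size)      # corrupted measurements

def log_density(u, data=y, bw=0.25):
    """Gaussian kernel density estimate of the measurement density p(y)."""
    return np.log(np.mean(np.exp(-(u - data)**2 / (2 * bw**2)))
                  / (bw * np.sqrt(2 * np.pi)))

def prior_free_estimate(t, h=1e-3):
    """Miyasawa/Tweedie: x_hat(y) = y + sigma^2 * d/dy log p(y),
    with the score computed by finite differences on the KDE."""
    return t + sigma**2 * (log_density(t + h) - log_density(t - h)) / (2 * h)

# Bayes answer for this Gaussian-Gaussian setup: y * tau^2/(tau^2 + sigma^2),
# i.e. 1.5 * 4/5 = 1.2 at y = 1.5
print(prior_free_estimate(1.5))             # ≈ 1.2
```

The point of the exercise: the right-hand side involves only y, σ, and the empirical measurement density — the prior appears nowhere, yet the posterior mean is recovered.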