Results 1–10 of 19
Image Denoising in Mixed Poisson–Gaussian Noise
, 2011
Cited by 9 (1 self)
Abstract
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson–Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson–Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
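The LET idea above, writing the denoiser as a linear combination of fixed elementary thresholding functions so that risk minimization reduces to a small least-squares problem, can be sketched as follows. This toy version uses additive Gaussian noise and the oracle MSE in place of the PURE estimate; the signal model, threshold functions, and parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 4096, 1.0
x = np.sign(rng.standard_normal(n)) * rng.exponential(2.0, n)  # sparse-ish clean coefficients
y = x + sigma * rng.standard_normal(n)                          # noisy observations

# Basis of the linear expansion of thresholds (LET): each column is an
# elementary processing of y; the denoiser is a weighted sum of the columns.
T = 3 * sigma
phi = np.stack([
    y,                          # identity (no shrinkage)
    y * np.exp(-(y / T) ** 4),  # smooth shrinkage of small coefficients
], axis=1)

# Because the estimate phi @ a is linear in the weights a, minimizing a
# quadratic risk over a is a least-squares problem. Here we use the oracle
# MSE (we know x); the paper replaces it by the data-only PURE estimate.
a, *_ = np.linalg.lstsq(phi, x, rcond=None)
x_hat = phi @ a

mse_let = float(np.mean((x_hat - x) ** 2))
mse_noisy = float(np.mean((y - x) ** 2))
```

Since the identity column is in the span of the basis, the optimized expansion can never do worse (in the fitted risk) than leaving the observation untouched.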
Optimal denoising in redundant representations
 IEEE TRANS. IMAGE PROCESS
, 2008
Cited by 7 (2 self)
Abstract
Image denoising methods are often designed to minimize mean-squared error (MSE) within the subbands of a multiscale decomposition. However, most high-quality denoising results have been obtained with overcomplete representations, for which minimization of MSE in the subband domain does not guarantee optimal MSE performance in the image domain. We prove that, despite this suboptimality, the expected image-domain MSE resulting from applying estimators to subbands that are made redundant through spatial replication of basis functions (e.g., cycle spinning) is always less than or equal to that resulting from applying the same estimators to the original nonredundant representation. In addition, we show that it is possible to further exploit overcompleteness by jointly optimizing the subband estimators for image-domain MSE. We develop an extended version of Stein’s unbiased risk estimate (SURE) that allows us to perform this optimization adaptively, for each observed noisy image. We demonstrate this methodology using a new class of estimators formed from linear combinations of localized “bump” functions that are applied either pointwise or on local neighborhoods of subband coefficients. We show through simulations that the performance of these estimators applied to overcomplete subbands and optimized for image-domain MSE is substantially better than that obtained when they are optimized within each subband. This performance is, in turn, substantially better than that obtained when they are optimized for use on a nonredundant representation.
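The cycle-spinning result (averaging denoised outputs over spatially shifted copies of the basis cannot increase the expected image-domain MSE) can be illustrated with a one-level Haar transform in 1-D. This is a minimal sketch with an assumed test signal and the universal soft threshold, not the paper's estimators.

```python
import numpy as np

def haar_denoise(y, t):
    """One-level orthonormal Haar analysis, soft-threshold the details, synthesize."""
    s = (y[0::2] + y[1::2]) / np.sqrt(2)             # approximation coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)             # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)  # soft thresholding
    out = np.empty_like(y)
    out[0::2] = (s + d) / np.sqrt(2)
    out[1::2] = (s - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
n, sigma = 4096, 0.5
x = np.cumsum(rng.standard_normal(n)) / 8            # slowly varying test signal
y = x + sigma * rng.standard_normal(n)
t = sigma * np.sqrt(2 * np.log(n))                   # universal threshold

fixed = haar_denoise(y, t)                           # one fixed (non-redundant) basis
# Cycle spinning: denoise every circular shift of the signal, undo the shift,
# and average. A one-level Haar basis has 2 distinct shifts.
spun = np.mean([np.roll(haar_denoise(np.roll(y, s), t), -s) for s in range(2)],
               axis=0)

mse = lambda e: float(np.mean((e - x) ** 2))
```

Both estimates improve on the raw observation; the theorem guarantees that, on average over noise realizations, the spun (redundant) estimate is at least as good as the fixed-basis one.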
Optimal Estimation in Sensory Systems
, 2009
Cited by 6 (5 self)
Abstract
A variety of experimental studies suggest that sensory systems are capable of performing estimation or decision tasks at near-optimal levels. In this chapter, I explore the use of optimal estimation in describing sensory computations in the brain. I define what is meant by optimality and provide three quite different methods of obtaining an optimal estimator, each based on different assumptions about the nature of the information that is available to constrain the problem. I then discuss how biological systems might go about computing (and learning to compute) optimal estimates. The brain is awash in sensory signals. How does it interpret these signals so as to extract meaningful and consistent information about the environment? Many tasks require estimation of environmental parameters, and there is substantial evidence that the system is capable of representing and extracting very precise estimates of these parameters. This is particularly impressive when one considers the fact that the brain is built from a large number of low-energy, unreliable components, whose responses are affected by many extraneous factors (e.g., temperature, hydration, blood glucose and oxygen levels). The problem of optimal estimation is well studied in the statistics and engineering communities, where a plethora of tools has been developed for designing, implementing, calibrating and testing such systems. In recent years, many of these tools have been used to provide benchmarks or models for biological perception. Specifically, the development of signal detection theory led to widespread use of statistical decision theory as a framework for assessing performance in perceptual experiments. More recently, optimal estimation theory (in particular, Bayesian estimation) has been used as a framework for describing human performance in perceptual tasks.
OPTIMAL DENOISING IN REDUNDANT BASES
Cited by 6 (2 self)
Abstract
Image denoising methods are often based on estimators chosen to minimize mean-squared error (MSE) within the subbands of a multiscale decomposition. But this does not guarantee optimal MSE performance in the image domain, unless the decomposition is orthonormal. We prove that, despite this suboptimality, the expected image-domain MSE resulting from a representation that is made redundant through spatial replication of basis functions (e.g., cycle spinning) is less than or equal to that resulting from the original nonredundant representation. We also develop an extension of Stein’s unbiased risk estimator (SURE) that allows minimization of the image-domain MSE for estimators that operate on subbands of a redundant decomposition. We implement an example, jointly optimizing the parameters of scalar estimators applied to each subband of an overcomplete representation, and demonstrate substantial MSE improvement over the suboptimal application of SURE within individual subbands.
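The workhorse behind this line of work is Stein's unbiased risk estimate: for y = x + N(0, σ²I) and an estimator f(y), the risk can be computed from y alone as SURE = −nσ² + ‖f(y) − y‖² + 2σ² Σᵢ ∂fᵢ/∂yᵢ. A quick numerical check for the soft-thresholding case (the sparse signal and threshold below are chosen arbitrarily for illustration):

```python
import numpy as np

def sure_soft(y, t, sigma):
    # SURE for soft thresholding at t under y = x + N(0, sigma^2 I):
    # ||f(y) - y||^2 = sum(min(y^2, t^2)), divergence = #{|y| > t}.
    n = y.size
    return (-n * sigma**2
            + np.sum(np.minimum(y**2, t**2))
            + 2 * sigma**2 * np.count_nonzero(np.abs(y) > t))

rng = np.random.default_rng(2)
n, sigma, t = 200000, 1.0, 2.0
x = rng.standard_normal(n) * (rng.random(n) < 0.1)   # sparse clean signal
y = x + sigma * rng.standard_normal(n)

x_hat = np.sign(y) * np.maximum(np.abs(y) - t, 0.0)  # soft threshold
true_risk = float(np.sum((x_hat - x) ** 2))          # needs the unknown x
sure_risk = float(sure_soft(y, t, sigma))            # needs only y
rel_err = abs(sure_risk - true_risk) / true_risk
```

Because SURE is an unbiased, data-only surrogate for the true risk, it can be minimized over estimator parameters (here, the threshold t) without ever seeing the clean signal.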
Empirical Bayes least squares estimation without an explicit prior
 NYU Courant Inst.
, 2007
Cited by 4 (4 self)
Abstract
Bayesian estimators are commonly constructed using an explicit prior model. In many applications one does not have such a model, and it is difficult to learn one, since one does not have access to uncorrupted measurements of the variable being estimated. In many cases, however, including the case of contamination with additive Gaussian noise, the Bayesian least-squares estimator can be formulated directly in terms of the distribution of the noisy measurements. We demonstrate the use of this formulation in removing noise from photographic images. We use a local approximation of the noisy measurement distribution by exponentials over adaptively chosen intervals, and derive an estimator from this approximate distribution. We demonstrate through simulations that this adaptive Bayesian estimator performs as well as or better than previously published estimators based on simple prior models.
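The key identity enabling this prior-free formulation, for additive Gaussian noise, is Miyasawa's result that the least-squares estimate is x̂(y) = y + σ² d/dy log p(y), where p is the density of the noisy measurements. The sketch below checks it numerically with a Gaussian prior, for which the optimal answer is the Wiener shrinkage y·τ²/(τ²+σ²); it estimates p with a smoothed histogram rather than the paper's local-exponential approximation, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau, sigma = 200000, 2.0, 1.0
x = tau * rng.standard_normal(n)            # draws from an (unseen) prior
y = x + sigma * rng.standard_normal(n)      # noisy measurements

# Estimate the *measurement* density p(y) with a lightly smoothed histogram.
edges = np.linspace(-12, 12, 481)
centers = 0.5 * (edges[:-1] + edges[1:])
h = edges[1] - edges[0]
p = np.histogram(y, bins=edges, density=True)[0]
k = np.exp(-0.5 * (np.arange(-12, 13) * h / 0.25) ** 2)   # Gaussian smoothing
p = np.convolve(p, k / k.sum(), mode="same") + 1e-12
dlogp = np.clip(np.gradient(np.log(p), centers), -10, 10)  # clip tail noise

# Miyasawa / empirical-Bayes estimator: built from the noisy density alone.
x_hat = y + sigma**2 * np.interp(y, centers, dlogp)

mse_eb = float(np.mean((x_hat - x) ** 2))
mse_noisy = float(np.mean((y - x) ** 2))
wiener_mse = sigma**2 * tau**2 / (tau**2 + sigma**2)       # the optimum: 0.8
```

The estimator never touches the prior, yet its MSE lands near the Bayes-optimal Wiener value.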
Optimal estimation: Prior free methods and physiological application
 Ph.D. dissertation, Courant Institute of Mathematical Sciences
, 2007
Cited by 2 (2 self)
Abstract
First and foremost, I would like to thank my advisors, Eero Simoncelli and Dan Tranchina. Dan supervised my work on cortical modeling, and his insight and advice were extremely helpful in carrying out the bulk of the work of Chapter 1. He also had many useful comments about the remainder of the material in the thesis. Over the years, I have learned a lot about computational neuroscience in general from discussions with him. Eero supervised my work on prior-free methods and applications, which make up the substance of Chapters 2–4. His intuition, insight and ideas were crucial in helping me progress in this line of research and, more importantly, in obtaining useful results. I also learned a lot from him about image processing, statistics and computational neuroscience, amongst other things. I would like to thank my third reader, Charlie Peskin, for his input to my thesis and defense and for helpful discussions about the material. I would also like to thank Mehryar Mohri for being on my committee and for some useful discussions about VC-type bounds for regression. As well, I would like to thank Francesca Chiaromonte for being on my committee, and for helpful discussions and comments about the material in the thesis. It was good to have a statistician’s point of view on the work. I would like to thank Bob Shapley for his helpful input, and for information about contrast-dependent summation area. I would also like to thank him for letting me sit in on his “new view” class about visual cortex, where I read some very useful papers. I would like to thank the members of the Laboratory for Computational Vision for helpful comments and discussions along the way. I would also like to thank LCV alumni Liam Paninski and Jonathan Pillow, who both had some particularly useful comments about the prior-free methods. I would also like to thank the various people at Courant, too numerous to mention, who have provided help along the way.
Learning least squares estimators without assumed priors or supervision
, 2009
Cited by 2 (1 self)
Abstract
The two standard methods of obtaining a least-squares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model of the measurement process to obtain an optimal estimator, and (2) supervised regression, in which one optimizes a parametric estimator over a training set containing pairs of corrupted measurements and their associated true values. But many real-world systems do not have access to either supervised training examples or a prior model. Here, we study the problem of obtaining an optimal estimator given a measurement process with known statistics, and a set of corrupted measurements of random values drawn from an unknown prior. We develop a general form of nonparametric empirical Bayesian estimator that is written as a direct function of the measurement density, with no explicit reference to the prior. We study the observation conditions under which such “prior-free” estimators may be obtained, and we derive specific forms for a variety of different corruption processes. Each of these prior-free estimators may also be used to express the mean-squared estimation error as an expectation over the measurement density, thus generalizing Stein’s unbiased risk estimator (SURE), which provides such an expression for the additive Gaussian noise case. Minimizing this expression over measurement samples provides an “unsupervised
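For Poisson corruption, a classical instance of such a prior-free estimator is Robbins' formula E[x | y] = (y + 1) p(y + 1) / p(y), which depends only on the marginal pmf p of the noisy counts. A sketch under an assumed gamma prior, which the estimator never sees:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500000
x = rng.gamma(shape=4.0, scale=1.0, size=n)  # unknown prior over intensities
y = rng.poisson(x)                           # Poisson-corrupted observations

# Robbins' prior-free estimator: E[x | y] = (y + 1) p(y + 1) / p(y),
# where p is the marginal pmf of the noisy counts; no prior appears.
p = np.bincount(y, minlength=y.max() + 2) / n
table = (np.arange(p.size - 1) + 1.0) * p[1:] / np.maximum(p[:-1], 1.0 / n)
x_hat = table[y]                             # look up the estimate for each count

mse_robbins = float(np.mean((x_hat - x) ** 2))
mse_raw = float(np.mean((y - x) ** 2))       # raw counts: MSE = E[x] = 4
```

With the gamma(4, 1) prior assumed here, the Bayes-optimal MSE is 2.0, and the empirical Robbins estimator approaches it using only the corrupted sample.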
Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data
Cited by 2 (0 self)
Abstract
The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this article, we first show how the so-called Skellam distribution arises from the fact that Haar wavelet and filterbank transform coefficients corresponding to measurements of this type are distributed as sums and differences of Poisson counts. We then provide two main theorems on Skellam shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting and the other providing for unbiased risk estimation in a frequentist context. These results serve to yield new estimators in the Haar transform domain, including an unbiased risk estimate for shrinkage of Haar–Fisz variance-stabilized data, along with accompanying low-complexity algorithms for inference. We conclude with a simulation study demonstrating the efficacy of our Skellam shrinkage estimators.
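The core observation is that differences of independent Poisson counts, such as unnormalized Haar detail coefficients, follow a Skellam distribution with mean λ₁ − λ₂ and variance λ₁ + λ₂. This is easy to verify numerically; the intensities below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
n, lam1, lam2 = 1_000_000, 7.0, 3.0
a = rng.poisson(lam1, n)     # counts in one detector bin
b = rng.poisson(lam2, n)     # counts in the neighboring bin

# Unnormalized Haar coefficients: sum (approximation) and difference (detail).
s = a + b                    # again Poisson, with intensity lam1 + lam2
d = a - b                    # Skellam(lam1, lam2)

mean_err = abs(d.mean() - (lam1 - lam2))  # Skellam mean     = lam1 - lam2
var_err = abs(d.var() - (lam1 + lam2))    # Skellam variance = lam1 + lam2
```

Because the detail coefficients have an exactly known (Skellam) distribution, shrinkage rules and unbiased risk estimates can be derived for them directly, without a Gaussian approximation.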
TWO DENOISING SURE-LET METHODS FOR COMPLEX OVERSAMPLED SUBBAND DECOMPOSITIONS
Abstract
Redundancy in wavelets and filter banks has the potential to greatly improve signal and image denoising. Having developed a framework for optimized oversampled complex lapped transforms, we propose their association with the statistically efficient Stein’s principle in the context of mean-squared error estimation. Under Gaussian noise assumptions, expectations involving the (unknown) original data are expressed using the observation only. Two forms of Stein’s unbiased risk estimator, derived in the coefficient domain and the spatial domain respectively, are proposed, the latter being more computationally expensive. These estimators are then employed for denoising with linear combinations of elementary threshold functions. Their performance is compared to the oracle and analyzed with respect to the redundancy. They are finally tested against other denoising algorithms, proving competitive and yielding especially good results for texture preservation.
Signal-Dependent Noise Characterization in Haar Filterbank Representation
Abstract
Owing to the properties of joint time–frequency analysis that compress energy and approximately decorrelate temporal redundancies in sequential data, filterbanks and wavelets are popular and convenient platforms for statistical signal modeling. Motivated by prior knowledge and empirical studies, much of the emphasis in signal processing has been placed on the choice of the prior distribution for these transform coefficients. In this paradigm, however, the issues pertaining to the loss of information due to measurement noise are difficult to reconcile, because the effects of pointwise signal-dependent noise permeate across scales and through multiple coefficients. In this work, we show how a general class of signal-dependent noise can be characterized to arbitrary precision in a Haar filterbank representation, and the corresponding maximum a posteriori estimate of the underlying signal is developed. Moreover, the structure of the noise in the transform domain admits a variant of Stein’s unbiased estimate of risk conducive to processing the corrupted signal in the transform domain. We discuss estimators involving Poisson processes, a situation that arises often in real-world applications such as communication, signal processing, and imaging.