Results 1–10 of 155
Image denoising using a scale mixture of Gaussians in the wavelet domain
 IEEE Trans Image Processing
, 2003
"... Abstract—We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussi ..."
Abstract

Cited by 361 (17 self)
 Add to MetaCart
Abstract—We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
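The Bayes least-squares step this abstract describes can be sketched in scalar form: average the local linear (Wiener) estimates over a discretized grid of the hidden multiplier, weighted by its posterior. The grid and the 1/z prior below are illustrative assumptions for a single coefficient, not the paper's actual neighborhood model.

```python
import numpy as np

def bls_gsm_shrink(y, sigma_x2=1.0, sigma_n2=0.25, z_grid=None):
    """Scalar sketch of Bayes least-squares estimation under a Gaussian
    scale mixture prior: x = sqrt(z) * u with u ~ N(0, sigma_x2), observed
    y = x + w with w ~ N(0, sigma_n2). The estimate is a weighted average
    of Wiener estimates over the hidden multiplier z (grid and 1/z prior
    are assumptions for illustration)."""
    if z_grid is None:
        z_grid = np.logspace(-2, 2, 64)      # discretized multiplier values
    prior_z = 1.0 / z_grid                    # hypothetical Jeffreys-like prior
    prior_z /= prior_z.sum()
    var_y = z_grid * sigma_x2 + sigma_n2      # p(y|z) is zero-mean Gaussian
    lik = np.exp(-0.5 * y**2 / var_y) / np.sqrt(var_y)
    post = lik * prior_z
    post /= post.sum()                        # posterior p(z|y) on the grid
    wiener = z_grid * sigma_x2 / var_y        # E[x|y,z] = wiener * y
    return y * np.sum(post * wiener)          # E[x|y]: average over z
```

Small coefficients are shrunk heavily (the posterior favors small z) while large ones are nearly preserved, the nonlinearity characteristic of heavy-tailed priors.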
An EM Algorithm for Wavelet-Based Image Restoration
, 2002
"... This paper introduces an expectationmaximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with lowcomplexity, expressed in terms of the wavelet coecients, taking a ..."
Abstract

Cited by 242 (20 self)
 Add to MetaCart
This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in terms of the wavelet coefficients, taking advantage of the well-known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require very demanding optimization methods. The EM algorithm herein proposed combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring O(N log N) operations per iteration. Thus, it is the first image restoration algorithm that optimizes a wavelet-based penalized likelihood criterion and has computational complexity comparable to that of standard wavelet denoising or frequency-domain deconvolution methods. The convergence behavior of the algorithm is investigated, and it is shown that under mild conditions the algorithm converges to a globally optimal restoration. Moreover, our new approach outperforms several of the best existing methods in benchmark tests, and in some cases is also much less computationally demanding.
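The alternation the abstract describes (FFT-based E-step, DWT-based M-step with soft thresholding) can be illustrated with a minimal 1-D sketch. The circular blur, single-level orthonormal Haar transform, and function names below are simplifications assumed for illustration, not the paper's implementation.

```python
import numpy as np

def soft(v, t):                     # M-step nonlinearity: soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def haar(x):                        # one-level orthonormal Haar DWT (even length)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def ihaar(c):                       # exact inverse of haar()
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def em_deconv(y, h, lam=0.05, n_iter=200):
    """Sketch of an EM-style wavelet deconvolution: the E-step is a
    Landweber-type correction computed via the FFT diagonalization of the
    circular blur h; the M-step soft-thresholds Haar coefficients."""
    H = np.fft.fft(h)
    x = y.copy()
    for _ in range(n_iter):
        resid = y - np.real(np.fft.ifft(H * np.fft.fft(x)))           # E-step (FFT)
        x_e = x + np.real(np.fft.ifft(np.conj(H) * np.fft.fft(resid)))
        x = ihaar(soft(haar(x_e), lam))                                # M-step (DWT)
    return x
```

Each iteration costs O(N log N): two FFT pairs plus a linear-time wavelet transform, matching the complexity claim in the abstract.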
Scale Mixtures of Gaussians and the Statistics of Natural Images
 in Adv. Neural Information Processing Systems
, 2000
"... The statistics of photographic images, when represented using multiscale (wavelet) bases, exhibit two striking types of nonGaussian behavior. First, the marginal densities of the coefficients have extended heavy tails. Second, the joint densities exhibit variance dependencies not captured by secon ..."
Abstract

Cited by 123 (18 self)
 Add to MetaCart
The statistics of photographic images, when represented using multiscale (wavelet) bases, exhibit two striking types of non-Gaussian behavior. First, the marginal densities of the coefficients have extended heavy tails. Second, the joint densities exhibit variance dependencies not captured by second-order models. We examine properties of the class of Gaussian scale mixtures, and show that these densities can accurately characterize both the marginal and joint distributions of natural image wavelet coefficients. This class of model suggests a Markov structure, in which wavelet coefficients are linked by hidden scaling variables corresponding to local image structure. We derive an estimator for these hidden variables, and show that a nonlinear "normalization" procedure can be used to Gaussianize the coefficients.
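Both behaviors mentioned here, heavy marginal tails and Gaussianization by dividing out the hidden scale, are easy to reproduce numerically. The lognormal mixer below is an arbitrary illustrative choice, not the distribution fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # hidden positive multiplier
u = rng.normal(size=n)                            # Gaussian component (1-D here)
x = np.sqrt(z) * u                                # Gaussian scale mixture sample

def kurtosis(v):
    v = v - v.mean()
    return np.mean(v**4) / np.mean(v**2) ** 2     # equals 3 for a Gaussian

k_gsm = kurtosis(x)                # heavy-tailed: well above 3
k_norm = kurtosis(x / np.sqrt(z))  # "normalized" by the true scale: back to ~3
```

Dividing each sample by the square root of its multiplier recovers the underlying Gaussian, which is the intuition behind the nonlinear normalization procedure.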
On Advances in Statistical Modeling of Natural Images
 Journal of Mathematical Imaging and Vision
, 2003
"... Statistical analysis of images reveals two interesting properties: (i) invariance of image statistics to scaling of images, and (ii) nonGaussian behavior of image statistics, i.e. high kurtosis, heavy tails, and sharp central cusps. In this paper we review some recent results in statistical modelin ..."
Abstract

Cited by 104 (5 self)
 Add to MetaCart
Statistical analysis of images reveals two interesting properties: (i) invariance of image statistics to scaling of images, and (ii) non-Gaussian behavior of image statistics, i.e., high kurtosis, heavy tails, and sharp central cusps. In this paper we review some recent results in statistical modeling of natural images that attempt to explain these patterns. Two categories of results are considered: (i) studies of probability models of images or image decompositions (such as Fourier or wavelet decompositions), and (ii) discoveries of underlying image manifolds while restricting to natural images. Applications of these models in areas such as texture analysis, image classification, compression, and denoising are also considered.
The Bayesian Lasso
, 2005
"... The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (doubleexponential) priors. Gibbs sampling from this posterior is possible using an expanded hierarchy with conjugate normal priors ..."
Abstract

Cited by 102 (0 self)
 Add to MetaCart
The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (double-exponential) priors. Gibbs sampling from this posterior is possible using an expanded hierarchy with conjugate normal priors for the regression parameters and independent exponential priors on their variances. A connection with the inverse Gaussian distribution provides tractable full conditional distributions. The Bayesian Lasso provides interval estimates (Bayesian credible intervals) that can guide variable selection. Moreover, the structure of the hierarchical model provides both Bayesian and likelihood methods for selecting the Lasso parameter. Slight modifications lead to Bayesian versions of other Lasso-related estimation methods like bridge regression and a robust variant.
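The conjugate hierarchy described here lends itself to a compact Gibbs sweep: a normal full conditional for the regression coefficients and inverse-Gaussian full conditionals for the inverse variances. The sketch below holds the noise variance fixed for brevity and should be read as an illustrative simplification, not the paper's sampler.

```python
import numpy as np

def gibbs_sweep(X, y, beta, inv_tau2, sigma2, lam, rng):
    """One sweep of a simplified Bayesian Lasso Gibbs sampler (sigma2 is
    assumed known here). beta | rest is multivariate normal; 1/tau_j^2 | rest
    is inverse-Gaussian, drawn with numpy's `wald` generator."""
    A = X.T @ X + np.diag(inv_tau2)               # precision-like matrix
    A_inv = np.linalg.inv(A)
    mean = A_inv @ X.T @ y
    beta = rng.multivariate_normal(mean, sigma2 * A_inv)   # beta | rest
    mu = np.sqrt(lam**2 * sigma2 / beta**2)                # IG mean parameter
    inv_tau2 = rng.wald(mu, lam**2)                        # 1/tau_j^2 | rest
    return beta, inv_tau2
```

Averaging the beta draws after burn-in gives the posterior mean, and empirical quantiles of the same draws give the credible intervals mentioned in the abstract.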
Random Cascades on Wavelet Trees and Their Use in Analyzing and Modeling Natural Images
 Applied and Computational Harmonic Analysis
, 2001
"... in signal and image processing, including image denoising, coding, and superresolution. # 2001 Academic Press 1. INTRODUCTION Stochastic models of natural images underlie a variety of applications in image processing and lowlevel computer vision, including image coding, denoising and 1 MW supp ..."
Abstract

Cited by 88 (15 self)
 Add to MetaCart
in signal and image processing, including image denoising, coding, and super-resolution. 1. INTRODUCTION. Stochastic models of natural images underlie a variety of applications in image processing and low-level computer vision, including image coding, denoising, restoration, interpolation and synthesis. Accordingly, the past decade has witnessed an increasing amount of research devoted to developing stochastic models of images (e.g., [19, 38, 45, 48, 55]). Simultaneously, wavel ...
Flexible empirical Bayes estimation for wavelets
 Journal of the Royal Statistical Society, Series B
, 2000
"... Wavelet shrinkage estimation is an increasingly popular method for signal denoising and compression. Although Bayes estimators can provide excellent mean squared error (MSE) properties, selection of an effective prior is a difficult task. To address this problem, we propose Empirical Bayes (EB) prio ..."
Abstract

Cited by 69 (12 self)
 Add to MetaCart
Wavelet shrinkage estimation is an increasingly popular method for signal denoising and compression. Although Bayes estimators can provide excellent mean squared error (MSE) properties, selection of an effective prior is a difficult task. To address this problem, we propose Empirical Bayes (EB) prior selection methods for various error distributions including the normal and the heavier tailed Student t distributions. Under such EB prior distributions, we obtain threshold shrinkage estimators based on model selection, and multiple shrinkage estimators based on model averaging. These EB estimators are seen to be computationally competitive with standard classical thresholding methods, and to be robust to outliers in both the data and wavelet domains. Simulated and real examples are used to illustrate the flexibility and improved MSE performance of these methods in a wide variety of settings.
Bayesian wavelet-based image deconvolution: A GEM algorithm exploiting a class of heavy-tailed priors
 IEEE Trans. Image Process
, 2006
"... Abstract—Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The wellknown sparsity of the wavelet coefficients of realworld images is modeled by heavytailed priors belonging to the Gaussian scale mixture (GSM) class; i.e., priors given by a linear (finite of inf ..."
Abstract

Cited by 54 (10 self)
 Add to MetaCart
Abstract—Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class, i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity. Index Terms—Bayesian, deconvolution, expectation maximization (EM), generalized expectation maximization (GEM), Gaussian scale mixtures (GSM), heavy-tailed priors, wavelet.
A Bayesian approach for blind separation of sparse sources
 IEEE Transactions on Speech and Audio Processing
, 2005
"... We present a Bayesian approach for blind separation of linear instantaneous mixtures of sources having a sparse representation in a given basis. The distributions of the coefficients of the sources in the basis are modeled by a Student t distribution, which can be expressed as a Scale Mixture of Gau ..."
Abstract

Cited by 51 (10 self)
 Add to MetaCart
We present a Bayesian approach for blind separation of linear instantaneous mixtures of sources having a sparse representation in a given basis. The distributions of the coefficients of the sources in the basis are modeled by a Student t distribution, which can be expressed as a scale mixture of Gaussians, and a Gibbs sampler is derived to estimate the sources, the mixing matrix, the input noise variance and also the hyperparameters of the Student t distributions. The method allows for separation of underdetermined (more sources than sensors) noisy mixtures. Results are presented with audio signals using a Modified Discrete Cosine Transform basis and compared with a finite mixture of Gaussians prior approach. These results show the improved sound quality obtained with the Student t prior and the better robustness to mixing matrices close to singularity of the Markov chain Monte Carlo approach.
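The identity this approach relies on, that a Student t variable is a scale mixture of Gaussians with a Gamma-distributed mixing precision, can be checked by simulation. The degrees of freedom below are an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(7)
nu, n = 5.0, 400_000
# mixing precision: lam ~ Gamma(nu/2, scale=2/nu), so 1/lam scales the Gaussian
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
x = rng.normal(size=n) / np.sqrt(lam)     # scale mixture of Gaussians
t = rng.standard_t(nu, size=n)            # direct Student t draws for comparison
```

The two samples should agree in distribution; in a Gibbs sampler this representation is what makes the conditionals conjugate (normal given the precisions, Gamma given the coefficients).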
Topographic product models applied to natural scene statistics
 Neural Computation
, 2005
"... We present an energybased model that uses a product of generalised Studentt distributions to capture the statistical structure in datasets. This model is inspired by and particularly applicable to “natural ” datasets such as images. We begin by providing the mathematical framework, where we discus ..."
Abstract

Cited by 50 (7 self)
 Add to MetaCart
We present an energy-based model that uses a product of generalised Student-t distributions to capture the statistical structure in datasets. This model is inspired by and particularly applicable to "natural" datasets such as images. We begin by providing the mathematical framework, where we discuss complete and overcomplete models, and provide algorithms for training these models from data. Using patches of natural scenes, we demonstrate that our approach represents a viable alternative to "independent components analysis" as an interpretive model of biological visual systems. Although the two approaches are similar in flavor, there are also important differences, particularly when the representations are overcomplete. By constraining the interactions within our model we are also able to study the topographic organization of Gabor-like receptive fields that are learned by our model. Finally, we discuss the relation of our new approach to previous work, in particular Gaussian scale mixture models and variants of independent components analysis.