Results 1–10 of 88
Image denoising using a scale mixture of Gaussians in the wavelet domain
IEEE Transactions on Image Processing, 2003
Cited by 361 (17 self)
Abstract—We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
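The estimator the abstract describes, a posterior-weighted average of local linear (Wiener) estimates over the hidden multiplier, can be sketched in the simplest scalar case. The function below is an illustrative reconstruction, not the authors' code; the discrete grid over the multiplier z is an assumption made for tractability:

```python
import numpy as np

def gsm_bls_estimate(y, sigma_x2, sigma_n2, z_grid, z_prior):
    """Bayesian least-squares estimate of a clean coefficient x from a
    noisy observation y under a scalar Gaussian scale mixture prior:
    x = sqrt(z) * u with u ~ N(0, sigma_x2), noise n ~ N(0, sigma_n2).
    The estimate is the posterior-weighted average of per-z Wiener
    estimates, mirroring the weighted average described in the abstract."""
    var_y = z_grid * sigma_x2 + sigma_n2                 # Var(y | z)
    lik = np.exp(-0.5 * y**2 / var_y) / np.sqrt(var_y)   # p(y | z), up to a constant
    post = lik * z_prior
    post /= post.sum()                                   # p(z | y) on the grid
    wiener = (z_grid * sigma_x2) / var_y * y             # E[x | y, z] for each z
    return np.sum(post * wiener)                         # E[x | y]
```

Because every per-z Wiener gain lies in (0, 1), the combined estimate always shrinks the noisy observation toward zero, more aggressively when the posterior favors small multipliers.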
Natural Signal Statistics and Sensory Gain Control
Nature Neuroscience, 2001
Cited by 137 (23 self)
The statistical properties of natural images suggest an optimal form of nonlinear decomposition, in which the image is decomposed using a set of linear filters at a variety of positions, scales and orientations, and these linear responses are then rectified and divided by a weighted sum of rectified responses of nearby filters. Such divisive normalization models have become widely used in modeling steady-state responses of neurons in primary visual cortex. In addition to providing a surprisingly good characterization of "typical" neurons, the statistically optimal version of the model is consistent with unusual changes in tuning properties of these neurons at different contrast levels. These results suggest that the nonlinear response properties of cortical neurons are not an accident of biophysical implementation, but serve an important functional role.
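The rectify-then-divide computation described above can be illustrated with a minimal sketch (squaring as the rectifier, and made-up pooling weights; this is not the paper's fitted model):

```python
import numpy as np

def divisive_normalization(responses, weights, sigma2=1.0):
    """Divisive normalization: each rectified linear filter response is
    divided by a weighted sum of the rectified responses of nearby
    filters plus a semi-saturation constant sigma2."""
    rect = responses ** 2            # squaring rectification
    denom = sigma2 + weights @ rect  # weighted pool of neighboring responses
    return rect / denom
```

A characteristic consequence, visible in the test below, is suppression: strengthening a neighbor's response lowers a unit's normalized output even when its own input is unchanged.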
Multiresolution Markov models for signal and image processing
Proceedings of the IEEE, 2002
Cited by 121 (17 self)
This paper reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application and permeated the literature of a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts, in particular making ties to topics such as wavelets and multigrid methods. A third is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principal focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework, including models for self-similar and 1/f processes. We also illustrate how these methods have been used in practice. We discuss the construction of MR models on trees and show how questions that arise in this context make contact with wavelets, state space modeling of time series, system and parameter identification, and hidden Markov models.
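A minimal sketch of coarse-to-fine sampling from an MR Markov process on a dyadic tree (a simple random-walk-in-scale model, assumed here purely for illustration; the framework the paper reviews is far more general):

```python
import numpy as np

def sample_mr_tree(levels, rng, root_var=1.0, innov_var=0.5):
    """Sample a toy multiresolution Markov process on a dyadic tree:
    each node's state is its parent's state plus an independent Gaussian
    innovation, so generation proceeds coarse to fine and siblings are
    correlated only through their shared ancestors."""
    scales = [np.array([rng.normal(0.0, np.sqrt(root_var))])]
    for _ in range(levels):
        parent = scales[-1]
        children = np.repeat(parent, 2) + rng.normal(
            0.0, np.sqrt(innov_var), 2 * len(parent))
        scales.append(children)
    return scales
```

The tree structure is what makes inference on such models efficient: conditioning on a parent node renders its subtrees independent, enabling the fast two-sweep algorithms the paper surveys.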
Image information and visual quality
IEEE Transactions on Image Processing, 2004
Cited by 120 (26 self)
Measurement of image quality is crucial for many image-processing algorithms. Traditionally, image quality assessment algorithms predict visual quality by comparing a distorted image against a reference image, typically by modeling the Human Visual System (HVS), or by using arbitrary signal fidelity criteria. In this paper we adopt a new paradigm for image quality assessment. We propose an information fidelity criterion that quantifies the Shannon information that is shared between the reference and the distorted images relative to the information contained in the reference image itself. We use Natural Scene Statistics (NSS) modeling in concert with an image degradation model and an HVS model. We demonstrate the performance of our algorithm by testing it on a data set of 779 images, and show that our method is competitive with state-of-the-art quality assessment methods, and outperforms them in our simulations.
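The criterion described, shared information relative to the reference's own information content, can be illustrated in a toy scalar Gaussian setting. The channel model F = g*C + V and all parameter names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def info_fidelity(sigma_c2, g, sigma_v2, sigma_n2):
    """Toy scalar information-fidelity ratio under Gaussian assumptions.
    Source coefficient C ~ N(0, sigma_c2); distortion channel scales it
    by g and adds noise V ~ N(0, sigma_v2); perception adds neural noise
    N ~ N(0, sigma_n2). The ratio compares the information the distorted
    signal retains about C to the information in the reference itself."""
    info_ref = 0.5 * np.log2(1.0 + sigma_c2 / sigma_n2)
    info_dist = 0.5 * np.log2(1.0 + g**2 * sigma_c2 / (sigma_v2 + sigma_n2))
    return info_dist / info_ref
```

An undistorted image (g = 1, no channel noise) scores exactly 1, and any attenuation or added noise pushes the score below 1, matching the intuition behind the criterion.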
Empirical Bayes Selection of Wavelet Thresholds
Ann. Statist., 2005
Cited by 88 (3 self)
This paper explores a class of empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of probability at zero and a heavy-tailed density. The mixing weight, or sparsity parameter, for each level of the transform is chosen by marginal maximum likelihood. If estimation ...
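A minimal sketch of the marginal-maximum-likelihood choice of the per-level mixing weight, with a Gaussian slab standing in for the paper's heavy-tailed density (a simplifying assumption; all names here are illustrative):

```python
import numpy as np

def estimate_sparsity_weight(y, sigma2=1.0, tau2=10.0, grid=None):
    """Choose the mixing weight w of a spike-and-slab prior
    (1 - w) * delta_0 + w * N(0, tau2) for one resolution level of
    wavelet coefficients y observed with N(0, sigma2) noise, by
    maximizing the marginal likelihood of y over a grid of w values."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    null = np.exp(-0.5 * y**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    slab = (np.exp(-0.5 * y**2 / (sigma2 + tau2))
            / np.sqrt(2 * np.pi * (sigma2 + tau2)))

    def loglik(w):
        # Marginal log-likelihood: each y_i comes from the noise-only
        # component with probability 1 - w, else from the slab.
        return np.sum(np.log((1 - w) * null + w * slab))

    return max(grid, key=loglik)
```

Because sparse levels (few large coefficients) yield small w and hence heavy shrinkage, this adapts the effective threshold to each level's sparsity, which is the core idea of the approach.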
Information-Theoretic Analysis of Interscale and Intrascale Dependencies Between Image Wavelet Coefficients
IEEE Transactions on Image Processing, 2001
Cited by 72 (1 self)
This paper presents an informationtheoretic analysis of statistical dependencies between image wavelet coefficients. The dependencies are measured using mutual information, which has a fundamental relationship to data compression, estimation, and classification performance.
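A histogram-based mutual information estimate of the kind such an analysis relies on can be sketched as follows (the bin count and variable names are illustrative choices):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate of the mutual information (in bits) between
    two coefficient sequences, e.g. a wavelet subband and its parent
    subband, computed as sum p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()                  # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))
```

Note that finite-sample histogram estimates carry a small positive bias even for independent inputs, so in practice measured values are compared against that baseline rather than against exactly zero.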
Directional Multiscale Modeling of Images using the Contourlet Transform
IEEE Transactions on Image Processing, 2004
Cited by 61 (5 self)
The contourlet transform is a new extension of the wavelet transform in two dimensions using non-separable and directional filter banks. The contourlet expansion is composed of basis images oriented at varying directions in multiple scales, with flexible aspect ratios. With this rich set of basis images, the contourlet transform can effectively capture the smooth contours that are the dominant features in natural images with only a small number of coefficients. We begin with a detailed study of the statistics of the contourlet coefficients of natural images, using histogram estimates of the marginal and joint distributions, and mutual information measurements to characterize the dependencies between coefficients. The study reveals the non-Gaussian marginal statistics and strong intra-subband, cross-scale, and cross-orientation dependencies of contourlet coefficients. It is also found that, conditioned on the magnitudes of their generalized neighborhood coefficients, contourlet coefficients can approximately be modeled as Gaussian variables. Based on these statistics, we model contourlet coefficients using a hidden Markov tree (HMT) model that can capture all of their inter-scale, inter-orientation, and intra-subband dependencies. We apply this model to image denoising and texture retrieval, where the results are very promising. In denoising, contourlet HMT outperforms wavelet HMT and other classical methods in terms of visual quality. In particular, it preserves edges and oriented features better than other existing methods. In texture retrieval, it shows improvements in performance over wavelet methods for various oriented textures.
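The across-scale persistence of large coefficients that an HMT captures can be illustrated by sampling from a toy two-state tree (the transition matrix and state variances below are made-up illustrative values, not fitted contourlet statistics):

```python
import numpy as np

def sample_hmt(levels, rng,
               trans=np.array([[0.9, 0.1], [0.2, 0.8]]),
               sigmas=(0.3, 3.0)):
    """Sample wavelet-like coefficients from a two-state hidden Markov
    tree: each node carries a hidden state (0 = low variance,
    1 = high variance) drawn conditionally on its parent's state via
    the row-stochastic matrix `trans`, and the observed coefficient is
    zero-mean Gaussian with the state's standard deviation."""
    states = [np.array([rng.integers(0, 2)])]
    for _ in range(levels):
        parent = np.repeat(states[-1], 2)      # two children per node
        u = rng.random(len(parent))
        child = np.where(u < trans[parent, 0], 0, 1)
        states.append(child)
    coeffs = [rng.normal(0.0, np.take(sigmas, s)) for s in states]
    return states, coeffs
```

Because the high-variance state tends to persist from parent to child, large coefficients cluster along tree branches, mimicking the edge-aligned cascades of significant coefficients seen in real multiscale transforms.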
An information fidelity criterion for image quality assessment using natural scene statistics
IEEE Transactions on Image Processing, 2005
Cited by 54 (16 self)
Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a “reference” or “perfect” image in some perceptual space. Such “full-reference” QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for “human consumption.” Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at [1].
Image Denoising using Gaussian Scale Mixtures in the Wavelet Domain
IEEE Transactions on Image Processing, 2002
Cited by 40 (3 self)
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier.
Universal Analytical Forms for Modeling Image Probabilities
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
Cited by 38 (9 self)
Seeking probability models for images, we employ a spectral approach where the images are decomposed using bandpass filters and probability models are imposed on the filter outputs (also called spectral components). We employ a (two-parameter) family of probability densities, introduced in [11] and called Bessel K forms, for modeling the marginal densities of the spectral components, and demonstrate their fit to the observed histograms for video, infrared, and range images. Motivated by object-based models for image analysis, a relationship between the Bessel parameters and the imaged objects is established. Using an L2-metric on the set of Bessel K forms, we propose a pseudometric on the image space for quantifying image similarities/differences. Some applications, including clutter classification and pruning of hypotheses for target recognition, are presented.
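Bessel K forms admit a Gaussian scale-mixture representation, which gives a simple way to sample from the family. The sketch below uses that representation under an assumed parameterization (gamma shape p, scale c) and checks tail heaviness via excess kurtosis; it is an illustration, not the paper's code:

```python
import numpy as np

def sample_bessel_k(p, c, n, rng):
    """Sample from a Bessel K form via its scale-mixture representation:
    X = sqrt(G) * Z with G ~ Gamma(shape=p, scale=c) and Z ~ N(0, 1).
    Small p yields heavy tails and a sharp peak at zero, matching the
    histograms of bandpass-filtered natural images."""
    g = rng.gamma(shape=p, scale=c, size=n)
    return np.sqrt(g) * rng.standard_normal(n)

def excess_kurtosis(x):
    """Sample excess kurtosis (0 for a Gaussian); for this family the
    population value decreases as the shape parameter p grows."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0
```

The shape parameter p thus directly controls how strongly non-Gaussian the marginal is, which is what lets the fitted Bessel parameters serve as compact descriptors of image content.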