Results 1–10 of 110
Image denoising using a scale mixture of Gaussians in the wavelet domain
IEEE Trans. Image Processing, 2003
Cited by 350 (18 self)
Abstract—We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
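The Bayesian least-squares estimate described above can be sketched in a scalar setting: an observed coefficient y = x + w, where x = √z·u is a Gaussian scale mixture (u Gaussian, z the hidden positive multiplier) and w is Gaussian noise. The prior on z, the grid, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Scalar sketch of the BLS-GSM estimator. The observed coefficient is
# y = x + w, with x = sqrt(z)*u (u Gaussian, z a hidden positive
# multiplier) and w Gaussian noise. Prior and grid are illustrative.
sigma_u, sigma_w = 1.0, 0.5
z = np.linspace(0.01, 10.0, 400)       # grid over the hidden multiplier
p_z = np.exp(-z)                        # illustrative prior on z
p_z /= p_z.sum()

def bls_gsm(y):
    var = z * sigma_u**2 + sigma_w**2             # variance of y given z
    lik = np.exp(-0.5 * y**2 / var) / np.sqrt(var)  # p(y|z), up to a constant
    post = lik * p_z
    post /= post.sum()                            # posterior p(z|y)
    wiener = z * sigma_u**2 / var                 # local linear (Wiener) gain
    return np.sum(post * wiener) * y              # weighted average over z

x_hat_small = bls_gsm(0.1)   # small coefficients are shrunk strongly
x_hat_big = bls_gsm(5.0)     # large coefficients are nearly preserved
```

For each observation, the estimate is the local Wiener gain z·σu²/(z·σu² + σw²) averaged over the posterior p(z|y), which is exactly the "weighted average of the local linear estimates over all possible values of the hidden multiplier" from the abstract.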
Image denoising by sparse 3D transform-domain collaborative filtering
IEEE Trans. Image Process., 2007
Cited by 218 (29 self)
We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call “groups.” Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it in three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
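The three collaborative-filtering steps (3D transform of a group, shrinkage of the spectrum, inverse 3D transform) can be sketched on a synthetic group of similar blocks. The FFT used as the 3D transform, the threshold, and the block contents are our illustrative choices; BM3D itself uses different separable transforms and a fuller pipeline (grouping by block matching, aggregation with adaptive weights, Wiener refinement).

```python
import numpy as np

rng = np.random.default_rng(1)

# A "group": four noisy copies of the same clean 8x8 pattern, stacked
# into a 3D array (BM3D would find such similar blocks by block matching).
block = np.outer(np.hanning(8), np.hanning(8))   # shared clean pattern
group = np.stack([block + 0.1 * rng.standard_normal((8, 8))
                  for _ in range(4)])

spec = np.fft.fftn(group)            # step 1: 3D transform of the group
thr = 5.0                            # illustrative hard threshold
spec[np.abs(spec) < thr] = 0.0       # step 2: shrink the transform spectrum
denoised = np.fft.ifftn(spec).real   # step 3: inverse 3D transform

# Joint filtering exploits the similarity across blocks: the shared
# structure concentrates in few large coefficients, the noise does not.
err_before = np.mean((group - block)**2)
err_after = np.mean((denoised - block)**2)
```

Because the blocks share structure, the 3D transform packs the signal into a handful of large coefficients while the noise stays spread out, so hard thresholding removes noise jointly across the whole group.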
Bivariate Shrinkage with Local Variance Estimation
2002
Cited by 74 (5 self)
The performance of image-denoising algorithms using wavelet transforms can be improved significantly by taking into account the statistical dependencies among wavelet coefficients as demonstrated by several algorithms presented in the literature. In two earlier papers by the authors, a simple bivariate shrinkage rule is described using a coefficient and its parent. The performance can also be improved using simple models by estimating model parameters in a local neighborhood. This letter presents a locally adaptive denoising algorithm using the bivariate shrinkage function. The algorithm is illustrated using both the orthogonal and dual-tree complex wavelet transforms. Some comparisons with the best available results will be given in order to illustrate the effectiveness of the proposed algorithm.
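The bivariate shrinkage rule referenced above operates on a coefficient together with its parent at the next coarser scale. A minimal sketch of the rule from the authors' earlier bivariate-shrinkage work, with our own variable names and a small guard against division by zero:

```python
import numpy as np

def bivariate_shrink(y, y_parent, sigma_n, sigma):
    """Shrink a wavelet coefficient y given its parent y_parent, the noise
    std sigma_n, and a (locally estimated) signal std sigma. A sketch of
    the bivariate shrinkage rule; variable names are ours."""
    r = np.sqrt(y**2 + y_parent**2)                     # joint magnitude
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma, 0.0) \
           / np.maximum(r, 1e-12)                       # soft-threshold gain
    return gain * y

# Coefficients small relative to the noise are pulled to zero; large
# ones (with a large parent) are kept almost unchanged.
small = bivariate_shrink(np.array([0.1]), np.array([0.1]),
                         sigma_n=1.0, sigma=1.0)
large = bivariate_shrink(np.array([10.0]), np.array([10.0]),
                         sigma_n=1.0, sigma=1.0)
```

The local adaptivity described in the letter comes from estimating sigma in a neighborhood around each coefficient rather than using one global value.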
Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images
2007
Cited by 41 (11 self)
The shape-adaptive discrete cosine transform (SA-DCT) can be computed on a support of arbitrary shape, but retains a computational complexity comparable to that of the usual separable block DCT (B-DCT). Despite its near-optimal decorrelation and energy compaction properties, application of the SA-DCT has been rather limited, targeted nearly exclusively to video compression. In this paper, we present a novel approach to image filtering based on the SA-DCT. We use the SA-DCT in conjunction with the Anisotropic Local Polynomial Approximation-Intersection of Confidence Intervals technique, which defines the shape of the transform's support in a pointwise adaptive manner. The thresholded or attenuated SA-DCT coefficients are used to reconstruct a local estimate of the signal within the adaptive-shape support. Since supports corresponding to different points are in general overlapping, the local estimates are averaged together using adaptive weights that depend on the region's statistics. This approach can be used for various image-processing tasks. In this paper, we consider, in particular, image denoising and image deblocking and deringing from block-DCT compression. A special structural constraint in luminance-chrominance space is also proposed to enable accurate filtering of color images. Simulation experiments show a state-of-the-art quality of the final estimate, both in terms of objective criteria and visual appearance. Thanks to the adaptive support, reconstructed edges are clean, and no unpleasant ringing artifacts are introduced by the fitted transform.
Unsupervised, information-theoretic, adaptive image filtering for image restoration
IEEE Trans. PAMI, 2006
Cited by 37 (2 self)
Image restoration is an important and widely studied problem in computer vision and image processing. Various image filtering strategies have been effective, but invariably make strong assumptions about the properties of the signal and/or degradation. Hence, these methods lack the generality to be easily applied to new applications or diverse image collections. This paper describes a novel unsupervised, information-theoretic, adaptive filter (UINTA) that improves the predictability of pixel intensities from their neighborhoods by decreasing their joint entropy. In this way, UINTA automatically discovers the statistical properties of the signal and can thereby restore a wide spectrum of images. The paper describes the formulation to minimize the joint entropy measure and presents several important practical considerations in estimating neighborhood statistics. It presents a series of results on both real and synthetic data along with comparisons with current state-of-the-art techniques, including novel applications to medical image processing.
Modeling multiscale subbands of photographic . . .
2009
Cited by 16 (3 self)
The local statistical properties of photographic images, when represented in a multiscale basis, have been described using Gaussian scale mixtures. Here, we use this local description as a substrate for constructing a global field of Gaussian scale mixtures (FoGSM). Specifically, we model multiscale subbands as a product of an exponentiated homogeneous Gaussian Markov random field (hGMRF) and a second independent hGMRF. We show that parameter estimation for this model is feasible and that samples drawn from a FoGSM model have marginal and joint statistics similar to those of the subband coefficients of photographic images. We develop an algorithm for removing additive white Gaussian noise based on the FoGSM model and demonstrate denoising performance comparable with state-of-the-art methods.
Multiscale keypoint detection using the dual-tree complex wavelet transform
In IEEE International Conference on Image Processing, 2006
Cited by 15 (3 self)
We show that the Dual-Tree Complex Wavelet Transform (DTCWT) [1] is a well-suited basis to detect salient keypoints in images as it is: directionally selective, smoothly shift invariant, optimally decimated at coarse scales, invertible (no loss of information) and fast to compute. It is therefore more suitable than the Discrete Wavelet Transform for content analysis and especially for fast and accurate keypoint detection. The DTCWT: The DTCWT decomposition of an n×n image results in a decimated dyadic decomposition into scales i = 1..m, where each scale is of dimension n/2^i × n/2^i. At each decimated location of each scale, we have a set S of 6 complex coefficients, denoted S = {ρ_1 e^{iθ_1}, ..., ρ_6 e^{iθ_6}}, corresponding to responses to the 6 subband orientations, namely: 15°, 45°, 75°, 105°, 135°, 165°. Determining the keypoint energies: The types of keypoint features we are interested in (blob, corner, junction) create energies in non-adjacent subbands, as distinct from edges, which create energies in adjacent subbands only. We propose the following energy measure to detect the presence of such keypoints: E(S) = ρ_1ρ_3 + ρ_1ρ_4 + ρ_1ρ_5 + ρ_2ρ_4 + ρ_2ρ_5 + ρ_2ρ_6 + ρ_3ρ_5 + ρ_3ρ_6 + ρ_4ρ_6. Note that all products within E(S) are between non-adjacent subband magnitudes ρ_i. Unlike Difference-of-Gaussians detectors (as in SIFT [2]), which rely on isotropic filtering, the directional filtering involved in the DTCWT allows us to
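The energy measure E(S) can be written directly in code. The loop below reproduces the nine products listed in the abstract; we read the absence of a ρ_1ρ_6 term as circular adjacency of the 15° and 165° orientations (orientation wraps around), which is our interpretation:

```python
def keypoint_energy(rho):
    """Keypoint energy over the 6 DTCWT subband magnitudes rho[0..5]:
    the sum of products of non-adjacent subband pairs only. Adjacent
    pairs (including the circular pair 15/165 degrees) respond to plain
    edges and are excluded."""
    assert len(rho) == 6
    e = 0.0
    for i in range(6):
        for j in range(i + 2, 6):
            if not (i == 0 and j == 5):  # 15 and 165 deg are circularly adjacent
                e += rho[i] * rho[j]
    return e

uniform = keypoint_energy([1.0] * 6)   # energy in all subbands: 9 products
edge = keypoint_energy([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])  # adjacent bands only
```

An ideal edge excites only adjacent orientations and scores zero, while blobs, corners and junctions excite non-adjacent subbands and score high.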
Wavelets on graphs via spectral graph theory
2009
Cited by 15 (0 self)
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian L. Given a wavelet generating kernel g and a scale parameter t, we define the scaled wavelet operator T_g^t = g(tL). The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on g, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing L. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
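The construction T_g^t = g(tL) can be illustrated on a tiny graph by diagonalizing L directly (the paper's Chebyshev approximation exists precisely to avoid this step on large graphs). The 4-vertex path graph, the kernel g, and the scale t below are our illustrative choices:

```python
import numpy as np

# Toy spectral graph wavelet on a 4-vertex path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

g = lambda x: x * np.exp(-x)          # a simple band-pass generating kernel
t = 1.0                               # scale parameter

# Scaled wavelet operator T_g^t = g(tL), via the spectral decomposition of L.
lam, U = np.linalg.eigh(L)
Tg = U @ np.diag(g(t * lam)) @ U.T

# Localize the operator by applying it to an indicator function at vertex 0.
delta0 = np.zeros(4)
delta0[0] = 1.0
psi = Tg @ delta0                     # the wavelet centered at vertex 0
```

Because g(0) = 0, the wavelet annihilates constant functions (it has zero mean on the graph); the admissibility condition on g is what makes the full transform invertible.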
Image Restoration Using Gaussian Scale Mixtures in the Wavelet Domain
In Proc. IEEE Int'l Conf. on Image Processing, 2003
Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization
2009
Cited by 14 (4 self)
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent non-Gaussian sources. Here, we examine a complementary case, in which the source is non-Gaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons.
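The RG idea can be sketched empirically: elliptical data are made Gaussian by histogram-matching the radial component to the radii of a reference Gaussian sample, leaving directions unchanged. The generating model (an exponential scalar multiplier), the sample size, and the kurtosis diagnostic below are our illustrative choices; for simplicity the synthetic source is already spherical, so no whitening step is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize a spherically symmetric, heavy-tailed source: a Gaussian
# vector scaled by a hidden positive multiplier (a scale mixture).
n, d = 20000, 2
z = rng.standard_normal((n, d))
s = rng.exponential(1.0, size=(n, 1))
x = s * z

# Radial gaussianization: replace each sample's radius by the radius of
# equal rank in a reference Gaussian sample (empirical chi quantiles),
# keeping each sample's direction.
r = np.linalg.norm(x, axis=1)
r_gauss = np.sort(np.linalg.norm(rng.standard_normal((n, d)), axis=1))
ranks = np.argsort(np.argsort(r))
x_rg = x * (r_gauss[ranks] / np.maximum(r, 1e-12))[:, None]

# Marginal kurtosis: far above the Gaussian value of 3 before RG,
# close to 3 afterwards.
kurt_before = np.mean(x[:, 0]**4) / np.mean(x[:, 0]**2)**2
kurt_after = np.mean(x_rg[:, 0]**4) / np.mean(x_rg[:, 0]**2)**2
```

Note that the correction is a per-sample divisive rescaling of the radius, which is why, as the abstract observes, RG is closely approximated by divisive normalization.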