Results 1–10 of 185
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
"... A fullrank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinato ..."
Abstract

Cited by 202 (31 self)
 Add to MetaCart
A full-rank matrix A ∈ IR^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical ...
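The "sparsest solution" question above is usually relaxed to l1 minimization (basis pursuit), which is a linear program. A minimal sketch assuming SciPy is available; this shows the standard relaxation, not any particular algorithm from the paper, and the function name is ours:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A x = b as a linear program.

    Splitting x = u - v with u, v >= 0 makes the objective linear:
    ||x||_1 = sum(u + v).
    """
    n, m = A.shape
    c = np.ones(2 * m)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])             # constraint: A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:m], res.x[m:]
    return u - v

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))
x0 = np.zeros(10)
x0[3] = 2.0                               # a 1-sparse ground truth
b = A @ x0
x = basis_pursuit(A, b)
```

The recovered x is feasible by construction and has l1 norm no larger than any other solution, which under the conditions the paper surveys means it coincides with the sparsest one.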
Spatially Adaptive Wavelet Thresholding with Context Modeling for Image Denoising
 IEEE Trans. Image Processing
, 2000
"... The method of wavelet thresholding for removing noise, or denoising, has been researched extensively due to its effectiveness and simplicity. Much of the literature has focused on developing the best uniform threshold or best basis selection. However, not much has been done to make the threshold val ..."
Abstract

Cited by 178 (2 self)
 Add to MetaCart
The method of wavelet thresholding for removing noise, or denoising, has been researched extensively due to its effectiveness and simplicity. Much of the literature has focused on developing the best uniform threshold or best basis selection. However, not much has been done to make the threshold values adaptive to the spatially changing statistics of images. Such adaptivity can improve the wavelet thresholding performance because it allows additional local information of the image (such as the identification of smooth or edge regions) to be incorporated into the algorithm. This work proposes a spatially adaptive wavelet thresholding method based on context modeling, a common technique used in image compression to adapt the coder to changing image characteristics. Each wavelet coefficient is modeled as a random variable of a generalized Gaussian distribution with an unknown parameter. Context modeling is used to estimate the parameter for each coefficient, which is then used to adapt the thresholding strategy. This spatially adaptive thresholding is extended to the overcomplete wavelet expansion, which yields better results than the orthogonal transform. Experimental results show that spatially adaptive wavelet thresholding yields significantly superior image quality and lower MSE than the best uniform thresholding with the original image assumed known.
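The core operation described above, a soft threshold whose value adapts to local statistics, can be sketched as follows. This is an illustrative stand-in for the paper's context-modeling estimator: the sliding-window variance estimate, the window size, and the BayesShrink-style threshold noise_var / sigma_x are our assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def soft_threshold(w, t):
    """Soft thresholding: shrink |w| toward zero by t, keeping the sign."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def spatially_adaptive_threshold(coeffs, noise_var, win=5):
    """Per-coefficient thresholds from a local variance estimate.

    A win x win average of squared coefficients estimates the local signal
    standard deviation sigma_x; each coefficient is then soft-thresholded
    with noise_var / sigma_x, which is large in smooth regions (small
    sigma_x) and small near edges (large sigma_x).
    """
    local_power = uniform_filter(coeffs ** 2, size=win)
    sigma_x = np.sqrt(np.maximum(local_power - noise_var, 1e-12))
    return soft_threshold(coeffs, noise_var / sigma_x)
```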
Bivariate Shrinkage Functions for Wavelet-Based Denoising Exploiting Interscale Dependency
, 2002
"... Most simple nonlinear thresholding rules for waveletbased denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. In this paper, we will only consider the dependencies between the coefficients and their parents i ..."
Abstract

Cited by 135 (4 self)
 Add to MetaCart
Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. In this paper, we consider in detail only the dependencies between coefficients and their parents. For this purpose, new non-Gaussian bivariate distributions are proposed, and corresponding nonlinear threshold functions (shrinkage functions) are derived from the models using Bayesian estimation theory. The new shrinkage functions do not assume the independence of wavelet coefficients. We present three image denoising examples to demonstrate the performance of these new bivariate shrinkage rules. In the second example, a simple subband-dependent data-driven image denoising system is described and compared with effective data-driven techniques in the literature, namely VisuShrink, SureShrink, BayesShrink, and hidden Markov models. In the third example, the same idea is applied to the dual-tree complex wavelet coefficients.
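For the parent-child case, the closed-form MAP rule derived from one such circularly symmetric bivariate prior is a radial soft threshold. A sketch of that rule (the function name is ours):

```python
import numpy as np

def bivariate_shrink(w1, w2, sigma_n, sigma):
    """Bivariate shrinkage of a coefficient w1 given its parent w2.

    Closed-form MAP estimate for the circularly symmetric non-Gaussian
    prior: with r = sqrt(w1^2 + w2^2),
        w1_hat = w1 * max(r - sqrt(3) * sigma_n^2 / sigma, 0) / r,
    where sigma_n is the noise standard deviation and sigma the marginal
    signal standard deviation.
    """
    r = np.sqrt(w1 ** 2 + w2 ** 2)
    factor = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return w1 * factor / np.maximum(r, 1e-12)
```

When the parent w2 is zero the rule reduces to ordinary soft thresholding of w1; a large parent magnitude keeps the child, which is exactly the interscale dependency the abstract exploits.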
Why simple shrinkage is still relevant for redundant representations
 IEEE Transactions on Information Theory
, 2006
"... Abstract—Shrinkage is a well known and appealing denoising technique, introduced originally by Donoho and Johnstone in 1994. The use of shrinkage for denoising is known to be optimal for Gaussian white noise, provided that the sparsity on the signal’s representation is enforced using a unitary trans ..."
Abstract

Cited by 87 (12 self)
 Add to MetaCart
Shrinkage is a well-known and appealing denoising technique, introduced originally by Donoho and Johnstone in 1994. The use of shrinkage for denoising is known to be optimal for Gaussian white noise, provided that the sparsity of the signal's representation is enforced using a unitary transform. Still, shrinkage is also practiced with non-unitary, and even redundant, representations, typically leading to very satisfactory results. In this correspondence we shed some light on this behavior. The main argument in this work is that such simple shrinkage can be interpreted as the first iteration of an algorithm that solves the basis pursuit denoising (BPDN) problem. While the desired solution of BPDN is hard to obtain in general, we develop a simple iterative procedure for the BPDN minimization that amounts to stepwise shrinkage. We demonstrate how the simple shrinkage emerges as the first iteration of this novel algorithm. Furthermore, we show how shrinkage can be iterated, turning into an effective algorithm that minimizes the BPDN objective via simple shrinkage steps, further strengthening the denoising effect. Index Terms: basis pursuit, denoising, frame, overcomplete, redundant, sparse representation, shrinkage, thresholding.
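The iterated-shrinkage idea can be sketched as a gradient step on the quadratic data term followed by soft thresholding. This is a generic ISTA-style sketch consistent with the description, not the correspondence's exact procedure; names are ours:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for the BPDN objective
        min_x 0.5 * ||y - A x||^2 + lam * ||x||_1.

    Each pass takes a gradient step on the quadratic term, then soft
    thresholds; started from x = 0, the very first pass is exactly the
    simple one-shot shrinkage the abstract discusses.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

With a unitary A the iteration converges in one step, recovering the classical Donoho–Johnstone setting; for redundant A the repeated shrinkage steps drive the objective down further.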
Bivariate Shrinkage with Local Variance Estimation
, 2002
"... The performance of imagedenoising algorithms using wavelet transforms can be improved significantly by taking into account the statistical dependencies among wavelet coefficients as demonstrated by several algorithms presented in the literature. In two earlier papers by the authors, a simple bivari ..."
Abstract

Cited by 74 (5 self)
 Add to MetaCart
The performance of image-denoising algorithms using wavelet transforms can be improved significantly by taking into account the statistical dependencies among wavelet coefficients, as demonstrated by several algorithms presented in the literature. In two earlier papers by the authors, a simple bivariate shrinkage rule is described using a coefficient and its parent. The performance can also be improved by estimating the parameters of simple models in a local neighborhood. This letter presents a locally adaptive denoising algorithm using the bivariate shrinkage function. The algorithm is illustrated using both the orthogonal and dual-tree complex wavelet transforms. Comparisons with the best available results are given to illustrate the effectiveness of the proposed algorithm.
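The local parameter estimates that drive such a rule can be sketched as follows: a robust noise estimate from the finest diagonal subband, and a windowed marginal variance estimate. The window size is our illustrative choice, not the letter's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def noise_std_mad(diag_coeffs):
    """Robust noise estimate from the finest diagonal subband:
    sigma_n = median(|coeff|) / 0.6745 (median absolute deviation)."""
    return np.median(np.abs(diag_coeffs)) / 0.6745

def local_marginal_std(coeffs, sigma_n, win=7):
    """Local signal standard deviation for a shrinkage rule: average the
    squared coefficients over a win x win neighborhood, subtract the
    noise variance, and floor at zero before the square root."""
    local_power = uniform_filter(coeffs ** 2, size=win)
    return np.sqrt(np.maximum(local_power - sigma_n ** 2, 0.0))
```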
Nonsubsampled Contourlet Transform: Filter Design and Applications in Denoising
"... In this paper we study the nonsubsampled contourlet transform. We address the corresponding filter design problem using the McClellan transformation. We show how zeroes can be imposed in the filters so that the iterated structure produces regular basis functions. The proposed design framework yields ..."
Abstract

Cited by 53 (4 self)
 Add to MetaCart
In this paper we study the nonsubsampled contourlet transform. We address the corresponding filter design problem using the McClellan transformation. We show how zeros can be imposed in the filters so that the iterated structure produces regular basis functions. The proposed design framework yields filters that can be implemented efficiently through a lifting factorization. We apply the constructed transform to image noise removal, where the results obtained are comparable to the state of the art, and superior in some cases.
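The McClellan transformation mentioned above lifts a 1-D zero-phase prototype, written in Chebyshev form, to a 2-D filter by substituting cos w with the response of a small 2-D kernel. A minimal spatial-domain sketch; the diamond kernel and helper names are our illustrative choices, not the paper's design:

```python
import numpy as np
from scipy.signal import convolve2d

def embed(small, shape):
    """Zero-pad `small` to `shape`, centered (all sizes odd)."""
    out = np.zeros(shape)
    r0 = (shape[0] - small.shape[0]) // 2
    c0 = (shape[1] - small.shape[1]) // 2
    out[r0:r0 + small.shape[0], c0:c0 + small.shape[1]] = small
    return out

def mcclellan_2d(a, t):
    """Map a 1-D zero-phase filter H(w) = sum_k a[k] T_k(cos w) to a 2-D
    zero-phase filter by substituting cos w with the response of kernel t,
    via the Chebyshev recursion T_{k+1} = 2 t * T_k - T_{k-1}
    (here * is 2-D convolution)."""
    n = len(a)
    delta = np.zeros_like(t)
    delta[t.shape[0] // 2, t.shape[1] // 2] = 1.0      # T_0: unit impulse
    full = ((n - 1) * (t.shape[0] - 1) + 1,
            (n - 1) * (t.shape[1] - 1) + 1)
    h = a[0] * embed(delta, full)
    if n > 1:
        h += a[1] * embed(t, full)
    Tprev, Tcur = delta, t
    for k in range(2, n):
        grown = (Tcur.shape[0] + t.shape[0] - 1,
                 Tcur.shape[1] + t.shape[1] - 1)
        Tprev, Tcur = Tcur, 2.0 * convolve2d(t, Tcur) - embed(Tprev, grown)
        h += a[k] * embed(Tcur, full)
    return h

# Classic diamond kernel: substitutes cos w -> 0.5 cos w1 + 0.5 cos w2
diamond = np.array([[0.0, 0.25, 0.0],
                    [0.25, 0.0, 0.25],
                    [0.0, 0.25, 0.0]])
h = mcclellan_2d([0.25, 0.5, 0.25], diamond)
```

Because the substitution preserves zero phase and DC gain, properties designed into the 1-D prototype (such as imposed zeros) carry over to the 2-D filter.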
Denoising by Sparse Approximation: Error Bounds Based on Rate-Distortion Theory
, 2006
"... If a signal x is known to have a sparse representation with respect to a frame, it can be estimated from a noisecorrupted observation y by finding the best sparse approximation to y. Removing noise in this manner depends on the frame efficiently representing the signal while it inefficiently repres ..."
Abstract

Cited by 30 (6 self)
 Add to MetaCart
If a signal x is known to have a sparse representation with respect to a frame, it can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. Removing noise in this manner depends on the frame efficiently representing the signal while inefficiently representing the noise. The mean-squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal are analyzed. First, an MSE bound is given that depends on a new bound on approximating a Gaussian signal as a linear combination of elements of an overcomplete dictionary. Further analyses treat dictionaries generated randomly according to a spherically symmetric distribution and signals expressible with single dictionary elements. Easily computed approximations for the probability of selecting the correct dictionary element and for the MSE are given. Asymptotic expressions reveal a critical input signal-to-noise ratio for signal recovery.
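The single-dictionary-element case analyzed above is simple to state in code: keep only the projection onto the atom most correlated with the observation. A sketch (function name ours):

```python
import numpy as np

def one_atom_denoise(y, D):
    """Best single-atom approximation of y over a dictionary D whose
    columns have unit norm: select the atom with the largest absolute
    correlation and keep only the projection onto it. Whether the
    selected index matches the true atom is exactly the event whose
    probability the paper analyzes."""
    corr = D.T @ y                      # correlation with each atom
    i = int(np.argmax(np.abs(corr)))
    return corr[i] * D[:, i], i
```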
Wavelet thresholding via MDL for natural images
 IEEE Transactions on Information Theory
, 2000
"... We study the application of Rissanen's Principle of Minimum Description Length (MDL) to the problem of wavelet denoising and compression for natural images. After making a connection between thresholding and model selection, we derive an MDL criterion based on a Laplacian model for noiseless wavele ..."
Abstract

Cited by 26 (1 self)
 Add to MetaCart
We study the application of Rissanen's Principle of Minimum Description Length (MDL) to the problem of wavelet denoising and compression for natural images. After making a connection between thresholding and model selection, we derive an MDL criterion based on a Laplacian model for noiseless wavelet coefficients. We find that this approach leads to an adaptive thresholding rule. While achieving mean squared error performance comparable with other popular thresholding schemes, the MDL procedure tends to keep far fewer coefficients. From this property, we demonstrate that our method is an excellent tool for simultaneous denoising and compression. We make this claim precise by analyzing MDL thresholding in two optimality frameworks: one in which we measure rate and distortion based on quantized coefficients, and one in which we do not quantize but instead record rate simply as the number of nonzero coefficients.
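The thresholding-as-model-selection idea can be illustrated with a two-part code length. Note this is a simplified Saito-style MDL rule chosen for brevity, not the Laplacian-model criterion the paper actually derives; the cost constants are our assumptions:

```python
import numpy as np

def mdl_keep(coeffs):
    """Choose how many wavelet coefficients to keep by minimizing a
    two-part description length: keeping the k largest of n coefficients
    costs (3/2) k log n for the model, plus (n/2) log(residual energy / n)
    for the discarded part. Returns the thresholded coefficients and k."""
    c = np.sort(np.abs(coeffs))[::-1]                      # magnitudes, descending
    n = len(c)
    tail = np.append(np.cumsum((c ** 2)[::-1])[::-1], 0.0) # tail[k] = sum_{i>=k} c[i]^2
    best_k, best_len = 0, np.inf
    for k in range(n):                                     # keep k of n, k < n
        code = 1.5 * k * np.log(n) + 0.5 * n * np.log(tail[k] / n)
        if code < best_len:
            best_k, best_len = k, code
    t = c[best_k - 1] if best_k > 0 else np.inf
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0), best_k
```

As the abstract notes for the real criterion, the model-cost term grows with every kept coefficient, so the minimizer tends to keep far fewer coefficients than MSE-driven rules.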
Wavelets on graphs via spectral graph theory
, 2009
"... We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian L. ..."
Abstract

Cited by 15 (0 self)
 Add to MetaCart
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian L. Given a wavelet generating kernel g and a scale parameter t, we define the scaled wavelet operator T_g^t = g(tL). The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on g, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing L. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
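The construction of a single spectral graph wavelet can be sketched directly. For clarity this diagonalizes L, which is exactly the step the paper's Chebyshev approximation avoids on large graphs; the kernel below is our illustrative choice:

```python
import numpy as np

def spectral_graph_wavelet(W, g, t, center):
    """Spectral graph wavelet at scale t centered on a vertex: apply
    g(tL), with L = D - W the combinatorial graph Laplacian of weight
    matrix W, to the indicator of `center`. Computed by diagonalizing L
    (small graphs only)."""
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)            # eigen-decomposition of L
    delta = np.zeros(W.shape[0])
    delta[center] = 1.0
    return U @ (g(t * lam) * (U.T @ delta))

# 4-vertex path graph; kernel g(x) = x * exp(-x) vanishes at x = 0
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
psi = spectral_graph_wavelet(W, lambda x: x * np.exp(-x), 1.0, 1)
```

Because g vanishes at the zero eigenvalue, the wavelet is orthogonal to the constant vector on a connected graph, i.e. it has zero mean, as a wavelet should.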
Image denoising with block-matching and 3D filtering
 in Electronic Imaging '06, Proc. SPIE 6064, no. 6064A30
, 2006
"... We present a novel approach to still image denoising based on effective filtering in 3D transform domain by combining slidingwindow transform processing with blockmatching. We process blocks within the image in a sliding manner and utilize the blockmatching concept by searching for blocks which a ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
We present a novel approach to still image denoising based on effective filtering in a 3D transform domain, combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks similar to the currently processed one. The matched blocks are stacked together to form a 3D array and, due to the similarity between them, the data in the array exhibit a high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and effectively attenuate the noise by shrinkage of the transform coefficients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in a sliding manner, the final estimate is computed as a weighted average of all overlapping block estimates. A fast and efficient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
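One grouping-and-collaborative-filtering step in the spirit of this approach can be sketched as follows. This is a toy sketch, not the full algorithm: it handles a single reference block, uses a 3D DCT with hard thresholding as the decorrelating transform, and omits the aggregation by weighted averaging; all parameter names are ours:

```python
import numpy as np
from scipy.fft import dctn, idctn

def filter_group(image, ref, bsize, search, thr, max_match=8):
    """Collect the blocks most similar (in SSD) to the reference block
    within a search window, stack them into a 3-D array, hard-threshold
    the 3-D DCT coefficients, and invert. Returns the filtered stack and
    the matched block positions."""
    r0, c0 = ref
    refblk = image[r0:r0 + bsize, c0:c0 + bsize]
    cands = []
    for r in range(max(0, r0 - search), min(image.shape[0] - bsize, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(image.shape[1] - bsize, c0 + search) + 1):
            blk = image[r:r + bsize, c:c + bsize]
            cands.append((np.sum((blk - refblk) ** 2), r, c))
    cands.sort(key=lambda s: s[0])                 # most similar first
    keep = cands[:max_match]
    group = np.stack([image[r:r + bsize, c:c + bsize] for _, r, c in keep])
    spec = dctn(group, norm="ortho")               # 3-D decorrelating transform
    spec[np.abs(spec) < thr] = 0.0                 # shrink small coefficients
    return idctn(spec, norm="ortho"), [(r, c) for _, r, c in keep]
```

Because similar blocks are stacked, most energy concentrates in a few 3-D coefficients, so thresholding removes noise while largely preserving the shared structure.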