Results 11-20 of 946
Analysis of Multiresolution Image Denoising Schemes Using Generalized Gaussian Priors
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 176 (9 self)
In this paper, we investigate various connections between wavelet shrinkage methods in image processing and Bayesian estimation using Generalized Gaussian priors. We present fundamental properties of the shrinkage rules implied by Generalized Gaussian and other heavy-tailed priors. This allows us to show a simple relationship between the differentiability of the log-prior at zero and the sparsity of the estimates, as well as an equivalence between universal thresholding schemes and Bayesian estimation using a certain Generalized Gaussian prior.
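The thresholding/Bayes connection described in this abstract can be seen in its simplest case: under a Laplacian prior (a Generalized Gaussian with shape parameter 1), the MAP estimate of a coefficient observed in Gaussian noise is the familiar soft-thresholding rule. A minimal sketch (function name and example values are illustrative):

```python
import numpy as np

def soft_threshold(y, t):
    """Soft-thresholding shrinkage: the MAP estimate of a coefficient
    observed in Gaussian noise under a Laplacian prior (a Generalized
    Gaussian with shape parameter p = 1).  The non-differentiability of
    the log-prior at zero is what makes small estimates exactly zero."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

noisy = np.array([-3.0, -0.4, 0.1, 0.9, 2.5])
print(soft_threshold(noisy, 1.0))   # coefficients below the threshold are zeroed
```

With threshold 1.0, only the two large coefficients survive (shrunk toward zero by the threshold), illustrating the sparsity the abstract ties to the log-prior's behavior at zero.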
An Image Multiresolution Representation for Lossless and Lossy Compression
 IEEE Trans. Image Processing
, 1996
Abstract

Cited by 173 (11 self)
We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncation. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and at the same time the rate vs. distortion performance is comparable to that of the most efficient lossy compression methods.
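The abstract does not spell out the transform, but the classic example of a reversible averaging/differencing step computed with only integer addition and a bit-shift is the S-transform step, sketched below (this illustrates the kind of computation described, not necessarily the paper's exact transform):

```python
def s_forward(a, b):
    """One reversible averaging/differencing step using only integer
    addition and a bit-shift (the classic S-transform step)."""
    d = a - b            # detail (difference) coefficient
    s = b + (d >> 1)     # smooth coefficient, equals floor((a + b) / 2)
    return s, d

def s_inverse(s, d):
    """Exact inverse: lossless reconstruction from integer coefficients."""
    b = s - (d >> 1)
    return b + d, b

# Reversibility holds for any integer pixel pair
assert s_inverse(*s_forward(97, 64)) == (97, 64)
assert s_inverse(*s_forward(0, 255)) == (0, 255)
```

Because `>>` is a floor shift, the truncation in the forward step is exactly undone by the inverse, which is what makes the transform lossless despite the rounding.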
Image Coding Based on Mixture Modeling of Wavelet Coefficients and a Fast Estimation-Quantization Framework
, 1997
Abstract

Cited by 152 (11 self)
We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the space-frequency characterization of the wavelet image representation. Specifically, we model image wavelet coefficients as being drawn from an independent Generalized Gaussian distribution field, of fixed unknown shape for each subband, having zero mean and unknown slowly spatially-varying variances. Based on this model, we develop a powerful "on the fly" Estimation-Quantization (EQ) framework that consists of: (i) first finding the Maximum-Likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatial neighborhood contexts; and (ii) then applying an offline Rate-Distortion (RD) optimized quantization/entropy-coding strategy, implemented as a fast lookup table, that is optimally matched to the derived variance estimates. A distinctive feature of o...
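A toy rendition of the causal estimate-then-quantize loop in steps (i)-(ii) above (the coder's RD-optimized lookup tables and entropy coding are replaced here by a single step proportional to the estimated standard deviation; all names and constants are illustrative):

```python
import numpy as np

def eq_pass(coeffs, step_per_sigma=0.5, eps=1e-3):
    """Toy estimation-quantization pass over one subband: each local
    variance is ML-estimated from the already-quantized causal neighbors
    (mean of their squares), and the coefficient is then quantized with
    a step size matched to that estimate."""
    h, w = coeffs.shape
    recon = np.array(coeffs)   # border row/columns left unquantized for simplicity
    for i in range(1, h):
        for j in range(1, w - 1):
            causal = [recon[i, j - 1], recon[i - 1, j - 1],
                      recon[i - 1, j], recon[i - 1, j + 1]]
            sigma = max(np.sqrt(np.mean(np.square(causal))), eps)
            step = step_per_sigma * sigma
            recon[i, j] = step * np.round(coeffs[i, j] / step)
    return recon

rng = np.random.default_rng(0)
band = rng.normal(0.0, 2.0, size=(8, 8))
recon = eq_pass(band)
```

The key property, mirrored from the abstract: active regions (large causal neighbors) get a coarse step, quiet regions a fine one, without any side information, because the decoder can repeat the same causal estimates.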
Space-Frequency Quantization for Wavelet Image Coding
, 1997
Abstract

Cited by 152 (15 self)
Recently, a new class of image coding algorithms coupling standard scalar quantization of frequency coefficients with tree-structured quantization (related to spatial structures) has attracted wide attention, because its good performance appears to confirm the promised efficiencies of hierarchical representation [1, 2]. This paper addresses the problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder. We consider zerotree quantization (zeroing out tree-structured sets of wavelet coefficients) and the simplest form of scalar quantization (a single common uniform scalar quantizer applied to all non-zeroed coefficients); we formalize the problem of optimizing their joint application and develop an image coding algorithm for solving the resulting optimization problem. Despite the basic form of the two quantizers considered, the resulting algorithm demonstrates coding performance that is competitive (often...
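The joint-optimization idea can be sketched as a Lagrangian decision per wavelet subtree: either zero the whole subtree (the zerotree mode) or scalar-quantize the root and recurse on its children. The constants below (quantizer step, per-coefficient rate, trade-off λ) are illustrative placeholders, not values from the paper:

```python
STEP = 1.0     # the single common uniform scalar quantizer step
BITS = 4.0     # assumed rate (bits) to code one non-zeroed coefficient
LAMB = 0.5     # Lagrangian rate-distortion trade-off

def subtree_energy(tree):
    """Distortion incurred if the whole subtree is zeroed out."""
    coeff, children = tree
    return coeff ** 2 + sum(subtree_energy(c) for c in children)

def best_cost(tree):
    """Return (Lagrangian cost D + LAMB * R, pruned tree); a pruned
    subtree is marked by None (the zerotree symbol)."""
    coeff, children = tree
    zero_cost = subtree_energy(tree)
    q = STEP * round(coeff / STEP)
    keep_cost = (coeff - q) ** 2 + LAMB * BITS
    kept_children = []
    for c in children:
        c_cost, c_tree = best_cost(c)
        keep_cost += c_cost
        kept_children.append(c_tree)
    if zero_cost <= keep_cost:
        return zero_cost, None
    return keep_cost, (q, kept_children)

# A strong root with weak descendants: the root is kept, the subtrees zeroed
cost, pruned = best_cost((10.0, [(0.1, []), (0.2, [])]))
```

The bottom-up recursion makes each zerotree-vs-quantize choice optimal for the given λ, which is the flavor of joint optimization the abstract formalizes.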
Low-Complexity Image Denoising Based on Statistical Modeling of Wavelet Coefficients
, 1999
Abstract

Cited by 151 (13 self)
We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the Estimation-Quantization coder. We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on the wavelet coefficient variances and estimate them using an approximate Maximum A Posteriori Probability rule. Then we apply an approximate Minimum Mean Squared Error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in its concept and implementation, our denoising results are among the best reported in the literature.
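The estimate-variance-then-MMSE pipeline can be caricatured with a local Wiener rule: estimate each coefficient's signal variance from a local window by moment matching, then shrink by the MMSE gain. The window size, edge padding, and plain moment matching below are simplifications of the paper's approximate MAP estimator, not its exact form:

```python
import numpy as np

def local_mean(x, win=3):
    """Mean over a win x win neighborhood (edge-padded)."""
    pad = win // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.zeros_like(x)
    for di in range(win):
        for dj in range(win):
            out += xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out / (win * win)

def denoise(noisy, noise_var, win=3):
    """Estimate the local signal variance, then apply the per-coefficient
    MMSE (Wiener) shrinkage  s2 / (s2 + noise_var) * y."""
    s2 = np.maximum(local_mean(noisy * noisy, win) - noise_var, 0.0)
    return s2 / (s2 + noise_var) * noisy

rng = np.random.default_rng(0)
clean = np.zeros((16, 16))
clean[8, 8] = 10.0                       # one strong wavelet coefficient
noisy = clean + rng.normal(0.0, 1.0, clean.shape)
out = denoise(noisy, noise_var=1.0)
```

Coefficients in quiet neighborhoods get a gain near zero while the strong coefficient's gain is near one, which is the spatial adaptivity the abstract credits for its denoising quality.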
Low-Complexity Video Coding for Receiver-Driven Layered Multicast
 IEEE Journal on Selected Areas in Communications
, 1997
Abstract

Cited by 144 (4 self)
In recent years, the "Internet Multicast Backbone," or MBone, has risen from a small research curiosity to a large-scale and widely used communications infrastructure. A driving force behind this growth was the development of multipoint audio, video, and shared-whiteboard conferencing applications. Because these real-time media are transmitted at a uniform rate to all of the receivers in the network, a source must either run at the bottleneck rate or overload portions of its multicast distribution tree. We overcome this limitation by moving the burden of rate adaptation from the source to the receivers with a scheme we call receiver-driven layered multicast, or RLM. In RLM, a source distributes a hierarchical signal by striping the different layers across multiple multicast groups, and receivers adjust their reception rate by simply joining and leaving multicast groups. In this paper, we describe a layered video compression algorithm which, when combined with RLM, provides a comprehensive solution for scalable multicast video transmission in heterogeneous networks. In addition to a layered representation, our coder has low complexity (admitting an efficient software implementation) and high loss resilience (admitting robust operation in loosely controlled environments like the Internet). Even with these constraints, our hybrid DCT/wavelet-based coder exhibits good compression performance. It outperforms all publicly available Internet video codecs while maintaining comparable run-time performance. We have implemented our coder in a "real" application, the UCB/LBL videoconferencing tool vic. Unlike previous work on layered video compression and transmission, we have built a fully operational system that is currently being deployed on a very large scale over the MBone.
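The receiver-side adaptation described above (join or leave multicast groups to tune reception rate) can be sketched as a tiny decision function; the loss thresholds below are illustrative, not values from the paper:

```python
def adapt(subscribed, loss_rate, saw_spare_capacity, max_layers):
    """Toy RLM receiver decision: drop the top enhancement layer on
    persistent loss; try the next layer (a join-experiment) when the
    path looks underloaded.  Thresholds are illustrative assumptions."""
    if loss_rate > 0.10 and subscribed > 1:
        return subscribed - 1      # leave the highest multicast group
    if loss_rate < 0.01 and saw_spare_capacity and subscribed < max_layers:
        return subscribed + 1      # join the next group (join-experiment)
    return subscribed

# congested path: shed a layer; clean path with headroom: probe one layer up
assert adapt(3, 0.20, False, 5) == 2
assert adapt(3, 0.00, True, 5) == 4
```

Because each receiver runs this loop independently, heterogeneous downstream links settle at different layer counts without any feedback to the source, which is the core idea of RLM.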
Data compression and harmonic analysis
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 142 (24 self)
In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon’s R(D) theory...
Statistical Models for Images: Compression, Restoration and Synthesis
 In 31st Asilomar Conf on Signals, Systems and Computers
, 1997
Abstract

Cited by 138 (33 self)
In this paper, we examine the problem of decomposing digitized images, through linear and/or nonlinear transformations, into statistically independent components. The classical approach to such a problem is Principal Components Analysis (PCA), also known as the Karhunen-Loève (KL) or Hotelling transform. This is a linear transform that removes second-order dependencies between input pixels. The most well-known description of image statistics is that their power spectra take the form of a power law [e.g., 20, 11, 24]. Coupled with a constraint of translation invariance, this suggests that the Fourier transform is an appropriate PCA representation. Fourier and related representations are widely used in image processing applications.
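The second-order decorrelation that PCA performs can be shown in a few lines: projecting correlated data onto the eigenvectors of its covariance matrix yields components with (numerically) zero cross-covariance. The mixing matrix below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
# 2-D data with second-order (covariance) dependencies between components
x = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.2, 0.5]])
x -= x.mean(axis=0)

# PCA: project onto the eigenvectors of the covariance matrix
cov = x.T @ x / len(x)
_, eigvecs = np.linalg.eigh(cov)
y = x @ eigvecs

# the transformed components are decorrelated (off-diagonal covariance ~ 0)
cov_y = y.T @ y / len(y)
```

Note that this removes only second-order dependencies; the higher-order statistics of natural images, which motivate the nonlinear decompositions the paper goes on to discuss, survive the transform.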
Bivariate Shrinkage Functions for Wavelet-Based Denoising Exploiting Interscale Dependency
, 2002
Abstract

Cited by 138 (4 self)
Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. In this paper, we consider in detail only the dependencies between the coefficients and their parents. For this purpose, new non-Gaussian bivariate distributions are proposed, and corresponding nonlinear threshold functions (shrinkage functions) are derived from the models using Bayesian estimation theory. The new shrinkage functions do not assume the independence of wavelet coefficients. We present three image denoising examples to demonstrate the performance of these new bivariate shrinkage rules. In the second example, a simple subband-dependent data-driven image denoising system is described and compared with effective data-driven techniques in the literature, namely VisuShrink, SureShrink, BayesShrink, and hidden Markov models. In the third example, the same idea is applied to the dual-tree complex wavelet coefficients.
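The abstract does not state the shrinkage rule itself; the widely quoted closed form from this line of work shrinks a coefficient by the joint magnitude of the coefficient and its parent. It is reproduced here from memory, so treat the exact constant (the square root of 3) and the parameter names as assumptions:

```python
import numpy as np

def bivariate_shrink(y, y_parent, noise_var, sigma):
    """Bivariate MAP shrinkage of a coefficient given its parent, for a
    circularly symmetric heavy-tailed parent-child prior.  The closed
    form (including its sqrt(3) constant) is quoted from memory, so
    treat it as an assumption rather than the paper's exact rule."""
    r = np.sqrt(y ** 2 + y_parent ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * noise_var / sigma, 0.0) / np.maximum(r, 1e-12)
    return gain * y

# a small coefficient with a strong parent survives the threshold,
# while the same coefficient with a weak parent is zeroed
print(bivariate_shrink(0.5, 5.0, 1.0, 2.0))
print(bivariate_shrink(0.5, 0.0, 1.0, 2.0))
```

This is exactly the interscale dependency the abstract exploits: whether a coefficient is kept depends on evidence from its parent, not on its own magnitude alone.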
Bayesian Tree-Structured Image Modeling Using Wavelet-Domain Hidden Markov Models
 IEEE Trans. Image Processing
, 1999
Abstract

Cited by 131 (15 self)
Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training to fit an HMT model to a given data set (using the Expectation-Maximization algorithm, for example). In this paper, we greatly simplify the HMT model by exploiting the inherent self-similarity of real-world images. This simplified model specifies the HMT parameters with just nine meta-parameters (independent of the size of the image and the number of wavelet scales). We also introduce a Bayesian universal HMT (uHMT) that fixes these nine parameters. The uHMT requires no training of any kind. While extremely simple, we show, using a series of image estimation/denoising experiments, that these two new models retain nearly all of the key structure modeled by the full HMT. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms other wavelet-based estimators in the current literature, in both mean-square error and visual metrics.
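The structure an HMT captures can be sketched generatively: each coefficient has a hidden "small"/"large" state, the state propagates from parent to child down the tree, and the coefficient is drawn from a zero-mean Gaussian whose variance depends on the state. The transition matrix and per-state standard deviations below are illustrative values, not the nine meta-parameters of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hidden states per coefficient (0 = "small", 1 = "large").
trans = np.array([[0.9, 0.1],     # parent small -> child mostly small
                  [0.3, 0.7]])    # parent large -> child often large
stds = np.array([0.2, 2.0])       # zero-mean Gaussian std per state

def sample_subtree(parent_state, depth):
    """Sample coefficients from a toy binary hidden Markov tree: the
    hidden state persists down the tree, producing the clustered and
    persistent large coefficients typical of real-world images."""
    state = rng.choice(2, p=trans[parent_state])
    coeffs = [rng.normal(0.0, stds[state])]
    if depth > 0:
        for _ in range(2):                    # two children per node
            coeffs += sample_subtree(state, depth - 1)
    return coeffs

coeffs = sample_subtree(parent_state=1, depth=6)   # 2**7 - 1 coefficients
```

The parent-to-child state persistence is what lets a handful of scale-independent parameters describe every level of the tree, which is the self-similarity the simplified model exploits.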