Results 11–20 of 1,386

Analysis of Multiresolution Image Denoising Schemes Using Generalized Gaussian Priors
IEEE Trans. Info. Theory, 1998
Cited by 185 (9 self)
Abstract: In this paper, we investigate various connections between wavelet shrinkage methods in image processing and Bayesian estimation using Generalized Gaussian priors. We present fundamental properties of the shrinkage rules implied by Generalized Gaussian and other heavy-tailed priors. This allows us to show a simple relationship between the differentiability of the log-prior at zero and the sparsity of the estimates, as well as an equivalence between universal thresholding schemes and Bayesian estimation using a certain Generalized Gaussian prior.

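The prior-to-shrinkage connection is easiest to see in its two most familiar special cases: soft thresholding, which is the MAP rule under a Laplacian prior (a Generalized Gaussian with shape parameter 1), and the Donoho–Johnstone universal threshold. A minimal NumPy sketch (function names are ours, not the paper's):

```python
import numpy as np

def soft_threshold(w, t):
    # MAP shrinkage rule under a Laplacian prior (Generalized Gaussian
    # with shape parameter 1): pull every coefficient toward zero by t,
    # and set anything with magnitude below t exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def universal_threshold(sigma, n):
    # Donoho-Johnstone universal threshold for n coefficients
    # corrupted by Gaussian noise of standard deviation sigma.
    return sigma * np.sqrt(2.0 * np.log(n))
```

The non-differentiability of the Laplacian log-prior at zero is exactly what produces the dead zone, i.e. the sparsity of the estimates.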
An Image Multiresolution Representation for Lossless and Lossy Compression
IEEE Transactions on Image Processing, 1996
Cited by 183 (11 self)
Abstract: We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncation. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and at the same time the rate vs. distortion performance is comparable to that of the most efficient lossy compression methods.

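The simplest member of this family of reversible integer transforms is the one-level S-transform (integer Haar): averages via a right-shift and differences via subtraction, exactly invertible despite the truncation in the shift. The code below is our illustrative sketch, not the paper's actual transform:

```python
def s_transform(x):
    # One level of the S-transform: lowpass = floor((a + b) / 2)
    # computed with a shift, highpass = a - b. Integer ops only.
    lo = [(x[2 * i] + x[2 * i + 1]) >> 1 for i in range(len(x) // 2)]
    hi = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]
    return lo, hi

def s_inverse(lo, hi):
    # The bit lost in the shift is recoverable: b = lo - floor(hi / 2)
    # holds exactly for all integer inputs (Python's >> floors).
    x = []
    for l, h in zip(lo, hi):
        b = l - (h >> 1)
        x += [b + h, b]
    return x
```

Perfect reconstruction works because floor((a+b)/2) and floor((a-b)/2) lose the same low-order bit.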
Space-Frequency Quantization for Wavelet Image Coding
1997
Cited by 160 (15 self)
Abstract: Recently, a new class of image coding algorithms coupling standard scalar quantization of frequency coefficients with tree-structured quantization (related to spatial structures) has attracted wide attention because its good performance appears to confirm the promised efficiencies of hierarchical representation [1, 2]. This paper addresses the problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder. We consider zerotree quantization (zeroing out tree-structured sets of wavelet coefficients) and the simplest form of scalar quantization (a single common uniform scalar quantizer applied to all non-zeroed coefficients). We formalize the problem of optimizing their joint application and develop an image coding algorithm for solving the resulting optimization problem. Despite the basic form of the two quantizers considered, the resulting algorithm demonstrates coding performance that is competitive (often...

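The core of such a joint optimization is a per-tree Lagrangian decision: zero out a spatial tree of coefficients whenever that lowers the cost D + λR. A toy sketch under a deliberately crude rate model (one flag bit plus a fixed bit budget per surviving coefficient; the function name and rate model are ours, not the paper's):

```python
def prune_tree(coeffs, step, lam, bits_per_coeff=4.0):
    """Decide whether to zerotree a set of wavelet coefficients.

    Compares the Lagrangian cost D + lam * R of (a) zeroing the whole
    tree against (b) uniform scalar quantization with the given step.
    The rate model is an illustrative stand-in for a real entropy coder.
    """
    # Option (a): zero everything -> distortion is the tree's energy,
    # rate is roughly one flag bit.
    cost_zero = sum(c * c for c in coeffs) + lam * 1.0
    # Option (b): quantize each coefficient with the common uniform
    # scalar quantizer.
    dist_q = sum((c - round(c / step) * step) ** 2 for c in coeffs)
    cost_q = dist_q + lam * (1.0 + bits_per_coeff * len(coeffs))
    return ("zero", cost_zero) if cost_zero <= cost_q else ("quantize", cost_q)
```

Sweeping λ traces out the operational rate-distortion curve of the coder.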
Low-Complexity Image Denoising Based on Statistical Modeling of Wavelet Coefficients
1999
Cited by 158 (13 self)
Abstract: We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the Estimation-Quantization coder. We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on the wavelet coefficients' variances and estimate them using an approximate Maximum A Posteriori Probability rule. We then apply an approximate Minimum Mean Squared Error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in concept and implementation, our denoising results are among the best reported in the literature.

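Operationally, this kind of variance-estimate-then-restore pipeline amounts to a locally adaptive Wiener filter on the wavelet coefficients. A minimal sketch (the window size and the plain moment-based variance estimate are our simplifications of the paper's approximate MAP rule):

```python
import numpy as np

def adaptive_wiener(coeffs, noise_var, win=3):
    # For each noisy coefficient, estimate the local signal variance
    # from a small neighborhood, then apply the MMSE (Wiener)
    # shrinkage factor sig2 / (sig2 + noise_var).
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="reflect")
    out = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            block = padded[i:i + win, j:j + win]
            # Zero-mean model: local second moment minus the noise
            # variance estimates the signal variance (clipped at 0).
            sig2 = max(np.mean(block ** 2) - noise_var, 0.0)
            out[i, j] = sig2 / (sig2 + noise_var) * coeffs[i, j]
    return out
```

Coefficients in smooth regions (small local variance) are shrunk hard; coefficients near edges (large local variance) pass nearly unchanged.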
Image Coding Based on Mixture Modeling of Wavelet Coefficients and a Fast Estimation-Quantization Framework
1997
Cited by 156 (11 self)
Abstract: We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the space-frequency characterization of the wavelet image representation. Specifically, we model image wavelet coefficients as being drawn from an independent Generalized Gaussian distribution field, of fixed unknown shape for each subband, having zero mean and unknown slowly spatially-varying variances. Based on this model, we develop a powerful "on the fly" Estimation-Quantization (EQ) framework that consists of: (i) first finding the Maximum-Likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatial neighborhood contexts; and (ii) then applying an offline Rate-Distortion (R-D) optimized quantization/entropy-coding strategy, implemented as a fast lookup table, that is optimally matched to the derived variance estimates. A distinctive feature of o...

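The key point of step (i) is that the variance estimate uses only causal, already-quantized neighbors, so encoder and decoder derive the identical estimate without side information. A sketch (the four-neighbor causal set and the simple second-moment estimator are our simplifications):

```python
import numpy as np

def causal_variance(decoded, i, j, floor=1e-3):
    # Estimate the local variance at (i, j) from causal, already
    # decoded neighbors in raster-scan order: left, above-left,
    # above, above-right. Both encoder and decoder can compute this.
    nbrs = []
    if j > 0:
        nbrs.append(decoded[i, j - 1])
    if i > 0:
        nbrs.append(decoded[i - 1, j])
        if j > 0:
            nbrs.append(decoded[i - 1, j - 1])
        if j + 1 < decoded.shape[1]:
            nbrs.append(decoded[i - 1, j + 1])
    if not nbrs:
        return floor
    # Zero-mean model: the mean square of the neighbors is the
    # (clipped) variance estimate.
    return max(float(np.mean(np.square(nbrs))), floor)
```

The estimate then indexes the precomputed R-D-optimized quantizer table of step (ii).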
Distributed Source Coding for Sensor Networks
IEEE Signal Processing Magazine, 2004
Cited by 156 (2 self)
Abstract: In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. MIT Technology Review ranked wireless sensor networks that consist of many tiny, low-power and cheap wireless sensors as the number one emerging technology. Unlike PCs or the Internet, which are designed to support all types of applications, sensor networks are usually mission driven and application specific (be it detection of biological agents and toxic chemicals; environmental measurement of temperature, pressure and vibration; or real-time area video surveillance). Thus they must operate under a set of unique constraints and requirements. For example, in contrast to many other wireless devices (e.g., cellular phones, PDAs, and laptops), in which energy can be recharged from time to time, the energy provisioned for a wireless sensor node is not expected to be renewed throughout its mission. The limited amount of energy available to wireless sensors has a significant impact on all aspects of a wireless sensor network, from the amount of information that the node can process, to the volume of wireless communication it can carry across large distances. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies; it relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies for sensor networks is distributed source coding (DSC), which refers to the compression of outputs from multiple correlated sensors that do not communicate with each other (hence the term distributed coding) [1]–[4]. These sensors send their compressed outputs to a central point [e.g., the base station (BS)] for joint decoding.

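The textbook illustration of DSC is coset binning: instead of transmitting x, a sensor sends only the index of x's coset, and the joint decoder picks the coset member closest to the correlated side information y it already holds. A toy integer version (the modulus-3 binning and the correlation assumption |x − y| ≤ 1 are illustrative choices of ours):

```python
def dsc_encode(x, modulus=3):
    # Transmit only the coset (bin) index -- about log2(modulus) bits
    # instead of a full description of x.
    return x % modulus

def dsc_decode(coset, y, modulus=3):
    # Pick the member of the coset closest to the side information y.
    # Decoding is correct whenever |x - y| <= modulus // 2.
    d = (coset - y) % modulus
    if d > modulus // 2:
        d -= modulus
    return y + d
```

The rate saving comes entirely from the decoder's side information; the sensors never talk to each other.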
Bivariate Shrinkage Functions for Wavelet-Based Denoising Exploiting Interscale Dependency
2002
Cited by 155 (4 self)
Abstract: Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. In this paper, we consider in detail only the dependencies between the coefficients and their parents. For this purpose, new non-Gaussian bivariate distributions are proposed, and corresponding nonlinear threshold functions (shrinkage functions) are derived from the models using Bayesian estimation theory. The new shrinkage functions do not assume the independence of the wavelet coefficients. We present three image denoising examples to demonstrate the performance of these new bivariate shrinkage rules. In the second example, a simple subband-dependent, data-driven image denoising system is described and compared with effective data-driven techniques in the literature, namely VisuShrink, SureShrink, BayesShrink, and hidden Markov models. In the third example, the same idea is applied to the dual-tree complex wavelet coefficients.

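The best-known rule of this type has a simple closed form: the noisy coefficient w1 and its parent w2 are combined into a joint magnitude, and the threshold √3·σ_n²/σ is applied to that magnitude rather than to w1 alone. A NumPy sketch (variable names and the numerical guard are ours):

```python
import numpy as np

def bivariate_shrink(w1, w2, sigma_n, sigma):
    # Joint magnitude of the coefficient and its parent.
    r = np.sqrt(w1 ** 2 + w2 ** 2)
    # Threshold the joint magnitude: a strong parent protects a weak
    # child coefficient from being zeroed.
    factor = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return factor / np.maximum(r, 1e-12) * w1
```

Setting w2 = 0 recovers an ordinary univariate soft-threshold; a large |w2| reduces the effective shrinkage of w1, which is exactly the interscale dependency being exploited.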
Low-Complexity Video Coding for Receiver-Driven Layered Multicast
IEEE Journal on Selected Areas in Communications, 1997
Cited by 151 (4 self)
Abstract: In recent years, the "Internet Multicast Backbone," or MBone, has grown from a small research curiosity into a large-scale and widely used communications infrastructure. A driving force behind this growth was the development of multipoint audio, video, and shared-whiteboard conferencing applications. Because these real-time media are transmitted at a uniform rate to all of the receivers in the network, a source must either run at the bottleneck rate or overload portions of its multicast distribution tree. We overcome this limitation by moving the burden of rate adaptation from the source to the receivers with a scheme we call receiver-driven layered multicast, or RLM. In RLM, a source distributes a hierarchical signal by striping the different layers across multiple multicast groups, and receivers adjust their reception rate by simply joining and leaving multicast groups. In this paper, we describe a layered video compression algorithm which, when combined with RLM, provides a comprehensive solution for scalable multicast video transmission in heterogeneous networks. In addition to a layered representation, our coder has low complexity (admitting an efficient software implementation) and high loss resilience (admitting robust operation in loosely controlled environments like the Internet). Even with these constraints, our hybrid DCT/wavelet-based coder exhibits good compression performance. It outperforms all publicly available Internet video codecs while maintaining comparable run-time performance. We have implemented our coder in a "real" application: the UCB/LBL videoconferencing tool vic. Unlike previous work on layered video compression and transmission, we have built a fully operational system that is currently being deployed on a very large scale over the MBone.

Data Compression and Harmonic Analysis
IEEE Trans. Inform. Theory, 1998
Cited by 149 (24 self)
Abstract: In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back, of course, to Shannon's R(D) theory...

Image Decomposition via the Combination of Sparse Representations and a Variational Approach
IEEE Transactions on Image Processing, 2004
Cited by 144 (28 self)
Abstract: The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, restoration, and more. In recent years, several pioneering works have suggested such a separation based on a variational formulation, and others using independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise-smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the Basis Pursuit Denoising (BPDN) algorithm and the Total-Variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures and the other for the natural scene parts, assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two augmented dictionaries leads to the desired separation, along with noise removal as a by-product. As the need to choose proper dictionaries is generally hard, a TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed in our model, and show several experimental results that validate the algorithm's performance.
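The two-dictionary BPDN idea can be demonstrated in one dimension with alternating soft-thresholding (block coordinate descent on the BPDN objective) over two orthonormal dictionaries: the identity for spikes (a stand-in for the cartoon part) and the DCT for oscillatory texture. This toy sketch omits the paper's TV term and is our construction, not the authors' numerical scheme:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix (rows = frequencies).
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (m + 0.5) * k / n)
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def separate(y, lam=0.5, iters=50):
    # Alternating shrinkage for  min 0.5||y - a1 - C^T a2||^2
    #                              + lam (||a1||_1 + ||a2||_1),
    # where a1 lives in the identity dictionary (spikes) and a2 in the
    # DCT dictionary (texture). For orthonormal dictionaries each
    # block update is an exact soft-thresholding prox step.
    n = len(y)
    C = dct_matrix(n)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    a_spike = np.zeros(n)
    a_dct = np.zeros(n)
    for _ in range(iters):
        a_spike = soft(y - C.T @ a_dct, lam)
        a_dct = soft(C @ (y - a_spike), lam)
    return a_spike, a_dct
```

The separated components are x_cartoon = a_spike and x_texture = C.T @ a_dct; because each dictionary sparsifies only its own content type, the ℓ1 penalty routes spikes to the identity atoms and oscillations to the DCT atoms.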