Results 1–10 of 113
Image Coding based on Mixture Modeling of Wavelet Coefficients and a Fast Estimation-Quantization Framework
, 1997
Abstract

Cited by 152 (11 self)
We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the space-frequency characterization of the wavelet image representation. Specifically, we model image wavelet coefficients as being drawn from an independent Generalized Gaussian distribution field, of fixed unknown shape for each subband, having zero mean and unknown slowly spatially-varying variances. Based on this model, we develop a powerful "on the fly" Estimation-Quantization (EQ) framework that consists of: (i) first finding the Maximum-Likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatial neighborhood contexts; and (ii) then applying an off-line Rate-Distortion (R-D) optimized quantization/entropy coding strategy, implemented as a fast lookup table, that is optimally matched to the derived variance estimates. A distinctive feature of o...
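To make step (i) of the abstract concrete, here is a minimal sketch in Python of estimating each coefficient's variance from a causal, already-coded neighborhood and matching a quantizer step to it. The three-neighbor context, the step-size rule, and all names are illustrative assumptions, not the paper's R-D optimized tables.

```python
import numpy as np

def eq_step_sizes(coeffs, base_step=1.0):
    """Estimate a local variance for each wavelet coefficient from a causal
    three-neighbor context (left, up, up-left) and derive a variance-matched
    quantizer step. Illustrative sketch only."""
    h, w = coeffs.shape
    var = np.ones_like(coeffs)
    steps = np.empty_like(coeffs)
    for i in range(h):
        for j in range(w):
            # causal context: neighbors already visited in raster order
            ctx = []
            if j > 0:
                ctx.append(coeffs[i, j - 1])
            if i > 0:
                ctx.append(coeffs[i - 1, j])
            if i > 0 and j > 0:
                ctx.append(coeffs[i - 1, j - 1])
            if ctx:
                # ML variance estimate for a zero-mean field: mean of squares
                var[i, j] = max(np.mean(np.square(ctx)), 1e-6)
            # toy matching rule: step grows with local standard deviation
            steps[i, j] = base_step * np.sqrt(var[i, j])
    return var, steps

# demo on a tiny synthetic subband
subband = np.array([[ 4.0, -3.5,  0.5],
                    [ 2.0,  0.2, -0.1],
                    [-0.3,  0.1,  0.05]])
variances, steps = eq_step_sizes(subband)
```

In the actual EQ coder the estimated variances index precomputed R-D optimized quantizer/entropy-coder tables; the square-root rule above only conveys that larger local variance warrants a larger step.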
Low Bit-Rate, Scalable Video Coding with 3-D Set Partitioning in Hierarchical Trees (3-D SPIHT)
, 2000
Abstract

Cited by 115 (18 self)
In this paper, we propose a low bit-rate embedded video coding scheme that utilizes a three-dimensional (3-D) extension of the set partitioning in hierarchical trees (SPIHT) algorithm, which has proved so successful in still image coding. Three-dimensional spatio-temporal orientation trees coupled with powerful SPIHT sorting and refinement render the 3-D SPIHT video coder so efficient that it provides performance comparable to H.263, both objectively and subjectively, when operated at bit rates of 30 to 60 kilobits per second with minimal system complexity. Extension to color-embedded video coding is accomplished without explicit bit allocation, and can be used for any color plane representation. In addition to being rate scalable, the proposed video coder allows multiresolutional scalability in encoding and decoding in both time and space from one bitstream. This added functionality along with many desirable attributes, such as full embeddedness for progressive transmission, precise ...
Distributed compressed sensing
, 2005
Abstract

Cited by 84 (21 self)
Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.
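As a single-signal baseline for the kind of recovery the DCS algorithms build on, the sketch below reconstructs a sparse vector from a few random projections with Orthogonal Matching Pursuit. OMP is a standard greedy solver and stands in here for the paper's joint-recovery algorithms, which operate on whole signal ensembles; the sizes, seed, and support are arbitrary.

```python
import numpy as np

def omp(Phi, y, k, max_iter=None):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    y = Phi @ x by repeatedly selecting the column of Phi most correlated
    with the residual and re-fitting by least squares."""
    m, n = Phi.shape
    residual = y.astype(float).copy()
    support = []
    sol = np.zeros(0)
    x_hat = np.zeros(n)
    for _ in range(max_iter or 2 * k):
        if np.linalg.norm(residual) < 1e-10:
            break
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares re-fit of y on the selected columns
        sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ sol
    x_hat[support] = sol
    return x_hat

# demo: 3-sparse signal of length 64, recovered from 32 random projections
rng = np.random.default_rng(0)
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
x = np.zeros(64)
x[[5, 20, 40]] = [1.5, -2.0, 0.7]
x_rec = omp(Phi, Phi @ x, k=3)
```

The measurement count (32 of 64 entries) is deliberately generous so that greedy recovery succeeds; the paper's contribution is precisely to lower such per-sensor counts by exploiting joint sparsity across signals.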
A new compressive imaging camera architecture using optical-domain compression
 in Proc. of Computational Imaging IV at SPIE Electronic Imaging
, 2006
Abstract

Cited by 69 (6 self)
Compressive Sensing is an emerging field based on the revelation that a small number of linear projections of a compressible signal contain enough information for reconstruction and processing. It has many promising implications and enables the design of new kinds of Compressive Imaging systems and cameras. In this paper, we develop a new camera architecture that employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudorandom binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while sampling the image fewer times than the number of pixels. Other attractive properties include its universality, robustness, scalability, progressivity, and computational asymmetry. The most intriguing feature of the system is that, since it relies on a single photon detector, it can be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers.
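A toy simulation of the acquisition step described above, assuming nothing beyond the abstract: each measurement is the inner product of the (flattened) scene with one pseudorandom binary pattern, recorded by a single detector. Sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each measurement: the DMD displays one pseudorandom binary pattern and the
# single photodetector records the inner product of the scene with it.
n_pixels = 16 * 16                      # image resolution (flattened)
n_measurements = 64                     # far fewer samples than pixels
scene = rng.random(n_pixels)            # stand-in for the imaged scene
patterns = rng.integers(0, 2, size=(n_measurements, n_pixels)).astype(float)

measurements = patterns @ scene         # one scalar per displayed pattern
```

The actual camera then hands these measurements to a compressive-sensing reconstruction solver; the point of the sketch is only that a single detection element plus many patterns replaces a full pixel array.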
Multiple Description Wavelet Based Image Coding
, 1998
Abstract

Cited by 64 (7 self)
We consider the problem of coding images for transmission over error-prone channels. The impairments we target are transient channel shutdowns, as would occur in a packet network when a packet is lost, or in a wireless system during a deep fade: when data is delivered it is assumed to be error-free, but some of the data may never reach the receiver. The proposed algorithms are based on a combination of multiple description scalar quantizers with techniques successfully applied to the construction of some of the most efficient subband coders. A given image is encoded into multiple independent packets of roughly equal length. When packets are lost, the quality of the approximation computed at the receiver depends only on the number of packets received, but does not depend on exactly which packets are actually received. When compared with previously reported results on the performance of robust image coders based on multiple descriptions, on standard test images, our coders attain s...
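The "quality depends only on the number of packets received" property can be illustrated with a toy two-description coder: each packet carries half of the coefficients finely quantized plus the other half coarsely quantized, so either packet alone still yields a complete, coarser reconstruction. The even/odd split and the quantizer steps are illustrative stand-ins for the paper's multiple description scalar quantizers.

```python
import numpy as np

def two_descriptions(coeffs, fine=0.1, coarse=1.0):
    """Toy two-channel multiple-description coder: each description holds
    half the coefficients at a fine step and the other half at a coarse
    step, so no single packet loss removes any coefficient entirely."""
    q = lambda v, step: np.round(v / step) * step
    d1 = (q(coeffs[0::2], fine), q(coeffs[1::2], coarse))  # packet 1
    d2 = (q(coeffs[1::2], fine), q(coeffs[0::2], coarse))  # packet 2
    return d1, d2

def reconstruct(d1=None, d2=None):
    """Use fine values wherever available, else fall back to coarse ones."""
    even = d1[0] if d1 else d2[1]
    odd = d2[0] if d2 else d1[1]
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

# demo: both packets arrive vs. packet 1 lost
x = np.linspace(-1.0, 1.0, 64)          # stand-in coefficient vector
d1, d2 = two_descriptions(x)
full = reconstruct(d1, d2)              # both packets arrive
half = reconstruct(d2=d2)               # packet 1 lost: coarser but complete
```

Note what the sketch reproduces: losing either packet degrades quality uniformly, and which packet is lost does not matter.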
A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of JPEG 2000
, 2001
Abstract

Cited by 63 (0 self)
The JPEG committee has recently released its new image coding standard, JPEG 2000, which will serve as a supplement for the original JPEG standard introduced in 1992. Rather than incrementally improving on the original standard, JPEG 2000 implements an entirely new way of compressing images based on the wavelet transform, in contrast to the discrete cosine transform (DCT) used in the original JPEG standard. The significant change in coding methods between the two standards leads one to ask: What prompted the JPEG committee to adopt such a dramatic change? The answer to this question comes from considering the state of image coding at the time the original JPEG standard was being formed. At that time wavelet analysis and wavelet coding were still
Wavelet Packet Image Coding Using Space-Frequency Quantization
 IEEE Trans. Image Processing
, 1998
Abstract

Cited by 60 (5 self)
We extend our previous work on space-frequency quantization (SFQ) [1] for image coding from wavelet transforms to the more general wavelet packet transforms [2]. The resulting wavelet packet coder offers a universal transform coding framework within the constraints of filter bank structures by allowing joint transform and quantizer design without assuming a priori statistics of the input image. In other words, the new coder adaptively chooses the representation to suit the image and the quantization to suit the representation. Experimental results show that, for some image classes, our new coder gives excellent coding performance.

1 Introduction

Recently, wavelet transforms have attracted considerable attention, especially with applications to image coding, due to their ability to provide attractive space-frequency resolution tradeoffs for natural images [3, 4]. In addition to conventional scalar (or vector) quantization strategies that are common in subband coding [5], the hierarchi...
Fast adaptive wavelet packet image compression
 IEEE Transactions on Image Processing
, 2000
Abstract

Cited by 46 (18 self)
Abstract—Wavelets are ill-suited to represent oscillatory patterns: rapid variations of intensity can only be described by the small-scale wavelet coefficients, which are often quantized to zero, even at high bit rates. Our goal in this paper is to provide a fast numerical implementation of the best wavelet packet algorithm [1] in order to demonstrate that an advantage can be gained by constructing a basis adapted to a target image. Emphasis in this paper has been placed on developing algorithms that are computationally efficient. We developed a new fast two-dimensional (2-D) convolution-decimation algorithm with factorized nonseparable 2-D filters. The algorithm is four times faster than a standard convolution-decimation. An extensive evaluation of the algorithm was performed on a large class of textured images. Because of its ability to reproduce textures so well, the wavelet packet coder significantly outperforms one of the best wavelet coders [2] on images such as Barbara and fingerprints, both visually and in terms of PSNR. Index Terms—Adaptive transform, best basis, image compression, ladder structure, wavelet packet.
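The best-basis search this paper accelerates can be sketched in a few lines: at each node of the wavelet packet tree, compare the cost of keeping a band whole against the total cost of splitting it further. In the sketch below, an additive l1 sparsity cost and a one-dimensional Haar filter stand in for the paper's actual coding cost and its factorized nonseparable 2-D filters.

```python
import numpy as np

def haar_split(x):
    """One orthonormal Haar analysis step: low-pass and high-pass half-bands."""
    pairs = x.reshape(-1, 2)
    low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return low, high

def best_basis_cost(x, depth):
    """Cost of the cheapest wavelet-packet basis for x: at every node,
    keep the band whole or split it, whichever is cheaper. The l1 cost is
    additive across bands, which is what makes the bottom-up recursion valid."""
    keep = np.sum(np.abs(x))
    if depth == 0 or x.size < 2:
        return keep
    low, high = haar_split(x)
    return min(keep,
               best_basis_cost(low, depth - 1) + best_basis_cost(high, depth - 1))

# demo: a constant signal compacts into a single low-pass coefficient
flat = np.ones(8)
cost = best_basis_cost(flat, depth=3)
```

For an oscillatory input (a texture, in the 2-D case) the recursion instead keeps splitting the high-frequency bands, which is exactly the adaptivity that lets the wavelet packet coder beat a fixed wavelet basis on images such as Barbara.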
A Wavelet-Based Analysis of Fractal Image Compression
 IEEE Trans. Image Processing
, 1997
Abstract

Cited by 42 (2 self)
Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work well? These are the central issues we address. We introduce a new wavelet-based framework for analyzing block-based fractal compression schemes. Within this framework we are able to draw upon insights from the well-established transform coder paradigm in order to address the issue of why fractal block coders work. We show that fractal block coders of the form introduced by Jacquin [1] are a Haar wavelet subtree quantization scheme. We examine a generalization of this scheme to smooth wavelets with additional vanishing moments. The performance of our generalized coder is comparable to the best results in the literature for a Jacquin-style coding scheme. Our wavelet framework gives new insight into the convergence properties of fractal block coders, and leads us to develop an unconditionally convergen...
A Progressive Transmission Image Coder Using Linear Phase Uniform Filterbanks as Block Transforms
, 1999
Abstract

Cited by 41 (20 self)
This paper presents a novel image coding scheme using M-channel linear phase perfect reconstruction filterbanks (LPPRFB's) in the embedded zerotree wavelet (EZW) framework introduced by Shapiro [1]. The innovation here is to replace the EZW's dyadic wavelet transform by M-channel uniform-band maximally decimated LPPRFB's, which offer finer frequency spectrum partitioning and higher energy compaction. The transform stage can now be implemented as a block transform, which supports a parallel processing mode and facilitates region-of-interest coding/decoding. For hardware implementation, the transform boasts efficient lattice structures, which employ a minimal number of delay elements and are robust under the quantization of lattice coefficients. The resulting compression algorithm also retains all attractive properties of the EZW coder and its variations, such as progressive image transmission, embedded quantization, exact bit rate control, and idempotency. Despite its simplicity, our new coder outperforms some of the best image coders recently published in the literature [1]–[4], for almost all test images (especially natural, hard-to-code ones) at almost all bit rates.