Results 1-10 of 177
Rate-distortion methods for image and video compression
 IEEE Signal Process. Mag. 1998
Abstract

Cited by 222 (7 self)
In this paper we provide an overview of rate-distortion (RD) based optimization techniques and their practical application to image and video coding. We begin with a short discussion of classical rate-distortion theory and then we show how in many practical coding scenarios, such as in standards-compliant coding environments, resource allocation can be put in an RD framework. We then introduce two popular techniques for resource allocation, namely, Lagrangian optimization and dynamic programming. After a discussion of these two techniques as well as some of their extensions, we conclude with a quick review of recent literature in these areas citing a number of applications related to image and video compression and transmission. We ...
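The Lagrangian allocation the abstract mentions can be sketched in a few lines: each block independently picks the coding option minimizing D + λR, and sweeping λ trades rate against distortion. The block costs below are hypothetical placeholders, not values from the paper.

```python
# A minimal sketch of per-block Lagrangian bit allocation (costs are
# hypothetical, not from the paper).
def lagrangian_select(options, lam):
    """For each block, pick the (rate, distortion) pair minimizing D + lam * R."""
    return [min(cands, key=lambda rd: rd[1] + lam * rd[0]) for cands in options]

# Two blocks, each with coarse / medium / fine quantizer choices.
blocks = [
    [(1.0, 9.0), (2.0, 4.0), (4.0, 1.0)],
    [(0.5, 16.0), (1.5, 6.0), (3.0, 2.0)],
]
low_rate = lagrangian_select(blocks, lam=12.0)     # large lambda: cheap modes win
high_quality = lagrangian_select(blocks, lam=0.1)  # small lambda: fine modes win
```

Sweeping λ (e.g. by bisection) until the total rate meets the budget is the usual way this machinery hits a target bit rate.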
Image Coding based on Mixture Modeling of Wavelet Coefficients and a Fast Estimation-Quantization Framework
, 1997
Abstract

Cited by 169 (12 self)
We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the space-frequency characterization of the wavelet image representation. Specifically, we model image wavelet coefficients as being drawn from an independent Generalized Gaussian distribution field, of fixed unknown shape for each subband, having zero mean and unknown slowly spatially-varying variances. Based on this model, we develop a powerful "on the fly" Estimation-Quantization (EQ) framework that consists of: (i) first finding the Maximum-Likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatial neighborhood contexts; and (ii) then applying an offline Rate-Distortion (RD) optimized quantization/entropy coding strategy, implemented as a fast lookup table, that is optimally matched to the derived variance estimates. A distinctive feature of o...
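The causal variance-estimation step can be illustrated as follows. This sketch assumes a plain Gaussian model for simplicity (the paper uses a generalized Gaussian with a per-subband shape parameter), so the maximum-likelihood variance estimate reduces to the mean of the squared causal neighbors; the particular neighborhood is also illustrative.

```python
# Sketch of causal variance estimation (Gaussian simplification of the
# paper's generalized-Gaussian model; the neighborhood below is illustrative).
import numpy as np

def causal_variance(coeffs, r, c):
    """ML variance estimate for a zero-mean Gaussian, computed from causal
    neighbors (left, up, up-left, up-right) that a decoder also has."""
    neigh = []
    if c > 0:
        neigh.append(coeffs[r, c - 1])
    if r > 0:
        neigh.append(coeffs[r - 1, c])
        if c > 0:
            neigh.append(coeffs[r - 1, c - 1])
        if c + 1 < coeffs.shape[1]:
            neigh.append(coeffs[r - 1, c + 1])
    if not neigh:
        return 1.0  # fallback for the first coefficient
    return float(np.mean(np.square(neigh)))

coeffs = np.array([[1.0, 2.0], [3.0, 4.0]])
v = causal_variance(coeffs, 1, 1)  # neighbors 3, 2, 1 -> (9 + 4 + 1) / 3
```

In the actual EQ coder such an estimate would index a precomputed R-D-optimized quantizer/entropy-coder table per coefficient.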
Low Bit-Rate, Scalable Video Coding with 3-D Set Partitioning in Hierarchical Trees (3-D SPIHT)
, 2000
Abstract

Cited by 148 (21 self)
In this paper, we propose a low bit-rate embedded video coding scheme that utilizes a three-dimensional (3-D) extension of the set partitioning in hierarchical trees (SPIHT) algorithm, which has proved so successful in still image coding. Three-dimensional spatio-temporal orientation trees coupled with powerful SPIHT sorting and refinement render the 3-D SPIHT video coder so efficient that it provides performance comparable to H.263, both objectively and subjectively, when operated at bit rates of 30 to 60 kilobits per second with minimal system complexity. Extension to color-embedded video coding is accomplished without explicit bit allocation, and can be used for any color plane representation. In addition to being rate scalable, the proposed video coder allows multiresolution scalability in encoding and decoding in both time and space from one bitstream. This added functionality along with many desirable attributes, such as full embeddedness for progressive transmission, precise ...
Distributed compressed sensing
, 2005
Abstract

Cited by 139 (25 self)
Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.
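The "common component plus innovations" flavor of joint sparsity can be illustrated with a toy ensemble. The dimensions and names below are ours, chosen for illustration; joint recovery from the measurements is omitted.

```python
# Toy "common component plus innovations" jointly sparse ensemble; the
# dimensions and names are illustrative, not the paper's notation.
import numpy as np

rng = np.random.default_rng(0)
n, k_common, k_innov, j = 64, 4, 2, 3  # length, sparsities, number of signals

def sparse_vec(k):
    v = np.zeros(n)
    v[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    return v

z_common = sparse_vec(k_common)                       # shared across all sensors
signals = [z_common + sparse_vec(k_innov) for _ in range(j)]

# Each sensor takes m incoherent random measurements, independently of the others.
m = 20
measurements = [rng.standard_normal((m, n)) @ x for x in signals]
```

Because each signal has at most k_common + k_innov nonzeros, and the common support is shared, a joint decoder needs fewer measurements per sensor than j independent decoders would.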
A new compressive imaging camera architecture using optical-domain compression
 in Proc. of Computational Imaging IV at SPIE Electronic Imaging
, 2006
Abstract

Cited by 108 (9 self)
Compressive Sensing is an emerging field based on the revelation that a small number of linear projections of a compressible signal contain enough information for reconstruction and processing. It has many promising implications and enables the design of new kinds of Compressive Imaging systems and cameras. In this paper, we develop a new camera architecture that employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudorandom binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while sampling the image fewer times than the number of pixels. Other attractive properties include its universality, robustness, scalability, progressivity, and computational asymmetry. The most intriguing feature of the system is that, since it relies on a single photodetector, it can be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers.
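The measurement process the abstract describes can be simulated in a few lines: one scalar detector reading per pseudorandom binary mirror pattern, with far fewer readings than pixels. The scene and sizes are invented, and reconstruction (which requires a sparse-recovery solver) is omitted.

```python
# Toy simulation of the single-detector measurement process: one scalar
# reading per pseudorandom binary mirror pattern (scene and sizes invented).
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))    # stand-in scene: 64 pixels
x = image.ravel()

m = 16                        # m < 64: fewer samples than pixels
patterns = rng.integers(0, 2, size=(m, x.size))  # 0/1 micromirror states
y = patterns @ x              # y[i]: total light reaching the one detector
```

Each row of `patterns` plays the role of one DMD configuration; the camera records the inner product of the scene with that pattern as a single photodetector voltage.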
A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of JPEG 2000
, 2001
Abstract

Cited by 91 (0 self)
The JPEG committee has recently released its new image coding standard, JPEG 2000, which will serve as a supplement for the original JPEG standard introduced in 1992. Rather than incrementally improving on the original standard, JPEG 2000 implements an entirely new way of compressing images based on the wavelet transform, in contrast to the discrete cosine transform (DCT) used in the original JPEG standard. The significant change in coding methods between the two standards leads one to ask: What prompted the JPEG committee to adopt such a dramatic change? The answer to this question comes from considering the state of image coding at the time the original JPEG standard was being formed. At that time wavelet analysis and wavelet coding were still ...
Multiple Description Wavelet Based Image Coding
, 1998
Abstract

Cited by 80 (8 self)
We consider the problem of coding images for transmission over error-prone channels. The impairments we target are transient channel shutdowns, as would occur in a packet network when a packet is lost, or in a wireless system during a deep fade: when data is delivered it is assumed to be error-free, but some of the data may never reach the receiver. The proposed algorithms are based on a combination of multiple description scalar quantizers with techniques successfully applied to the construction of some of the most efficient subband coders. A given image is encoded into multiple independent packets of roughly equal length. When packets are lost, the quality of the approximation computed at the receiver depends only on the number of packets received, but does not depend on exactly which packets are actually received. When compared with previously reported results on the performance of robust image coders based on multiple descriptions, on standard test images, our coders attain s...
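A heavily simplified way to see the multiple-description principle: split samples into two packets so that either packet alone still yields a coarse approximation. The paper's actual construction uses multiple description scalar quantizers; the polyphase split below is only a toy stand-in.

```python
# Toy two-description coder: a polyphase split into two packets, either of
# which alone yields a coarse reconstruction (stand-in for the paper's MDSQ).
def make_descriptions(samples):
    return samples[0::2], samples[1::2]

def reconstruct(even=None, odd=None):
    """Interleave both packets if available; otherwise repeat the nearest
    received sample, so quality depends only on how many packets arrived."""
    if even is not None and odd is not None:
        out = []
        for e, o in zip(even, odd):
            out.extend([e, o])
        return out
    received = even if even is not None else odd
    out = []
    for s in received:
        out.extend([s, s])
    return out
```

The key property mirrors the abstract: reconstruction quality depends on how many descriptions arrive, not on which ones.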
Wavelet Packet Image Coding Using Space-Frequency Quantization
 IEEE Trans. Image Processing
, 1998
Abstract

Cited by 70 (6 self)
We extend our previous work on space-frequency quantization (SFQ) [1] for image coding from wavelet transforms to the more general wavelet packet transforms [2]. The resulting wavelet packet coder offers a universal transform coding framework within the constraints of filter bank structures by allowing joint transform and quantizer design without assuming a priori statistics of the input image. In other words, the new coder adaptively chooses the representation to suit the image and the quantization to suit the representation. Experimental results show that, for some image classes, our new coder gives excellent coding performance.

1 Introduction

Recently, wavelet transforms have attracted considerable attention, especially with applications to image coding, due to their ability to provide attractive space-frequency resolution tradeoffs for natural images [3, 4]. In addition to conventional scalar (or vector) quantization strategies that are common in subband coding [5], the hierarchi...
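The "choose the representation to suit the image" idea resembles a best-basis search over the wavelet packet tree: split a band only if the children's cost beats the parent's. The Haar filters and the absolute-sum cost below are illustrative stand-ins for the paper's rate-distortion cost.

```python
# Best-basis-style sketch of joint transform selection: split a band only if
# the children's cost beats the parent's (Haar filters and absolute-sum cost
# are illustrative stand-ins for a true rate-distortion cost).
import numpy as np

def haar_split(band):
    e, o = band[0::2], band[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def cost(band):
    return float(np.abs(band).sum())  # placeholder for a true R-D cost

def best_basis(band, depth=3):
    if depth == 0 or band.size < 2:
        return band, cost(band)
    lo, hi = haar_split(band)
    lo_b, lo_c = best_basis(lo, depth - 1)
    hi_b, hi_c = best_basis(hi, depth - 1)
    if lo_c + hi_c < cost(band):        # children cheaper: keep splitting
        return (lo_b, hi_b), lo_c + hi_c
    return band, cost(band)             # parent cheaper: stop here

tree, c = best_basis(np.ones(8))        # a flat band compacts fully
```

For the constant band the search recurses all the way down, compacting all energy into one coefficient of magnitude 2√2.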
Directionlets: Anisotropic Multidirectional Representation with Separable Filtering
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2004
Abstract

Cited by 60 (6 self)
In spite of the success of the standard wavelet transform (WT) in image processing in recent years, the efficiency of its representation is limited by the spatial isotropy of its basis functions, built in the horizontal and vertical directions. One-dimensional (1D) discontinuities in images (edges and contours), which are very important elements in visual perception, intersect too many wavelet basis functions and lead to a non-sparse representation. To capture efficiently these anisotropic geometrical structures, characterized by many more than the horizontal and vertical directions, a more complex multidirectional (MDIR) and anisotropic transform is required. We present a new lattice-based perfect-reconstruction and critically sampled anisotropic MDIR WT. The transform retains the separable filtering and subsampling and the simplicity of computations and filter design from the standard two-dimensional (2D) WT. The corresponding anisotropic basis functions (directionlets) have directional vanishing moments (DVM) along any two directions with rational slopes. Furthermore, we show that this novel transform provides an efficient tool for non-linear approximation (NLA) of images, achieving the approximation power O(N^-1.55), which is competitive with the rates achieved by other oversampled transform constructions.
A Wavelet-Based Analysis of Fractal Image Compression
 IEEE Trans. Image Processing
, 1997
Abstract

Cited by 60 (2 self)
Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work well? These are the central issues we address. We introduce a new wavelet-based framework for analyzing block-based fractal compression schemes. Within this framework we are able to draw upon insights from the well-established transform coder paradigm in order to address the issue of why fractal block coders work. We show that fractal block coders of the form introduced by Jacquin [1] implement a Haar wavelet subtree quantization scheme. We examine a generalization of this scheme to smooth wavelets with additional vanishing moments. The performance of our generalized coder is comparable to the best results in the literature for a Jacquin-style coding scheme. Our wavelet framework gives new insight into the convergence properties of fractal block coders, and leads us to develop an unconditionally convergen...