Results 1 - 7 of 7
Multiple Description Coding: Compression Meets the Network
, 2001
Cited by 433 (9 self)
Abstract: This article focuses on the compressed representations of the pictures ...
Theoretical Foundations of Transform Coding
, 2001
Cited by 79 (6 self)
Abstract: This article explains the fundamental principles of transform coding; these principles apply equally well to images, audio, video, and various other types of data, so abstract formulations are given. Much of the material presented here is adapted from [14, Chap. 2, 4]. The details on wavelet transform-based image compression and the JPEG2000 image compression standard are given in the following two articles of this special issue [38], [37].
Quantization based on a novel sample-adaptive product quantizer (SAPQ)
IEEE Trans. Inform. Theory
, 1999
Cited by 5 (2 self)
Abstract: In this paper, we propose a novel feedforward adaptive quantization scheme called the sample-adaptive product quantizer (SAPQ). This is a structurally constrained vector quantizer that uses unions of product codebooks. SAPQ is based on a concept of adaptive quantization to the varying samples of the source and is very different from traditional adaptation techniques for nonstationary sources. SAPQ quantizes each source sample using a sequence of quantizers. Even when using scalar quantization in SAPQ, we can achieve performance comparable to vector quantization (with complexity still close to that of scalar quantization). We also show that important lattice-based vector quantizers can be constructed using scalar quantization in SAPQ. We mathematically analyze SAPQ and propose a simple algorithm to implement it. We numerically study SAPQ for independent and identically distributed Gaussian and Laplacian sources. Through our numerical study, we find that SAPQ using scalar quantizers achieves typical gains of 1-3 dB in distortion over the Lloyd-Max quantizer. We also show that SAPQ can be used in conjunction with vector quantizers to further improve the gains.
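The sample-adaptive idea in this abstract can be illustrated with a toy sketch. This is not the paper's exact SAPQ construction: the codebooks, block length, and Gaussian source below are illustrative assumptions. The sketch only shows the core step of picking, per block of samples, the best-fitting codebook from a small collection.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # i.i.d. Gaussian source (assumed)
blocks = x.reshape(-1, 4)            # quantize 4 samples at a time (assumed)

# Two candidate scalar codebooks: one coarse, one finer near zero.
codebooks = [np.array([-1.5, -0.5, 0.5, 1.5]),
             np.array([-0.75, -0.25, 0.25, 0.75])]

def quantize(block, cb):
    # Map each sample in the block to its nearest codeword in cb.
    idx = np.abs(block[:, None] - cb[None, :]).argmin(axis=1)
    return cb[idx]

recon = np.empty_like(blocks)
for i, b in enumerate(blocks):
    # Sample-adaptive step: try every codebook on this block and keep
    # the reconstruction with the smallest squared error.
    candidates = [quantize(b, cb) for cb in codebooks]
    errs = [np.sum((b - c) ** 2) for c in candidates]
    recon[i] = candidates[int(np.argmin(errs))]

mse = np.mean((blocks - recon) ** 2)
```

In the actual scheme the choice of codebook must also be signaled to the decoder, which is where the rate/complexity trade-off analyzed in the paper comes in.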
unknown title
Abstract: The standard theoretical model for transform coding has strict modularity, meaning that the transform, quantization, and entropy coding blocks operate independently. The encoder consists of a mapping α from source vectors to a countable index set I, followed by a mapping γ from I to strings of bits. The former is called a lossy encoder and the latter a lossless code or an entropy code. The decoder inverts γ and then approximates x from the index α(x) ∈ I. This is shown in the top half of Fig. 1. It is assumed that communication between the encoder and decoder is perfect. (The last article of this issue [13] describes techniques that work when some transmitted bits are lost.) To assess the quality of a lossy source code, we need numerical measures of approximation accuracy and description length. The measure for description length is simply the expected number of bits output by the encoder divided by N; this is called the rate in bits per scalar sample and denoted by R. Here we will measure approximation accuracy by the squared Euclidean norm divided by the vector length.
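The two figures of merit defined in this abstract (rate R in bits per scalar sample, and distortion as squared Euclidean error divided by the vector length) can be made concrete with a small sketch. The fixed-rate uniform midrise quantizer, its step size, and the Gaussian source are assumptions for illustration, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
x = rng.standard_normal(N)           # source vector of length N (assumed)

bits_per_sample = 3                  # fixed-rate code: 2**3 = 8 levels
step = 0.5                           # quantizer step size (assumed)
levels = 2 ** bits_per_sample

# Uniform midrise quantizer, clipped to the codebook range.
idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
x_hat = (idx - levels // 2 + 0.5) * step

# Rate: expected bits emitted per scalar sample (constant for a
# fixed-rate code). Distortion: squared Euclidean norm over N.
R = bits_per_sample
D = np.sum((x - x_hat) ** 2) / N
```

With a variable-rate entropy code, R would instead be the expected codeword length per sample, which is where the entropy coding block of the model enters.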
Novel Quantization Schemes for Very Low Bit Rate Video Coding Based on Sample Adaptation
, 1999
Abstract: This document has been made available through Purdue ePubs, a service of the Purdue University Libraries. Please contact epubs@purdue.edu for ...
Multistate video coding over error prone channels
, 2004
Abstract: This work investigates Multi-State Video Coding (MSVC), which is a multiple description scheme. MSVC is interesting because of its low complexity and low delay, which make it attractive for streaming applications. The performance of MSVC is explored in terms of the average PSNR of the reconstructed video sequence. We compare MSVC to Single-State Video Coding (SSVC) and Temporal Layered Coding (TLC) under different channel conditions and reception scenarios. Moreover, we investigate the trade-off between rate allocated to quantization accuracy and to intra-coding in terms of its effect on the average PSNR over error-prone channels. Besides balanced MSVC, unbalanced MSVC by adaptation of quantization is analyzed, and we show under which conditions it is practical to switch from balanced to unbalanced operation. Moreover, an improved version of the original MSVC approach is developed in which state recovery is used not only for error concealment in case of losses but also whenever it enables a larger frame PSNR. The improved version yields better results for all channel conditions and rate allocations. In the second part of the work, a recursive decoder distortion estimation model based ...
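The average-PSNR quality measure used throughout this abstract can be sketched as follows for 8-bit video (peak value 255). The synthetic frames and additive noise below stand in for an actual MSVC reconstruction and are purely illustrative.

```python
import numpy as np

def frame_psnr(orig, recon, peak=255.0):
    # PSNR of one frame: 10*log10(peak^2 / MSE), in dB.
    mse = np.mean((orig.astype(np.float64) - recon.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
# Five synthetic 16x16 8-bit frames (stand-ins for a real sequence).
frames = rng.integers(0, 256, size=(5, 16, 16), dtype=np.uint8)
# "Reconstruction": original plus small integer noise in [-2, 2].
noise = rng.integers(-2, 3, size=frames.shape)
recon = np.clip(frames.astype(int) + noise, 0, 255).astype(np.uint8)

# Sequence quality = mean of the per-frame PSNRs.
avg_psnr = np.mean([frame_psnr(f, r) for f, r in zip(frames, recon)])
```

Averaging per-frame PSNR, rather than computing PSNR of the pooled error, is the convention implied by "average PSNR of the reconstructed video sequence" above.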