Results 1–10 of 146
Quantization
IEEE Trans. Inform. Theory, 1998
Abstract
Cited by 700 (12 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
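Bennett's high-resolution analysis mentioned in the abstract has a one-line takeaway that is easy to check numerically: for a small step size D, a uniform quantizer's mean-squared error approaches D^2/12. A minimal sketch (illustrative numbers, not from the paper):

```python
import math

def uniform_quantize(x, step):
    """Mid-tread uniform quantizer: round x to the nearest multiple of `step`."""
    return step * math.floor(x / step + 0.5)

# Bennett's 1948 high-resolution result: for a smooth input density and a
# small step D, the mean-squared quantization error approaches D**2 / 12.
step = 0.1
xs = [i / 10000 for i in range(10000)]  # dense samples of a uniform input on [0, 1)
mse = sum((x - uniform_quantize(x, step)) ** 2 for x in xs) / len(xs)
print(mse, step ** 2 / 12)  # the two values nearly coincide
```

Here the input density is uniform, so the approximation is essentially exact; for other smooth densities it holds in the limit of many quantization levels.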
Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems
Proceedings of the IEEE, 1998
Abstract
Cited by 259 (11 self)
this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads of several diverse scientific and engineering disciplines including statistics and probability theory, physics, biology, control and signal processing, information theory, complexity theory, and psychology (see [45]). Neural networks have provided a fertile soil for the infusion (and occasionally confusion) of ideas, as well as a meeting ground for comparing viewpoints, sharing tools, and renovating approaches. It is within the ill-defined boundaries of the field of neural networks that researchers in traditionally distant fields have come to the realization that they have been attacking fundamentally similar optimization problems.
Unequal Loss Protection: Graceful Degradation of Image Quality over Packet Erasure Channels through Forward Error Correction
In DCC, 2000
Abstract
Cited by 117 (6 self)
We present the unequal loss protection (ULP) framework in which unequal amounts of forward error correction are applied to progressive data to provide graceful degradation of image quality as packet losses increase. We develop a simple algorithm that can find a good assignment within the ULP framework. We use the Set Partitioning in Hierarchical Trees coder in this work, but our algorithm can protect any progressive compression scheme. In addition, we promote the use of a PMF of expected channel conditions so that our system can work with almost any model or estimate of packet losses. We find that when optimizing for an exponential packet loss model with a mean loss rate of 20% and using a total rate of 0.2 bits per pixel on the Lenna image, good image quality can be obtained even when 40% of transmitted packets are lost.
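The framework can be illustrated with a toy model (all numbers hypothetical, and this sketch brute-forces a tiny instance, whereas the paper develops a fast assignment algorithm): a progressive stream is split into layers, layer i is decodable when the number of lost packets is at most its FEC amount f_i, and an assignment is scored by its expected quality under a loss PMF.

```python
from itertools import product

N = 10                                   # packets per block (hypothetical)
# PMF of the number of lost packets out of N (hypothetical channel estimate)
loss_pmf = {0: 0.35, 1: 0.25, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05, 6: 0.03}
weights = [16, 8, 4, 2, 1]               # quality gain per progressive layer
budget = 10                              # total FEC symbols available

def p_decodable(f):
    """Probability that at most f packets are lost."""
    return sum(p for losses, p in loss_pmf.items() if losses <= f)

def expected_quality(fec):
    """Layers are protected non-increasingly, so decoding a layer implies
    all earlier (more important) layers are decodable too."""
    return sum(w * p_decodable(f) for w, f in zip(weights, fec))

equal = (2, 2, 2, 2, 2)                  # equal loss protection baseline
# exhaustive search over non-increasing assignments within the budget
candidates = (f for f in product(range(7), repeat=len(weights))
              if sum(f) <= budget and all(a >= b for a, b in zip(f, f[1:])))
best = max(candidates, key=expected_quality)
print(equal, expected_quality(equal))
print(best, expected_quality(best))
```

Under this skewed loss PMF, an unequal assignment that shields the early layers heavily outscores the equal-protection baseline at the same parity budget, which is the effect the abstract describes.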
Generalized multiple description coding with correlating transforms
IEEE Trans. Inform. Theory, 2001
Abstract
Cited by 66 (2 self)
Abstract—Multiple description (MD) coding is source coding in which several descriptions of the source are produced such that various reconstruction qualities are obtained from different subsets of the descriptions. Unlike multiresolution or layered source coding, there is no hierarchy of descriptions; thus, MD coding is suitable for packet erasure channels or networks without priority provisions. Generalizing work by Orchard, Wang, Vaishampayan, and Reibman, a transform-based approach is developed for producing M descriptions of an n-tuple source, M ≤ n. The descriptions are sets of transform coefficients, and the transform coefficients of different descriptions are correlated so that missing coefficients can be estimated. Several transform optimization results are presented for memoryless Gaussian sources, including a complete solution of the n = 2, M = 2 case with arbitrary weighting of the descriptions. The technique is effective only when independent components of the source have differing variances. Numerical studies show that this method performs well at low redundancies, as compared to uniform MD scalar quantization. Index Terms—Erasure channels, integer-to-integer transforms, packet networks, robust source coding.
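The core effect can be checked in closed form for a pair of components (toy variances; the transform below is a plain 45-degree orthonormal mix, not one of the paper's optimized discrete transforms):

```python
# Two independent zero-mean components with differing variances.
s1, s2 = 4.0, 1.0

# Correlating transform: y1 = (x1 + x2)/sqrt(2), y2 = (x1 - x2)/sqrt(2).
# Each coefficient travels in its own packet.
var_y = (s1 + s2) / 2            # Var(y1) = Var(y2)
cov_y = (s1 - s2) / 2            # Cov(y1, y2): nonzero only if s1 != s2
rho = cov_y / var_y

# If the packet carrying y2 is erased, estimate it from y1:
# E[y2 | y1] = rho * y1, with error variance var_y * (1 - rho**2).
# Since the transform is orthonormal, this equals the error on (x1, x2).
side_mse_transform = var_y * (1 - rho ** 2)

# Baseline without the transform: send x1 and x2 directly; losing a packet
# costs that component's full variance, so the average side distortion is:
side_mse_identity = (s1 + s2) / 2

print(side_mse_transform, side_mse_identity)
```

With s1 = 4 and s2 = 1, losing either transformed packet costs the same 1.6 (balanced descriptions) versus 4.0 or 1.0 without the transform, a lower average. When s1 = s2 the coefficients are uncorrelated (rho = 0) and the estimation gain vanishes, matching the abstract's condition of differing variances.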
Vector Quantization with Complexity Costs
1993
Abstract
Cited by 58 (19 self)
Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy-constrained vector quantization ...
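A minimal 1-D sketch of the joint distortion-plus-complexity objective (an entropy-penalized Lloyd iteration with hypothetical data and penalty weight, a simplification of the paper's maximum-entropy treatment): each point pays its squared error plus lam * (-log p_j), so rarely used codewords become expensive and can empty out, shrinking the codebook.

```python
import math

def complexity_vq(data, codebook, lam, iters=20):
    """Lloyd-style vector quantization with a codebook-complexity penalty:
    assignment cost = squared error + lam * (-log p_j). Codewords whose
    usage probability p_j drops to zero are pruned at the end."""
    codebook = list(codebook)
    k = len(codebook)
    p = [1.0 / k] * k
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            costs = [(x - c) ** 2 - lam * math.log(pj) if pj > 0 else float("inf")
                     for c, pj in zip(codebook, p)]
            clusters[costs.index(min(costs))].append(x)
        p = [len(cl) / len(data) for cl in clusters]
        codebook = [sum(cl) / len(cl) if cl else c
                    for cl, c in zip(clusters, codebook)]
    return [c for c, pj in zip(codebook, p) if pj > 0]

data = [0.0] * 6 + [10.0]                 # six clustered points plus one outlier
print(complexity_vq(data, [0.0, 10.0], lam=0.0))   # both codewords survive
print(complexity_vq(data, [0.0, 10.0], lam=60.0))  # outlier codeword pruned
```

With no penalty (lam = 0) this is plain K-means and the outlier keeps its own codeword; with a large penalty the rarely used codeword is priced out and the codebook shrinks to one entry, the size-selection effect the abstract describes.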
Channel Coding and Transmission Aspects for Wireless Multimedia
Proceedings of the IEEE, 1999
Abstract
Cited by 36 (6 self)
Multimedia transmission has to handle a variety of compressed and uncompressed source ...
Joint Source and Channel Coding for Image Transmission Over Lossy Packet Networks
In Conf. Wavelet Applications to Digital Image Processing, 1996
Abstract
Cited by 32 (0 self)
We describe a joint source/channel allocation scheme for transmitting images lossily over block erasure channels such as the Internet. The goal is to reduce image transmission latency. Our subband-level and bit-plane-level optimization procedures give rise to an embedded channel coding strategy. Source and channel coding bits are allocated in order to minimize an expected distortion measure. More perceptually important low-frequency channels of images are shielded heavily using channel codes; higher frequencies are shielded lightly. The result is a more efficient use of channel codes that can reduce channel coding overhead. This reduction is most pronounced on bursty channels for which the uniform application of channel codes is expensive. We derive optimal source/channel coding tradeoffs for our block erasure channel.
Keywords: joint source/channel coding, erasure channel, lossy image transmission
1 INTRODUCTION
With the rising popularity of Web browsers, image transmission has become ...
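The allocation idea, shielding important low-frequency subbands heavily and high frequencies lightly, can be sketched as a greedy marginal-gain loop (all numbers hypothetical, and a simplification of the optimization procedure in the paper):

```python
# Distortion reduction if subband i is received (low to high frequency).
importance = [100.0, 40.0, 10.0]
# Hypothetical survival probability of a subband given k parity units.
survival = [0.5, 0.75, 0.9, 0.97, 0.99, 1.0]
budget = 6                                 # total parity units to spend

alloc = [0, 0, 0]
for _ in range(budget):
    # marginal expected-distortion reduction of one more parity unit
    gains = [w * (survival[k + 1] - survival[k]) if k + 1 < len(survival) else 0.0
             for w, k in zip(importance, alloc)]
    alloc[gains.index(max(gains))] += 1

print(alloc)  # low frequencies end up shielded more heavily
```

Because the survival curve has diminishing returns, the greedy loop naturally tapers protection down the frequency bands rather than spending the whole budget on the most important one.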
Quantization of memoryless and Gauss–Markov sources over binary Markov channels
1994
Abstract
Cited by 27 (12 self)
Abstract — Joint source–channel coding for stationary memoryless and Gauss–Markov sources and binary Markov channels is considered. The channel is an additive-noise channel where the noise process is an Mth-order Markov chain. Two joint source–channel coding schemes are considered. The first is a channel-optimized vector quantizer, optimized for both source and channel. The second scheme consists of a scalar quantizer and a maximum a posteriori detector. In this scheme, it is assumed that the scalar quantizer output has residual redundancy that can be exploited by the maximum a posteriori detector to combat the correlated channel noise. These two schemes are then compared against two schemes which use channel interleaving. Numerical results show that the proposed schemes outperform the interleaving schemes. For very noisy channels with high noise correlation, gains of 4–5 dB in signal-to-noise ratio are possible. Index Terms — Channels with memory, joint source–channel coding, MAP detection, Markov noise, vector quantization.
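The second scheme can be sketched as a small Viterbi recursion (toy first-order models and hypothetical parameters; the paper treats Mth-order noise and quantizer indices rather than raw bits): the source's residual redundancy, modeled as a Markov prior on x, is played off against the correlated Markov noise n in z = x XOR n.

```python
import math

def seq_score(x, z, p_same=0.9, p_n1=0.2, p_11=0.8, p_01=0.05):
    """Joint log-probability of source sequence x and the implied noise
    n = x XOR z, with first-order Markov models for both (toy parameters)."""
    n = [xi ^ zi for xi, zi in zip(x, z)]
    s = math.log(0.5) + math.log(p_n1 if n[0] else 1 - p_n1)
    for t in range(1, len(x)):
        s += math.log(p_same if x[t] == x[t - 1] else 1 - p_same)
        trans = p_11 if n[t - 1] else p_01       # P(n_t = 1 | n_{t-1})
        s += math.log(trans if n[t] else 1 - trans)
    return s

def map_detect(z, p_same=0.9, p_n1=0.2, p_11=0.8, p_01=0.05):
    """Viterbi MAP sequence detection of x from z = x XOR n. The noise bit
    at time t is determined by the hypothesized source bit, so two states
    (x_t = 0 or 1) suffice."""
    score = {x: math.log(0.5) + math.log(p_n1 if x ^ z[0] else 1 - p_n1)
             for x in (0, 1)}
    back = []
    for t in range(1, len(z)):
        new, bp = {}, {}
        for x in (0, 1):
            n = x ^ z[t]
            cands = []
            for xp in (0, 1):
                trans = p_11 if xp ^ z[t - 1] else p_01
                cands.append((score[xp]
                              + math.log(p_same if x == xp else 1 - p_same)
                              + math.log(trans if n else 1 - trans), xp))
            new[x], bp[x] = max(cands)
        score, back = new, back + [bp]
    x = max((0, 1), key=lambda v: score[v])
    path = [x]
    for bp in reversed(back):
        x = bp[x]
        path.append(x)
    return list(reversed(path)), max(score.values())

z = [0, 0, 1, 1, 0, 0]          # received word (hypothetical)
x_hat, s_hat = map_detect(z)
print(x_hat, round(s_hat, 3))
```

By Viterbi optimality, the returned sequence's joint score is at least that of the raw hard decision x = z, which `seq_score` can verify directly.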
Joint source–channel coding error exponent for discrete communication systems with Markovian memory
IEEE Trans. Inform. Theory, 2007
Abstract
Cited by 24 (9 self)
Abstract—We investigate the computation of Csiszár's bounds for the joint source–channel coding (JSCC) error exponent E_J of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary source–channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent E_J and the tandem coding error exponent E_T, which applies if the source and channel are separately coded. It is shown that E_J ≤ 2E_T. We establish conditions for which E_J > E_T and conditions for which E_J = 2E_T. Numerical examples indicate that E_J is close to 2E_T for many source–channel pairs. This gain translates into a power saving larger than 2 dB for a binary source transmitted over additive white Gaussian noise (AWGN) channels and Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure. Index Terms—Discrete memoryless sources and channels, error exponent, Fenchel's duality, Hamming distortion measure, joint source–channel coding, random-coding exponent, reliability function, sphere-packing exponent, symmetric channels, tandem source and channel coding.
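For orientation, Csiszár's bounds referred to above are commonly stated in the following form for one source symbol per channel use (a sketch from the standard formulation, not transcribed from this paper; e(R, Q) is the source error exponent and E_r, E_sp are the random-coding and sphere-packing exponents of the channel W):

```latex
\min_{R}\bigl[\,e(R,Q) + E_r(R,W)\,\bigr]
\;\le\; E_J(Q,W) \;\le\;
\min_{R}\bigl[\,e(R,Q) + E_{sp}(R,W)\,\bigr]
```

The two bounds coincide whenever the minimizing rate falls in the region where E_r and E_sp agree, which is when the exponent is known exactly.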
Competitive Learning Algorithms for Robust Vector Quantization
1998
Abstract
Cited by 21 (0 self)
The efficient representation and encoding of signals with limited resources, e.g., finite storage capacity and restricted transmission bandwidth, is a fundamental problem in technical as well as biological information processing systems. Typically, under realistic circumstances, the encoding and communication of messages has to deal with different sources of noise and disturbances. In this paper, we propose a unifying approach to data compression by robust vector quantization, which explicitly deals with channel noise, bandwidth limitations, and random elimination of prototypes. The resulting algorithm is able to limit the detrimental effect of noise in a very general communication scenario. In addition, the presented model allows us to derive a novel competitive neural network algorithm, which covers topology preserving feature maps, the so-called neural-gas algorithm, and the maximum entropy softmax rule as special cases. Furthermore, continuation methods based on these noise models improve ...
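The maximum-entropy softmax rule mentioned as a special case can be sketched in a few lines (toy 1-D data; this omits the paper's channel-noise and prototype-elimination terms, so it is the plain soft competitive learning limit):

```python
import math

def soft_vq_step(data, codebook, beta):
    """One batch step of maximum-entropy ("softmax") competitive learning:
    each prototype moves to the mean of the data weighted by a Gibbs
    assignment probability exp(-beta * d) / Z. At small beta all prototypes
    share every point; as beta grows the rule approaches winner-take-all."""
    new = []
    for j, cj in enumerate(codebook):
        num = den = 0.0
        for x in data:
            w = [math.exp(-beta * (x - c) ** 2) for c in codebook]
            p = w[j] / sum(w)
            num += p * x
            den += p
        new.append(num / den if den > 0 else cj)
    return new

data = [0.0, 1.0, 9.0, 10.0]              # two toy clusters
cb = [4.0, 6.0]
for beta in (0.1, 0.5, 1.0, 2.0, 4.0):    # annealing schedule
    for _ in range(10):
        cb = soft_vq_step(data, cb, beta)
print(cb)  # prototypes settle near the cluster means 0.5 and 9.5
```

Slowly increasing beta is the continuation strategy alluded to in the abstract: soft assignments at small beta keep every prototype in play, and hardening them gradually avoids poor local minima that winner-take-all updates can fall into.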