Results 1–10 of 51
Quantization
 IEEE TRANS. INFORM. THEORY, 1998
Cited by 639 (11 self)
Abstract:
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems
 Proceedings of the IEEE, 1998
Cited by 248 (11 self)
Abstract:
this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads of several diverse scientific and engineering disciplines including statistics and probability theory, physics, biology, control and signal processing, information theory, complexity theory, and psychology (see [45]). Neural networks have provided a fertile soil for the infusion (and occasionally confusion) of ideas, as well as a meeting ground for comparing viewpoints, sharing tools, and renovating approaches. It is within the ill-defined boundaries of the field of neural networks that researchers in traditionally distant fields have come to the realization that they have been attacking fundamentally similar optimization problems.
Tradeoff Between Source and Channel Coding
 IEEE TRANS. INFORM. THEORY, 1997
Cited by 66 (5 self)
Abstract:
A fundamental problem in the transmission of analog information across a noisy discrete channel is the choice of the channel code rate that optimally allocates the available transmission rate between lossy source coding and block channel coding. We establish tight bounds on the channel code rate that minimizes the average distortion of a vector quantizer cascaded with a channel coder and a binary symmetric channel. Analytic expressions are derived in two cases of interest: small bit-error probability and arbitrary source vector dimension; arbitrary bit-error probability and large source vector dimension. We demonstrate that the optimal channel code rate is often substantially smaller than the channel capacity, and obtain a noisy-channel version of the Zador high-resolution distortion formula.
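The rate-allocation tradeoff described above can be illustrated with a toy model (not the paper's analysis): assume the source distortion follows the high-resolution formula 2^(-2rR) when a fraction r of the total rate R goes to the source coder, and that the residual channel-error distortion decays exponentially in the redundancy 1 − r. The constants eps, c, and K below are hypothetical placeholders.

```python
import math

def end_to_end_distortion(r, total_rate=8.0, eps=0.1, c=20.0, K=1.0):
    """Toy end-to-end distortion for channel code rate r in (0, 1]."""
    source_d = 2.0 ** (-2.0 * r * total_rate)        # high-resolution source term
    channel_d = K * eps * math.exp(-c * (1.0 - r))   # hypothetical channel term
    return source_d + channel_d

def best_code_rate(grid=200):
    """Grid search for the distortion-minimizing channel code rate."""
    rates = [i / grid for i in range(1, grid + 1)]
    return min(rates, key=end_to_end_distortion)
```

Even in this crude model the minimizer sits strictly below rate 1, mirroring the abstract's observation that the optimal channel code rate is often well below capacity.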
Soft-decision demodulation design for COVQ over white, colored, and ISI Gaussian channels
 IEEE Trans. Commun., 2000
Cited by 29 (14 self)
Abstract—In this work, the design of a -bit (scalar and vector) soft-decision demodulator for Gaussian channels with binary phase-shift keying modulation is investigated. The demodulator is used in conjunction with a soft-decision channel-optimized vector quantization (COVQ) system. The COVQ is constructed for an expanded ( 1) discrete channel consisting of the concatenation of the modulator, the Gaussian channel, and the demodulator. It is found that as the demodulator resolution increases, the capacity of the expanded channel increases, resulting in an improvement of the COVQ performance. Consequently, the soft-decision demodulator is designed to maximize the capacity of the expanded channel. Three Gaussian channel models are considered: 1) additive white Gaussian noise channels; 2) additive colored Gaussian noise channels; and 3) Gaussian channels with intersymbol interference. Comparisons are made with a) hard-decision COVQ systems, b) COVQ systems which utilize interleaving, and c) an unquantized ( = ) soft-decision decoder proposed by Skoglund and Hedelin. It is shown that substantial improvements can be achieved over COVQ systems which utilize hard-decision demodulation and/or channel interleaving. The performance of the proposed COVQ system is comparable with that of the Skoglund and Hedelin system, at substantially lower computational complexity.
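The claim that finer soft-decision demodulation raises the capacity of the expanded channel can be checked numerically. The sketch below is an illustration, not the paper's design procedure: it computes the mutual information of equiprobable BPSK over AWGN followed by a uniform 2^q-level output quantizer; the quantizer span and noise level are arbitrary choices.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expanded_channel_capacity(q, sigma=1.0, span=2.0):
    """Mutual information (bits) of equiprobable BPSK over AWGN
    followed by a uniform 2^q-level soft-decision demodulator."""
    n = 2 ** q
    interior = [-span + 2.0 * span * (i + 1) / n for i in range(n - 1)]
    edges = [-math.inf] + interior + [math.inf]
    inputs = (-1.0, +1.0)
    # Transition probabilities P(y = j | x) from Gaussian CDF differences
    p = {x: [phi((edges[j + 1] - x) / sigma) - phi((edges[j] - x) / sigma)
             for j in range(n)] for x in inputs}
    info = 0.0
    for j in range(n):
        py = 0.5 * (p[-1.0][j] + p[+1.0][j])
        for x in inputs:
            if p[x][j] > 0.0 and py > 0.0:
                info += 0.5 * p[x][j] * math.log2(p[x][j] / py)
    return info
```

Running this for q = 1, 2, 3 shows the mutual information increasing with demodulator resolution, consistent with the abstract.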
Robust Image Transmission over Energy-Constrained Time-Varying Channels Using Multiresolution Joint Source-Channel Coding
 IEEE TRANS. SIGNAL PROCESSING, 1998
Cited by 26 (3 self)
Abstract:
We explore joint source-channel coding (JSCC) for time-varying channels using a multiresolution framework for both source coding and transmission via novel multiresolution modulation constellations. We consider the problem of still image transmission over time-varying channels with the channel state information (CSI) available 1) at the receiver only and 2) at both the transmitter and the receiver, and we quantify the effect of CSI availability on performance. Our source model is based on the wavelet image decomposition, which generates a collection of subbands modeled by the family of generalized Gaussian distributions. We describe an algorithm that jointly optimizes the design of the multiresolution source codebook, the multiresolution constellation, and the decoding strategy of optimally matching the source resolution and signal constellation resolution “trees” in ...
Quantization of memoryless and Gauss–Markov sources over binary Markov channels
 Univ., 1994
Cited by 25 (11 self)
Abstract — Joint source–channel coding for stationary memoryless and Gauss–Markov sources and binary Markov channels is considered. The channel is an additive-noise channel where the noise process is an Mth-order Markov chain. Two joint source–channel coding schemes are considered. The first is a channel-optimized vector quantizer, optimized for both source and channel. The second scheme consists of a scalar quantizer and a maximum a posteriori detector. In this scheme, it is assumed that the scalar quantizer output has residual redundancy that can be exploited by the maximum a posteriori detector to combat the correlated channel noise. These two schemes are then compared against two schemes which use channel interleaving. Numerical results show that the proposed schemes outperform the interleaving schemes. For very noisy channels with high noise correlation, gains of 4–5 dB in signal-to-noise ratio are possible. Index Terms — Channels with memory, joint source–channel coding, MAP detection, Markov noise, vector quantization.
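As a simplified illustration of the second scheme, the sketch below runs Viterbi-style MAP sequence detection for a binary first-order Markov source observed through a memoryless BSC. The paper's setting has Markov channel noise instead, but the mechanics of combining a Markov prior with the channel likelihood are similar in spirit; all parameter values here are hypothetical.

```python
import math

def map_sequence_detect(received, trans, p_flip, prior=(0.5, 0.5)):
    """Viterbi MAP detection of a binary Markov sequence observed
    through a binary symmetric channel with crossover p_flip."""
    def emit(x, y):
        return math.log(1.0 - p_flip) if x == y else math.log(p_flip)
    # Initialize path metrics with the prior and first observation
    score = [math.log(prior[s]) + emit(s, received[0]) for s in (0, 1)]
    back = []
    for y in received[1:]:
        new, ptr = [], []
        for s in (0, 1):
            cand = [score[u] + math.log(trans[u][s]) for u in (0, 1)]
            best = 0 if cand[0] >= cand[1] else 1
            ptr.append(best)
            new.append(cand[best] + emit(s, y))
        score, back = new, back + [ptr]
    # Backtrack the most probable sequence
    s = 0 if score[0] >= score[1] else 1
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]
```

With a highly persistent source (self-transition 0.95) and a 20% crossover channel, an isolated received flip is corrected by the source redundancy: `map_sequence_detect([0, 0, 1, 0, 0], [[0.95, 0.05], [0.05, 0.95]], 0.2)` returns `[0, 0, 0, 0, 0]`.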
Joint source-channel coding error exponent for discrete communication systems with Markovian memory
 IEEE Trans. Info. Theory, 2007
Cited by 23 (9 self)
Abstract—We investigate the computation of Csiszár's bounds for the joint source–channel coding (JSCC) error exponent of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary source–channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent E_J and the tandem coding error exponent E_T, which applies if the source and channel are separately coded. It is shown that E_J <= 2E_T. We establish conditions for which E_J > E_T and for which E_J = 2E_T. Numerical examples indicate that E_J is close to 2E_T for many source–channel pairs. This gain translates into a power saving larger than 2 dB for a binary source transmitted over additive white Gaussian noise (AWGN) channels and Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure. Index Terms—Discrete memoryless sources and channels, error exponent, Fenchel's duality, Hamming distortion measure, joint source–channel coding, random-coding exponent, reliability function, sphere-packing exponent, symmetric channels, tandem source and channel coding.
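One building block behind such error-exponent comparisons is Gallager's random-coding exponent for the channel alone, E_r(R) = max over rho in [0,1] of E_0(rho) − rho·R, which has a closed-form E_0 for a BSC with uniform inputs. This is a standard channel-coding quantity, not Csiszár's JSCC exponent itself; the sketch below is purely illustrative.

```python
import math

def gallager_E0(rho, p):
    """Gallager's E0(rho) for a BSC(p) with uniform inputs, in bits."""
    s = p ** (1.0 / (1.0 + rho)) + (1.0 - p) ** (1.0 / (1.0 + rho))
    return rho - (1.0 + rho) * math.log2(s)

def random_coding_exponent(R, p, steps=1000):
    """E_r(R) = max over rho in [0, 1] of E0(rho) - rho * R (grid search)."""
    return max(gallager_E0(i / steps, p) - (i / steps) * R
               for i in range(steps + 1))
```

Below capacity the exponent is strictly positive (errors decay exponentially in block length); at or above capacity it is zero.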
Soft-decision COVQ for Rayleigh-fading channels
 IEEE Commun. Lett., 1998
Cited by 22 (10 self)
Abstract — A channel-optimized vector quantizer (COVQ) scheme that exploits the channel soft-decision information is proposed. The scheme is designed for stationary memoryless Gaussian and Gauss–Markov sources transmitted over BPSK-modulated Rayleigh-fading channels. It is demonstrated that substantial coding gains (2–3 dB in channel signal-to-noise ratio (SNR) and 1–1.5 dB in source signal-to-distortion ratio (SDR)) can be achieved over COVQ systems designed for discrete (hard-decision demodulated) channels. Index Terms—Combined source-channel coding, COVQ, soft-decision decoding, Rayleigh-fading channels.
Competitive Learning Algorithms for Robust Vector Quantization
 1998
Cited by 19 (0 self)
Abstract:
The efficient representation and encoding of signals with limited resources, e.g., finite storage capacity and restricted transmission bandwidth, is a fundamental problem in technical as well as biological information-processing systems. Typically, under realistic circumstances, the encoding and communication of messages must cope with different sources of noise and disturbance. In this paper, we propose a unifying approach to data compression by robust vector quantization, which explicitly deals with channel noise, bandwidth limitations, and random elimination of prototypes. The resulting algorithm is able to limit the detrimental effect of noise in a very general communication scenario. In addition, the presented model allows us to derive a novel competitive neural network algorithm, which covers topology-preserving feature maps, the so-called neural-gas algorithm, and the maximum-entropy softmax rule as special cases. Furthermore, continuation methods based on these noise models impr...
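Of the special cases named above, the neural-gas rule is the easiest to sketch: every prototype moves toward each sample, weighted by a decaying function of its distance rank. The scalar version below is an illustration with arbitrary learning-rate and neighborhood schedules, not the paper's unified algorithm; it recovers the two cluster centers of a toy data set.

```python
import math
import random

def neural_gas_update(codebook, x, step, lam):
    """One neural-gas step: rank prototypes by distance to x and move
    each toward x with weight exp(-rank / lam)."""
    ranked = sorted(range(len(codebook)), key=lambda j: (codebook[j] - x) ** 2)
    for rank, j in enumerate(ranked):
        codebook[j] += step * math.exp(-rank / lam) * (x - codebook[j])

def train(samples, k=2, epochs=30, seed=0):
    """Train a scalar codebook with an annealed neighborhood width."""
    rng = random.Random(seed)
    lo, hi = min(samples), max(samples)
    codebook = [rng.uniform(lo, hi) for _ in range(k)]
    for t in range(epochs):
        # Anneal the neighborhood width from 1.0 down to 0.1
        lam = 1.0 * (0.1 / 1.0) ** (t / max(epochs - 1, 1))
        for x in samples:
            neural_gas_update(codebook, x, step=0.1, lam=lam)
    return sorted(codebook)
```

Early in training the wide neighborhood pulls all prototypes together (avoiding dead units); as it anneals, the rule approaches winner-take-all quantizer design.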
Asymptotic Bounds on Optimal Noisy Channel Quantization Via Random Coding
 1994
Cited by 18 (3 self)
Abstract:
Asymptotically optimal zero-delay vector quantization in the presence of channel noise is studied using random coding techniques. First, an upper bound is derived for the average rth-power distortion of channel-optimized k-dimensional vector quantization at transmission rate R on a binary symmetric channel with bit-error probability ε. The upper bound asymptotically equals 2^(−rR·g(ε,k,r)), where (k/(k+r))[1 − log2(1 + 2√(ε(1−ε)))] ≤ g(ε,k,r) ≤ 1 for all ε ≥ 0, lim_{ε→0} g(ε,k,r) = 1, and lim_{k→∞} g(ε,k,r) = 1. Numerical computations of g(ε,k,r) are also given. This result is analogous to Zador's asymptotic distortion rate of 2^(−rR) for quantization on noiseless channels. Next, using a random coding argument on nonredundant index assignments, a useful upper bound is derived, in terms of point density functions, on the minimum mean squared error of high-resolution, regular vector quantizers in the presence of channel noise. ...
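The lower bound on the exponent factor stated in this abstract is simple to evaluate; the helper below is a direct transcription of that bound (the sample values in the usage note are hypothetical) and shows how the guaranteed exponent degrades as the bit-error probability ε grows.

```python
import math

def g_lower_bound(eps, k, r):
    """Lower bound (k/(k+r)) * [1 - log2(1 + 2*sqrt(eps*(1-eps)))]
    on the exponent factor g(eps, k, r); g(eps, k, r) <= 1 always."""
    penalty = math.log2(1.0 + 2.0 * math.sqrt(eps * (1.0 - eps)))
    return (k / (k + r)) * (1.0 - penalty)
```

For eps = 0 the bound reduces to k/(k+r), and it shrinks monotonically as eps approaches 1/2, e.g. `g_lower_bound(0.01, 4, 2)` is about 0.49.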