Results 1–10 of 44
Quantization
IEEE TRANS. INFORM. THEORY, 1998
Cited by 639 (11 self)

Abstract
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
Statistical Theory of Quantization
IEEE Trans. on Instrumentation and Measurement, 1995
Cited by 38 (3 self)

Abstract
The effect of uniform quantization can often be modeled by an additive noise that is uniformly distributed, uncorrelated with the input signal, and has a white spectrum. This paper surveys the theory behind this model and discusses the conditions of its validity. The application of the model to floating-point quantization is demonstrated. Keywords: quantization, noise model, quantization noise, noise spectrum, statistical theory, finite bit number, round-off error, arithmetic rounding, floating-point quantization.
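The additive-noise model described in this abstract is easy to check numerically. A minimal sketch (step size, sample count, and the Gaussian input are illustrative choices, not from the paper): quantize a signal with a step far below its standard deviation and verify that the error variance is close to Δ²/12 and that the error is nearly uncorrelated with the input.

```python
import numpy as np

# Illustrative check of the additive uniform-noise model of quantization.
rng = np.random.default_rng(0)
delta = 0.01                          # quantization step, well below std(x)
x = rng.standard_normal(100_000)
xq = delta * np.round(x / delta)      # uniform (mid-tread) quantizer
e = xq - x                            # quantization error

noise_var = e.var()                   # model predicts delta**2 / 12
input_corr = np.corrcoef(x, e)[0, 1]  # model predicts ~0
```

When the step is coarse relative to the signal (the conditions of validity the paper discusses), both predictions degrade noticeably.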
Perfect Recovery and Sensitivity Analysis of Time Encoded Bandlimited Signals
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS, 2004
Cited by 27 (19 self)

Abstract
A time encoding machine is a real-time asynchronous mechanism for encoding amplitude information into a time sequence. We investigate the operating characteristics of a machine consisting of a feedback loop containing an adder, a linear filter, and a non-inverting Schmitt trigger. We show that the amplitude information of a bandlimited signal can be perfectly recovered if the difference between any two consecutive values of the time sequence is bounded by the inverse of the Nyquist rate. We also show how to build a nonlinear inverse time decoding machine (TDM) that perfectly recovers the amplitude information from the time sequence. We demonstrate the close relationship between the recovery algorithms for time encoding and irregular sampling. We also show the close relationship between time encoding and a number of nonlinear modulation schemes, including FM and asynchronous sigma–delta modulation. We analyze the sensitivity of the time encoding recovery algorithm and demonstrate how to construct a TDM that perfectly recovers the amplitude information from the time sequence and is insensitive to the trigger parameters. We derive bounds on the error in signal recovery introduced by the quantization of the time sequence, and compare these with the recovery error introduced by the quantization of the amplitude of the bandlimited signal when irregular sampling is employed. Under Nyquist-type rate conditions, quantization of a bandlimited signal in the time domain and in the amplitude domain are shown to be largely equivalent methods of information representation.
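The feedback loop described above can be sketched as a discrete-time simulation. This is an illustrative toy, not the authors' implementation; the loop constants (b, kappa, delta) and the sinusoidal input are assumptions. Because the integrator must traverse 2δ between flips at a slope bounded by (b ± c)/κ (where c bounds |x(t)|), consecutive trigger times stay within 2κδ/(b+c) and 2κδ/(b−c):

```python
import numpy as np

# Toy simulation of a time encoding machine: an integrator driven by
# x(t) - b*z(t) feeding a non-inverting Schmitt trigger with
# thresholds +/-delta.  Parameters and input are illustrative.
def time_encode(x, t, b=2.0, kappa=1.0, delta=0.05):
    z, y = 1.0, 0.0
    dt = t[1] - t[0]
    trigger_times = []
    for k in range(len(t)):
        y += dt * (x[k] - b * z) / kappa    # integrator state
        if z > 0 and y <= -delta:           # trigger flips at -delta
            z = -1.0
            trigger_times.append(t[k])
        elif z < 0 and y >= delta:          # ... and back at +delta
            z = 1.0
            trigger_times.append(t[k])
    return np.array(trigger_times)

t = np.linspace(0.0, 2.0, 100_000)
x = 0.5 * np.sin(2 * np.pi * 3.0 * t)       # bounded input: |x| <= c = 0.5 < b
tk = time_encode(x, t)
gaps = np.diff(tk)                          # inter-trigger intervals
```

The amplitude information lives entirely in `tk`; the recovery machinery (the TDM) is the subject of the paper and is not sketched here.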
Digital Cancellation of D/A Converter Noise in Pipelined A/D Converters
2000
Cited by 21 (4 self)

Abstract
Pipelined analog-to-digital converters (ADCs) tend to be sensitive to component mismatches in their internal digital-to-analog converters (DACs). The component mismatches give rise to error, referred to as DAC noise, which is not attenuated or cancelled along the pipeline as are other types of noise. This paper describes an all-digital technique that significantly mitigates this problem. The technique continuously measures and cancels the portion of the ADC error arising from DAC noise during normal operation of the ADC, so no special calibration signal or auto-calibration phase is required. The details of the technique are described in the context of a nominal 14-bit pipelined ADC example at both the signal processing and register transfer levels. Through this example, the paper demonstrates that in the presence of realistic component matching limitations the technique can improve the overall ADC accuracy by several bits with only moderate digital hardware complexity.
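The core idea of background measurement during normal operation — modulating an unknown error by a known zero-mean pseudo-random sequence and estimating it by correlation — can be shown with a toy scalar model. Everything here (the single scalar error `e_dac`, the ±1 sequence, the uniform input) is an illustrative assumption, not the paper's per-stage DAC architecture:

```python
import numpy as np

# Toy correlation-based background estimation of a modulated error term.
rng = np.random.default_rng(1)
n = 2_000_000
x = rng.uniform(-1.0, 1.0, n)        # conversion signal (unknown to calibrator)
d = rng.choice([-1.0, 1.0], n)       # known pseudo-random calibration sequence
e_dac = 0.003                        # unknown mismatch error to be estimated

y = x + e_dac * d                    # output corrupted by modulated DAC noise
e_hat = np.mean(d * y)               # d is zero-mean and independent of x,
                                     # so correlation isolates e_dac
y_corrected = y - e_hat * d          # digital background cancellation
```

No calibration phase interrupts conversion: the estimate converges while `x` is the normal input, which is the property the abstract emphasizes.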
Automatic Evaluation of the Accuracy of Fixed-point Algorithms
in Proc. ACM/IEEE Design Automation and Test in Europe Conf., 2002
Cited by 15 (2 self)

Abstract
The minimization of cost, power consumption, and time-to-market of DSP applications requires the development of methodologies for the automatic implementation of floating-point algorithms in fixed-point architectures. In this paper, a new methodology for evaluating the quality of an implementation through the automatic determination of the signal-to-quantization-noise ratio (SQNR) is presented. The theoretical concepts and the different phases of the methodology are explained. Then, the ability of our approach to compute the SQNR efficiently, and its beneficial contribution to the process of data word-length minimization, are shown through some examples.
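The quantity being automated here, the SQNR between a floating-point reference and its fixed-point implementation, can be sketched directly (the grid model and signal below are illustrative; the paper's contribution is determining the SQNR analytically rather than by simulation):

```python
import numpy as np

def to_fixed_point(x, frac_bits):
    """Round onto a grid with 2**-frac_bits resolution (no saturation
    modeled; a simplified stand-in for a fixed-point data path)."""
    step = 2.0 ** -frac_bits
    return step * np.round(x / step)

def sqnr_db(reference, quantized):
    """Signal-to-quantization-noise ratio in decibels."""
    noise = quantized - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)      # floating-point reference signal
sqnr_12 = sqnr_db(x, to_fixed_point(x, 12))
sqnr_16 = sqnr_db(x, to_fixed_point(x, 16))
```

Each additional fractional bit buys roughly 6 dB of SQNR, which is the tradeoff a word-length minimization loop explores.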
Perfect reconstruction versus MMSE filterbanks in source coding
IEEE Trans. Signal Processing, 1997
Cited by 14 (2 self)

Abstract
Classically, the filter banks (FBs) used in source coding schemes have been chosen to possess the perfect reconstruction (PR) property or to be maximally selective quadrature mirror filters (QMFs). This paper puts this choice back into question and solves the problem of minimizing the reconstruction distortion, which, in the most general case, is the sum of two terms: one due to the non-PR property of the FB, and the other due to signal quantization in the subbands. The resulting filter banks are called minimum mean square error (MMSE) filter banks. In this paper, several quantization noise models are considered. First, under the classical white noise assumption, the optimal positive bit rate allocation in any filter bank (possibly non-orthogonal) is expressed analytically, and an efficient optimization method for the MMSE filter banks is derived. Then, it is shown that while in a PR FB the improvement brought by an accurate noise model over the classical white noise one is noticeable, this is not the case for MMSE FBs. The optimization of the synthesis filters is also performed for two measures of the bit rate: the classical one, which is defined for uniform scalar quantization, and the order-one entropy measure. Finally, the comparison of rate-distortion curves (where the distortion is minimized for a given bit rate budget) enables us to quantify the SNR improvement brought by MMSE solutions.
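The bit allocation mentioned under the white-noise assumption has a classical closed form: each subband receives the average rate plus half the log ratio of its variance to the geometric mean of all variances, which equalizes the per-band distortion σ_k² 2^{-2b_k}. A sketch (the subband variances are illustrative; practical allocators also clip negative rates and round to integers, which this paper's positive-allocation result addresses):

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Classical high-rate bit allocation under the white quantization
    noise model: b_k = B/N + 0.5 * log2(var_k / geometric_mean)."""
    v = np.asarray(variances, dtype=float)
    geo_mean = np.exp(np.log(v).mean())
    return total_bits / len(v) + 0.5 * np.log2(v / geo_mean)

var = np.array([4.0, 1.0, 0.25, 0.0625])   # illustrative subband variances
b = allocate_bits(var, total_bits=16.0)
d = var * 4.0 ** -b                         # per-band distortion ~ var * 2**(-2b)
```

With these variances the allocation comes out to [5.5, 4.5, 3.5, 2.5] bits and every subband contributes the same distortion, which is the optimality condition.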
Asymptotic Distribution of the Errors in Scalar and Vector Quantizers
Cited by 12 (2 self)

Abstract
High-rate (or asymptotic) quantization theory has found formulas for the average squared length (more generally, the qth moment of the length) of the error produced by various scalar and vector quantizers with many quantization points. In contrast, this paper finds an asymptotic formula for the probability density of the length of the error and, in certain special cases, for the probability density of the multidimensional error vector itself. The latter can be used to analyze the distortion of two-stage vector quantization. The former permits one to learn about the point density and cell shapes of a quantizer from a histogram of quantization error lengths. Histograms of the error lengths in simulations agree well with the derived formulas. Also presented are a number of properties of the error density, including the relationship between the error density, the point density, and the cell shapes, and the fact that its qth moment equals Bennett's integral (a formula for the average distortion).
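For the simplest case, a fine uniform scalar quantizer, the error density is approximately uniform on [-Δ/2, Δ/2], so its qth absolute moment is Δ^q / ((q+1) 2^q) (which reduces to the familiar Δ²/12 for q = 2). A quick empirical check of this special case (step size, input, and the choice of q values are illustrative; the paper's results cover far more general quantizers):

```python
import numpy as np

# Empirical q-th moments of the scalar quantization error versus the
# uniform-density prediction delta**q / ((q + 1) * 2**q).
rng = np.random.default_rng(0)
delta = 0.02
x = rng.standard_normal(500_000)
e = delta * np.round(x / delta) - x              # quantization error

moments = {q: np.mean(np.abs(e) ** q) for q in (1, 2, 4)}
predicted = {q: delta ** q / ((q + 1) * 2 ** q) for q in (1, 2, 4)}
```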
Digital Background Correction of Harmonic Distortion in Pipelined ADCs
IEEE Transactions on Circuits and Systems I: Regular Papers, 2006
Cited by 11 (3 self)

Abstract
Pipelined analog-to-digital converters (ADCs) are sensitive to distortion introduced by the residue amplifiers in their first few stages. Unfortunately, residue amplifier distortion tends to be inversely related to power consumption in practice, so the residue amplifiers usually are the dominant consumers of power in high-resolution pipelined ADCs. This paper presents a background calibration technique that digitally measures and cancels ADC error arising from distortion introduced by the residue amplifiers. It allows the use of higher distortion and, therefore, lower power residue amplifiers in high-accuracy pipelined ADCs, thereby significantly reducing overall power consumption relative to conventional pipelined ADCs. Index Terms: analog-to-digital conversion, calibration, harmonic distortion, mixed analog–digital integrated circuits (ICs).
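The digital-cancellation half of the idea can be illustrated in isolation: suppose the background measurement has already produced a third-order coefficient a3 for a residue amplifier modeled as y = v + a3·v³ (the model, the coefficient value, and the first-order inverse below are illustrative assumptions, not the paper's measurement scheme):

```python
import numpy as np

# Toy digital correction of cubic amplifier distortion, assuming the
# coefficient a3 is already known from background calibration.
a3 = 0.02                               # illustrative third-order coefficient
v = np.linspace(-1.0, 1.0, 10_001)      # ideal residue values
y = v + a3 * v ** 3                     # amplifier output with cubic distortion

v_hat = y - a3 * y ** 3                 # first-order digital inverse

raw_err = np.max(np.abs(y - v))
corrected_err = np.max(np.abs(v_hat - v))
```

Because the residual after one inversion step scales as a3² rather than a3, even a modest digital correction relaxes the linearity (and hence power) demanded of the analog amplifier.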
Energy-Constrained Optimal Quantization for Wireless Sensor Networks
2008
Cited by 9 (0 self)

Abstract
As low power, low cost, and longevity of transceivers are major requirements in wireless sensor networks, optimizing their design under energy constraints is of paramount importance. To this end, we develop quantizers under strict energy constraints to effect optimal reconstruction at the fusion center. Propagation, modulation, as well as transmitter and receiver structures are jointly accounted for using a binary symmetric channel model. We first optimize quantization for reconstructing a single sensor's measurement, deriving the optimal number of quantization levels as well as the optimal energy allocation across bits. The constraints take into account not only the transmission energy but also the energy consumed by the transceiver's circuitry. Furthermore, we consider multiple sensors collaborating to estimate a deterministic parameter in noise. Similarly, the optimum energy allocation and optimum number of quantization bits are derived and tested with simulated examples. Finally, we study the effect of channel coding on the reconstruction performance under strict energy constraints and jointly optimize the number of quantization levels as well as the number of channel uses.
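The basic tradeoff behind the optimal number of bits can be sketched with a toy model: a fixed energy budget per measurement, minus a circuitry cost, is split evenly across the transmitted bits of a BPSK link, so more quantization bits mean finer resolution but a higher bit-error rate on the binary symmetric channel. All constants, the even energy split, and the natural-binary bit-flip distortion term below are illustrative assumptions (the paper optimizes the energy allocation across bits rather than splitting evenly):

```python
import math

def distortion(b, e_total, e_circuit, n0, w=1.0):
    """Toy end-to-end distortion for a b-bit uniform quantizer on
    [-w, w] sent over BPSK with the remaining energy split per bit."""
    e_bit = (e_total - e_circuit) / b                    # energy per bit
    p = 0.5 * math.erfc(math.sqrt(e_bit / n0))           # BPSK crossover prob.
    d_quant = (2 * w) ** 2 * 4.0 ** -b / 12              # quantization error
    d_channel = (2 * w) ** 2 * p * (1 - 4.0 ** -b) / 3   # bit-flip error
    return d_quant + d_channel

# Search the small discrete design space for the optimal bit count.
b_opt = min(range(1, 17),
            key=lambda b: distortion(b, e_total=40.0, e_circuit=8.0, n0=1.0))
```

Too few bits waste the budget on quantization error; too many starve each bit of energy, and the channel term dominates, so the minimum sits at an interior bit count.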
Image Coding with an L∞ Norm and Confidence Interval Criteria
IEEE TRANS. ON SIGNAL PROCESSING, 1998
Cited by 9 (1 self)

Abstract
A new image coding technique based on an L∞ norm criterion and exploiting statistical properties of the reconstruction error is investigated. The original image is preprocessed, quantized, encoded, and reconstructed within a given confidence interval. Two important classes of preprocessing, namely linear prediction and iterated filter banks, are used. The approach is also shown to be compatible with previous techniques. The approach allows great flexibility in that it can perform lossless coding as well as controlled lossy coding: specifications are typically that only p% of the reconstructed pixels differ from the original ones.
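The paper's criterion is statistical (a confidence interval on how many pixels may differ). A simpler, related idea is easy to sketch: predictive coding with residual quantization step 2δ+1 gives a hard L∞ guarantee that every integer pixel is reconstructed within ±δ, with δ = 0 degenerating to lossless coding. The 1-D coder below is an illustrative variant of that hard-bound scheme, not the paper's algorithm:

```python
import numpy as np

def nearlossless_dpcm(pixels, delta):
    """Toy 1-D near-lossless predictive coder: predict each sample by
    the previous reconstruction and quantize the integer residual with
    step 2*delta + 1, guaranteeing |reconstruction - original| <= delta."""
    step = 2 * delta + 1
    recon = np.empty_like(pixels)
    prev = 0
    codes = []
    for i, p in enumerate(pixels):
        r = p - prev                     # prediction residual (integer)
        q = int(np.round(r / step))      # transmitted symbol
        codes.append(q)
        prev = prev + q * step           # decoder-side reconstruction
        recon[i] = prev
    return np.array(codes), recon

rng = np.random.default_rng(0)
img_row = rng.integers(0, 256, 1000)     # illustrative row of 8-bit pixels
codes, recon = nearlossless_dpcm(img_row, delta=2)
max_err = np.max(np.abs(recon - img_row))
```

Predicting from the *reconstructed* (not original) previous sample is what keeps the encoder and decoder in lockstep, so the error never accumulates beyond δ.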