Results 1–10 of 81
Quantization
 IEEE TRANS. INFORM. THEORY
, 1998
Abstract

Cited by 877 (12 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
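As a concrete illustration of the fundamentals surveyed in this paper (a minimal sketch of our own, not taken from the survey): a b-bit uniform mid-rise quantizer applied to a full-scale uniform input exhibits the classical high-resolution behavior of roughly 6.02 dB of signal-to-quantization-noise ratio per bit.

```python
import numpy as np

def uniform_quantize(x, n_bits, full_scale=1.0):
    """Mid-rise uniform quantizer with 2**n_bits levels on [-full_scale, full_scale)."""
    step = 2.0 * full_scale / 2 ** n_bits
    codes = np.clip(np.floor(x / step), -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1)
    return (codes + 0.5) * step               # reconstruct at cell midpoints

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)           # full-scale uniform test signal
for n_bits in (4, 8, 12):
    err = x - uniform_quantize(x, n_bits)
    sqnr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{n_bits:2d} bits: SQNR = {sqnr_db:5.1f} dB")  # ~ 6.02 dB per bit
```

For a full-scale uniform input, signal power is step-independent while error power scales as step²/12, which is where the 6.02 dB/bit rule comes from.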
Distributed parameter estimation in sensor networks: Nonlinear observation models and imperfect communication
 IEEE Transactions on Information Theory
, 2012
Perfect Recovery and Sensitivity Analysis of Time Encoded Bandlimited Signals
 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: REGULAR PAPERS
, 2004
Abstract

Cited by 36 (21 self)
A time encoding machine is a real-time asynchronous mechanism for encoding amplitude information into a time sequence. We investigate the operating characteristics of a machine consisting of a feedback loop containing an adder, a linear filter, and a noninverting Schmitt trigger. We show that the amplitude information of a bandlimited signal can be perfectly recovered if the difference between any two consecutive values of the time sequence is bounded by the inverse of the Nyquist rate. We also show how to build a nonlinear inverse time decoding machine (TDM) that perfectly recovers the amplitude information from the time sequence. We demonstrate the close relationship between the recovery algorithms for time encoding and irregular sampling. We also show the close relationship between time encoding and a number of nonlinear modulation schemes including FM and asynchronous sigma–delta modulation. We analyze the sensitivity of the time encoding recovery algorithm and demonstrate how to construct a TDM that perfectly recovers the amplitude information from the time sequence and is trigger-parameter insensitive. We derive bounds on the error in signal recovery introduced by the quantization of the time sequence. We compare these with the recovery error introduced by the quantization of the amplitude of the bandlimited signal when irregular sampling is employed. Under Nyquist-type rate conditions, quantization of a bandlimited signal in the time and amplitude domains are shown to be largely equivalent methods of information representation.
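The feedback loop described in this abstract can be simulated directly. The sketch below uses the standard integrate-and-fire form of such a loop (parameter names kappa, delta, b and their values are our own choices, not the paper's) and checks numerically that consecutive trigger times pin down the integral of the input between them, which is the property that makes perfect recovery possible.

```python
import numpy as np

# Hypothetical loop parameters: integrator constant kappa, Schmitt-trigger
# threshold delta, trigger output levels +/-b, with |x(t)| < b for stability.
kappa, delta, b = 1.0, 0.05, 1.0
dt = 1e-5
t = np.arange(0.0, 2.0, dt)
x = 0.5 * np.sin(2 * np.pi * 3.0 * t)          # bandlimited test signal

y, z = 0.0, b                                   # integrator state, trigger state
trigger_times = []
for ti, xi in zip(t, x):
    y += (xi + z) * dt / kappa                  # adder feeding the integrator
    if z == b and y >= delta:                   # Schmitt trigger flips at +delta
        z = -b
        trigger_times.append(ti)
    elif z == -b and y <= -delta:               # ... and flips back at -delta
        z = b
        trigger_times.append(ti)

# Between consecutive trigger times the integral of x is fixed by the loop:
#   int_{t_k}^{t_{k+1}} x = (-1)^k (2*kappa*delta - b*(t_{k+1} - t_k)),
# with the overall sign set by the trigger state on the first interval
# (here z = -b after the first flip).
tk = np.array(trigger_times)
T = np.diff(tk)
X = np.cumsum(x) * dt                           # running integral of x on the grid
idx = np.rint(tk / dt).astype(int)
int_x = X[idx[1:]] - X[idx[:-1]]
pred = np.array([(-1) ** (k + 1) * (2 * kappa * delta - b * T[k])
                 for k in range(len(T))])
max_err = np.max(np.abs(int_x - pred))
print(len(tk), "trigger times, identity error:", max_err)
```

The decoding side (not shown) inverts exactly this identity: knowing the integrals of x over the inter-trigger intervals, x can be recovered by irregular-sampling techniques when the intervals satisfy the Nyquist-type condition.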
Digital Cancellation of D/A Converter Noise in Pipelined A/D Converters
, 2000
Abstract

Cited by 28 (4 self)
Pipelined analog-to-digital converters (ADCs) tend to be sensitive to component mismatches in their internal digital-to-analog converters (DACs). The component mismatches give rise to error, referred to as DAC noise, which is not attenuated or cancelled along the pipeline as are other types of noise. This paper describes an all-digital technique that significantly mitigates this problem. The technique continuously measures and cancels the portion of the ADC error arising from DAC noise during normal operation of the ADC, so no special calibration signal or autocalibration phase is required. The details of the technique are described in the context of a nominal 14-bit pipelined ADC example at both the signal processing and register transfer levels. Through this example, the paper demonstrates that in the presence of realistic component matching limitations the technique can improve the overall ADC accuracy by several bits with only moderate digital hardware complexity.
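The premise that DAC error reaches the output unattenuated is easy to see in a toy model. The sketch below is illustrative only and does not implement the paper's cancellation technique: a single 1.5-bit pipelined stage with a mismatched DAC, followed by an ideal backend, reproduces the DAC mismatch directly in the digital output.

```python
import numpy as np

rng = np.random.default_rng(1)
dac_levels = np.array([-0.5, 0.0, 0.5])         # ideal 1.5-bit DAC levels
dac_error = 1e-3 * rng.standard_normal(3)       # hypothetical component mismatch

def stage(x):
    """One 1.5-bit pipelined stage: sub-ADC decision, DAC subtraction, gain of 2."""
    d = np.digitize(x, [-0.25, 0.25])           # sub-ADC code in {0, 1, 2}
    residue = 2.0 * (x - (dac_levels[d] + dac_error[d]))
    return d, residue

x = rng.uniform(-0.7, 0.7, 10_000)
d, residue = stage(x)
x_hat = dac_levels[d] + residue / 2.0           # ideal backend digitizes the residue
out_err = x_hat - x                             # equals -dac_error[d]: the DAC noise
                                                # passes to the output unattenuated
print(np.unique(np.round(out_err, 9)))
```

Sub-ADC decision errors, by contrast, are absorbed by the redundancy of the 1.5-bit code; only the DAC mismatch survives, which is why the paper targets it specifically.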
Automatic Evaluation of the Accuracy of Fixed-point Algorithms
 in Proc. ACM/IEEE Design, Automation and Test in Europe Conf.
, 2002
Abstract

Cited by 26 (5 self)
The minimization of cost, power consumption and time-to-market of DSP applications requires the development of methodologies for the automatic implementation of floating-point algorithms in fixed-point architectures. In this paper, a new methodology for evaluating the quality of an implementation through the automatic determination of the Signal to Quantization Noise Ratio (SQNR) is presented. The theoretical concepts and the different phases of the methodology are explained. Then, the ability of our approach to compute the SQNR efficiently, and its beneficial contribution to the process of data word-length minimization, are shown through some examples.
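A brute-force simulation counterpart of what the paper determines analytically (the filter, formats, and helper below are our own hypothetical example): run the same FIR filter in double precision and in a Q1.15 fixed-point model, then estimate the SQNR from the two outputs.

```python
import numpy as np

def to_q15(v):
    """Quantize values in (-1, 1) to hypothetical Q1.15 integers with saturation."""
    return np.clip(np.round(v * 2 ** 15), -2 ** 15, 2 ** 15 - 1).astype(np.int64)

rng = np.random.default_rng(2)
taps = rng.uniform(-0.2, 0.2, 16)               # example FIR coefficients
x = rng.uniform(-0.9, 0.9, 50_000)              # example input signal

y_float = np.convolve(x, taps, mode="valid")    # floating-point reference

# Fixed point: Q1.15 inputs and taps, products accumulated exactly (as in a
# wide hardware accumulator), result rounded back to Q1.15.
acc = np.convolve(to_q15(x), to_q15(taps), mode="valid")
y_fixed = np.round(acc / 2 ** 15) / 2 ** 15

noise = y_float - y_fixed
sqnr_db = 10 * np.log10(np.mean(y_float ** 2) / np.mean(noise ** 2))
print(f"SQNR = {sqnr_db:.1f} dB")
```

Simulation like this is simple but slow for each candidate word-length; the point of the paper is to obtain the SQNR analytically so that word-length minimization does not require re-running long simulations.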
A Framework for Control System Design Subject to Average DataRate Constraints
Abstract

Cited by 20 (10 self)
This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be deployed in the feedback path. For this situation, and by focusing on a class of source-coding schemes built around entropy-coded dithered quantizers, we develop a framework to deal with average data-rate constraints in a tractable manner that combines ideas from both information and control theories. As an illustration of the uses of our framework, we apply it to study the interplay between stability and average data-rates in the considered architecture. It is shown that the proposed class of coding schemes can achieve mean-square stability at average data-rates that are, at most, 1.254 bits per sample away from the absolute minimum rate for stability established by Nair and Evans. This rate penalty is compensated by the simplicity of our approach.
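The property of entropy-coded dithered quantizers that makes the analysis tractable can be checked numerically (a sketch under standard assumptions; the step size and input statistics below are arbitrary): with subtractive dither uniform over one quantization cell, the end-to-end error is uniform and statistically independent of the input, so quantization behaves exactly like additive noise.

```python
import numpy as np

rng = np.random.default_rng(3)
step = 0.25                                    # arbitrary quantizer step
x = rng.standard_normal(100_000)               # arbitrary input statistics
d = rng.uniform(-step / 2, step / 2, x.size)   # subtractive dither, one cell wide
q = step * np.round((x + d) / step)            # uniform quantizer on dithered input
err = q - d - x                                # end-to-end (subtractive-dither) error

print(np.corrcoef(x, err)[0, 1])               # ~0: error uncorrelated with input
print(err.var(), step ** 2 / 12)               # error variance matches step^2/12
```

This is the classical subtractive-dither result; the paper layers entropy coding and the control-loop analysis on top of it.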
Digital Background Correction of Harmonic Distortion in Pipelined ADCs
 IEEE Transactions on Circuits and Systems I: Regular Papers
, 2006
Abstract

Cited by 18 (3 self)
Pipelined analog-to-digital converters (ADCs) are sensitive to distortion introduced by the residue amplifiers in their first few stages. Unfortunately, residue amplifier distortion tends to be inversely related to power consumption in practice, so the residue amplifiers usually are the dominant consumers of power in high-resolution pipelined ADCs. This paper presents a background calibration technique that digitally measures and cancels ADC error arising from distortion introduced by the residue amplifiers. It allows the use of higher distortion and, therefore, lower power residue amplifiers in high-accuracy pipelined ADCs, thereby significantly reducing overall power consumption relative to conventional pipelined ADCs. Index Terms—Analog-to-digital conversion, calibration, harmonic distortion, mixed analog–digital integrated circuits (ICs).
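To see why digital correction of amplifier distortion is plausible at all (this toy model is ours, not the paper's calibration technique, and the gain and distortion coefficients are made up): a memoryless third-order nonlinearity with a known coefficient can be inverted in the digital domain with a single correction term.

```python
import numpy as np

# Hypothetical residue amplifier: v = g * (a + c3 * a**3), with the third-order
# coefficient c3 assumed already measured (the paper measures it in background).
g, c3 = 2.0, 0.02
a = np.linspace(-0.5, 0.5, 1001)               # residue amplifier input range
v = g * (a + c3 * a ** 3)                      # distorted output, as digitized

a_hat = v / g - c3 * (v / g) ** 3              # one-step digital correction
print(np.max(np.abs(a_hat - a)))               # residual error is O(c3**2)
```

The residual after one correction step is of order c3² (here below 1e-4), so a digitally measured distortion coefficient is enough to linearize a mildly nonlinear, low-power amplifier.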
Alternative dual frames for digital-to-analog conversion in sigma-delta quantization
 Adv. Comput. Math
Abstract

Cited by 17 (3 self)
We design alternative dual frames for linearly reconstructing signals from Sigma-Delta (Σ∆) quantized finite frame coefficients. In the setting of sampling expansions for bandlimited functions, it is known that a stable rth-order Sigma-Delta quantizer produces approximations where the approximation error is at most of order 1/λ^r, and λ > 1 is the oversampling ratio. We show that the counterpart of this result is not true for several families of redundant finite frames for R^d when the canonical dual frame is used in linear reconstruction. As a remedy, we construct alternative dual frame sequences which enable an rth-order Sigma-Delta quantizer to achieve approximation error of order 1/N^r for certain sequences of frames, where N is the frame size. We also present several numerical examples regarding the constructions.
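A quick numerical sketch of the setting (the frame family, step size, and test vector are our own choices, not the paper's constructions): quantizing the coefficients of a vector in R² against the tight frame of N equally spaced unit vectors with a stable first-order Σ∆ scheme, and reconstructing with the canonical dual frame (2/N times the frame itself), gives reconstruction error decaying like 1/N.

```python
import numpy as np

def sd_error(N, x, step=0.25):
    """Reconstruction error of first-order Sigma-Delta + canonical dual frame."""
    ang = 2 * np.pi * np.arange(N) / N
    frame = np.stack([np.cos(ang), np.sin(ang)], axis=1)   # rows e_n, unit norm
    y = frame @ x                                          # frame coefficients
    u, q = 0.0, np.empty(N)
    for n in range(N):                                     # first-order Sigma-Delta:
        q[n] = step * np.round((u + y[n]) / step)          #   quantize state + input
        u = u + y[n] - q[n]                                #   update keeps |u|<=step/2
    x_hat = (2.0 / N) * frame.T @ q                        # canonical dual recon
    return np.linalg.norm(x - x_hat)

x = np.array([0.3, -0.2])
print([round(sd_error(N, x), 5) for N in (64, 256, 1024)])  # error shrinks ~1/N
```

For r = 1 the canonical dual already achieves the 1/N rate; the paper's point is that for higher r the canonical dual fails to deliver 1/N^r for several frame families, which motivates the alternative duals.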
Perfect reconstruction versus MMSE filter banks in source coding
 IEEE Trans. Signal Processing
, 1997
Abstract

Cited by 16 (2 self)
Classically, the filter banks (FBs) used in source coding schemes have been chosen to possess the perfect reconstruction (PR) property or to be maximally selective quadrature mirror filters (QMFs). This paper puts this choice back into question and solves the problem of minimizing the reconstruction distortion, which, in the most general case, is the sum of two terms: one due to the non-PR property of the FB, and the other due to signal quantization in the subbands. The resulting filter banks are called minimum mean square error (MMSE) filter banks. In this paper, several quantization noise models are considered. First, under the classical white noise assumption, the optimal positive bit rate allocation in any filter bank (possibly non-orthogonal) is expressed analytically, and an efficient optimization method for the MMSE filter banks is derived. Then, it is shown that while in a PR FB the improvement brought by an accurate noise model over the classical white noise one is noticeable, this is not the case for MMSE FBs. The optimization of the synthesis filters is also performed for two measures of the bit rate: the classical one, which is defined for uniform scalar quantization, and the order-one entropy measure. Finally, the comparison of rate-distortion curves (where the distortion is minimized for a given bit rate budget) enables us to quantify the SNR improvement brought by MMSE solutions.
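For reference, the classical high-resolution bit-allocation rule that analyses like this build on (a textbook sketch; the subband variances below are made up): with subband variances σ_k² and a total budget of B bits over M subbands, the unconstrained optimum assigns each subband the average rate plus half the log-ratio of its variance to the geometric mean.

```python
import numpy as np

var = np.array([4.0, 1.0, 0.25, 0.0625])      # hypothetical subband variances
B, M = 16, var.size                           # total bit budget, number of subbands

gm = np.exp(np.mean(np.log(var)))             # geometric mean of the variances
b = B / M + 0.5 * np.log2(var / gm)           # classical optimal allocation
print(b, b.sum())                             # allocation sums to the budget B
```

Note this formula can return negative rates for weak subbands; handling the positivity constraint explicitly, and doing so for possibly non-orthogonal filter banks, is part of what the paper contributes.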