Results 1–10 of 24
Quantization
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 883 (12 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
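The high-resolution behavior the survey analyzes can be seen numerically. The following is a minimal sketch (ours, not the survey's): a uniform midrise quantizer illustrating the rule of thumb that each extra bit of resolution buys roughly 6 dB of signal-to-quantization-noise ratio; the loading factor `xmax` is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # unit-variance Gaussian source

def uniform_quantize(x, bits, xmax=4.0):
    """Clip to [-xmax, xmax) and map to 2**bits uniform reconstruction levels."""
    step = 2 * xmax / 2 ** bits
    xq = np.clip(x, -xmax, xmax - 1e-12)
    return (np.floor(xq / step) + 0.5) * step

snr_db = {}
for bits in (4, 6, 8):
    noise = np.mean((x - uniform_quantize(x, bits)) ** 2)
    snr_db[bits] = 10 * np.log10(np.var(x) / noise)
    print(f"{bits} bits: SNR = {snr_db[bits]:.1f} dB")
```

Each two-bit increase in resolution shows up as roughly a 12 dB gain, consistent with the high-resolution analysis traced back to Bennett.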
Successive refinement of information
 Applications
, 1989
Abstract

Cited by 218 (0 self)
Abstract—The successive refinement of information consists of first approximating data using a few bits of information, then iteratively improving the approximation as more and more information is supplied. The goal is to achieve an optimal description at each stage. In general, an ongoing description is sought which is rate-distortion optimal whenever it is interrupted. It is shown that a rate-distortion problem is successively refinable if and only if the individual solutions of the rate-distortion problems can be written as a Markov chain. This implies in particular that tree-structured descriptions are optimal if and only if the rate-distortion problem is successively refinable. Successive refinement is shown to be possible for all finite-alphabet signals with Hamming distortion, for Gaussian signals with squared-error distortion, and for Laplacian signals with absolute-error distortion. However, a simple counterexample with absolute-error distortion and a symmetric source distribution shows that successive refinement is not always achievable. Index Terms—Rate distortion, refinement, progressive transmission, multiuser information theory, squared-error distortion, tree structure.
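The Gaussian case stated in the abstract can be made concrete with the standard squared-error rate-distortion function (our sketch of the well-known special case, not taken from the paper):

```latex
% Gaussian source, squared-error distortion: R(D) = \tfrac{1}{2}\log_2(\sigma^2/D).
% A two-stage description spending rate R_1 first and R_2 afterwards achieves
D_1 = \sigma^2\, 2^{-2R_1}, \qquad
D_2 = D_1\, 2^{-2R_2} = \sigma^2\, 2^{-2(R_1 + R_2)} .
% Both (R_1, D_1) and (R_1 + R_2, D_2) lie on the rate-distortion curve,
% so the Gaussian source with squared error is successively refinable.
```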
Lossy Source Coding
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 103 (1 self)
Lossy coding of speech, high-quality audio, still images, and video is commonplace today. However, in 1948, few lossy compression systems were in service. Shannon introduced and developed the theory of source coding with a fidelity criterion, also called rate-distortion theory. For the first 25 years of its existence, rate-distortion theory had relatively little impact on the methods and systems actually used to compress real sources. Today, however, rate-distortion theoretic concepts are an important component of many lossy compression techniques and standards. We chronicle the development of rate-distortion theory and provide an overview of its influence on the practice of lossy source coding. Index Terms—Data compression, image coding, speech coding, rate-distortion theory, signal coding, source coding with a fidelity criterion, video coding.
Coordination Capacity
, 2009
Abstract

Cited by 48 (17 self)
We develop elements of a theory of cooperation and coordination in networks. Rather than considering a communication network as a means of distributing information, or of reconstructing random processes at remote nodes, we ask what dependence can be established among the nodes given the communication constraints. Specifically, in a network with communication rates {Ri,j} between the nodes, we ask what is the set of all achievable joint distributions p(x1,..., xm) of actions at the nodes on the network. Several networks are solved, including arbitrarily large cascade networks. Distributed cooperation can be the solution to many problems such as distributed games, distributed control, and establishing mutual information bounds on the influence of one part of a physical system on another.
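In the simplest instance of this question, two nodes joined by a single link of rate R, the achievable empirical joint distributions of actions are, as we read the setup, exactly those with I(X;Y) ≤ R. A quick numeric sketch (the function below is ours, for illustration):

```python
import math

def mutual_information(p_xy):
    """I(X;Y) in bits for a joint pmf given as a nested list p_xy[x][y]."""
    px = [sum(row) for row in p_xy]
    py = [sum(col) for col in zip(*p_xy)]
    mi = 0.0
    for i, row in enumerate(p_xy):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

# Perfectly correlated binary actions need a full bit per action:
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))
# Independent actions need no communication at all:
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))
```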
Optimality and approximate optimality of source-channel separation in networks
 in Proc. IEEE Int. Symp. Inf. Theory
, 2010
Abstract

Cited by 17 (1 self)
Abstract—We consider the source-channel separation architecture for lossy source coding in communication networks. It is shown that the separation approach is optimal in two general scenarios and is approximately optimal in a third scenario. The two scenarios for which separation is optimal complement each other: the first is when the memoryless sources at source nodes are arbitrarily correlated, each of which is to be reconstructed at possibly multiple destinations within certain distortions, but the channels in this network are synchronized, orthogonal, and memoryless point-to-point channels; the second is when the memoryless sources are mutually independent, each of which is to be reconstructed only at one destination within a certain distortion, but the channels are general, including multiuser channels, such as multiple access, broadcast, interference, and relay channels, possibly with feedback. The third scenario, for which we demonstrate approximate optimality of source-channel separation, generalizes the second scenario by allowing each source to be reconstructed at multiple destinations with different distortions. For this case, the loss from optimality using the separation approach can be upper-bounded when a difference distortion measure is taken, and in the special case of quadratic distortion measure, this leads to universal constant bounds. Index Terms—Joint source-channel coding, separation.
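The point-to-point special case behind the separation architecture is easy to verify numerically (a hedged sketch of the textbook case, not the paper's network setting): for a unit-variance Gaussian source sent over an AWGN channel at one channel use per source symbol, source coding at rate C = (1/2) log2(1 + snr) followed by capacity-achieving channel coding yields distortion 2^(-2C), which matches the optimum 1/(1 + snr).

```python
import math

def capacity(snr):
    """AWGN channel capacity in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

def separation_distortion(snr, var=1.0):
    """Distortion of separate source/channel coding for a Gaussian source."""
    return var * 2 ** (-2 * capacity(snr))

for snr in (1.0, 3.0, 15.0):
    print(snr, separation_distortion(snr), 1 / (1 + snr))
```

Algebraically, 2^(-2C) = 1/(1 + snr), so in this matched case separation loses nothing.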
Lossy Source Coding for a Cascade Communication System with Side Informations
Abstract

Cited by 15 (0 self)
Abstract—We investigate source coding in a cascade communication system consisting of an encoder, a relay, and an end terminal, where both the relay and the end terminal wish to reconstruct source X with certain fidelities. Additionally, side informations Z and Y are available at the relay and the end terminal, respectively. The side information Z at the relay is a physically degraded version of side information Y at the end terminal. Inner and outer bounds for the rate-distortion region are provided in this work for general discrete memoryless sources. The rate-distortion region is characterized when the source and side informations are jointly Gaussian and physically degraded. The doubly symmetric binary source is also investigated and the inner and outer bounds are shown to coincide in certain distortion regimes.
Communication in Networks for Coordinating Behavior
, 2009
Multiterminal source coding with complementary delivery
, 2008
Abstract

Cited by 8 (2 self)
A coding problem for correlated information sources is investigated. Messages emitted from two correlated sources are jointly encoded and delivered to two decoders. Each decoder has access to one of the two messages to enable it to reproduce the other message. The rate-distortion function for the coding problem and its interesting properties are clarified.
On source coding with coded side information for a binary source with binary side information
 in Proc. ISIT
, 2007
Abstract

Cited by 7 (3 self)
Abstract—The lossless rate region for the coded side information problem is “solved,” but its solution is expressed in terms of an auxiliary random variable. As a result, finding the rate region for any fixed example requires an optimization over a family of allowed auxiliary random variables. While intuitive constructions are easy to come by and optimal solutions are known under some special conditions, proving the optimal solution is surprisingly difficult even for examples as basic as a binary source with binary side information. We derive the optimal auxiliary random variables and corresponding achievable rate regions for a family of problems where both the source and side information are binary. Our solution involves first tightening known bounds on the alphabet size of the auxiliary random variable and then optimizing the auxiliary random variable subject to this constraint. The technique used to tighten the bound on the alphabet size applies to a variety of problems beyond the one studied here.
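The auxiliary-variable construction can be illustrated with the standard symmetric choice (our sketch, not the paper's derivation): for a Bern(1/2) source X with side information Y = X xor Bern(p), the auxiliary channel U = Y xor Bern(q) yields the achievable rate pair (R_Y, R_X) = (1 - h(q), h(p * q)), where h is binary entropy and p * q is the binary convolution.

```python
import math

def h(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def conv(p, q):
    """Binary convolution p * q: crossover of two cascaded BSCs."""
    return p * (1 - q) + q * (1 - p)

def rate_pair(p, q):
    """(helper rate I(Y;U), source rate H(X|U)) for U = Y xor Bern(q)."""
    return 1 - h(q), h(conv(p, q))

p = 0.1
for q in (0.0, 0.1, 0.25, 0.5):
    r_y, r_x = rate_pair(p, q)
    print(q, round(r_y, 3), round(r_x, 3))
```

Sweeping q from 0 to 1/2 traces a curve from (1, h(p)) (side information fully described) to (0, 1) (no help at all); the paper's contribution is proving when constructions like this one are actually optimal.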
Source coding theory for a triangular communication system
 IEEE Trans. Inf. Theory
, 1996
Cited by 6 (0 self)