Results 1–10 of 10
Universal Discrete Denoising: Known Channel
 IEEE Trans. Inform. Theory, 2003
Abstract

Cited by 79 (32 self)
A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary and ergodic. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence and the randomness is due solely to the channel noise.
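The scheme described here (the DUDE) works in two passes: a first pass collects, for each two-sided context, the empirical counts of the center symbol in the noisy sequence, and a second pass applies a context-dependent estimation rule derived from the known channel. A minimal sketch in Python, assuming the standard rule x̂ = argmin_a m(ctx)ᵀ Π⁻ᵀ (λ_a ⊙ π_z) with an invertible channel matrix Π and loss matrix Λ; the function name and toy usage are illustrative, not the paper's exact specification:

```python
import numpy as np
from collections import defaultdict

def dude(z, Pi, Lam, k):
    """Two-pass discrete denoiser sketch.
    z   : observed sequence (ints in 0..A-1)
    Pi  : A x A channel matrix, Pi[x, y] = P(y | x), assumed invertible
    Lam : A x A loss matrix, Lam[x, xhat]
    k   : one-sided length of the two-sided context
    """
    n, A = len(z), Pi.shape[0]
    Pi_inv_T = np.linalg.inv(Pi).T
    # Pass 1: count how often each symbol appears in each two-sided context.
    counts = defaultdict(lambda: np.zeros(A))
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        counts[ctx][z[i]] += 1
    # Pass 2: replace each symbol by the context-dependent minimizer of the
    # estimated expected loss: argmin_a m(ctx)^T Pi^{-T} (Lam[:, a] * Pi[:, z_i]).
    xhat = np.array(z)
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        m = counts[ctx]
        scores = [m @ Pi_inv_T @ (Lam[:, a] * Pi[:, z[i]]) for a in range(A)]
        xhat[i] = int(np.argmin(scores))
    return xhat
```

On a binary symmetric channel with Hamming loss, for instance, this rule corrects isolated flips in a mostly constant sequence, since the context counts overwhelmingly favor the majority symbol.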
Universal minimax discrete denoising under channel uncertainty
 IEEE Trans. Inform. Theory, 2006
Abstract

Cited by 10 (3 self)
The goal of a denoising algorithm is to recover a signal from its noise-corrupted observations. Perfect recovery is seldom possible, and performance is measured under a given single-letter fidelity criterion. For discrete signals corrupted by a known DMC, a denoising scheme, the DUDE algorithm, was recently shown to perform this task practically and asymptotically optimally, with no knowledge of the statistical properties of the signal. In the present work we address the scenario where, in addition to the lack of knowledge of the source statistics, there is also uncertainty in the channel characteristics. We propose a family of discrete denoisers and establish their asymptotic optimality under a minimax criterion that we argue is appropriate for this setting. The proposed schemes can be implemented efficiently.
The empirical distribution of rate-constrained source codes
 IEEE Trans. Inform. Theory
Abstract

Cited by 7 (2 self)
Let X = (X1, ...) be a stationary ergodic finite-alphabet source, let X^n denote its first n symbols, and let Y^n be the codeword assigned to X^n by a lossy source code. The empirical kth-order joint distribution Q̂_k[X^n, Y^n](x^k, y^k) is defined as the frequency of appearances of pairs of k-strings (x^k, y^k) along the pair (X^n, Y^n). Our main interest is in the sample behavior of this (random) distribution. Letting I(Q_k) denote the mutual information I(X^k; Y^k) when (X^k, Y^k) ~ Q_k, we show that for any (sequence of) lossy source code(s) of rate ≤ R, lim sup_{n→∞} (1/k) I(Q̂_k[X^n, Y^n]) ≤ R.
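The objects in this abstract are straightforward to compute from data. A small Python sketch of the empirical kth-order joint distribution of a source/codeword pair and of the mutual information it induces (function names are mine; k-blocks slide by one position, as the frequency-of-appearances definition suggests):

```python
import numpy as np
from collections import Counter

def empirical_joint(x, y, k):
    """Empirical kth-order joint distribution Q_k[x^n, y^n]: the frequency of
    each pair of aligned k-strings (x^k, y^k) along the pair (x^n, y^n)."""
    n = len(x)
    pairs = ((tuple(x[i:i + k]), tuple(y[i:i + k])) for i in range(n - k + 1))
    counts = Counter(pairs)
    total = n - k + 1
    return {pair: c / total for pair, c in counts.items()}

def mutual_information_bits(Q):
    """I(X^k; Y^k) in bits when (X^k, Y^k) ~ Q, via the marginals of Q."""
    px, py = Counter(), Counter()
    for (xk, yk), p in Q.items():
        px[xk] += p
        py[yk] += p
    return sum(p * np.log2(p / (px[xk] * py[yk])) for (xk, yk), p in Q.items())
```

As a sanity check, feeding the same balanced binary sequence in as both x and y gives I(Q_1) = 1 bit, the entropy of the marginal.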
Universally Attainable Error Exponents for Rate-Constrained Denoising of Noisy Sources
, 2002
Abstract

Cited by 2 (2 self)
Consider the problem of rate-constrained reconstruction of a finite-alphabet discrete memoryless signal X = (X1, ..., Xn), based on a noise-corrupted observation sequence Z, which is the finite-alphabet output of a discrete memoryless channel (DMC) whose input is X. Suppose that there is some uncertainty in the source distribution, in the channel characteristics, or in both. Equivalently, suppose that the distribution of the pairs (X_i, Z_i), rather than being completely known, is only known to belong to a given set of distributions. Suppose further that the relevant performance criterion is the probability of excess distortion: letting X̂ denote the reconstruction, we are interested in the probability, under each distribution in the uncertainty set, that the (normalized) block distortion induced by a single-letter distortion measure exceeds a given level.
Estimation from misaligned observations with limited feedback
 In 39th Conference on Information Sciences and Systems (CISS), 2005
Abstract

Cited by 2 (2 self)
Abstract — In remote sensing applications with large numbers of sensors, the global correlation structure of the sensor observations may not be known locally at the sensors. However, some knowledge of this correlation structure can enable the sensors to communicate collaboratively to avoid interference and combat channel noise. A model is proposed for the uncertainty in correlation structure, called a fading observation model. An example of multiplicative fading observations with Gaussian sources is analyzed. For this example, the end-to-end distortion under one scheme scales to 0 with the number of sensors M like M^{-1} without fading, but with fading the same scheme induces a distortion that is bounded away from 0. A single-bit feedback scheme is proposed to align the sensor observations, yielding a scaling rate of M^{-1/3}. A more complicated class of fading observations for Gaussian sources is described, suggesting that relative phase uncertainty is the dominant source of misalignment.
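The qualitative gap described here can be illustrated with a toy Monte Carlo experiment (an illustration of the effect only, not the paper's scheme or its M^{-1/3} feedback protocol): averaging M noisy observations of a Gaussian source drives the MSE down like M^{-1} when the observation gains are known, while unknown ±1 multiplicative fading leaves the same averaging estimator's distortion bounded away from 0.

```python
import numpy as np

def average_mse(M, fading, trials=2000, seed=0):
    """Monte Carlo MSE of the naive averaging estimator of a standard Gaussian
    source observed by M sensors in unit additive noise, with or without
    unknown +/-1 multiplicative fading on each sensor's observation."""
    rng = np.random.default_rng(seed)
    err = 0.0
    for _ in range(trials):
        s = rng.standard_normal()                       # source sample
        h = rng.choice([-1.0, 1.0], size=M) if fading else np.ones(M)
        x = h * s + rng.standard_normal(M)              # sensor observations
        err += (x.mean() - s) ** 2                      # naive average estimate
    return err / trials
```

With M = 100, the no-fading MSE comes out near 1/M, while the fading MSE stays near E[S²] = 1, since the random signs of the gains nearly cancel the signal in the average.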
Source Coding With Limited-Lookahead Side Information at the Decoder
Abstract

Cited by 1 (0 self)
Abstract—We characterize the rate distortion function for the source coding with decoder side information setting when the ith reconstruction symbol is allowed to depend only on the first i + ℓ side information symbols, for some finite lookahead ℓ, in addition to the index from the encoder. For the case of causal side information, i.e., ℓ = 0, we find that the penalty of causality is the omission of the subtracted mutual information term in the Wyner–Ziv rate distortion function. For ℓ > 0, we derive a computable "infinite-letter" expression for the rate distortion function. When specialized to the near-lossless case, our results characterize the best achievable rate for the Slepian–Wolf source coding problem with finite side information lookahead, and have some surprising implications. We find that side information is useless for any fixed ℓ when the joint probability mass function (PMF) of the source and side information satisfies the positivity condition P(x, y) > 0 for all (x, y). More generally, the optimal rate depends on the distribution of the pair (X, Y) only through the distribution of X and the bipartite graph whose edges represent the pairs (x, y) for which P(x, y) > 0. On the other hand, if the side information lookahead is allowed to grow faster than logarithmically in the block length, then H(X | Y) is achievable. Finally, we apply our approach to derive a computable expression for channel capacity when state information is available at the encoder with limited lookahead. Index Terms—Causal source codes, delay-constrained coding, Gel'fand–Pinsker channel, rate distortion function, Slepian–Wolf coding, Wyner–Ziv coding.
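The two rates contrasted in this abstract are easy to compute for a given joint PMF: H(X) is what remains achievable under any fixed lookahead when the positivity condition holds, while H(X | Y) is what unbounded (super-logarithmic) lookahead buys. A small sketch computing both quantities and the positivity check (function names are mine):

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a pmf given as a nonnegative array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def lookahead_rates(P):
    """Given a joint pmf P[x, y], return (H(X), H(X|Y), positivity), where
    positivity = True means P(x, y) > 0 for every pair -- the condition under
    which, per the abstract, any fixed lookahead gives no savings over H(X)."""
    P = np.asarray(P, dtype=float)
    H_x = entropy_bits(P.sum(axis=1))                 # marginal entropy of X
    H_x_given_y = entropy_bits(P) - entropy_bits(P.sum(axis=0))  # H(X,Y) - H(Y)
    return H_x, H_x_given_y, bool(np.all(P > 0))
```

For a fully supported joint PMF the gap H(X) − H(X | Y) is exactly the rate that finite lookahead fails to recover; when the support has zeros, the optimal rate depends only on the marginal of X and the bipartite support graph.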
Achievability Results for Statistical Learning Under Communication Constraints
, 2009
Abstract
Abstract — The problem of statistical learning is to construct an accurate predictor of a random variable as a function of a correlated random variable on the basis of an i.i.d. training sample from their joint distribution. Allowable predictors are constrained to lie in some specified class, and the goal is to approach asymptotically the performance of the best predictor in the class. We consider two settings in which the learning agent has access only to rate-limited descriptions of the training data, and present information-theoretic bounds on the predictor performance achievable in the presence of these communication constraints. Our proofs do not assume any separation structure between compression and learning, and rely on a new class of operational criteria specifically tailored to the joint design of encoders and learning algorithms in rate-constrained settings.
Observation Uncertainty in Gaussian Sensor Networks
by Anand Dilip Sarwate
Achievability Results for Learning Under Communication Constraints
Abstract
Abstract — The problem of statistical learning is to construct an accurate predictor of a random variable as a function of a correlated random variable on the basis of an i.i.d. training sample from their joint distribution. Allowable predictors are constrained to lie in some specified class, and the goal is to approach asymptotically the performance of the best predictor in the class. We consider two settings in which the learning agent has access only to rate-limited descriptions of the training data, and present information-theoretic bounds on the predictor performance achievable in the presence of these communication constraints. Our proofs do not assume any separation structure between compression and learning, and rely on a new class of operational criteria specifically tailored to the joint design of encoders and learning algorithms in rate-constrained settings. These operational criteria naturally lead to a learning-theoretic generalization of the rate-distortion function introduced recently by Kramer and Savari in the context of rate-constrained communication of probability distributions.