Results 1–7 of 7
New Upper Bounds on Error Exponents
Abstract

Cited by 28 (6 self)
We derive new upper bounds on the error exponents for maximum-likelihood decoding and error detection in binary symmetric channels. This is an improvement on the straight-line bound of Shannon, Gallager, and Berlekamp (1967) and the minimum-distance bound of McEliece and Omura (1977). For the probability of undetected error the new bounds are better than the recent bound by Abdel-Ghaffar (1997) and the minimum-distance and straight-line bounds by Levenshtein (1978, 1989). We further extend the range of rates where the undetected error exponent is known to be exact.

Keywords: Error exponents, undetected error, maximum-likelihood decoding, distance distribution, Krawtchouk polynomials.

Submitted to IEEE Transactions on Information Theory.

1 Introduction

A classical problem of information theory is to estimate the probabilities of undetected and decoding errors when a block code is used for information transmission over a binary symmetric channel (BSC). We will study here exponential bounds ...
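The undetected-error probability that this abstract bounds has a simple exact expression for a linear code with known weight distribution: P_ue(C, p) = sum over w ≥ 1 of A_w p^w (1−p)^(n−w). A minimal sketch (not from the paper), using the standard weight distribution of the [7,4] binary Hamming code:

```python
from math import log2

# Weight distribution of the [7,4] binary Hamming code (a standard fact):
# A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1.
n = 7
A = {0: 1, 3: 7, 4: 7, 7: 1}

def p_undetected(p: float) -> float:
    """P_ue(C, p) = sum_{w >= 1} A_w p^w (1-p)^(n-w) for a linear code on a BSC."""
    return sum(a * p**w * (1 - p)**(n - w) for w, a in A.items() if w >= 1)

p = 0.01
pue = p_undetected(p)
print(f"P_ue = {pue:.3e}")            # dominated by the A_3 = 7 term: ~6.8e-6
print(f"exponent = {-log2(pue)/n:.3f}")
```

The quantity −(1/n) log₂ P_ue is the undetected-error exponent whose asymptotic behavior the paper's bounds control.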
Distance distribution of binary codes and the error probability of decoding
 IEEE TRANS. INFORM. THEORY
, 2005
Abstract

Cited by 11 (1 self)
We address the problem of bounding below the probability of error under maximum-likelihood decoding of a binary code with a known distance distribution used on a binary-symmetric channel (BSC). An improved upper bound is given for the maximum attainable exponent of this probability (the reliability function of the channel). In particular, we prove that the “random coding exponent” is the true value of the channel reliability for code rates in some interval immediately below the critical rate of the channel. An analogous result is obtained for the Gaussian channel.
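For context, the random coding exponent and critical rate mentioned here are classical Gallager quantities for the BSC; a sketch (not from the paper) computing them numerically, where the crossover probability p = 0.01 is an arbitrary illustrative choice:

```python
from math import log2, sqrt

def E0(rho: float, p: float) -> float:
    """Gallager's E0 function for the BSC, in bits per channel use."""
    s = p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho))
    return rho - (1 + rho) * log2(s)

def Er(R: float, p: float, steps: int = 2000) -> float:
    """Random coding exponent: max over 0 <= rho <= 1 of E0(rho) - rho*R."""
    return max(E0(k / steps, p) - (k / steps) * R for k in range(steps + 1))

def h2(x: float) -> float:
    """Binary entropy function."""
    return -x * log2(x) - (1 - x) * log2(1 - x)

p = 0.01
lam = sqrt(p) / (sqrt(p) + sqrt(1 - p))
R_crit = 1 - h2(lam)                 # critical rate of the BSC
C = 1 - h2(p)                        # channel capacity
print(f"R_crit ~ {R_crit:.3f}, C ~ {C:.3f}")
print(f"Er(0.7) ~ {Er(0.7, p):.4f}")
```

Between R_crit and C the random coding exponent coincides with the sphere-packing bound; the abstract's result is that it remains exact in an interval just below R_crit as well.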
Lower bounds on the error probability of block codes based on improvements on de Caen’s inequality
 IEEE TRANS. INFORM. THEORY
, 2004
Abstract

Cited by 7 (0 self)
New lower bounds on the error probability of block codes with maximum-likelihood decoding are proposed. The bounds are obtained by applying a new lower bound on the probability of a union of events, derived by improving on de Caen’s lower bound. The new bound includes an arbitrary function to be optimized in order to achieve the tightest results. Since the optimal choice of this function is known but leads to a trivial and useless identity, we find several useful approximations for it, each resulting in a new lower bound. For the additive white Gaussian noise (AWGN) channel and the binary-symmetric channel (BSC), the optimal choice of the optimization function is stated and several approximations are proposed. When the bounds are further specialized to linear codes, the only knowledge of the code used is its weight enumerator. The results are shown to be tighter than the latest bounds in the current literature, such as those by Seguin and by Keren and Litsyn. Moreover, for the BSC, the new bounds widen the range of rates for which the union bound analysis applies, thus improving on the bound on the error exponent compared with the de Caen-based bounds.
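De Caen's inequality, the starting point of these bounds, states P(∪ A_i) ≥ Σ_i P(A_i)² / Σ_j P(A_i ∩ A_j), where the inner sum includes j = i. A toy check (the sample space and events are invented for illustration) on a uniform finite probability space:

```python
# de Caen's lower bound on the probability of a union of events,
# demonstrated on a 10-point uniform sample space.
omega = set(range(10))                      # each point has probability 1/10
events = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}]

def P(s: set) -> float:
    """Probability of an event under the uniform measure on omega."""
    return len(s) / len(omega)

exact = P(set().union(*events))
de_caen = sum(P(A) ** 2 / sum(P(A & B) for B in events) for A in events)
print(f"exact = {exact:.3f}, de Caen bound = {de_caen:.3f}")
```

In the decoding application, the A_i are pairwise error events and the bound needs only pairwise intersection probabilities, which is why the weight enumerator suffices for linear codes.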
Polynomial Method in Coding and Information Theory
Abstract

Cited by 1 (1 self)
Polynomial, or Delsarte's, method in coding theory accounts for a variety of structural results on, and bounds on the size of, extremal configurations (codes and designs) in various metric spaces. In recent works of the authors the applicability of the method was extended to cover a wider range of problems in coding and information theory. In this paper we present a general framework for the method which includes previous results as particular cases. We explain how this generalization leads to new asymptotic bounds on the performance of codes in binary-input memoryless channels and the Gaussian channel, which improve the results of Shannon et al. of 1959–67, and to a number of other results in combinatorial coding theory.

1 Introduction: Some problems of coding and information theory

Let X be a metric space with distance function ρ(·, ·). A code C is an arbitrary finite subset of X. The number d(C) = min_{c1, c2 ∈ C, c1 ≠ c2} ρ(c1, c2) is called the distance of C. The stud...
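Over the Hamming space, Delsarte's method rests on Krawtchouk polynomials (also named in the keywords of the first entry above). A small sketch (illustrative, not from the paper) that evaluates them and verifies their standard orthogonality relation:

```python
from math import comb

def krawtchouk(k: int, x: int, n: int) -> int:
    """Binary Krawtchouk polynomial K_k(x) = sum_j (-1)^j C(x,j) C(n-x,k-j)."""
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

n = 7
# Orthogonality: sum_x C(n,x) K_k(x) K_l(x) = 2^n C(n,k) if k == l, else 0.
for k in range(n + 1):
    for l in range(n + 1):
        s = sum(comb(n, x) * krawtchouk(k, x, n) * krawtchouk(l, x, n)
                for x in range(n + 1))
        assert s == (2 ** n * comb(n, k) if k == l else 0)
print("orthogonality verified for n =", n)
```

In the linear programming bounds, feasible polynomials are expressed in this basis, and the sign conditions on their Krawtchouk coefficients yield the bounds on code size and performance.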
Binomial Moments of the Distance Distribution: Bounds and Applications
 IEEE Trans. Inform. Theory
, 1999
Abstract
We study a combinatorial invariant of codes which counts the number of ordered pairs of codewords in all subcodes of restricted support in a code. This invariant can be expressed as a linear form of the components of the distance distribution of the code with binomial numbers as coefficients. For this reason we call it a binomial moment of the distance distribution. Binomial moments appear in the proof of the MacWilliams identities and in many other problems of combinatorial coding theory. We introduce a linear programming problem for bounding these linear forms from below. It turns out that some known codes (1-error-correcting perfect codes, Golay codes, the Nordstrom–Robinson code, etc.) yield optimal solutions of this problem, i.e., have the minimal possible binomial moments of the distance distribution. We derive several general feasible solutions of this problem, which give lower bounds on the binomial moments of codes with given parameters, and derive the corresponding asymptotic bounds. Applications of these bounds include new lower bounds on the probability of undetected error for binary codes used over the binary symmetric channel with crossover probability p, and the optimality of many codes for error detection. Asymptotic analysis of the bounds enables us to extend the range of code rates in which the upper bound on the undetected error exponent is tight.

Keywords: Distance distribution, binomial moments, linear programming, extremal codes, undetected error, Rodemich's theorem.
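Under one common normalization (an assumption here, not quoted from the paper), the binomial moments B_l = Σ_d C(n−d, l−d) A_d are exactly the coefficients obtained when P_ue is expanded in powers of p against (1−2p), which follows from writing 1−p = p + (1−2p) and applying the binomial theorem. A sketch verifying this identity on the [7,4] Hamming code:

```python
from math import comb

# Distance distribution of the [7,4] Hamming code: A_0=1, A_3=7, A_4=7, A_7=1.
n = 7
A = {3: 7, 4: 7, 7: 1}            # nonzero components with d >= 1

# Binomial moments (one common normalization): B_l = sum_{d>=1} C(n-d, l-d) A_d.
B = [sum(comb(n - d, l - d) * a for d, a in A.items() if l >= d)
     for l in range(n + 1)]

p = 0.05
via_A = sum(a * p**d * (1 - p)**(n - d) for d, a in A.items())
via_B = sum(B[l] * p**l * (1 - 2*p)**(n - l) for l in range(n + 1))
print("binomial moments:", B)
print(f"P_ue via A: {via_A:.6e}, via B: {via_B:.6e}")   # identical expansions
```

Note that B_n = Σ_d A_d = |C| − 1 for a linear code, a quick sanity check on the normalization.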
Supplement to: Code Spectrum and Reliability Function: Binary Symmetric Channel
, 2007
Abstract
A much simpler proof of Theorem 1 from [1] is presented below, using the notation and formula numbering of [1]. The text below replaces the subsection "General case" from §4 of [1, p. 11].

General case. In the general case, for some ω we are interested in pairs (x_i, x_j) with d_ij = ωn. But there may exist pairs (x_k, x_l) with d_kl < ωn. Using the "cleaning" procedure [2] we show that the influence of such pairs (x_k, x_l) on the value of P_e is not large. This will allow us to reduce the general case to the model one. Note that if

  (1/n) log X_max(t, ω) = o(1), n → ∞,   (S.1)

then from (27) and (28) we get

  (1/n) log(1/P_e) + min_{0≤t≤1} min t log ...
Tradeoff Between Source and Channel Coding for Erasure Channels
, 2005
Abstract
In this paper, we investigate the optimal tradeoff between source and channel coding for channels with bit or packet erasure. Upper and lower bounds on the optimal channel coding rate are computed to achieve minimal end-to-end distortion. The bounds are calculated based on a combination of sphere-packing, straight-line, and expurgated error exponents, and also high-rate vector quantization theory. By modeling a packet erasure channel in terms of an equivalent bit erasure channel, we obtain bounds on the packet size for a specified limit on the distortion.

Index terms: Joint source and channel coding, binary erasure channel, packet erasure, error exponent, high-rate vector quantization.
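The error-exponent machinery behind these bounds is especially simple for the binary erasure channel, where Gallager's E₀ reduces to −log₂(ε + (1−ε)2^(−ρ)). A sketch (the erasure probability ε = 0.2 is an illustrative choice, not from the paper):

```python
from math import log2

def E0_bec(rho: float, eps: float) -> float:
    """Gallager's E0 for the binary erasure channel with erasure probability eps."""
    return -log2(eps + (1 - eps) * 2 ** (-rho))

def Er_bec(R: float, eps: float, steps: int = 2000) -> float:
    """Random coding exponent: max over 0 <= rho <= 1 of E0(rho) - rho*R."""
    return max(E0_bec(k / steps, eps) - (k / steps) * R for k in range(steps + 1))

eps = 0.2
C = 1 - eps                          # BEC capacity in bits per channel use
print(f"C = {C}")
for R in (0.3, 0.5, 0.7):
    print(f"Er({R}) = {Er_bec(R, eps):.4f}")
```

The exponent is positive for every rate below capacity 1 − ε and vanishes at capacity, which is what drives the rate allocation between source and channel coding in the paper's distortion bounds.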