## Entropy and the law of small numbers (2005)

### Download Links

- [pages.cs.aueb.gr]
- [www.dam.brown.edu]
- [arxiv.org]
- [www.stats.bris.ac.uk]
- [www.maths.bristol.ac.uk]
- [www.maths.bris.ac.uk]
- [www.stats.bristol.ac.uk]
- [www.statistics.bristol.ac.uk]
- [apollo.maths.bris.ac.uk]
- [eprints.pascal-network.org]
- DBLP

### Other Repositories/Bibliography

Venue: IEEE Trans. Inform. Theory

Citations: 29 (11 self)

### BibTeX

@ARTICLE{Kontoyiannis05entropyand,
  author  = {I. Kontoyiannis and P. Harremo{\"e}s and O. Johnson},
  title   = {Entropy and the law of small numbers},
  journal = {IEEE Trans. Inform. Theory},
  year    = {2005},
  volume  = {51},
  pages   = {466--472}
}

### Abstract

Two new information-theoretic methods are introduced for establishing Poisson approximation inequalities. First, using only elementary information-theoretic techniques it is shown that, when S_n = Σ_{i=1}^n X_i is the sum of the (possibly dependent) binary random variables X_1, X_2, ..., X_n, with E(X_i) = p_i and E(S_n) = λ, then D(P_Sn ‖ Po(λ)) ≤ Σ_{i=1}^n p_i^2 + [Σ_{i=1}^n H(X_i) − H(X_1, X_2, ..., X_n)], where D(P_Sn ‖ Po(λ)) is the relative entropy between the distribution of S_n and the Poisson(λ) distribution. The first term in this bound measures the individual smallness of the X_i and the second term measures their dependence. A general method is outlined for obtaining corresponding bounds when approximating the distribution of a sum of general discrete random variables by an infinitely divisible distribution. Second, in the particular case when the X_i are independent, the following sharper bound is established: D(P_Sn ‖ Po(λ)) ≤ (1/λ) Σ_{i=1}^n p_i^3/(1 − p_i).
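In the independent case the dependence term vanishes and the first bound reads D(P_Sn ‖ Po(λ)) ≤ Σ_{i=1}^n p_i^2. This is easy to check numerically: the sketch below (our own illustration, not code from the paper; all function names are ours) computes the exact law of S_n by convolving Bernoulli pmfs and compares the relative entropy to Po(λ) against the bound.

```python
import math

def bernoulli_sum_pmf(ps):
    """Exact pmf of S_n = X_1 + ... + X_n for independent Bernoulli(p_i),
    built by repeated convolution."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)      # X_i = 0
            new[k + 1] += q * p        # X_i = 1
        pmf = new
    return pmf

def poisson_prob(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

def relative_entropy_to_poisson(ps):
    """D(P_Sn || Po(lambda)) in nats, with lambda = sum(p_i)."""
    lam = sum(ps)
    pmf = bernoulli_sum_pmf(ps)
    return sum(q * math.log(q / poisson_prob(lam, k))
               for k, q in enumerate(pmf) if q > 0)

ps = [0.05] * 20                       # n = 20, p_i = 0.05, lambda = 1
d = relative_entropy_to_poisson(ps)
bound = sum(p**2 for p in ps)          # sum p_i^2 = 0.05
assert 0 < d <= bound
```

The convolution is O(n^2) but exact, so the comparison is not clouded by Monte Carlo error.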

### Citations

8564 |
Elements of Information Theory
- Cover, Thomas
- 2006
Citation Context: ...sures, it is a natural measure of “similarity” in the context of statistics [20][8, Ch. 12]. Also, bounds in relative entropy can be translated into bounds in total variation via Pinsker’s inequality [8]: (1/2)‖P − Q‖²_TV ≤ D(P‖Q). (5) Now let’s consider a triangular array of binary random variables (X^(n)_1, X^(n)_2, ..., X^(n)_n), n ≥ 1, and suppose that their distribution is such that the rig... |
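Pinsker’s inequality (5), in the L1 convention for ‖·‖_TV used in this snippet, can be verified directly on any pair of finite pmfs. A minimal sketch (our own code; `pinsker_check` is a made-up name):

```python
import math

def pinsker_check(P, Q):
    """Return ((1/2)*||P - Q||_TV^2, D(P||Q)) for finite pmfs P, Q on the
    same support, with ||.||_TV the L1 norm as in the snippet's convention."""
    tv = sum(abs(p - q) for p, q in zip(P, Q))
    D = sum(p * math.log(p / q) for p, q in zip(P, Q) if p > 0)
    return 0.5 * tv**2, D

lhs, rhs = pinsker_check([0.2, 0.3, 0.5], [0.4, 0.4, 0.2])
assert lhs <= rhs   # (1/2)||P - Q||_TV^2 <= D(P||Q)
```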

1155 | Information Theory and Statistics
- Kullback
- 1959
Citation Context: ...on of Sn to be close to Po(λ) in the relative entropy sense. Although D(P‖Q) is not a proper metric between probability measures, it is a natural measure of “similarity” in the context of statistics [20][8, Ch. 12]. Also, bounds in relative entropy can be translated into bounds in total variation via Pinsker’s inequality [8]: (1/2)‖P − Q‖²_TV ≤ D(P‖Q). (5) Now let’s consider a triangular array of bina... |

886 | Information Theory: Coding Theorems for Discrete Memoryless Systems - Csiszár, Körner - 1981 |

801 | Probability: theory and examples - Durrett - 2010 |

658 |
Large Deviations Techniques and Applications
- Dembo, Zeitouni
- 1998
Citation Context: ...proofs. The convergence of Markov chains has been studied by Rényi and others [24][19][8][6], and the theory of large deviations has been deeply influenced by information theoretic arguments [9][8][12][10]. Barron has given an information theoretic proof of the martingale convergence theorem [5][6], and O’Connell recently gave an elementary proof of the Hewitt-Savage 0-1 law [23]. We should mention... |

335 |
On Measures of Entropy and Information
- Rényi
- 1961
Citation Context: ...orem are not the only two cases of probabilistic results that have been given information theoretic interpretations and proofs. The convergence of Markov chains has been studied by Rényi and others [24][19][8][6], and the theory of large deviations has been deeply influenced by information theoretic arguments [9][8][12][10]. Barron has given an information theoretic proof of the martingale convergen... |

203 |
Information-type measures of difference of probability distributions and indirect observation
- Csiszár
- 1967
Citation Context: ...[2][3]; see also [4]. For the special case of approximating a Binomial distribution by a Poisson, the sharpest results to date are established via these techniques combined with Pinsker’s inequality [5][6][7], at least for most of the parameter values. Given α ∈ (0, 1) and a discrete random variable Y with distribution P on N_0 = {0, 1, 2, ...}, the α-thinning of P is the distribution T_α(P) of the... |

180 |
Poisson Approximation
- Barbour, Holst, et al.
- 1992
Citation Context: ...i.e., all the p_i are small. (c) The variables X_i are not strongly dependent. Such results are often referred to as “laws of small numbers” or “Poisson approximation results.” See [1][17, Section 2.6][3] for details. Our purpose here is to illustrate how techniques based on information-theoretic ideas can be used to establish general Poisson approximation inequalities. In Section 2 we prove: Proposit... |

101 |
Some inequalities satisfied by the quantities of information of
- Stam
- 1959
Citation Context: ...riables X_1, X_2, ..., X_n, with E(S_n) = Σ_{i=1}^n p_i = λ, D(P_Sn ‖ Po(λ)) ≤ (1/λ) Σ_{i=1}^n p_i^3/(1 − p_i). (5) The proof of Theorem 1 combines a natural discrete analog of Stam’s subadditivity of the Fisher information [35][7], and a recent logarithmic Sobolev inequality of Bobkov and Ledoux [8]. As we discuss extensively in Section 3, Theorem 1 is a significant improvement over Proposition 1, and in certain cases it le... |

91 | Two moments suffice for Poisson approximations: the Chen-Stein method
- Arratia, Goldstein, et al.
- 1989
Citation Context: ...X_i dominate the sum, i.e., all the p_i are small. (c) The variables X_i are not strongly dependent. Such results are often referred to as “laws of small numbers” or “Poisson approximation results.” See [1][13, Section 2.6] for rigorous statements of this general principle, or the monograph [3] for a comprehensive review. Our purpose in this note is to illustrate an elementary technique based on ideas f... |

80 |
The convolution inequality for entropy powers
- Blachman
- 1965
Citation Context: ...les X_1, X_2, ..., X_n, with E(S_n) = Σ_{i=1}^n p_i = λ, D(P_Sn ‖ Po(λ)) ≤ (1/λ) Σ_{i=1}^n p_i^3/(1 − p_i). (5) The proof of Theorem 1 combines a natural discrete analog of Stam’s subadditivity of the Fisher information [35][7], and a recent logarithmic Sobolev inequality of Bobkov and Ledoux [8]. As we discuss extensively in Section 3, Theorem 1 is a significant improvement over Proposition 1, and in certain cases it leads... |

76 |
Sanov property, generalized I-projection and a conditional limit theorem
- Csiszár
- 1984
Citation Context: ...ns and proofs. The convergence of Markov chains has been studied by Rényi and others [24][19][8][6], and the theory of large deviations has been deeply influenced by information theoretic arguments [9][8][12][10]. Barron has given an information theoretic proof of the martingale convergence theorem [5][6], and O’Connell recently gave an elementary proof of the Hewitt-Savage 0-1 law [23]. We should ... |

71 |
Worst additive noise under covariance constraints
- Diggavi, Cover
- 2001
Citation Context: ...; Z) is minimized: X → ⊕ → Z, with Y feeding into ⊕. For continuous random variables X and Y with power constraints of the form E[X^2] ≤ P and E[Y^2] ≤ N, this is a classical problem; see, e.g., [13, p. 263][14] and the references therein. In that case, the Gaussian distributions with mean 0 and variances P and N, respectively, form a Nash equilibrium pair, in the sense that neither of the players would bene... |

65 |
Entropy and the central limit theorem,” The
- Barron
- 1986
Citation Context: ...m has been developed in a series of papers, where the limiting Gaussian is viewed as the maximum entropy distribution among all those with a given variance. A partial list of relevant works is [22][7][4][17]. It is interesting to recall that, motivated by the intuitive appeal of such statements, Gnedenko and Korolev in their book [14] suggest that “there should be [...] probabilistic models of the un... |

56 | Lectures on finite Markov chains - Saloff-Coste - 1996 |

52 |
Some Limit Theorems in Statistics
- BAHADUR
- 1971
Citation Context: ...ween P and Q, the best achievable type-I error probability is ≈ e^{−nD(P‖Q)}, which means that the larger the relative entropy between P and Q, the better we can tell them apart; cf. Stein’s lemma in [2][8]. The next lemma states the well-known and very intuitive fact that we cannot do even better in a hypothesis test by simply pre-processing the data. It is a simple consequence of Jensen’s inequalit... |

36 |
On modified logarithmic Sobolev inequalities for Bernoulli and Poisson measures
- Bobkov, Ledoux
- 1998
Citation Context: ...(1/λ) Σ_{i=1}^n p_i^3/(1 − p_i). (5) The proof of Theorem 1 combines a natural discrete analog of Stam’s subadditivity of the Fisher information [35][7], and a recent logarithmic Sobolev inequality of Bobkov and Ledoux [8]. As we discuss extensively in Section 3, Theorem 1 is a significant improvement over Proposition 1, and in certain cases it leads to total variation bounds that are asymptotically optimal up to multi... |

36 | Towards a theory of negative dependence
- Pemantle
- 2000
Citation Context: ...e conserved by thinning is the class of ultra log-concave distributions. Recall that P is ultra log-concave if the ratio between P and a Poisson distribution is a (discrete) log-concave function; see [11][12]. In particular, the ultra log-concave class contains all distributions that arise from sums of independent (possibly non-identical) Bernoulli random variables. Proposition 3: For any α ∈ (0, 1), ... |

30 |
An approximation theorem for the Poisson binomial distribution, Pacific J. Math.
- Le Cam
- 1960
Citation Context: ...nequality (2) as promised. In the case when the X_i are independent, (2) reduces to D(P_Sn ‖ Po(λ)) ≤ Σ_{i=1}^n p_i^2. (9) Although this bound coincides with a simple total-variation bound due to Le Cam [21], ‖P_Sn − Po(λ)‖_TV ≤ Σ_{i=1}^n p_i^2, applying Pinsker’s inequality (5) to (9) only leads to the weaker (and strictly suboptimal) bound ‖P_Sn − Po(λ)‖_TV ≤ √(2 Σ_{i=1}^n p_i^2). In fact, much more accurate... |
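The snippet’s point, that routing the relative-entropy bound through Pinsker’s inequality loses compared with Le Cam’s direct total-variation bound, is simple arithmetic. A tiny sketch (our own illustration):

```python
import math

# Le Cam bounds ||P_Sn - Po(lambda)||_TV by s = sum p_i^2 directly, while
# applying Pinsker to D(P_Sn||Po(lambda)) <= s only yields sqrt(2*s),
# which is strictly weaker whenever s < 2.
ps = [0.01] * 100                  # lambda = 1
s = sum(p**2 for p in ps)          # 0.01
le_cam = s
via_pinsker = math.sqrt(2 * s)     # ~0.141, an order of magnitude worse
assert le_cam < via_pinsker
```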

30 |
Information theory and the central limit theorem
- Johnson
- 2004
Citation Context: ...the optimal strategies for both sender and jammer are given by the Poisson distribution. The central limit theorem has been established in the strong sense of information divergence in [8]; see also [9] and the references therein. The main results of this paper can be seen as analogous theorems for Poisson convergence. II. THINNING The thinning operation was introduced by Rényi in [10] in connection... |

27 | Conditional limit theorems under Markov conditioning
- Csiszar, Cover, et al.
- 1987
Citation Context: ...ofs. The convergence of Markov chains has been studied by Rényi and others [24][19][8][6], and the theory of large deviations has been deeply influenced by information theoretic arguments [9][8][12][10]. Barron has given an information theoretic proof of the martingale convergence theorem [5][6], and O’Connell recently gave an elementary proof of the Hewitt-Savage 0-1 law [23]. We should mention tha... |

26 |
Refinements of Pinsker’s inequality
- Fedotov, Harremoës, et al.
- 2003
Citation Context: ...][3]; see also [4]. For the special case of approximating a Binomial distribution by a Poisson, the sharpest results to date are established via these techniques combined with Pinsker’s inequality [5][6][7], at least for most of the parameter values. Given α ∈ (0, 1) and a discrete random variable Y with distribution P on N_0 = {0, 1, 2, ...}, the α-thinning of P is the distribution T_α(P) of the su... |

25 |
Binomial and Poisson distributions as maximum entropy distributions
- Harremoës
- 2001
Citation Context: ...ory angle, but their point of view is much closer to the one taken in [7] and [4] for the central limit theorem. Instead, the approach we take here is based on ideas from the recent work of Harremoës [15]. In Section 2 we collect some elementary information theoretic facts that will be needed later on, and in Section 3 we present our main argument via three examples. First we give several versions of ... |

22 | Log-concavity and the maximum entropy property of the Poisson distribution. Stochastic Process
- Johnson
Citation Context: ...nserved by thinning is the class of ultra log-concave distributions. Recall that P is ultra log-concave if the ratio between P and a Poisson distribution is a (discrete) log-concave function; see [11][12]. In particular, the ultra log-concave class contains all distributions that arise from sums of independent (possibly non-identical) Bernoulli random variables. Proposition 3: For any α ∈ (0, 1), the ... |

17 |
Remarks on the Poisson process
- RÉNYI
- 1967
Citation Context: ...[8]; see also [9] and the references therein. The main results of this paper can be seen as analogous theorems for Poisson convergence. II. THINNING The thinning operation was introduced by Rényi in [10] in connection with the characterization theory of the Poisson process. Let α ∈ (0, 1) and P be a distribution on N_0 = {0, 1, 2, ...}. The α-thinning of P is the distribution T_α(P) of the sum (1). ... |
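Rényi’s α-thinning can be sketched directly on pmfs: each of the k counts survives independently with probability α, so T_α(P)(j) = Σ_k P(k) C(k, j) α^j (1 − α)^{k−j}. The code below (our own illustration on a truncated support, not from the paper) also checks the characterizing property that thinning maps Po(λ) to Po(αλ).

```python
import math

def thin(pmf, alpha):
    """alpha-thinning T_alpha(P): each of the k counts survives
    independently with probability alpha (binomial thinning)."""
    out = [0.0] * len(pmf)
    for k, pk in enumerate(pmf):
        for j in range(k + 1):
            out[j] += pk * math.comb(k, j) * alpha**j * (1 - alpha)**(k - j)
    return out

def poisson_pmf(lam, n):
    """First n probabilities of Po(lam); the tail mass is negligible here."""
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(n)]

# Renyi: thinning a Poisson gives a Poisson with scaled mean.
lam, alpha, n = 2.0, 0.3, 40
thinned = thin(poisson_pmf(lam, n), alpha)
target = poisson_pmf(alpha * lam, n)
assert all(abs(a - b) < 1e-9 for a, b in zip(thinned[:20], target[:20]))
```

Thinning is also mass-preserving, which makes it a convenient discrete analogue of scaling a continuous random variable by α.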

16 |
An information-theoretic proof of the central limit theorem with the Lindeberg condition
- Linnik
- 1959
Citation Context: ...theorem has been developed in a series of papers, where the limiting Gaussian is viewed as the maximum entropy distribution among all those with a given variance. A partial list of relevant works is [22][7][4][17]. It is interesting to recall that, motivated by the intuitive appeal of such statements, Gnedenko and Korolev in their book [14] suggest that “there should be [...] probabilistic models of ... |

14 |
A proof of the central limit theorem motivated by the Cramér-Rao inequality
- Brown
- 1982
Citation Context: ...orem has been developed in a series of papers, where the limiting Gaussian is viewed as the maximum entropy distribution among all those with a given variance. A partial list of relevant works is [22][7][4][17]. It is interesting to recall that, motivated by the intuitive appeal of such statements, Gnedenko and Korolev in their book [14] suggest that “there should be [...] probabilistic models of the... |

11 |
Limits of information, Markov chains, and projections
- Barron
- 2000
Citation Context: ...t the only two cases of probabilistic results that have been given information theoretic interpretations and proofs. The convergence of Markov chains has been studied by Rényi and others [24][19][8][6], and the theory of large deviations has been deeply influenced by information theoretic arguments [9][8][12][10]. Barron has given an information theoretic proof of the martingale convergence theorem... |

11 |
Random summation: limit theorems and applications
- Gnedenko, Korolev
- 1996
Citation Context: ...e with a given variance. A partial list of relevant works is [22][7][4][17]. It is interesting to recall that, motivated by the intuitive appeal of such statements, Gnedenko and Korolev in their book [14] suggest that “there should be [...] probabilistic models of the universal principle of nondecrease of uncertainty itself” [14, p. 211], and they urge [14, p. 215] that we should “give information proo... |

11 |
Une mesure d’information caractérisant la loi de Poisson
- Johnstone, MacGibbon
- 1987
Citation Context: ...rmation theoretic proof of the martingale convergence theorem [5][6], and O’Connell recently gave an elementary proof of the Hewitt-Savage 0-1 law [23]. We should mention that Johnstone and MacGibbon [18] have also considered the problem of Poisson convergence from the information theory angle, but their point of view is much closer to the one taken in [7] and [4] for the central limit theorem. Instea... |

10 |
Rate of convergence to Poisson law in terms of information divergence.” Accepted for publication
- Harremoës, Ruzankin
- 2003
Citation Context: ...]; see also [4]. For the special case of approximating a Binomial distribution by a Poisson, the sharpest results to date are established via these techniques combined with Pinsker’s inequality [5][6][7], at least for most of the parameter values. Given α ∈ (0, 1) and a discrete random variable Y with distribution P on N_0 = {0, 1, 2, ...}, the α-thinning of P is the distribution T_α(P) of the sum, ... |

9 |
Entropy inequalities and the central limit theorem
- Johnson
- 2000
Citation Context: ...as been developed in a series of papers, where the limiting Gaussian is viewed as the maximum entropy distribution among all those with a given variance. A partial list of relevant works is [22][7][4][17]. It is interesting to recall that, motivated by the intuitive appeal of such statements, Gnedenko and Korolev in their book [14] suggest that “there should be [...] probabilistic models of the univer... |

9 |
A semigroup approach to Poisson approximation
- Deheuvels, Pfeifer
- 1986
Citation Context: ...‖P_Sn − Po(λ)‖_TV ≤ (2 + ε)λ/n, for n ≥ λ/ε. This is a definite improvement over the earlier 2λ/√n bound from (4), and, except for the constant factor, it is asymptotically of the right order; see [3][15] for details. Example 2. If the X_i are i.i.d. Bernoulli(µ/√n) random variables, Theorem 1 together with Pinsker’s inequality (2) yields ‖P_Sn − Po(µ√n)‖_TV ≤ µ√(2/(n(1 − µ/√n))) ≈ µ√2/√n, w... |

9 |
Some characteristic properties of the Fisher information matrix via Cacoullos-type inequalities
- Papathanasiou
- 1993
Citation Context: ...the X_i are sufficiently weakly dependent. 3 Tighter Bounds for Independent Random Variables. Next we take a different point of view that yields tighter bounds than Proposition 1. Recall that in [22][30][23], the Fisher information of a random variable X with distribution P on Z_+ = {0, 1, 2, ...} is defined in a way analogous to that for continuous random variables, via J(X) = E[((P(X − 1) − P(X))/P(X))^2]... |
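The discrete Fisher information J(X) quoted in this snippet can be computed directly from a pmf. Below is a minimal sketch (our own code, not from the paper); for Po(λ) the score P(k−1)/P(k) − 1 equals k/λ − 1, so J(Po(λ)) = Var(X)/λ² = 1/λ, which the code checks on a truncated support.

```python
import math

def discrete_fisher_info(pmf):
    """J(X) = E[((P(X-1) - P(X)) / P(X))^2], with P(-1) = 0, the discrete
    Fisher information in the sense quoted in the snippet above."""
    J = 0.0
    for k, pk in enumerate(pmf):
        if pk > 0:
            prev = pmf[k - 1] if k > 0 else 0.0
            J += pk * ((prev - pk) / pk) ** 2
    return J

lam, n = 3.0, 60
po = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(n)]
# For Po(lam): (P(k-1) - P(k))/P(k) = k/lam - 1, hence J = 1/lam.
assert abs(discrete_fisher_info(po) - 1 / lam) < 1e-6
```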


6 |
Poisson approximation, The Clarendon
- Barbour, Holst, et al.
- 1992
Citation Context: ...pendent. Such results are often referred to as “laws of small numbers” or “Poisson approximation results.” See [1][13, Section 2.6] for rigorous statements of this general principle, or the monograph [3] for a comprehensive review. Our purpose in this note is to illustrate an elementary technique based on ideas from information theory, which can be used to prove general Poisson approximation results.... |

6 |
On an inequality of Chernoff
- Klaassen
- 1985
Citation Context: ...r the Poisson(λ) probabilities, then for all functions g in L^2(q), Σ_x Po_λ(x)(g(x) − µ)^2 ≤ λ Σ_x Po_λ(x)(∆g(x))^2, (13) where µ = Σ_x g(x)Po_λ(x) is the mean of g under Po(λ); see for example Klaassen [25]. Using the simple fact that (√u − 1)^2 ≤ (√u − 1)^2(√u + 1)^2 = (u − 1)^2 for all u ≥ 0, we get that K(X) = λ Σ_x P(x)((P(x + 1)Po_λ(x))/(Po_λ(x + 1)P(x)) − 1)^2 ≥ λ Σ_x P(x)(√((P(x + 1)Po_λ(x))/(Po_λ(x + 1)P(x))) − 1)^2 ... |

6 |
Entropy and the Central Limit Theorem,” Annals Probab
- Barron
- 1986
Citation Context: ...on game, where the optimal strategies for both sender and jammer are given by the Poisson distribution. The central limit theorem has been established in the strong sense of information divergence in [8]; see also [9] and the references therein. The main results of this paper can be seen as analogous theorems for Poisson convergence. II. THINNING The thinning operation was introduced by Rényi in [10]... |

5 |
Information theory and the limit theorem for Markov chains and processes with a countable infinity of states
- Kendall
- 1964
Citation Context: ...are not the only two cases of probabilistic results that have been given information theoretic interpretations and proofs. The convergence of Markov chains has been studied by Rényi and others [24][19][8][6], and the theory of large deviations has been deeply influenced by information theoretic arguments [9][8][12][10]. Barron has given an information theoretic proof of the martingale convergence t... |

5 |
The Information Topology
- Harremoës
Citation Context: ...nse. Although D(P‖Q) is not a proper metric, it is a natural measure of “dissimilarity” in the context of statistics [26][11, Ch. 12], and it can be used to define a topology on probability measures [20]. Also, bounds in relative entropy can be translated into bounds in total variation via Pinsker’s inequality [11]: (1/2)‖P − Q‖²_TV ≤ D(P‖Q). (2) For example, if the X_i are independent, (1) reduces to D... |

4 |
Mutual information and conditional mean estimation
- Guo, Shamai, et al.
- 2004
Citation Context: ...ribution is a well studied subject in probability; see [1] for an extensive account. Strong connections between these results and information-theoretic techniques were established in [2][3]; see also [4]. For the special case of approximating a Binomial distribution by a Poisson, the sharpest results to date are established via these techniques combined with Pinsker’s inequality [5][6][7], at least f... |

3 |
Information theory and martingales
- Barron
- 1991
Citation Context: ...and the theory of large deviations has been deeply influenced by information theoretic arguments [9][8][12][10]. Barron has given an information theoretic proof of the martingale convergence theorem [5][6], and O’Connell recently gave an elementary proof of the Hewitt-Savage 0-1 law [23]. We should mention that Johnstone and MacGibbon [18] have also considered the problem of Poisson convergence from... |

3 |
Letter to the editor: “A discrete version of Stam inequality and a characterization of the Poisson distribution
- KAGAN
Citation Context: ...X_i are sufficiently weakly dependent. 3 Tighter Bounds for Independent Random Variables. Next we take a different point of view that yields tighter bounds than Proposition 1. Recall that in [22][30][23], the Fisher information of a random variable X with distribution P on Z_+ = {0, 1, 2, ...} is defined in a way analogous to that for continuous random variables, via J(X) = E[((P(X − 1) − P(X))/P(X))^2], ... |

3 |
Compound Poisson approximations for sums of random variables
- Serfozo
- 1986
Citation Context: ...gh this bound is sufficient to prove that P_Sn converges to the Poisson distribution, it leads to a convergence rate in total variation of order √((log n)/n), compared to the O(1/n) bound derived in [3][33][34]. A Compound Poisson Approximation Example. Let X_1, ..., X_n be independent Bernoulli random variables with parameters p_i = E(X_i), write λ = Σ_{i=1}^n p_i, and let α_1, α_2, ..., α_n be i.i.d., independent... |

2 |
Convergence to Poisson distribution
- Harremoes
- 2001
Citation Context: ...en λ is large, like the following inequality also due to Le Cam [21], ‖P_Sn − Po(λ)‖_TV ≤ (8/λ) Σ_{i=1}^n p_i^2. See [3] for an extensive discussion. In fact, in the case when the X_i are i.i.d. with p_i = λ/n, Harremoës [16] has shown that D(P_Sn ‖ Po(λ)) ≈ λ²/(4n²), as opposed to the λ²/n bound we obtain here. This shows that the use of the data processing inequality in (7) is far from optimal. 3.2 A Markov Chain Example He... |

1 |
Information-theoretic proof of the hewitt-savage zero-one law
- O’Connell
- 2000
Citation Context: ...ic arguments [9][8][12][10]. Barron has given an information theoretic proof of the martingale convergence theorem [5][6], and O’Connell recently gave an elementary proof of the Hewitt-Savage 0-1 law [23]. We should mention that Johnstone and MacGibbon [18] have also considered the problem of Poisson convergence from the information theory angle, but their point of view is much closer to the one taken... |

1 |
On the convergence of Markov binomial to Poisson distribution
- Čekanavičius
Citation Context: ...matrix [[n/(n+1), 1/(n+1)], [(n−1)/(n+1), 2/(n+1)]] and with each X^(n)_i having (the stationary) Bernoulli(1/n) distribution. The convergence of the distribution of S_n = Σ_{i=1}^n X^(n)_i to Po(1) is a well-studied problem; see [10] and the references therein. Applying Proposition 1 (or, equivalently, inequality (7)) in this case translates to D(P_Sn ‖ Po(1)) ≤ Σ_{i=1}^n 1/n² + Σ_{i=1}^{n−1} I(X^(n)_i; X^(n)_{i+1}), since I(X^(n)... |

1 |
A Nash Equilibrium related to the Poisson Channel
- Harremoës, Vignat
- 2004
Citation Context: ...C(λ), λ ≤ λ_in and Y ∈ ULC(µ), µ ≤ λ_noise, where ULC(λ) denotes the class of ultra log-concave distributions on N_0 with mean λ. A similar but more restricted version of this game was considered in [15]. The sets of strategies are not convex, so Von Neumann’s classical result on the existence of a game-theoretic equilibrium cannot be used. Nevertheless, our next result states that Poisson distributi... |

1 |
Lower bounds on divergence in central limit theorem
- Harremoës
- 2004
Citation Context: ...value of m such that f_m m! ≠ λ^m, and put γ = f_{m_0} m_0!. Lower bounds on the rate of convergence are essentially given in terms of m_0 and γ. Using techniques that were developed for the central limit theorem [16], we can obtain that lim inf_{n→∞} n^{2m_0−2} D(T_{1/n}(P^{*n}) ‖ Po(λ)) ≥ m_0!(γ − λ^{m_0})²/(2λ^{m_0}). We conjecture that this lower bound is asymptotically tight. VI. CHARACTERIZATIONS OF THE POISSON DIS... |

1 |
Refinements of Pinsker's Inequality
- Fedotov, Harremoës, et al.
- 2003
Citation Context: ...meters the best bounds on total variation between a binomial distribution and a Poisson distribution with the same mean have been proved by ideas from information theory via Pinsker's inequality [4], [5] and [6]. Here we shall see that the idea of thinning can be used to formulate a Law of Small Numbers (Poisson's Law) in a way that resembles the i.i.d. formulation of the Central Limit Theorem. Resul... |