Results 11–20 of 114
An extension of Sanov’s theorem: application to the Gibbs conditioning principle. Bernoulli 8, 2002.
Cited by 20 (7 self)
Abstract. A large deviation principle is proved for the empirical measures of independent identically distributed random variables with a topology based on functions having only some exponential moments. The rate function differs from the usual relative entropy: it involves linear forms which are no longer measures. Following D.W. Stroock and O. Zeitouni, the Gibbs Conditioning Principle (GCP) is then derived with the help of the previous result. Besides a rather direct proof, the main improvements with respect to already published GCPs are the following: convergence holds in situations where the underlying log-Laplace transform (the pressure) may not be steep and the constraints are built on energy functions admitting only some finite exponential moments. Basic techniques from Orlicz space theory appear to be a powerful tool.
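The rate in Sanov-type large deviation results is the relative entropy H(p|q) of a candidate empirical measure p against the reference law q. As a minimal illustration of the classical finite-alphabet case (not the Orlicz-space extension this paper develops), the following Python sketch computes H(L_n|q) for a coin-flip sample; all names are illustrative.

```python
import math
from collections import Counter

def relative_entropy(p, q):
    """KL divergence H(p|q) = sum_x p(x) log(p(x)/q(x)).

    Returns +inf when p is not absolutely continuous w.r.t. q.
    p and q are dicts mapping alphabet symbols to probabilities.
    """
    h = 0.0
    for x, px in p.items():
        if px == 0.0:
            continue
        qx = q.get(x, 0.0)
        if qx == 0.0:
            return math.inf
        h += px * math.log(px / qx)
    return h

def empirical_measure(sample):
    """Empirical measure L_n = (1/n) sum_i delta_{x_i}, as a dict."""
    n = len(sample)
    return {x: c / n for x, c in Counter(sample).items()}

# Fair-coin reference law; a biased-looking sample of size n = 10.
q = {"H": 0.5, "T": 0.5}
sample = ["H"] * 7 + ["T"] * 3
Ln = empirical_measure(sample)
# Sanov heuristic: P(L_n near p) decays like exp(-n * H(p|q)).
rate = relative_entropy(Ln, q)
```

Here `rate` is the exponential decay rate governing how unlikely a 70/30 empirical split is under a fair coin.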
Information Theoretic Methods in Probability and Statistics, 2001.
Cited by 18 (0 self)
Abstract. Ideas of information theory have found fruitful applications not only in various fields of science and engineering but also within mathematics, both pure and applied. This is illustrated by several typical applications of information theory, specifically in probability and statistics.
Genealogies and Increasing Propagation of Chaos for Feynman-Kac and Genetic Models. Annals of Applied Probability, no. 4, p. 1166–1198.
Cited by 17 (5 self)
Abstract. A path-valued interacting particle systems model for the genealogical structure of genetic algorithms is presented. We connect the historical process and the distribution of the whole ancestral tree with a class of Feynman-Kac formulae on path space. We also prove increasing and uniform versions of propagation of chaos for appropriate particle block size and time horizon, yielding what seems to be the first result of this type for this class of particle systems.
Information and entropy econometrics – editor’s view. Journal of Econometrics 107:1–16, 2002.
Relative entropy and exponential deviation bounds for general Markov chains. In Proceedings of the 2005 IEEE International Symposium on Information Theory, 2005.
Cited by 16 (2 self)
Abstract. We develop explicit, general bounds for the probability that the normalized partial sums of a function of a Markov chain on a general alphabet will exceed the steady-state mean of that function by a given amount. Our bounds combine simple information-theoretic ideas with techniques from optimization and some fairly elementary tools from analysis. In one direction, we obtain a general bound for the important class of Doeblin chains; this bound is optimal, in the sense that in the special case of independent and identically distributed random variables it essentially reduces to the classical Hoeffding bound. In another direction, motivated by important problems in simulation, we develop a series of bounds in a form which is particularly suited to these problems, and which apply to the more general class of “geometrically ergodic” Markov chains.
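In the i.i.d. special case the abstract mentions, the classical Hoeffding bound already controls the same tail probability: for X_i in [a, b], P(S_n/n − E[X] ≥ t) ≤ exp(−2nt²/(b−a)²). The Python sketch below (purely illustrative, not the paper's Markov-chain bound) checks a Monte Carlo tail estimate for Uniform(0,1) variables against that bound.

```python
import math
import random

def hoeffding_bound(n, t, a=0.0, b=1.0):
    """Hoeffding: P(S_n/n - E[X] >= t) <= exp(-2 n t^2 / (b - a)^2)
    for i.i.d. X_i taking values in [a, b]."""
    return math.exp(-2.0 * n * t * t / (b - a) ** 2)

def empirical_tail(n, t, trials, rng):
    """Monte Carlo estimate of P(mean of n Uniform(0,1) >= 0.5 + t)."""
    hits = 0
    for _ in range(trials):
        m = sum(rng.random() for _ in range(n)) / n
        if m >= 0.5 + t:
            hits += 1
    return hits / trials

rng = random.Random(0)
n, t = 50, 0.1
emp = empirical_tail(n, t, 2000, rng)   # true tail is tiny here
bnd = hoeffding_bound(n, t)             # exp(-1) ~ 0.368, dominates emp
```

The bound is loose for uniform variables but, as the abstract notes, it is essentially the benchmark that the Doeblin-chain bound recovers in the i.i.d. case.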
Optimal Scaling of MALA for Nonlinear Regression. The Annals of Applied Probability, 2004.
Increasing Propagation of Chaos for Mean Field Models, 1999.
Cited by 13 (0 self)
Abstract. Let μ^(N) denote a mean-field measure with potential F. Asymptotic independence properties of the measure μ^(N) are investigated. In particular, with H(·|μ) denoting relative entropy, if there exists a unique nondegenerate minimum of H(·|μ) − F(·), then propagation of chaos holds for blocks of size o(N). Certain degenerate situations are also studied. The results are applied to the Langevin dynamics of a system of interacting particles, leading to a McKean-Vlasov limit.
Identification via Compressed Data, 1998.
Cited by 12 (3 self)
Abstract. We introduce and analyze a new coding problem for a correlated source (X^n, Y^n), n = 1, 2, .... The observer of X^n can transmit data depending on X^n at a prescribed rate R. Based on these data, the observer of Y^n tries to identify whether, for some distortion measure ρ (like the Hamming distance), (1/n) ρ(X^n, Y^n) ≤ d, a prescribed fidelity criterion. We investigate, as functions of R and d, the exponents of two error probabilities: the probabilities of misacceptance and the probabilities of misrejection. Our analysis has led to a new method for proving converses. Its basis is "The Inherently Typical Subset Lemma". It goes considerably beyond the "Entropy Characterisation" of [2], the "Image Size Characterisation" of [3], and its extensions in [5]. It is conceivable that it has a strong impact on Multiuser Information Theory.
Key words: correlated source, identification with fidelity, misacceptance and misrejection error probabilities.
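The fidelity criterion here is simply a threshold on normalized distortion. A small Python sketch of the accept/reject test under Hamming distortion (names are illustrative; the coding-theoretic part of the problem, i.e. what the observer of Y^n can decide from rate-R data alone, is not modeled):

```python
def normalized_hamming(x, y):
    """Normalized Hamming distortion (1/n) rho(x^n, y^n):
    the fraction of positions where the sequences differ."""
    if len(x) != len(y):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(x, y)) / len(x)

def identify(x, y, d):
    """Accept iff the fidelity criterion (1/n) rho(x^n, y^n) <= d holds."""
    return normalized_hamming(x, y) <= d

# Two binary sequences differing in 2 of 8 positions: distortion 0.25.
x = "10110100"
y = "10010101"
```

A misacceptance corresponds to `identify` returning True when the true distortion exceeds d; a misrejection is the converse error.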
Minimizers of energy functionals, 2001.
Cited by 11 (3 self)
Abstract. We consider a general class of problems of minimization of convex integral functionals (maximization of entropy) subject to linear constraints. Under general assumptions, the minimizing solutions are characterized. Our results improve on the previous literature in the following directions: a necessary and sufficient condition for the shape of the minimizing density is proved without constraint qualification, under infinitely many linear constraints subject to natural integrability conditions (no topological restrictions). As an illustration, we give the general shape of the minimizing density for the marginal problem on a product space. Finally, a counterexample of I. Csiszár is clarified. Our proofs mainly rely on convex duality.
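In the simplest finite-alphabet instance with a single moment constraint, the minimizing (maximum-entropy) density has the familiar exponential-family shape p(x) ∝ exp(λ f(x)), with λ tuned so the constraint E_p[f] = m holds. The sketch below is a toy instance of that shape, not the paper's general characterization; the function names and the bisection solver are illustrative.

```python
import math

def maxent_density(f_vals, target, lo=-50.0, hi=50.0, iters=200):
    """Entropy-maximizing p on a finite alphabet with E_p[f] = target.

    The solution is p(x) proportional to exp(lam * f(x)); since the
    moment E_p[f] is increasing in lam, solve for lam by bisection.
    """
    def moment(lam):
        w = [math.exp(lam * f) for f in f_vals]
        z = sum(w)
        return sum(f * wi for f, wi in zip(f_vals, w)) / z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if moment(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * f) for f in f_vals]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' die: faces 1..6 constrained to have mean 4.5.
p = maxent_density([1, 2, 3, 4, 5, 6], 4.5)
mean = sum((i + 1) * pi for i, pi in enumerate(p))
```

The resulting p tilts mass toward the high faces, as expected for a mean above 3.5; the infinite-dimensional version of this shape result is what the paper proves without constraint qualification.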
Refinements of the Gibbs conditioning principle. Probability Theory and Related Fields 104, 1996.
Cited by 11 (0 self)
Abstract. Refinements of Sanov's large deviations theorem lead, via Csiszár's information-theoretic identity, to refinements of the Gibbs conditioning principle which are valid for blocks whose length increases with the length of the conditioning sequence. Sharp bounds on the growth of the block length with the length of the conditioning sequence are derived. Extensions of Csiszár's triangle inequality and information-theoretic identity to the Markov chain setup lead to similar refinements in the Markov case. Throughout the paper, X_1, X_2, ... denotes a sequence of independent, identically distributed random variables over a Polish space (Σ, B_Σ) with common distribution P_X, where B_Σ denotes the Borel σ-field of Σ, and L_n = (1/n) Σ_{i=1}^n δ_{X_i} denotes the empirical measure of the sequence.