Results 1–10 of 28
Information-theoretic asymptotics of Bayes methods
 IEEE Transactions on Information Theory
, 1990
Abstract

Cited by 107 (10 self)
In the absence of knowledge of the true density function, Bayesian models take the joint density function for a sequence of n random variables to be an average of densities with respect to a prior. We examine the relative entropy distance D_n between the true density and the Bayesian density and show that the asymptotic distance is (d/2) log n + c, where d is the dimension of the parameter vector. Therefore, the relative entropy rate D_n/n converges to zero at rate (log n)/n. The constant c, which we explicitly identify, depends only on the prior density function and the Fisher information matrix evaluated at the true parameter value. Consequences are given for density estimation, universal data compression, composite hypothesis testing, and stock-market portfolio selection.
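The (d/2) log n + c asymptotics can be checked numerically in the simplest case: for Bernoulli(θ) data with a uniform prior (d = 1), the Bayes marginal of a sequence with k successes is the Beta function B(k+1, n−k+1), so D_n is an exact finite sum over k. A minimal sketch; the choice θ = 0.3 and the grid of n are illustrative, not from the paper:

```python
import math

def bayes_kl_bernoulli(n, theta):
    """Exact D_n = KL(P_theta^n || Bayes marginal) for Bernoulli(theta) data
    under a uniform (Beta(1,1)) prior: a sequence with k ones has marginal
    B(k+1, n-k+1), so D_n reduces to a finite sum over k."""
    total = 0.0
    for k in range(n + 1):
        log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        log_ptheta = k * math.log(theta) + (n - k) * math.log(1 - theta)
        log_pk = log_binom + log_ptheta                 # P(k ones out of n)
        # log B(k+1, n-k+1) via log-gamma
        log_marginal = math.lgamma(k + 1) + math.lgamma(n - k + 1) - math.lgamma(n + 2)
        total += math.exp(log_pk) * (log_ptheta - log_marginal)
    return total

theta = 0.3
for n in (100, 200, 1000):
    d_n = bayes_kl_bernoulli(n, theta)
    print(n, d_n, d_n - 0.5 * math.log(n))   # third column should stabilize
```

The third printed column, D_n − (1/2) log n, should settle near the constant c determined here by the uniform prior and the Fisher information 1/(θ(1−θ)).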
A Sequential Particle Filter Method for Static Models
, 2000
Abstract

Cited by 61 (2 self)
Particle filter methods are complex inference procedures, which combine importance sampling and Monte Carlo schemes in order to consistently explore a sequence of multiple distributions of interest. The purpose of this article is to show that such methods can also offer an efficient estimation tool in "static" setups; in this case, π(θ | y_1, ..., y_N) is the only posterior distribution of interest, but the preliminary exploration of partial posteriors π(θ | y_1, ..., y_n) (n < N) makes computing-time savings possible. A complete "black-box" algorithm is proposed for independent or Markov models. Our method is shown to challenge other common estimation procedures in terms of robustness and execution time, especially when the sample size is large. Two classes of examples are discussed and illustrated by numerical results: mixture models and discrete generalized linear models.
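As a hedged sketch of the idea (not the authors' exact algorithm), the following processes Bernoulli observations one at a time: particles represent the static parameter θ, each new observation multiplies the weights by its likelihood, and when the effective sample size degenerates the particles are resampled and rejuvenated with a Metropolis move targeting the current partial posterior. All tuning choices (particle count, proposal scale, ESS threshold) are illustrative:

```python
import math
import random

random.seed(0)

def ibis_bernoulli(data, n_particles=2000):
    """Sequential particle sketch for a *static* parameter: reweight particles
    observation by observation; on weight degeneracy, resample and apply one
    random-walk Metropolis step targeting the current partial posterior."""
    # Particles drawn from a (slightly truncated) uniform prior on (0, 1)
    thetas = [random.uniform(1e-3, 1 - 1e-3) for _ in range(n_particles)]
    logw = [0.0] * n_particles
    k, n = 0, 0                                   # sufficient statistics so far

    def log_post(theta, k, n):                    # partial posterior, up to a constant
        if not (0.0 < theta < 1.0):
            return -math.inf
        return k * math.log(theta) + (n - k) * math.log(1 - theta)

    for x in data:
        for i, th in enumerate(thetas):           # incremental likelihood weight
            logw[i] += math.log(th if x == 1 else 1 - th)
        k += x
        n += 1
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        s = sum(w)
        ess = s * s / sum(wi * wi for wi in w)
        if ess < n_particles / 2:                 # degeneracy: resample + move
            probs = [wi / s for wi in w]
            thetas = random.choices(thetas, weights=probs, k=n_particles)
            logw = [0.0] * n_particles
            for i, th in enumerate(thetas):       # one Metropolis rejuvenation step
                prop = th + random.gauss(0, 0.1)
                if math.log(random.random()) < log_post(prop, k, n) - log_post(th, k, n):
                    thetas[i] = prop
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * th for wi, th in zip(w, thetas)) / sum(w)

data = [1 if random.random() < 0.7 else 0 for _ in range(200)]
exact = (sum(data) + 1) / (len(data) + 2)         # Beta(k+1, n-k+1) posterior mean
est = ibis_bernoulli(data)
print(est, exact)
```

For Bernoulli data with a uniform prior the exact posterior mean is (k+1)/(n+2), which makes the sketch easy to validate against a closed form.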
Exploiting the generic viewpoint assumption
 IJCV
, 1996
Abstract

Cited by 22 (1 self)
The "generic viewpoint" assumption states that an observer is not in a special position relative to the scene. It is commonly used to disqualify scene interpretations that assume special viewpoints, following a binary decision that the viewpoint was either generic or accidental. In this paper, we apply Bayesian statistics to quantify the probability of a view, and so derive a useful tool to estimate scene parameters. This approach may increase the scope and accuracy of scene estimates. It applies to a range of vision problems. We show shape-from-shading examples, where we rank shapes or reflectance functions in cases where these are otherwise unknown. The rankings agree with the perceived values.
Asymptotic normality of posterior distributions in high-dimensional linear models
 Bernoulli 5
, 1999
Abstract

Cited by 22 (5 self)
We study consistency and asymptotic normality of posterior distributions of the natural parameter for an exponential family when the dimension of the parameter grows with the sample size. Under certain growth restrictions on the dimension, we show that the posterior distributions concentrate in neighbourhoods of the true parameter and can be approximated by an appropriate normal distribution.
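A one-dimensional illustration of this kind of posterior-normality result (the paper's setting is an exponential family with growing dimension; this fixed-dimension Bernoulli check is only a sketch): under a uniform prior the posterior is Beta(k+1, n−k+1), and its spread approaches the normal approximation sqrt(θ̂(1−θ̂)/n) as n grows.

```python
import math

def beta_posterior_sd(k, n):
    """Exact posterior sd of theta under a uniform prior: Beta(k+1, n-k+1)."""
    a, b = k + 1, n - k + 1
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

def normal_approx_sd(k, n):
    """Normal-approximation sd: sqrt(theta_hat * (1 - theta_hat) / n)."""
    t = k / n
    return math.sqrt(t * (1 - t) / n)

for n in (20, 200, 2000):
    k = int(0.3 * n)                     # keep theta_hat = 0.3 at every n
    print(n, beta_posterior_sd(k, n), normal_approx_sd(k, n))
```

The relative gap between the two columns shrinks roughly like 1/n, consistent with the posterior concentrating around the true parameter.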
Confidence intervals for a binomial proportion and asymptotic expansions
 Ann. Statist
, 2002
Abstract

Cited by 20 (1 self)
We address the classic problem of interval estimation of a binomial proportion. The Wald interval p̂ ± z_{α/2} n^{−1/2} (p̂(1 − p̂))^{1/2} is currently in near-universal use. We first show that the coverage properties of the Wald interval are persistently poor and defy virtually all conventional wisdom. We then proceed to a theoretical comparison of the standard interval and four alternative intervals by asymptotic expansions of their coverage probabilities and expected lengths. The four additional interval methods we study in detail are the score-test interval (Wilson), the likelihood-ratio-test interval, a Jeffreys-prior Bayesian interval, and an interval suggested by Agresti and Coull. The asymptotic expansions for coverage show that the first three of these alternative methods have coverages that fluctuate about the nominal value, while the Agresti–Coull interval has a somewhat larger and more nearly conservative coverage function. For the five interval methods we also investigate asymptotically their average coverage relative to distributions for p supported within (0, 1). In terms of expected length, asymptotic expansions show that the Agresti–Coull interval is always the longest of these. The remaining three are rather comparable and are shorter than the Wald interval except for p near 0 or 1. These analytical calculations support and complement the findings and recommendations in Brown, Cai and DasGupta (Statist. Sci. 16 (2001)).
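The poor Wald coverage is easy to reproduce exactly, since for fixed n and p the coverage probability is a finite binomial sum over the k for which the interval contains p. A sketch comparing the Wald and Wilson (score-test) intervals at the nominal 95% level; the choice n = 50 and the p grid are illustrative:

```python
import math

Z = 1.959963984540054                       # z_{alpha/2} for alpha = 0.05

def wald(k, n):
    """Wald interval: p_hat +/- z * sqrt(p_hat (1 - p_hat) / n)."""
    ph = k / n
    half = Z * math.sqrt(ph * (1 - ph) / n)
    return ph - half, ph + half

def wilson(k, n):
    """Wilson (score-test) interval: invert the score test for p."""
    ph = k / n
    denom = 1 + Z**2 / n
    center = (ph + Z**2 / (2 * n)) / denom
    half = Z * math.sqrt(ph * (1 - ph) / n + Z**2 / (4 * n**2)) / denom
    return center - half, center + half

def coverage(n, p, interval):
    """Exact coverage: total binomial probability of the k whose interval covers p."""
    cov = 0.0
    for k in range(n + 1):
        lo, hi = interval(k, n)
        if lo <= p <= hi:
            cov += math.comb(n, k) * p**k * (1 - p)**(n - k)
    return cov

for p in (0.05, 0.2, 0.5):
    print(p, coverage(50, p, wald), coverage(50, p, wilson))
```

At p = 0.05 the Wald interval's exact coverage falls well below the nominal 0.95, while the Wilson interval stays close to nominal, matching the paper's qualitative findings.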
Asymptotics and the theory of inference
, 2003
Abstract

Cited by 16 (7 self)
Asymptotic analysis has always been very useful for deriving distributions in statistics in cases where the exact distribution is unavailable. More importantly, asymptotic analysis can also provide insight into the inference process itself, suggesting what information is available and how this information may be extracted. The development of likelihood inference over the past twenty-some years provides an illustration of the interplay between techniques of approximation and statistical theory.
An inverse of Sanov's theorem
 Statist. Probab. Lett
, 1999
Abstract

Cited by 16 (4 self)
Let X_k be a sequence of i.i.d. random variables taking values in a finite set, and consider the problem of estimating the law of X_1 in a Bayesian framework. We prove that the sequence of posterior distributions satisfies a large deviation principle, and give an explicit expression for the rate function. As an application, we obtain an asymptotic formula for the predictive probability of ruin in the classical gambler's ruin problem.
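The gambler's-ruin application can be sketched concretely: with a uniform prior on the win probability p, the predictive ruin probability is the classical ruin formula averaged over the Beta posterior. The stakes (start at 10, target 20), the record of 55 wins in 100 rounds, and the fixed-grid quadrature are all illustrative choices, not from the paper:

```python
import math

def ruin_prob(p, i, N):
    """Classical gambler's ruin: probability of hitting 0 before N,
    starting from i, with win probability p per round."""
    if abs(p - 0.5) < 1e-12:
        return 1 - i / N
    r = (1 - p) / p
    return 1 - (1 - r**i) / (1 - r**N)

def predictive_ruin(wins, losses, i, N, grid=2000):
    """Predictive ruin probability: average the classical formula over the
    Beta(wins+1, losses+1) posterior (uniform prior) by grid quadrature."""
    a, b = wins + 1, losses + 1
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    total = 0.0
    for j in range(1, grid):
        p = j / grid
        dens = math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))
        total += dens * ruin_prob(p, i, N) / grid
    return total

print(predictive_ruin(wins=55, losses=45, i=10, N=20))
print(ruin_prob(0.55, 10, 20))   # plug-in estimate at p_hat, for comparison
```

Because the ruin probability is steep and convex near the posterior mean here, averaging over parameter uncertainty gives a noticeably larger ruin probability than plugging in the point estimate.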
On the uniform consistency of Bayes estimates for multinomial probabilities
, 1988
Cited by 9 (1 self)
Consistency of Bayes estimates for nonparametric regression: normal theory
 Bernoulli
, 1998
Cited by 9 (1 self)
Laplace's method approximations for probabilistic inference in belief networks with continuous variables
 In de Mantaras
, 1994
Abstract

Cited by 8 (0 self)
Laplace's method, a family of asymptotic methods used to approximate integrals, is presented as a potential candidate for the toolbox of techniques used for knowledge acquisition and probabilistic inference in belief networks with continuous variables. This technique approximates posterior moments and marginal posterior distributions with reasonable accuracy [errors are O(n^{−2}) for posterior means] in many interesting cases. The method also seems promising for computing approximations of Bayes factors for use in the context of model selection, model uncertainty, and mixtures of pdfs. The limitations, regularity conditions, and computational difficulties of implementing Laplace's method are comparable to those associated with the methods of maximum likelihood and posterior mode analysis.
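A minimal instance of the technique: approximating the integral ∫ θ^k (1−θ)^{n−k} dθ (a Bernoulli marginal likelihood under a flat prior) by expanding the log-integrand around its mode and integrating the resulting Gaussian. The exact answer is a Beta function, so the approximation error is visible directly; the (k, n) pairs are illustrative:

```python
import math

def log_marginal_exact(k, n):
    """Exact log of the integral: log B(k+1, n-k+1) via log-gamma."""
    return math.lgamma(k + 1) + math.lgamma(n - k + 1) - math.lgamma(n + 2)

def log_marginal_laplace(k, n):
    """Laplace approximation: expand h(theta) = k log theta + (n-k) log(1-theta)
    around its mode theta_hat = k/n and integrate the fitted Gaussian,
    giving h(theta_hat) + (1/2) log(2 pi / (-h''(theta_hat)))."""
    t = k / n
    h = k * math.log(t) + (n - k) * math.log(1 - t)
    neg_h2 = n**3 / (k * (n - k))        # -h''(theta_hat) = n^3 / (k (n - k))
    return h + 0.5 * math.log(2 * math.pi / neg_h2)

for n, k in ((10, 3), (100, 37), (1000, 371)):
    print(n, log_marginal_exact(k, n), log_marginal_laplace(k, n))
```

The gap between the two columns shrinks roughly like 1/n on the log scale, the usual first-order Laplace accuracy for this kind of integral.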