Results 1–10 of 61
The Consistency of Posterior Distributions in Nonparametric Problems
Ann. Statist., 1996
Cited by 79 (4 self)
Abstract: We give conditions that guarantee that the posterior probability of every Hellinger...
Posterior consistency of Dirichlet mixtures in density estimation
Ann. Statist., 1999
Cited by 65 (20 self)
Abstract: A Dirichlet mixture of normal densities is a useful choice for a prior distribution on densities in the problem of Bayesian density estimation. In recent years, efficient Markov chain Monte Carlo methods for computing the posterior distribution have been developed and applied to data arising from different fields of interest. The important issue of consistency, however, was left open. In this paper, we settle this issue in the affirmative.
Rates of Convergence of Posterior Distributions
1998
Cited by 47 (0 self)
Abstract: We compute the rate at which the posterior distribution concentrates around the true parameter value. The spaces we work in are quite general and include infinite-dimensional cases. The rates are driven by two quantities: the size of the space, as measured by metric entropy or bracketing entropy, and the degree to which the prior concentrates in a small ball around the true parameter. We apply the results to several examples. In some cases, natural priors give suboptimal rates of convergence, and better rates can be obtained by using sieve-based priors such as those introduced by Zhao (1993, 1998).
AMS 1990 classification: Primary 62A15; Secondary 62E20, 62G15. Keywords: Bayesian inference, asymptotic inference, nonparametric models, sieves.
Convergence rates of posterior distributions
Ann. Statist., 2000
Cited by 43 (11 self)
Abstract: We consider the asymptotic behavior of posterior distributions and Bayes estimators for infinite-dimensional statistical models. We give general results on the rate of convergence of the posterior measure. These are applied to several examples, including priors on finite sieves, log-spline models, Dirichlet processes and interval censoring.
Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities
Ann. Statist., 2001
Cited by 34 (10 self)
Abstract: We study the rates of convergence of the maximum likelihood estimator (MLE) and posterior distribution in density estimation problems, where the densities are location or location-scale mixtures of normal distributions with the scale parameter lying between two positive numbers. The true density is also assumed to lie in this class, with the true mixing distribution either compactly supported or having sub-Gaussian tails. We obtain bounds for Hellinger bracketing entropies for this class, and from these bounds we deduce the convergence rates of (sieve) MLEs in Hellinger distance. The rate turns out to be (log n)^κ / √n, where κ ≥ 1 is a constant that depends on the type of mixture and the choice of the sieve. Next, we consider a Dirichlet mixture of normals as a prior on the unknown density. We estimate the prior probability of a certain Kullback-Leibler type neighborhood and then invoke a general theorem that computes the posterior convergence rate in terms of the growth rate of the Hellinger entropy and the concentration rate of the prior. The posterior distribution is also seen to converge at the rate (log n)^κ / √n, where κ now depends on the tail behavior of the base measure of the Dirichlet process.
Convergence rates for density estimation with Bernstein polynomials
Ann. Statist., 2001
Cited by 23 (5 self)
Abstract: Mixture models for density estimation provide a very useful setup for the Bayesian or the maximum likelihood approach. For a density on the unit interval, mixtures of beta densities form a flexible model. The class of Bernstein densities is a much smaller subclass of the beta mixtures, defined by Bernstein polynomials, which can approximate any continuous density. A Bernstein polynomial prior is obtained by putting a prior distribution on the class of Bernstein densities. The posterior distribution of a Bernstein polynomial prior is consistent under very general conditions. In this article, we present some results on the rate of convergence of the posterior distribution. If the underlying distribution generating the data is itself a Bernstein density, then we show that the posterior distribution converges at the “nearly parametric” rate (log n) / √n for the Hellinger distance. If the true density is not of the Bernstein type, we show that the posterior converges at the rate n^(-1/3) (log n)^(5/6), provided that the true density is twice differentiable and bounded away from 0. Similar rates are also obtained for sieve maximum likelihood estimates. These rates are inferior to the pointwise convergence rate of a kernel-type estimator. We show that the Bayesian bootstrap method gives a proxy for the posterior distribution and has a convergence rate on par with that of the kernel estimator.
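As a quick numerical illustration (not from the paper itself; the sample sizes below are arbitrary), the two rates quoted in this abstract can be compared directly, showing how much slower the n^(-1/3) (log n)^(5/6) rate decays than the nearly parametric (log n)/√n rate:

```python
import math

def nearly_parametric_rate(n):
    # (log n) / sqrt(n): rate when the true density is itself Bernstein
    return math.log(n) / math.sqrt(n)

def smooth_density_rate(n):
    # n^(-1/3) * (log n)^(5/6): rate for twice-differentiable true densities
    return n ** (-1 / 3) * math.log(n) ** (5 / 6)

# Evaluate both rates on a few illustrative sample sizes.
for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {n:>7}: "
          f"(log n)/sqrt(n) = {nearly_parametric_rate(n):.4f}, "
          f"n^(-1/3)(log n)^(5/6) = {smooth_density_rate(n):.4f}")
```

At every sample size shown, the nearly parametric rate is strictly smaller, and the gap widens as n grows.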
Bayesian Model Selection in Finite Mixtures by Marginal Density Decompositions
Journal of the American Statistical Association, 2001
Posterior Consistency in Nonparametric Regression Problems under Gaussian Process Priors
2004
Cited by 18 (1 self)
Abstract: Posterior consistency can be thought of as a theoretical justification of the Bayesian method. One of the most popular approaches to nonparametric Bayesian regression is to put a nonparametric prior distribution on the unknown regression function using Gaussian processes. In this paper, we study posterior consistency in nonparametric regression problems using Gaussian process priors. We use an extension of the theorem of Schwartz (1965) for non-identically distributed observations, verifying its conditions when using Gaussian process priors for the regression function with normal or double exponential (Laplace) error distributions. We define a metric topology on the space of regression functions and then establish almost sure consistency of the posterior distribution. Our metric topology is weaker than the popular L1 topology. Under additional assumptions, we prove almost sure consistency in the L1 topology as well. When the covariate (predictor) is assumed to be a random variable, we prove almost sure consistency for the joint density function of the response and predictor in the Hellinger metric.
An inverse of Sanov's theorem
Statist. Probab. Lett., 1999
Cited by 16 (4 self)
Abstract: Let X_k be a sequence of i.i.d. random variables taking values in a finite set, and consider the problem of estimating the law of X_1 in a Bayesian framework. We prove that the sequence of posterior distributions satisfies a large deviation principle, and give an explicit expression for the rate function. As an application, we obtain an asymptotic formula for the predictive probability of ruin in the classical gambler's ruin problem.
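For context, the classical gambler's ruin problem referenced in this abstract has a well-known closed-form ruin probability. The sketch below implements that classical (non-Bayesian) formula, not the paper's predictive version; the function name and interface are illustrative choices:

```python
from fractions import Fraction

def ruin_probability(a, N, p):
    """Classical gambler's ruin: probability that a gambler starting with
    integer stake a is ruined (hits 0) before reaching target N, when each
    round the stake moves +1 with probability p and -1 with probability 1-p.
    """
    p = Fraction(p)
    q = 1 - p
    if p == q:
        # Fair game: ruin probability is linear in the starting stake.
        return Fraction(N - a, N)
    r = q / p
    return (r**a - r**N) / (1 - r**N)

# Fair game starting halfway to the target: ruin with probability 1/2.
print(ruin_probability(5, 10, Fraction(1, 2)))  # → 1/2
# Unfavorable game (p = 2/5): ruin becomes more likely.
print(ruin_probability(5, 10, Fraction(2, 5)))
```

Using exact `Fraction` arithmetic avoids floating-point error in the ratio (q/p)^a, which grows quickly for biased games.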
Consistency issues in Bayesian Nonparametrics
In Asymptotics, Nonparametrics and Time Series: A Tribute, 1998