Results 1 – 10 of 26
Practical Bayesian Density Estimation Using Mixtures of Normals
Journal of the American Statistical Association, 1995
"... this paper, we propose some solutions to these problems. Our goal is to come up with a simple, practical method for estimating the density. This is an interesting problem in its own right, as well as a first step towards solving other inference problems, such as providing more flexible distributions ..."
Abstract

Cited by 115 (2 self)
Abstract: In this paper, we propose some solutions to these problems. Our goal is to come up with a simple, practical method for estimating the density. This is an interesting problem in its own right, as well as a first step towards solving other inference problems, such as providing more flexible distributions in hierarchical models. To see why the posterior is improper under the usual reference prior, we write the model in the following way. Let $Z = (Z_1, \ldots, Z_n)$ and $X = (X_1, \ldots, X_n)$. The $Z$ ...
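As a concrete point of reference for this entry, here is a minimal mixture-of-normals density estimate. It uses a maximum-likelihood EM fit (scikit-learn's GaussianMixture) as a stand-in for the paper's Bayesian procedure; the data, component count, and grid are all illustrative choices, not taken from the paper.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Toy sample from a two-component normal mixture (illustrative data).
    x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(1.0, 1.0, 700)])

    # EM point estimate of the mixture; the paper's method is Bayesian,
    # so treat this as a stand-in, not the procedure it proposes.
    gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))

    grid = np.linspace(-5.0, 5.0, 200).reshape(-1, 1)
    density = np.exp(gm.score_samples(grid))  # score_samples returns log-density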
The Consistency of Posterior Distributions in Nonparametric Problems
Ann. Statist., 1996
"... We give conditions that guarantee that the posterior probability of every Hellinger... ..."
Abstract

Cited by 80 (4 self)
Abstract: We give conditions that guarantee that the posterior probability of every Hellinger ...
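The quoted abstract is cut off; for context, the standard objects it refers to are the Hellinger distance and Hellinger neighborhoods of the true density. The display below is the usual form of such a consistency statement, not a quotation from the paper:

\[
h(f, g) = \left( \frac{1}{2} \int \left( \sqrt{f} - \sqrt{g} \right)^2 d\mu \right)^{1/2},
\qquad
\Pi\left( \{ f : h(f, f_0) > \varepsilon \} \mid X_1, \ldots, X_n \right) \longrightarrow 0
\ \text{a.s. for every } \varepsilon > 0.
\]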
Posterior consistency of Dirichlet mixtures in density estimation
Ann. Statist., 1999
"... A Dirichlet mixture of normal densities is a useful choice for a prior distribution on densities in the problem of Bayesian density estimation. In the recent years, efficient Markov chain Monte Carlo method for the computation of the posterior distribution has been developed. The method has been app ..."
Abstract

Cited by 66 (20 self)
Abstract: A Dirichlet mixture of normal densities is a useful choice for a prior distribution on densities in the problem of Bayesian density estimation. In recent years, efficient Markov chain Monte Carlo methods for the computation of the posterior distribution have been developed. These methods have been applied to data arising from different fields of interest. The important issue of consistency was, however, left open. In this paper, we settle this issue in the affirmative.
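For context, the model this abstract refers to is commonly written as a Dirichlet process location mixture of normals (the location-mixture form below is one standard version; the paper's setup may also mix over scales):

\[
f_{P,\sigma}(x) = \int \phi_\sigma(x - \mu) \, dP(\mu), \qquad P \sim \mathrm{DP}(\alpha G_0),
\]

where $\phi_\sigma$ is the normal density with scale $\sigma$, $\alpha > 0$, and $G_0$ is the base measure. Posterior consistency is then understood in the sense recalled under the previous entry.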
Dirichlet Prior Sieves in Finite Normal Mixtures
Statistica Sinica, 2002
"... Abstract: The use of a finite dimensional Dirichlet prior in the finite normal mixture model has the effect of acting like a Bayesian method of sieves. Posterior consistency is directly related to the dimension of the sieve and the choice of the Dirichlet parameters in the prior. We find that naive ..."
Abstract

Cited by 40 (1 self)
Abstract: The use of a finite-dimensional Dirichlet prior in the finite normal mixture model has the effect of acting like a Bayesian method of sieves. Posterior consistency is directly related to the dimension of the sieve and the choice of the Dirichlet parameters in the prior. We find that naive use of the popular uniform Dirichlet prior leads to an inconsistent posterior. However, a simple adjustment to the parameters in the prior induces a random probability measure that approximates the Dirichlet process and yields a posterior that is strongly consistent for the density and weakly consistent for the unknown mixing distribution. The dimension of the resulting sieve can be selected easily in practice, and a simple and efficient Gibbs sampler can be used to sample the posterior of the mixing distribution.
Key words and phrases: Bose-Einstein distribution, Dirichlet process, identification, method of sieves, random probability measure, relative entropy, weak convergence.
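A minimal sketch of the kind of Gibbs sampler the abstract mentions, for a finite location mixture with known scale and the adjusted Dirichlet(α/N, …, α/N) weight prior it describes; the component count N, the prior scale tau, and the function name are illustrative choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_dirichlet_sieve(x, N=20, alpha=1.0, sigma=1.0, tau=3.0, iters=1000):
        """Gibbs sampler for w ~ Dir(alpha/N, ..., alpha/N), mu_k ~ N(0, tau^2),
        x_i | z_i ~ N(mu_{z_i}, sigma^2). Known sigma keeps the sketch short."""
        mu = rng.normal(0.0, tau, size=N)
        w = np.full(N, 1.0 / N)
        for _ in range(iters):
            # 1. Labels z_i | w, mu: categorical, p_k proportional to w_k * N(x_i; mu_k, sigma^2).
            logp = np.log(w) - 0.5 * ((x[:, None] - mu) / sigma) ** 2
            p = np.exp(logp - logp.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            z = np.array([rng.choice(N, p=row) for row in p])
            # 2. Weights w | z: conjugate Dirichlet update of the sieve prior.
            counts = np.bincount(z, minlength=N)
            w = rng.dirichlet(alpha / N + counts)
            # 3. Means mu_k | z, x: conjugate normal update.
            for k in range(N):
                prec = counts[k] / sigma**2 + 1.0 / tau**2
                mean = x[z == k].sum() / sigma**2 / prec
                mu[k] = rng.normal(mean, 1.0 / np.sqrt(prec))
        return w, mu, z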
Mutual Information, Metric Entropy, and Cumulative Relative Entropy Risk
Annals of Statistics, 1996
"... Assume fP ` : ` 2 \Thetag is a set of probability distributions with a common dominating measure on a complete separable metric space Y . A state ` 2 \Theta is chosen by Nature. A statistician gets n independent observations Y 1 ; : : : ; Y n from Y distributed according to P ` . For each time ..."
Abstract

Cited by 40 (2 self)
Abstract: Assume $\{P_\theta : \theta \in \Theta\}$ is a set of probability distributions with a common dominating measure on a complete separable metric space $Y$. A state $\theta \in \Theta$ is chosen by Nature. A statistician gets $n$ independent observations $Y_1, \ldots, Y_n$ from $Y$ distributed according to $P_\theta$. For each time $t$ between 1 and $n$, based on the observations $Y_1, \ldots, Y_{t-1}$, the statistician produces an estimated distribution $\hat{P}_t$ for $P_\theta$, and suffers a loss $L(P_\theta, \hat{P}_t)$. The cumulative risk for the statistician is the average total loss up to time $n$. Of special interest in information theory, data compression, mathematical finance, computational learning theory and statistical mechanics is the special case when the loss $L(P_\theta, \hat{P}_t)$ is the relative entropy between the true distribution $P_\theta$ and the estimated distribution $\hat{P}_t$. Here the cumulative Bayes risk from time 1 to $n$ is the mutual information between the random parameter $\Theta$ and the observations $Y_1, \ldots, Y_n$.
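The link the abstract asserts between cumulative risk and mutual information is the standard chain-rule identity: when $\hat{P}_t$ is the Bayes predictive distribution $P(\cdot \mid Y_1, \ldots, Y_{t-1})$, the cumulative relative entropy Bayes risk telescopes to a mutual information,

\[
\sum_{t=1}^{n} \mathbb{E} \, D\!\left( P_\Theta \,\middle\|\, \hat{P}_t \right) = I(\Theta; Y_1, \ldots, Y_n),
\]

where $D(\cdot \,\|\, \cdot)$ is relative entropy and the expectation runs over both $\Theta$ and the observations.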
The Horseshoe Estimator for Sparse Signals
2008
"... This paper proposes a new approach to sparsity called the horseshoe estimator. The horseshoe is a close cousin of other widely used Bayes rules arising from, for example, doubleexponential and Cauchy priors, in that it is a member of the same family of multivariate scale mixtures of normals. But th ..."
Abstract

Cited by 21 (6 self)
Abstract: This paper proposes a new approach to sparsity called the horseshoe estimator. The horseshoe is a close cousin of other widely used Bayes rules arising from, for example, double-exponential and Cauchy priors, in that it is a member of the same family of multivariate scale mixtures of normals. But the horseshoe enjoys a number of advantages over existing approaches, including its robustness, its adaptivity to different sparsity patterns, and its analytical tractability. We prove two theorems that formally characterize both the horseshoe's adeptness at handling large outlying signals and its super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using a combination of real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers one would get by pursuing a full Bayesian model-averaging approach using a discrete mixture prior to model signals and noise.
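A small sketch of the scale-mixture representation named in the abstract: local scales drawn half-Cauchy, signals conditionally normal. The shrinkage coefficients computed at the end are what give the horseshoe its name (for $\tau = 1$ they follow a Beta(1/2, 1/2), i.e. horseshoe-shaped, density); the sample size and $\tau$ are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, tau = 100_000, 1.0

    # Horseshoe hierarchy: lambda_i ~ half-Cauchy(0, 1),
    # theta_i | lambda_i ~ N(0, (lambda_i * tau)^2).
    lam = np.abs(rng.standard_cauchy(n))
    theta = rng.normal(0.0, lam * tau)

    # Shrinkage weight kappa_i = 1 / (1 + lambda_i^2 tau^2): under a normal
    # means model the posterior mean is roughly (1 - kappa_i) * y_i, so mass
    # of kappa near 0 preserves big signals and mass near 1 zeroes out noise.
    kappa = 1.0 / (1.0 + (lam * tau) ** 2)
    print(np.quantile(kappa, [0.05, 0.5, 0.95]))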
Bayesian Model Selection in Finite Mixtures by Marginal Density Decompositions
Journal of the American Statistical Association, 2001
"... ..."
General bounds on the mutual information between a parameter and n conditionally independent observations
Proceedings of the Seventh Annual ACM Workshop on Computational Learning Theory, 1995
"... Each parameter in an abstract parameter space is associated with a di erent probability distribution on a set Y. A parameter is chosen at random from according to some a priori distribution on, and n conditionally independent random variables Y n = Y1�:::Yn are observed with common distribution dete ..."
Abstract

Cited by 14 (5 self)
Abstract: Each parameter $\theta$ in an abstract parameter space $\Theta$ is associated with a different probability distribution on a set $Y$. A parameter $\theta$ is chosen at random from $\Theta$ according to some a priori distribution on $\Theta$, and $n$ conditionally independent random variables $Y^n = Y_1, \ldots, Y_n$ are observed with common distribution determined by $\theta$. We obtain bounds on the mutual information between the random variable $\Theta$, giving the choice of parameter, and the random variable $Y^n$, giving the sequence of observations. We also bound the supremum of the mutual information over choices of the prior distribution on $\Theta$. These quantities have applications in density estimation, computational learning theory, universal coding, hypothesis testing, and portfolio selection theory. The bounds are given in terms of the metric and information dimensions of the parameter space with respect to the Hellinger distance.
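For orientation, the elementary case of such a bound is immediate from $I(\Theta; Y^n) \le H(\Theta)$; the paper's contribution is to replace the crude cardinality term with metric-entropy quantities. The second display below is only the general shape of those bounds, with constants and the exact covering notion deferred to the paper:

\[
I(\Theta; Y^n) \le H(\Theta) \le \log |\Theta| \quad \text{for finite } \Theta,
\]
\[
I(\Theta; Y^n) \lesssim \log \mathcal{N}(\varepsilon) + n \varepsilon^2 \quad \text{for an } \varepsilon\text{-cover of the parameter space under Hellinger distance.}
\]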
On rates of convergence for posterior distributions in infinite-dimensional models
Ann. Statist., 2007
"... This paper introduces a new approach to the study of rates of convergence for posterior distributions. It is a natural extension of a recent approach to the study of Bayesian consistency. In particular, we improve on current rates of convergence for models including the mixture of Dirichlet process ..."
Abstract

Cited by 11 (0 self)
Abstract: This paper introduces a new approach to the study of rates of convergence for posterior distributions. It is a natural extension of a recent approach to the study of Bayesian consistency. In particular, we improve on current rates of convergence for models including the mixture of Dirichlet process model and the random Bernstein polynomial model.
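For context, the standard definition such results are phrased in (not a quotation from the paper): a sequence $\varepsilon_n \to 0$ is a posterior rate of convergence at $f_0$ if, for a sufficiently large constant $M$,

\[
\Pi\left( \{ f : d(f, f_0) > M \varepsilon_n \} \mid X_1, \ldots, X_n \right) \longrightarrow 0
\]

in probability under $f_0$, with $d$ typically the Hellinger or $L_1$ distance.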