Results 1–10 of 13
The Consistency of Posterior Distributions in Nonparametric Problems
 Ann. Statist., 1996
Abstract

Cited by 92 (4 self)
We give conditions that guarantee that the posterior probability of every Hellinger...
Kullback-Leibler property of kernel mixture priors in Bayesian density estimation
 Electronic J. Statist., 2008
Abstract

Cited by 9 (3 self)
Positivity of the prior probability of a Kullback-Leibler neighborhood around the true density, commonly known as the Kullback-Leibler property, plays a fundamental role in posterior consistency. A popular prior for Bayesian density estimation is the Dirichlet mixture, where the kernel is chosen depending on the sample space and the class of densities to be estimated. The Kullback-Leibler property of the Dirichlet mixture prior has been shown for some special kernels, such as the normal density or Bernstein polynomials, under appropriate conditions. In this paper, we obtain easily verifiable sufficient conditions under which a prior obtained by mixing a general kernel possesses the Kullback-Leibler property. We study a wide variety of kernels used in practice, including the normal, t, histogram, and Weibull densities, and show that the Kullback-Leibler property holds if some easily verifiable conditions are satisfied at the true density. This gives a catalog of conditions required for the Kullback-Leibler property, which can be readily used in applications. AMS (2000) subject classification. Primary 62G07, 62G20.
The Elimination of Nuisance Parameters
, 2004
Abstract

Cited by 2 (0 self)
We review the Bayesian approach to the problem of the elimination of nuisance parameters from a statistical model. Many Bayesian statisticians feel that the framework of Bayesian statistics is so clear and simple that the elimination of nuisance parameters should not be considered a problem: one has simply to compute the marginal posterior distribution of the parameter of interest. However, we will argue that this exercise need not be so simple from a practical perspective. The paper is divided into two main parts: the first deals with regular parametric models, whereas the second focuses on non-regular problems, including the so-called Neyman–Scott class of models and semiparametric models where the nuisance parameter lies in an infinite-dimensional space. Finally, we relate the issue of the elimination of nuisance parameters to other, apparently different, problems. Occasionally, we mention non-Bayesian treatments of nuisance parameters, mainly for comparative analysis.
Characterizing the variance improvement in linear Dirichlet random effects models
 Statist. Probab. Lett., 2009
Abstract

Cited by 1 (1 self)
An alternative to the classical mixed model with normal random effects is to use a Dirichlet process to model the random effects. Such models have proven useful in practice, and we have observed a noticeable variance reduction in the estimation of the fixed effects when the Dirichlet process is used instead of the normal. In this paper we formalize this notion and give a theoretical justification for the expected variance reduction. We show that for almost all data vectors, the posterior variance from the Dirichlet random effects model is smaller than that from the normal random effects model.
Semiparametric Bayesian . . .
, 2012
Abstract
Bayesian partially identified models have received growing attention in recent years in the econometric literature, due to their broad applications in empirical studies. The classical Bayesian approach in this literature has been to assume a parametric model by specifying an ad hoc parametric likelihood function. However, econometric models usually identify only a set of moment inequalities, so assuming a known likelihood function runs the risk of misspecification and may result in inconsistent estimation of the identified set. On the other hand, moment-condition-based likelihoods such as the limited-information and exponentially tilted empirical likelihoods, though guaranteeing consistency, lack a probabilistic interpretation. We propose a semiparametric Bayesian partially identified model by placing a nonparametric prior on the unknown likelihood function. Our approach thus requires only a set of moment conditions but still possesses a pure Bayesian interpretation. We study the posterior of the support function, which is essential when the object of interest is the identified set. The support function also enables us to construct two-sided Bayesian credible sets (BCS) for the identified set. It is found that, while the BCS of the partially identified …
Estimation in Dirichlet Random Effects Models
© Institute of Mathematical Statistics, 2010
Abstract
We develop a new Gibbs sampler for a linear mixed model with a Dirichlet process random effect term, which is easily extended to a generalized linear mixed model with a probit link function. Our Gibbs sampler exploits the properties of the multinomial and Dirichlet distributions, and is shown to be an improvement, in terms of operator norm and efficiency, over other commonly used MCMC algorithms. We also investigate methods for the estimation of the precision parameter of the Dirichlet process, finding that maximum likelihood may not be desirable but that a posterior mode is a reasonable approach. Examples are given to show how these models perform on real data. Our results complement both the theoretical basis of the Dirichlet process nonparametric prior and the computational work that has been done to date.