Results 1–10 of 28
Gibbs Sampling Methods for Stick-Breaking Priors
Abstract

Cited by 222 (17 self)
... In this paper we present two general types of Gibbs samplers that can be used to fit posteriors of Bayesian hierarchical models based on stick-breaking priors. The first type of Gibbs sampler, referred to as a Pólya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling method currently employed for Dirichlet process computing. This method applies to stick-breaking priors with a known Pólya urn characterization; that is, priors with an explicit and simple prediction rule. Our second method, the blocked Gibbs sampler, is based on an entirely different approach that works by directly sampling values from the posterior of the random measure. The blocked Gibbs sampler can be viewed as a more general approach, as it works without requiring an explicit prediction rule. We find that the blocked Gibbs sampler avoids some of the limitations seen with the Pólya urn approach and should be simpler for nonexperts to use.
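The finite truncation that blocked-Gibbs-style schemes operate on can be illustrated with a minimal stick-breaking sketch. This is not the paper's sampler, only the Dirichlet-process special case of the construction it targets; the function names and the Beta(1, alpha) choice of stick proportions are illustrative assumptions.

```python
import numpy as np

def stick_breaking(alpha, truncation, base_sampler, rng):
    """Truncated stick-breaking draw of a random measure G = sum_k w_k * delta_{theta_k}.

    For the Dirichlet process, the stick proportions are V_k ~ Beta(1, alpha);
    blocked Gibbs samplers work with exactly this kind of finite truncation.
    """
    v = rng.beta(1.0, alpha, size=truncation)
    v[-1] = 1.0  # close the last stick so the weights sum to one
    # w_k = V_k * prod_{j<k} (1 - V_j)
    weights = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    atoms = base_sampler(truncation, rng)  # theta_k ~ G0 (base measure)
    return weights, atoms

rng = np.random.default_rng(0)
w, theta = stick_breaking(2.0, 50, lambda n, r: r.normal(size=n), rng)
```

Setting the last stick proportion to one is the standard truncation device: it guarantees the fifty weights form an exact probability vector rather than leaving residual mass.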
Iterated random functions
 SIAM Review
, 1999
Abstract

Cited by 135 (1 self)
Iterated random functions are used to draw pictures or simulate large Ising models, among other applications. They offer a method for studying the steady-state distribution of a Markov chain, and give useful bounds on rates of convergence in a variety of examples. The present paper surveys the field and presents some new examples. There is a simple unifying idea: the iterates of random Lipschitz functions converge if the functions are contracting on the average.
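The "contracting on the average" idea can be seen in the simplest case the survey covers: random affine maps. The specific distributions below (uniform slopes, standard normal intercepts) are an illustrative assumption, chosen so that E[log|a|] < 0 and the iterates settle into a unique stationary distribution.

```python
import numpy as np

def iterate_maps(maps, x0=0.0):
    # Compose random Lipschitz maps f_i(x) = a_i * x + b_i in sequence.
    # With E[log|a|] < 0 the maps are contracting on average, so the
    # iterates converge in distribution regardless of x0.
    x = x0
    for a, b in maps:
        x = a * x + b
    return x

rng = np.random.default_rng(1)
draws = []
for _ in range(2000):
    maps = [(rng.uniform(-0.5, 0.5), rng.normal()) for _ in range(200)]
    draws.append(iterate_maps(maps))
draws = np.array(draws)
# Stationary variance solves Var = E[a^2]*Var + Var(b), i.e. Var = 1/(1 - 1/12)
```

After 200 compositions the influence of the starting point is negligible (|a| ≤ 0.5, so it has shrunk by at least 2^-200), and the 2000 draws behave like samples from the stationary law.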
A computational approach for full nonparametric Bayesian inference under Dirichlet process mixture models
 Journal of Computational and Graphical Statistics
, 2002
Abstract

Cited by 27 (7 self)
Widely used parametric generalized linear models are, unfortunately, a somewhat limited class of specifications. Nonparametric aspects are often introduced to enrich this class, resulting in semiparametric models. Focusing on single- or k-sample problems, many classical nonparametric approaches are limited to hypothesis testing. Those that allow estimation are limited to certain functionals of the underlying distributions. Moreover, the associated inference often relies upon asymptotics, when nonparametric specifications are often most appealing for smaller sample sizes. Bayesian nonparametric approaches avoid asymptotics but have, to date, been limited in the range of inference. Working with Dirichlet process priors, we overcome the limitations of existing simulation-based model-fitting approaches, which yield inference that is confined to posterior moments of linear functionals of the population distribution. This article provides a computational approach to obtain the entire posterior distribution for more general functionals. We illustrate with three applications: investigation of extreme value distributions associated with a single population, comparison of medians in a k-sample problem, and comparison of survival times from different populations under fairly heavy censoring.
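The core computational idea, obtaining a posterior for an arbitrary functional such as a median rather than just its posterior mean, can be sketched generically. Here the posterior draws of the random distribution are stood in for by hypothetical finite normal mixtures; the function name and the Monte Carlo evaluation of the functional are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def functional_posterior(mixture_draws, functional, rng, n_mc=4000):
    """Given posterior draws of the random distribution (approximated here
    by finite normal mixtures), return a posterior sample of any functional
    by evaluating it on Monte Carlo draws from each sampled distribution."""
    out = []
    for weights, means, sds in mixture_draws:
        comp = rng.choice(len(weights), size=n_mc, p=weights)
        x = rng.normal(means[comp], sds[comp])
        out.append(functional(x))
    return np.array(out)

rng = np.random.default_rng(2)
# Hypothetical posterior draws: equal mixtures of N(0,1) and N(3,1),
# whose median is 1.5 by symmetry.
draws = [(np.array([0.5, 0.5]), np.array([0.0, 3.0]), np.array([1.0, 1.0]))
         for _ in range(100)]
medians = functional_posterior(draws, np.median, rng)
```

Because each posterior draw of the distribution yields one value of the functional, the returned array is itself a sample from the functional's posterior, which is exactly the kind of full-posterior inference the abstract describes.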
Approximate Dirichlet Process Computing in Finite Normal Mixtures: Smoothing and Prior Information
 JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS
, 2000
On the Dirichlet Prior and Bayesian Regularization
 In Advances in Neural Information Processing Systems 15
, 2002
Abstract

Cited by 22 (2 self)
A common objective in learning a model from data is to recover its network structure, while the model parameters are of minor interest. For example, we may wish to recover regulatory networks from high-throughput data sources. In this paper we examine how Bayesian regularization using a Dirichlet prior over the model parameters affects the learned model structure in a domain with discrete variables. Surprisingly, a weak prior in the sense of smaller equivalent sample size leads to a strong regularization of the model structure (sparse graph) given a sufficiently large data set. In particular, the empty graph is obtained in the limit of a vanishing strength of prior belief. This is diametrically opposite to what one may expect in this limit, namely the complete graph from an (unregularized) maximum likelihood estimate. Since the prior affects the parameters as expected, the prior strength balances a "tradeoff" between regularizing the parameters or the structure of the model. We demonstrate the benefits of optimizing this tradeoff in the sense of predictive accuracy.
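The effect can be reproduced with a minimal sketch of the BDeu-style marginal likelihood the analysis applies to. For two strongly dependent binary variables, the edge X→Y wins at a moderate equivalent sample size, but as the ESS is driven toward zero the diverging lgamma penalty terms eventually favor the empty graph; the function name and the deliberately extreme ESS value used to exhibit the limit are illustrative assumptions.

```python
from math import lgamma
import numpy as np

def local_score(counts, ess):
    """Bayesian-Dirichlet marginal log-likelihood of one variable.
    counts: one row per parent configuration, one column per state.
    ess: equivalent sample size; each cell gets pseudo-count ess/counts.size."""
    a_cell = ess / counts.size
    score = 0.0
    for row in counts:
        a_row = a_cell * len(row)
        score += lgamma(a_row) - lgamma(a_row + row.sum())
        score += sum(lgamma(a_cell + n) - lgamma(a_cell) for n in row)
    return score

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 1000)
y = np.where(rng.random(1000) < 0.9, x, 1 - x)  # Y strongly depends on X
# Y's local score with no parents (one row) vs. with parent X (one row per x-state):
c_empty = np.array([[np.sum(y == 0), np.sum(y == 1)]])
c_edge = np.array([[np.sum((x == i) & (y == j)) for j in (0, 1)] for i in (0, 1)])
```

At ess=1.0 the likelihood advantage of the edge (roughly N times the mutual information) dominates; at a vanishingly small ESS the extra Dirichlet penalty of the larger model grows like -log(ess) and the empty graph is preferred, matching the abstract's claim.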
A Bayesian semiparametric model for random-effects meta-analysis
 Journal of the American Statistical Association
Abstract

Cited by 17 (1 self)
In meta-analysis there is an increasing trend to explicitly acknowledge the presence of study variability through random effects models. That is, one assumes that for each study, there is a study-specific effect and one is observing an estimate of this latent variable. In a random effects model, one assumes that these study-specific effects come from some distribution, and one can estimate the parameters of this distribution, as well as the study-specific effects themselves. This distribution is most often modelled through a parametric family, usually a family of normal distributions. The advantage of using a normal distribution is that the mean parameter plays an important role, and much of the focus is on determining whether or not this mean is 0. For example, it may be easier to justify funding further studies if it is determined that this mean is not 0. Typically, this normality assumption is made for the sake of convenience, rather than from some theoretical justification, and may not actually hold. We present a Bayesian model in which the distribution of the study-specific effects is modelled through a certain class of nonparametric priors. These priors can be designed to concentrate most of their mass around the family of normal ...
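The parametric normal random-effects model that the paper relaxes has a simple Gibbs sampler, sketched below. Fixing the between-study variance tau2 and using a flat prior on the mean are simplifying assumptions made purely to keep the sketch short; the paper's contribution is to replace the normal distribution of the study effects with a nonparametric prior.

```python
import numpy as np

def gibbs_meta(y, s2, tau2, n_iter, rng):
    """Gibbs sampler for the normal random-effects model:
    y_i ~ N(theta_i, s_i^2) with known s_i^2, theta_i ~ N(mu, tau2),
    flat prior on mu (tau2 held fixed for brevity)."""
    k = len(y)
    mu, mus = 0.0, []
    for _ in range(n_iter):
        prec = 1.0 / s2 + 1.0 / tau2
        mean = (y / s2 + mu / tau2) / prec
        theta = rng.normal(mean, np.sqrt(1.0 / prec))      # theta_i | mu, y
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))   # mu | theta
        mus.append(mu)
    return np.array(mus)

rng = np.random.default_rng(4)
y = rng.normal(2.0, np.sqrt(1.0 + 0.25), size=20)  # 20 studies, true mean effect 2
mus = gibbs_meta(y, s2=np.full(20, 0.25), tau2=1.0, n_iter=2000, rng=rng)
```

The chain for mu concentrates near the true mean effect, and the same "is the mean 0?" question the abstract discusses can be answered from the posterior draws.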
Consistency issues in Bayesian Nonparametrics
 IN ASYMPTOTICS, NONPARAMETRICS AND TIME SERIES: A TRIBUTE
, 1998
A Bayesian Semiparametric Model for Case-Control Studies with Errors in Variables
Abstract

Cited by 12 (0 self)
this paper is based on a nonparametric model for the unknown joint distribution of the missing data, the observed covariates, and the proxy. This nonparametric distribution defines the measurement error component of the model, which relates the missing covariates X with a proxy W. The oxymoron "nonparametric Bayes" refers to a class of flexible mixture distributions. For the likelihood of disease, given covariates, we choose a logistic regression model. By using a parametric disease model and a nonparametric exposure model we obtain robust, interpretable results quantifying the effect of exposure. Some Key Words: Mixture of Dirichlet Processes, Logistic Regression, Measurement Error, Markov chain Monte Carlo, Gibbs Sampling, Metropolis Sampling.
1 Introduction
We develop a model and a numerical estimation scheme for a Bayesian approach to inference in case-control studies with errors in covariables. In epidemiology, covariates are frequently measured with error substantial enough that it can seriously affect the assessment of the relation between risk factors and disease outcome (Carroll, Spiegelman, Lan, Bailey & Abbott 1984). Although there are situations where it is possible to obtain accurate measurements of the covariate, at a substantially higher cost, for the majority of the observations these measurements will not be available. Nevertheless, the validation group can be used to form a model that indirectly links the error-prone measurement to the disease outcome, through its relation to the error-free measurement. This general topic of measurement error models has received a considerable amount of attention in the frequentist literature (e.g., Fuller, 1987; Stefanski & Carroll, 1990), but until recently has received relatively little attention in the Bayesian li...
Bayesian mixture modeling for spatial Poisson process intensities, with applications to extreme value analysis
 Dept
, 2005
Abstract

Cited by 12 (3 self)
Abstract: We propose a method for the analysis of a spatial point pattern, which is assumed to arise as a set of observations from a spatial nonhomogeneous Poisson process. The spatial point pattern is observed in a bounded region, which, for most applications, is taken to be a rectangle in the space where the process is defined. The method is based on modeling a density function, defined on this bounded region, that is directly related with the intensity function of the Poisson process. We develop a flexible nonparametric mixture model for this density using a bivariate Beta distribution for the mixture kernel and a Dirichlet process prior for the mixing distribution. Using posterior simulation methods, we obtain full inference for the intensity function and any other functional of the process that might be of interest. We discuss applications to problems where inference for clustering in the spatial point pattern is of interest. Moreover, we consider applications of the methodology to extreme value analysis problems. We illustrate the modeling approach with three previously published data sets. Two of the data sets are from forestry and consist of locations of trees. The third data set consists of extremes from the Dow Jones index over a period of 1303 days.
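Once the intensity is written as a total mass times a density on the unit square, realizations of the process can be simulated by standard Lewis-Shedler thinning. The sketch below uses a single product-Beta(2,2) kernel as an assumed stand-in for the paper's Dirichlet process mixture of bivariate Betas; the function names are illustrative.

```python
import numpy as np

def beta22(u):
    return 6.0 * u * (1.0 - u)  # Beta(2,2) density on (0,1), peak 1.5 at u = 0.5

def sample_poisson_process(total_mass, rng):
    """Nonhomogeneous Poisson process on the unit square with intensity
    total_mass * beta22(x) * beta22(y), simulated by thinning a homogeneous
    process at the intensity's maximum rate."""
    lam_max = total_mass * 1.5 * 1.5          # intensity peaks at (0.5, 0.5)
    n = rng.poisson(lam_max)                  # homogeneous proposals (area = 1)
    pts = rng.uniform(0.0, 1.0, size=(n, 2))
    # Keep each proposal with probability intensity(point) / lam_max.
    accept = rng.uniform(0.0, 1.0, n) * lam_max < \
        total_mass * beta22(pts[:, 0]) * beta22(pts[:, 1])
    return pts[accept]

rng = np.random.default_rng(5)
pts = sample_poisson_process(100.0, rng)      # expected number of points: 100
```

Because the density integrates to one, total_mass is the expected point count, which is the decomposition of the intensity that the abstract's mixture model exploits.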
On Nonparametric Bayesian Inference for the Distribution of a Random Sample
, 1995
Abstract

Cited by 9 (5 self)
The nonparametric Bayesian approach for inference regarding the unknown distribution of a random sample customarily assumes that this distribution is random and arises through Dirichlet process mixing. Previous work within this setting has focused on the mean of the posterior distribution of this random distribution, which is the predictive distribution of a future observation given the sample. Our interest here is in learning about other features of this posterior distribution, as well as about posteriors associated with functionals of the distribution of the data. We indicate how to do this in the case of linear functionals. An illustration, with a sample from a Gamma distribution, utilizes Dirichlet process mixtures of normals to recover this distribution and its features. Key Words: Dirichlet process, linear functional, posterior distribution, predictive distribution, sampling-based inference. AMS Subject Classifications: 62C10, 62G07.
1 Alan E. Gelfand is Professor and Saurabh Mukh...