Results 1-10 of 10
Convergence rates of posterior distributions
Ann. Statist., 2000
Cited by 47 (11 self)
Abstract: We consider the asymptotic behavior of posterior distributions and Bayes estimators for infinite-dimensional statistical models. We give general results on the rate of convergence of the posterior measure. These are applied to several examples, including priors on finite sieves, log-spline models, Dirichlet processes and interval censoring.
Bayesian Model Selection in Finite Mixtures by Marginal Density Decompositions
Journal of the American Statistical Association, 2001
Bayesian estimation of the spectral density of a time series
J. Amer. Statist. Assoc., 2004
Cited by 16 (3 self)
Abstract: This article describes a Bayesian approach to estimating the spectral density of a stationary time series. A nonparametric prior on the spectral density is described through Bernstein polynomials. Because the actual likelihood is very complicated, a pseudo-posterior distribution is obtained by updating the prior using the Whittle likelihood. A Markov chain Monte Carlo algorithm for sampling from this posterior distribution is described and used for computing the posterior mean, variance, and other statistics. A consistency result is established for this pseudo-posterior distribution that holds for a short-memory Gaussian time series under some conditions on the prior. To prove this asymptotic result, a general consistency theorem of Schwartz is extended to a triangular array of independent, non-identically distributed observations. This extension is also of independent interest. A simulation study is conducted to compare the proposed method with some existing methods. The method is illustrated with the well-studied sunspot dataset.
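The Whittle likelihood used in this paper approximates the exact Gaussian likelihood through the periodogram: up to constants, log L_W(f) = -Σ_j [log f(ω_j) + I_n(ω_j)/f(ω_j)] over the nonzero Fourier frequencies. A minimal sketch of that approximation, assuming this standard form; the function name and the white-noise example are illustrative, not from the paper:

```python
import numpy as np

def whittle_log_likelihood(x, spectral_density):
    """Whittle log-likelihood of series x under a candidate spectral density.

    spectral_density: callable mapping frequencies in (0, pi) to f(omega) > 0.
    Sums -(log f + I/f) over the nonzero Fourier frequencies, where I is
    the periodogram of the (demeaned) series.
    """
    n = len(x)
    # Fourier frequencies omega_j = 2*pi*j/n for j = 1, ..., floor((n-1)/2)
    freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    dft = np.fft.fft(x - np.mean(x))
    periodogram = np.abs(dft[1:(n - 1) // 2 + 1]) ** 2 / (2 * np.pi * n)
    f = spectral_density(freqs)
    return -np.sum(np.log(f) + periodogram / f)

# White noise with unit variance has constant spectral density 1 / (2*pi)
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
ll = whittle_log_likelihood(x, lambda w: np.full_like(w, 1.0 / (2 * np.pi)))
```

In the paper this quantity, with f parametrized through a Bernstein polynomial prior, replaces the exact likelihood in the posterior update.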
DYNAMICS OF BAYESIAN UPDATING WITH DEPENDENT DATA AND MISSPECIFIED MODELS
2009
Cited by 10 (2 self)
Abstract: Recent work on the convergence of posterior distributions under Bayesian updating has established conditions under which the posterior will concentrate on the truth, if the latter has a perfect representation within the support of the prior, and under various dynamical assumptions, such as the data being independent and identically distributed or Markovian. Here I establish sufficient conditions for the convergence of the posterior distribution in nonparametric problems even when all of the hypotheses are wrong, and the data-generating process has a complicated dependence structure. The main dynamical assumption is the generalized asymptotic equipartition (or “Shannon-McMillan-Breiman”) property of information theory. I derive a kind of large deviations principle for the posterior measure, and discuss the advantages of predicting using a combination of models known to be wrong. An appendix sketches connections between the present results and the “replicator dynamics” of evolutionary theory.
Bayesian modeling of joint and conditional distributions
Unpublished manuscript, 2009
Cited by 3 (0 self)
Abstract: In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. Discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data-generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables, and as a Bayesian alternative to semi- and nonparametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
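The extraction step is available in closed form for Gaussian mixtures: each component contributes its Gaussian conditional, reweighted by how well that component's marginal explains the observed x. A hedged sketch of this step for a bivariate mixture, assuming the standard conditioning formulas; the example weights and parameters are made up:

```python
import numpy as np

def conditional_density(y, x, weights, means, covs):
    """p(y | x) implied by a bivariate Gaussian mixture over (x, y).

    weights: (K,) mixture weights; means: (K, 2); covs: (K, 2, 2).
    Component k contributes N(mu_y + s_xy/s_xx * (x - mu_x),
    s_yy - s_xy^2/s_xx) with weight proportional to
    weights[k] * N(x; mu_x, s_xx).
    """
    def normal_pdf(z, m, v):
        return np.exp(-0.5 * (z - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

    num = 0.0
    den = 0.0
    for w, mu, S in zip(weights, means, covs):
        marg = normal_pdf(x, mu[0], S[0, 0])  # component marginal at x
        cond_mean = mu[1] + S[0, 1] / S[0, 0] * (x - mu[0])
        cond_var = S[1, 1] - S[0, 1] ** 2 / S[0, 0]
        num += w * marg * normal_pdf(y, cond_mean, cond_var)
        den += w * marg
    return num / den

# Two-component example with correlated components
weights = np.array([0.5, 0.5])
means = np.array([[-1.0, -1.0], [1.0, 1.0]])
covs = np.array([[[1.0, 0.5], [0.5, 1.0]]] * 2)
p = conditional_density(0.0, 0.0, weights, means, covs)
```

The same reweighting generalizes to higher dimensions and, via the latent-variable device the abstract mentions, to discrete coordinates.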
Consistency of Bayes estimators of a binary regression function
Annals of Statistics, 2006
Cited by 1 (0 self)
Abstract: When do nonparametric Bayesian procedures “overfit”? To shed light on this question, we consider a binary regression problem in detail and establish frequentist consistency for a certain class of Bayes procedures based on hierarchical priors, called uniform mixture priors. These are defined as follows: let ν be any probability distribution on the nonnegative integers. To sample a function f from the prior π_ν, first sample m from ν and then sample f uniformly from the set of step functions from [0,1] into [0,1] that have exactly m jumps (i.e., sample all m jump locations and m + 1 function values independently and uniformly). The main result states that if a data stream is generated according to any fixed, measurable binary-regression function f0 ≢ 1/2, then frequentist consistency obtains: for any ν with infinite support, the posterior of π_ν concentrates on any L1 neighborhood of f0. Solution of an associated large-deviations problem is central to the consistency proof.
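The prior's sampling recipe translates directly into code. A sketch under the abstract's own definition; the helper names are mine, and ν is taken here to be Poisson(3), one choice with the infinite support the main result requires:

```python
import numpy as np

def sample_uniform_mixture_prior(rng, nu_sampler):
    """Draw a step function f: [0,1] -> [0,1] from the uniform mixture prior.

    nu_sampler(rng) draws the number of jumps m from nu; given m, the m
    jump locations and the m + 1 function values are i.i.d. Uniform(0, 1).
    Returns (jump_locations, values); f equals values[i] on the i-th piece.
    """
    m = nu_sampler(rng)
    jumps = np.sort(rng.uniform(0.0, 1.0, size=m))
    values = rng.uniform(0.0, 1.0, size=m + 1)
    return jumps, values

def evaluate(f, x):
    """Evaluate the step function at x: count jumps at or before x."""
    jumps, values = f
    return values[np.searchsorted(jumps, x, side="right")]

rng = np.random.default_rng(1)
f = sample_uniform_mixture_prior(rng, lambda r: r.poisson(3))  # nu = Poisson(3)
y = evaluate(f, 0.5)
```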
Bayesian Kernel Mixtures for Counts
Cited by 1 (0 self)
Abstract: Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. We provide sufficient conditions on the kernels and the prior for the mixing measure under which almost all count distributions fall within the Kullback-Leibler support. This is shown to imply both weak and strong posterior consistency. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through an application to marketing data.
Keywords: Bayesian nonparametrics; Dirichlet process mixtures; Kullback-Leibler
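The rounding idea can be illustrated with one simple thresholding choice (Y = 0 when the latent X < 1, otherwise Y = floor(X)); the paper's construction allows general kernels and thresholds, and the function names here are mine:

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) when Y rounds a latent X ~ N(mu, sigma^2) onto the
    nonnegative integers: Y = 0 if X < 1, else Y = floor(X).
    (One simple threshold choice, for illustration only.)
    """
    if j == 0:
        return normal_cdf((1.0 - mu) / sigma)
    return normal_cdf((j + 1 - mu) / sigma) - normal_cdf((j - mu) / sigma)

def mixture_pmf(j, weights, mus, sigmas):
    """A mixture of rounded Gaussians simply mixes these pmfs."""
    return sum(w * rounded_gaussian_pmf(j, m, s)
               for w, m, s in zip(weights, mus, sigmas))

total = sum(mixture_pmf(j, [0.6, 0.4], [2.0, 7.0], [1.0, 2.0])
            for j in range(200))
```

Because a rounded Gaussian with small sigma piles its mass on one or two integers, such mixtures can have variance below the mean, which is exactly the underdispersion a Poisson mixture cannot capture.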
BAYESIAN REGRESSION WITH NONPARAMETRIC HETEROSKEDASTICITY
Abstract: This paper presents a large sample justification for a semiparametric Bayesian approach to inference in a linear regression model. The approach is to model the distribution of the error term by a normal distribution with a variance that is a flexible function of covariates. It is shown that even when the data-generating distribution of the error term is not normal, the posterior distribution of the linear coefficients converges to a normal distribution with mean equal to the asymptotically efficient estimator and variance given by the semiparametric efficiency bound. This implies that the estimation procedure is robust and conservative from the Bayesian standpoint, and at the same time it can be used as an implementation of semiparametrically efficient frequentist inference. Priors for the conditional variance based on splines, Bernstein polynomials, and Gaussian processes are shown to satisfy the sufficient conditions of the aforementioned theoretical results.
Asymptotic properties of posterior distributions in nonparametric regression with non-Gaussian errors
DOI 10.1007/s10463-008-0168-2
"... Abstract We investigate the asymptotic behavior of posterior distributions in nonparametric regression problems when the distribution of noise structure of the regression model is assumed to be nonGaussian but symmetric such as the Laplace distribution. Given prior distributions for the unknown reg ..."
Abstract: We investigate the asymptotic behavior of posterior distributions in nonparametric regression problems when the noise distribution of the regression model is assumed to be non-Gaussian but symmetric, such as the Laplace distribution. Given prior distributions for the unknown regression function and the scale parameter of the noise distribution, we show that the posterior distribution concentrates around the true values of the parameters. Following the approach of Choi and Schervish (Journal of Multivariate Analysis, 98, 1969–1987, 2007) and extending their results, we prove consistency of the posterior distribution of the parameters for nonparametric regression when the errors are symmetric and non-Gaussian, under suitable assumptions.
TWO BAYESIANS USING THE SAME PRIOR
Abstract: While much attention has been given to two Bayesians with different prior distributions using the same data, no consideration has to date been given to two Bayesians using the same prior with independent data sets coming from the same source. In this paper we consider two such Bayesians and show under what conditions on the prior they are guaranteed to eventually agree with each other as the samples increase.