Convergence rates of posterior distributions
 Ann. Statist., 2000
Abstract

Cited by 106 (14 self)
We consider the asymptotic behavior of posterior distributions and Bayes estimators for infinite-dimensional statistical models. We give general results on the rate of convergence of the posterior measure. These are applied to several examples, including priors on finite sieves, log-spline models, Dirichlet processes and interval censoring.
Bayesian Model Selection in Finite Mixtures by Marginal Density Decompositions
 Journal of the American Statistical Association, 2001
Bayesian estimation of the spectral density of a time series
 J. Amer. Statist. Assoc., 2004
Abstract

Cited by 23 (3 self)
This article describes a Bayesian approach to estimating the spectral density of a stationary time series. A nonparametric prior on the spectral density is described through Bernstein polynomials. Because the actual likelihood is very complicated, a pseudo-posterior distribution is obtained by updating the prior using the Whittle likelihood. A Markov chain Monte Carlo algorithm for sampling from this posterior distribution is described that is used for computing the posterior mean, variance, and other statistics. A consistency result is established for this pseudo-posterior distribution that holds for a short-memory Gaussian time series and under some conditions on the prior. To prove this asymptotic result, a general consistency theorem of Schwartz is extended for a triangular array of independent, non-identically distributed observations. This extension is also of independent interest. A simulation study is conducted to compare the proposed method with some existing methods. The method is illustrated with the well-studied sunspot dataset.
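As an illustration of the approximation this abstract refers to, the Whittle pseudo-likelihood replaces the exact Gaussian likelihood with a product over periodogram ordinates at the Fourier frequencies. A minimal sketch (the function name and the white-noise example are illustrative, not from the paper):

```python
import numpy as np

def whittle_loglik(x, spec_fn):
    """Whittle log-likelihood of a mean-zero series x under spectral density spec_fn.

    Evaluates the approximation
        l_W(f) = -sum_j [ log f(w_j) + I(w_j) / f(w_j) ]
    over Fourier frequencies w_j = 2*pi*j/n strictly inside (0, pi),
    where I is the periodogram.  spec_fn is any callable w -> f(w) > 0.
    """
    n = len(x)
    dft = np.fft.fft(x)
    # periodogram I(w_j) = |sum_t x_t e^{-i w_j t}|^2 / (2 pi n)
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    j = np.arange(1, (n - 1) // 2 + 1)
    w = 2 * np.pi * j / n
    f = spec_fn(w)
    return -np.sum(np.log(f) + periodogram[j] / f)

# White noise with unit variance has flat spectrum 1/(2*pi); its Whittle
# log-likelihood should beat a badly scaled spectrum on simulated data.
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
ll = whittle_loglik(x, lambda w: np.full_like(w, 1.0 / (2 * np.pi)))
```

In a Bernstein-polynomial prior the same function would be evaluated at each MCMC draw of the spectral density, which is what makes the pseudo-posterior cheap to update.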
Dynamics of Bayesian Updating with Dependent Data and Misspecified Models
, 2009
Abstract

Cited by 22 (3 self)
Recent work on the convergence of posterior distributions under Bayesian updating has established conditions under which the posterior will concentrate on the truth, if the latter has a perfect representation within the support of the prior, and under various dynamical assumptions, such as the data being independent and identically distributed or Markovian. Here I establish sufficient conditions for the convergence of the posterior distribution in nonparametric problems even when all of the hypotheses are wrong, and the data-generating process has a complicated dependence structure. The main dynamical assumption is the generalized asymptotic equipartition (or “Shannon-McMillan-Breiman”) property of information theory. I derive a kind of large deviations principle for the posterior measure, and discuss the advantages of predicting using a combination of models known to be wrong. An appendix sketches connections between the present results and the “replicator dynamics” of evolutionary theory.
Bayesian Kernel Mixtures for Counts
Abstract

Cited by 9 (5 self)
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. We provide sufficient conditions on the kernels and prior for the mixing measure under which almost all count distributions fall within the Kullback-Leibler support. This is shown to imply both weak and strong posterior consistency. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through an application to marketing data. Keywords: Bayesian nonparametrics; Dirichlet process mixtures; Kullback-Leibler
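The rounding construction can be made concrete in a few lines. A minimal sketch, assuming thresholds a_0 = -inf and a_j = j - 1 for j >= 1 (an illustrative choice; the paper's thresholds may differ), which also reproduces the underdispersion that a mixture of Poissons cannot:

```python
from math import erf, sqrt, inf

def norm_cdf(x, mu, sigma):
    # normal CDF via erf; erf handles +/-inf correctly
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) when a latent N(mu, sigma^2) draw is pushed through rounding
    thresholds: count j <-> latent interval [j - 1, j), count 0 <-> (-inf, 0).
    (Threshold choice is an assumption for illustration.)"""
    lower = -inf if j == 0 else j - 1.0
    return norm_cdf(float(j), mu, sigma) - norm_cdf(lower, mu, sigma)

# A narrow kernel yields count variance below the count mean,
# exactly the regime a Poisson mixture cannot reach.
pmf = [rounded_gaussian_pmf(j, 5.0, 0.3) for j in range(100)]
mean = sum(j * p for j, p in enumerate(pmf))
var = sum(j * j * p for j, p in enumerate(pmf)) - mean ** 2
```

A nonparametric mixture then puts a Dirichlet-process-style prior on (mu, sigma) and mixes these pmfs.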
Bayesian modeling of joint and conditional distributions. Unpublished manuscript
, 2009
Abstract

Cited by 9 (0 self)
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data-generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and nonparametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
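The "extract the conditional from the joint" step has a closed form for Gaussian mixtures, since each component conditions via the usual regression formulas. A minimal bivariate sketch (the function names and the two-component example are illustrative, not from the paper):

```python
import numpy as np

def gauss(x, m, v):
    # univariate normal density with mean m, variance v
    return np.exp(-(x - m) ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

def mixture_conditional(y, x, weights, means, covs):
    """p(y | x) implied by a bivariate Gaussian mixture: each component's
    conditional is Gaussian, and the mixture weights are re-weighted by
    the component marginals evaluated at x."""
    num, den = 0.0, 0.0
    for w, (mx, my), C in zip(weights, means, covs):
        vx, vy, cxy = C[0][0], C[1][1], C[0][1]
        marg = w * gauss(x, mx, vx)            # w_k * N(x; mx_k, vx_k)
        cond_mean = my + cxy / vx * (x - mx)   # E[y | x] within component k
        cond_var = vy - cxy ** 2 / vx          # Var[y | x] within component k
        num += marg * gauss(y, cond_mean, cond_var)
        den += marg
    return num / den

# Two-component joint; the conditional shifts with x through the regression terms.
weights = [0.5, 0.5]
means = [(-1.0, -1.0), (2.0, 3.0)]
covs = [[[1.0, 0.8], [0.8, 1.0]], [[1.5, -0.5], [-0.5, 2.0]]]
ys = np.linspace(-12.0, 12.0, 4001)
dens = np.array([mixture_conditional(y, 0.5, weights, means, covs) for y in ys])
```

In the FMMN setting the same computation would be applied to each posterior draw of the mixture parameters, giving a posterior over conditional densities.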
Nonparametric Bayesian testing for monotonicity. Unpublished manuscript
, 2013
Abstract

Cited by 2 (1 self)
This paper studies the problem of testing whether a function is monotone from a nonparametric Bayesian perspective. Two new families of tests are constructed. The first uses constrained smoothing splines, together with a hierarchical stochastic-process prior that explicitly controls the prior probability of monotonicity. The second uses regression splines, together with two proposals for the prior over the regression coefficients. The finite-sample performance of the tests is shown via simulation to improve upon existing frequentist and Bayesian methods. The asymptotic properties of the Bayes factor for comparing monotone versus non-monotone regression functions in a Gaussian model are also studied. Our results significantly extend those currently available, which chiefly focus on determining the dimension of a parametric linear model.
Consistency of Bayes estimators of a binary regression function
 Annals of Statistics, 2006
Abstract

Cited by 1 (0 self)
When do nonparametric Bayesian procedures “overfit”? To shed light on this question, we consider a binary-regression problem in detail and establish frequentist consistency for a large class of Bayes procedures based on certain hierarchical priors, called uniform mixture priors. These are defined as follows: let ν be any probability distribution on the nonnegative integers. To sample a function f from the prior π_ν, first sample m from ν and then sample f uniformly from the set of step functions from [0, 1] into [0, 1] that have exactly m jumps (i.e., sample all m jump locations and m + 1 function values independently and uniformly). The main result states that, with only one exception, if a data stream is generated according to any fixed, measurable binary-regression function f0, consistency obtains: i.e., for any ν with infinite support, the posterior of π_ν concentrates on any L1 neighborhood of f0. The only exception is that if f0 is identically 1, so that all class-2 label information is pure noise, inconsistency occurs if the tail of ν is too long. Qualitatively, this is the same as the finding of Diaconis and Freedman for a class of related priors. However, because the uniform mixture priors have randomly located jumps, they are more flexible and presumably more “prone” to overfitting. Solution of a large-deviations problem is central to the consistency proof.
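The prior construction spelled out in this abstract translates directly into code. A minimal sketch of one draw from π_ν (the Poisson(3) choice of ν is only an example of a distribution with infinite support, as the theorem requires):

```python
import numpy as np

def draw_uniform_mixture_prior(rng, nu_sample):
    """One draw f ~ pi_nu, following the construction in the abstract:
    sample m from nu, then m jump locations and m + 1 function values,
    all independently and uniformly on [0, 1]."""
    m = nu_sample(rng)
    jumps = np.sort(rng.uniform(0.0, 1.0, size=m))   # sorted jump locations
    values = rng.uniform(0.0, 1.0, size=m + 1)       # one level per interval
    return jumps, values

def evaluate_step(x, jumps, values):
    # f(x) = level of the interval containing x (count of jumps <= x)
    return values[np.searchsorted(jumps, x, side="right")]

rng = np.random.default_rng(1)
jumps, values = draw_uniform_mixture_prior(rng, lambda r: r.poisson(3))
fx = evaluate_step(0.5, jumps, values)
```

Because the function values lie in [0, 1], each draw is a valid binary-regression function, and the posterior over (m, jumps, values) can be explored with standard trans-dimensional MCMC.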
Semiparametric posterior limits
, 2013
Abstract

Cited by 1 (1 self)
We review the Bayesian theory of semiparametric inference following Bickel and Kleijn (2012) [5] and Kleijn and Knapik (2013) [47]. After an overview of efficiency in parametric and semiparametric estimation problems, we consider the Bernstein-von Mises theorem (see, e.g., Le Cam and Yang (1990) [57]) and generalize it to (LAN) regular and (LAE) irregular semiparametric estimation problems. We formulate a version of the semiparametric Bernstein-von Mises theorem that does not depend on least-favourable submodels, thus bypassing the most restrictive condition in the presentation of [5]. The results are applied to the (regular) estimation of the linear coefficient in partial linear regression (with a Gaussian nuisance prior) and of the kernel bandwidth in a model of normal location mixtures (with a Dirichlet nuisance prior), as well as the (irregular) estimation of the boundary of the support of a monotone family of densities (with a Gaussian nuisance prior).
No Control Genes Required: Bayesian Analysis of qRT-PCR Data
Abstract

Cited by 1 (0 self)
Background: Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. Results: In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the “classic” analysis, which uses standard data preprocessing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta
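The role of the Poisson-lognormal layer can be seen in a toy simulation: the lognormal random effect inflates the variance above the Poisson mean-variance line while keeping zero counts as valid observations. A minimal sketch (the parameter values are assumptions, and the paper's GLMM has a much richer linear predictor with gene- and sample-level effects):

```python
import numpy as np

rng = np.random.default_rng(2)

# Poisson-lognormal generative sketch:
#   count ~ Poisson(exp(eta + eps)),  eps ~ N(0, tau^2)
eta = np.log(50.0)   # fixed part of the log-mean (assumed value)
tau = 0.5            # sd of the lognormal random effect (assumed value)
eps = rng.normal(0.0, tau, size=200_000)
counts = rng.poisson(np.exp(eta + eps))

# Under a plain Poisson model this ratio would be ~1; the lognormal
# layer pushes it well above 1 (overdispersion).
overdispersion = counts.var() / counts.mean()
```

In the actual method the latent eps terms are random effects whose posterior is explored by MCMC, so the overdispersion is estimated per gene rather than fixed in advance.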