Results 1–10 of 25
Sequential Importance Sampling for Nonparametric Bayes Models: The Next Generation
 Journal of Statistics
, 1998
"... this paper, we exploit the similarities between the Gibbs sampler and the SIS, bringing over the improvements for Gibbs sampling algorithms to the SIS setting for nonparametric Bayes problems. These improvements result in an improved sampler and help satisfy questions of Diaconis (1995) pertaining t ..."
Abstract

Cited by 70 (6 self)
 Add to MetaCart
In this paper, we exploit the similarities between the Gibbs sampler and the SIS, bringing over the improvements for Gibbs sampling algorithms to the SIS setting for nonparametric Bayes problems. These improvements result in an improved sampler and help answer questions raised by Diaconis (1995) pertaining to convergence. Such an effort can see wide application in many other problems related to dynamic systems where the SIS is useful (Berzuini et al. 1996; Liu and Chen 1996). Section 2 describes the specific model that we consider. For illustration we focus discussion on the beta-binomial model, although the methods are applicable to other conjugate families. In Section 3, we describe the first generation of the SIS and Gibbs sampler in this context, and present the necessary conditional distributions upon which the techniques rely. Section 4 describes the alterations that create the second-generation techniques, and provides specific algorithms for the model we consider. Section 5 presents a comparison of the techniques on a large set of data. Section 6 provides theory that ensures the proposed methods work and that is generally applicable to many other problems using importance sampling approaches. The final section presents discussion.
Bayesian Inference for Semiparametric Binary Regression
 JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
, 1996
"... We propose a regression model for binary response data which places no structural restrictions on the link function except monotonicity and known location and scale. Predictors enter linearly. We demonstrate Bayesian inference calculations in this model. By modifying the Dirichlet process, we obtain ..."
Abstract

Cited by 24 (2 self)
 Add to MetaCart
We propose a regression model for binary response data which places no structural restrictions on the link function except monotonicity and known location and scale. Predictors enter linearly. We demonstrate Bayesian inference calculations in this model. By modifying the Dirichlet process, we obtain a natural prior measure over this semiparametric model, and we use Polya sequence theory to formulate this measure in terms of a finite number of unobserved variables. A Markov chain Monte Carlo algorithm is designed for posterior simulation, and the methodology is applied to data on radiotherapy treatments for cancer.
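The abstract above leans on Polya sequence theory to express a Dirichlet process prior through a finite number of unobserved variables. A minimal sketch of the underlying Blackwell–MacQueen Polya urn scheme, with an illustrative standard-normal base measure and α = 1 (choices of ours, not from the paper):

```python
import random

def polya_urn_sequence(alpha, f0_sampler, n, seed=2):
    """Polya urn sequence associated with a Dirichlet process DP(alpha, F0).

    The (i+1)-th value is a fresh draw from the base measure F0 with
    probability alpha/(alpha + i), and otherwise repeats one of the i
    previous values chosen uniformly at random.  The ties among values
    are what make a finite representation of the process possible.
    """
    rng = random.Random(seed)
    seq = []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            seq.append(f0_sampler(rng))   # new value from the base measure F0
        else:
            seq.append(rng.choice(seq))   # repeat an existing value
    return seq

seq = polya_urn_sequence(alpha=1.0, f0_sampler=lambda r: r.gauss(0.0, 1.0), n=50)
n_distinct = len(set(seq))  # ties induce clustering: far fewer than 50 distinct values
```

The number of distinct values grows only logarithmically in n, which is why urn-based representations keep the computation finite.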
A Nonparametric Bayes Method for Isotonic Regression
, 1994
"... We consider an isotonic regression problem: r(x), the modal value of Y when the covariate is equal to x, is known to be a monotonic function of x. The goal of the analysis is to make inference about the regression function r between fixed points x = a and x = b. If r(a) and r(b) were known then r co ..."
Abstract

Cited by 18 (0 self)
 Add to MetaCart
We consider an isotonic regression problem: r(x), the modal value of Y when the covariate is equal to x, is known to be a monotonic function of x. The goal of the analysis is to make inference about the regression function r between fixed points x = a and x = b. If r(a) and r(b) were known then r could be rescaled by an affine transformation to be a cdf, in which case r could be modeled using Bayesian nonparametric methods such as Dirichlet processes. In general, r(a) and r(b) are not known. We propose a model with a prior distribution for (r(a), r(b)) and, conditionally on (r(a), r(b)), a Dirichlet process for r. The error density is assumed to have mode 0 but is otherwise uncertain. It is modelled nonparametrically. Posterior distributions are estimated by successive substitution sampling.
A Bayesian Approach to Robust Binary Nonparametric Regression
, 1997
"... This paper presents a Bayesian approach to binary nonparametric regression which assumes that the argument of the link is an additive function of the explanatory variables and their multiplicative interactions. The paper makes the following contributions. First, a comprehensive approach is presented ..."
Abstract

Cited by 14 (1 self)
 Add to MetaCart
This paper presents a Bayesian approach to binary nonparametric regression which assumes that the argument of the link is an additive function of the explanatory variables and their multiplicative interactions. The paper makes the following contributions. First, a comprehensive approach is presented in which the function estimates are smoothing splines with the smoothing parameters integrated out, and the estimates made robust to outliers. Second, the approach can handle a wide range of link functions. Third, efficient state space based algorithms are used to carry out the computations. Fourth, an extensive set of simulations is carried out which shows that the Bayesian estimator works well and compares favorably to two estimators which are widely used in practice.
Gibbs Sampling
 Journal of the American Statistical Association
, 1995
"... 8> R f(`)d`. To marginalize, say for ` i ; requires h(` i ) = R f(`)d` (i) = R f(`)d` where ` (i) denotes all components of ` save ` i : To obtain Eg(` i ) requires similar integration; to obtain the marginal distribution of say g(`) or its expectation requires similar integration. When p is l ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
... ∫ f(θ) dθ. To marginalize, say for θ_i, requires h(θ_i) = ∫ f(θ) dθ_(i) / ∫ f(θ) dθ, where θ_(i) denotes all components of θ save θ_i. To obtain E g(θ_i) requires similar integration; to obtain the marginal distribution of, say, g(θ) or its expectation requires similar integration. When p is large (as it will be in the applications we envision) such integration is analytically infeasible (the so-called "curse of dimensionality"). Gibbs sampling provides a Monte Carlo approach for carrying out such integrations. In what sorts of settings would we have need to marginalize...
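The snippet above describes replacing analytically infeasible marginalization integrals with Monte Carlo averages over Gibbs draws. A minimal sketch on a toy bivariate normal with correlation ρ, where both full conditionals are known in closed form (the example target is our choice for illustration, not from the paper):

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=5000, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is normal: theta1 | theta2 ~ N(rho*theta2, 1-rho^2),
    and symmetrically for theta2.  Averaging the retained draws approximates
    marginal expectations such as E[theta1] without evaluating any integral.
    """
    rng = random.Random(seed)
    t1, t2 = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)
    draws = []
    for i in range(n_iter):
        t1 = rng.gauss(rho * t2, sd)   # draw from p(theta1 | theta2)
        t2 = rng.gauss(rho * t1, sd)   # draw from p(theta2 | theta1)
        if i >= burn_in:
            draws.append((t1, t2))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
mean_t1 = sum(d[0] for d in draws) / len(draws)  # Monte Carlo estimate of E[theta1]; true value is 0
```

In the p-dimensional problems the snippet envisions, the same loop cycles through all p full conditionals; the Monte Carlo average sidesteps the curse of dimensionality that defeats direct quadrature.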
Bayesian Inference on Order Constrained Parameters in Generalized Linear Models
 Biometrics
, 2002
"... This article proposes a general Bayesian approach for inibrence on order constrained parameters in generalized linear models. Instead of choosing a prior distribution with support on the constrained space, which can result in major computational difficulties, we propose to map draws from an unconstr ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
This article proposes a general Bayesian approach for inference on order constrained parameters in generalized linear models. Instead of choosing a prior distribution with support on the constrained space, which can result in major computational difficulties, we propose to map draws from an unconstrained posterior density using an isotonic regression transformation. This approach allows flat regions over which increases in the level of a predictor have no effect. Bayes factors for assessing ordered trends can be computed based on the output from a Gibbs sampling algorithm. Results from a simulation study are presented and the approach is applied to data from a time-to-pregnancy study.
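The isotonic regression transformation this abstract describes can be computed with the classical pool-adjacent-violators algorithm (PAVA). A minimal sketch, applied to a single illustrative unconstrained posterior draw (the numbers are ours, not from the paper):

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares isotonic fit.

    Returns the nondecreasing sequence closest to y in weighted L2.
    Merged blocks become constant runs, producing exactly the flat
    regions the abstract mentions, where raising a predictor's level
    has no effect.
    """
    if w is None:
        w = [1.0] * len(y)
    blocks = []  # each block: [fitted value, total weight, run length]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * v1 + w2 * v2) / wt, wt, c1 + c2])
    out = []
    for v, _, c in blocks:
        out.extend([v] * c)
    return out

# map one unconstrained posterior draw onto the order-constrained space
draw = [0.3, 0.1, 0.4, 0.2, 0.9]
constrained = pava(draw)  # nondecreasing; violating neighbors are pooled into flat runs
```

Applying the same map to every Gibbs draw gives a posterior sample that respects the ordering without ever restricting the prior's support.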
Bayesian Inferences on Umbrella Orderings
 BIOMETRICS
, 2005
"... ... This article proposes a Bayesian approach for addressing this problem in the setting of normal linear and probit regression models. The regression coe#cients are assigned a conditionally conjugate prior density consisting of mixtures of point masses at zero and truncated normal densities, with a ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
... This article proposes a Bayesian approach for addressing this problem in the setting of normal linear and probit regression models. The regression coefficients are assigned a conditionally conjugate prior density consisting of mixtures of point masses at zero and truncated normal densities, with a (possibly unknown) changepoint parameter included to accommodate umbrella ordering. Two strategies of prior elicitation are considered: (1) a Bayesian Bonferroni approach in which the probability of the global null hypothesis is specified and local hypotheses are considered independent; and (2) an approach which treats these probabilities as random. A single Gibbs sampling chain can be used to obtain posterior probabilities for the different hypotheses and to estimate regression coefficients and predictive quantities, either by model averaging or under the preferred hypothesis. The methods are applied to data from a carcinogenesis study.
Optimal Design for Quantal Bioassay via Monte Carlo Methods
 In
, 1999
"... this paper we present a general decision theoretic setup for the design problem and develop a solution using Monte Carlobased methods. Following Kuo (1983), to relax the restrictive assumptions on the potency curve, we adopt a nonparametric Bayesian approach and assume a Dirichlet process prior (Fe ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
In this paper we present a general decision theoretic setup for the design problem and develop a solution using Monte Carlo-based methods. Following Kuo (1983), to relax the restrictive assumptions on the potency curve, we adopt a nonparametric Bayesian approach and assume a Dirichlet process prior (Ferguson, 1973) with parameter αF_0 on the potency curve, where α represents our strength of prior belief and F_0 is the prior mean of the random tolerance distribution. In particular, we model the potency curve as a random discrete distribution function with random locations and random sizes of the jumps, with the specific distribution given in Sethuraman and Tiwari (1982). The prior mean of this random distribution evaluated at t is assumed to be F_0(t) and the variance of this random distribution evaluated at t is assumed to be F_0(t)(1 − F_0(t))/(α + 1). So α can be interpreted as the strength of prior belief or the degree of concentration of the random distribution around F_0. The larger α is, the more concentrated F is around F_0. In Section 2, we describe the design problem and introduce its ingredients. We present a decision theoretic setup for the design problem in Section 3 and illustrate the difficulties involved in obtaining an analytical solution to the preposterior analysis. In Section 4 we discuss how MCMC methods can be used for evaluating the posterior expected loss. We present a Monte Carlo integration technique to evaluate the preposterior expected loss and discuss the potential computational burden involved in finding the optimal design. To alleviate such computational burden, we adopt the simulation-based approach of Muller and Parmigiani (1995) where the preposterior analysis is replaced by a curve (surface) fitting technique. This simulat...
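The random discrete distribution with random jump locations and sizes that this abstract describes is the stick-breaking construction of Sethuraman and Tiwari (1982). A minimal truncated sketch of one draw F ~ DP(α, F_0), with an illustrative uniform base measure and α = 5 (choices of ours, not from the paper):

```python
import random

def stick_breaking_draw(alpha, f0_sampler, n_atoms=200, seed=1):
    """Truncated stick-breaking draw from a Dirichlet process DP(alpha, F0).

    Atom locations are i.i.d. draws from F0; weights come from breaking a
    unit-length stick: V_k ~ Beta(1, alpha), w_k = V_k * prod_{j<k}(1 - V_j).
    Larger alpha breaks the stick into more, smaller pieces, so the draw
    concentrates more tightly around F0, matching the variance formula
    F0(t)(1 - F0(t))/(alpha + 1) in the abstract.
    """
    rng = random.Random(seed)
    atoms, weights, remaining = [], [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)      # size of this jump
        atoms.append(f0_sampler(rng))      # location of this jump, from F0
        remaining *= 1.0 - v               # stick left to break
    return atoms, weights

# base measure F0 = Uniform(0, 1), purely for illustration
atoms, weights = stick_breaking_draw(alpha=5.0, f0_sampler=lambda r: r.random())
total = sum(weights)  # truncation leaves only a tiny unassigned remainder
```

Draws like this one stand in for the unknown potency curve inside the Monte Carlo evaluation of the preposterior expected loss.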