Results 11–20 of 51
Partition Modelling
Abstract

Cited by 15 (3 self)
Introduction This chapter serves as an introduction to the use of partition models to estimate a spatial process z(x) over some p-dimensional region of interest X. Partition models can be useful modelling tools as, unlike standard spatial models (e.g. kriging), they allow the correlation structure between points to vary over the space of interest. Typically, the correlation between points is assumed to be a fixed function, most likely parameterised by a few variables that can be estimated from the data (see, for example, Diggle, Tawn and Moyeed (1998)). Partition models avoid the need for pre-examination of the data to find a suitable correlation function to use. This removes the bias necessarily introduced by picking the correlation function and estimating its parameters using the same set of data. Spatial clusters are, by their nature, regions which are not representative of the entire space of interest …
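The core idea, that structure can change across X rather than being governed by one global correlation function, can be sketched with a hypothetical one-dimensional example: split the region at a candidate change point and fit an independent mean within each block. All names and data below are illustrative, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D process with a change point at x = 0.6 (illustrative data).
x = np.linspace(0.0, 1.0, 200)
z = np.where(x < 0.6, 1.0, 3.0) + rng.normal(0.0, 0.2, x.size)

def partition_fit(x, z, cut):
    """Fit an independent mean within each block of the partition at `cut`."""
    pred = np.empty_like(z)
    left = x < cut
    pred[left] = z[left].mean()
    pred[~left] = z[~left].mean()
    return pred

def sse(x, z, cut):
    return float(np.sum((z - partition_fit(x, z, cut)) ** 2))

# Score candidate cut points; no global correlation function is ever specified.
cuts = np.linspace(0.1, 0.9, 81)
best = float(cuts[np.argmin([sse(x, z, c) for c in cuts])])
```

The best-scoring cut recovers the change point without any pre-examination of the data to choose a correlation function, which is the property the abstract emphasises.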
On MCMC Sampling in Hierarchical Longitudinal Models
 Statistics and Computing
, 1998
Abstract

Cited by 14 (2 self)
In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response longitudinal data models. We illustrate the approaches in detail with three real-data examples.
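The benefit of blocking is easy to see on a toy two-parameter posterior. This is a generic sketch of single-site versus joint updating under strong posterior correlation, not the paper's Chib-identity algorithm for linear mixed models:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.95  # strong posterior correlation, typical of longitudinal random effects

def gibbs_single_site(n):
    """Update theta1 | theta2 and theta2 | theta1 one component at a time."""
    draws, t1, t2 = np.empty((n, 2)), 0.0, 0.0
    for i in range(n):
        t1 = rng.normal(rho * t2, np.sqrt(1.0 - rho**2))
        t2 = rng.normal(rho * t1, np.sqrt(1.0 - rho**2))
        draws[i] = (t1, t2)
    return draws

def gibbs_blocked(n):
    """Draw (theta1, theta2) jointly in one block: independent posterior draws."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

def lag1_autocorr(x):
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

single = lag1_autocorr(gibbs_single_site(5000)[:, 0])   # near rho**2, i.e. ~0.9
blocked = lag1_autocorr(gibbs_blocked(5000)[:, 0])      # near 0
```

Single-site updating inherits the posterior correlation as chain autocorrelation, while the blocked draw produces essentially independent samples, mirroring the comparison the abstract describes.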
Space-varying Regression Models: Specifications And Simulation
 COMPUTATIONAL STATISTICS & DATA ANALYSIS 42 (2003) 513–533
, 2003
Abstract

Cited by 13 (2 self)
Space-varying regression models are generalizations of the standard linear model where the regression coefficients are allowed to change in space. The spatial structure is specified by a multivariate extension of pairwise difference priors, thus enabling incorporation of neighbouring structures and easy sampling schemes. Bayesian inference is performed by incorporation of a prior distribution for the hyperparameters. This approach leads to an intractable posterior distribution. Inference is approximated by drawing samples from the posterior distribution. Different sampling schemes are available and may be used in an MCMC algorithm. They basically differ in the way they handle blocks of regression coefficients. Approaches vary from sampling each vector of coefficients separately to complete elimination of all regression coefficients by analytical integration. These schemes are compared in terms of their computation, chain autocorrelation and resulting inference. Results are illustrated with simulated data and applied to a real dataset. Related prior specifications that can accommodate the spatial structure in different forms are also discussed. The paper concludes with a few general remarks.
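The pairwise difference prior at the heart of this specification penalises neighbouring coefficients for differing. A minimal sketch for coefficients attached to sites on a chain graph (the chain is an illustrative simplification of a general neighbourhood structure):

```python
import numpy as np

# Pairwise-difference log-prior: neighbouring coefficients are penalised for
# differing, which favours smooth spatial variation of the regression surface.
def pairwise_difference_logprior(beta, tau2):
    diffs = np.diff(beta)  # beta_j - beta_k over neighbouring pairs on a chain
    return float(-0.5 * np.sum(diffs ** 2) / tau2)

smooth = np.array([0.0, 0.1, 0.2, 0.3])
rough = np.array([0.0, 1.0, -1.0, 1.0])
# Smoothly varying coefficients receive higher prior support than rough ones.
```

Because the prior involves only differences, it leaves the overall level of the coefficients unconstrained, which is why such priors are improper and need the data (or extra constraints) to identify the level.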
When did Bayesian inference become “Bayesian"?
 BAYESIAN ANALYSIS
, 2006
Abstract

Cited by 10 (1 self)
While Bayes’ theorem has a 250-year history, and the method of inverse probability that flowed from it dominated statistical thinking into the twentieth century, the adjective “Bayesian” was not part of the statistical lexicon until relatively recently. This paper provides an overview of key Bayesian developments, beginning with Bayes’ posthumously published 1763 paper and continuing up through approximately 1970, including the period of time when “Bayesian” emerged as the label of choice for those who advocated Bayesian methods.
Penalized loss functions for Bayesian model comparison
Abstract

Cited by 10 (0 self)
The deviance information criterion (DIC) is widely used for Bayesian model comparison, despite the lack of a clear theoretical foundation. DIC is shown to be an approximation to a penalized loss function based on the deviance, with a penalty derived from a cross-validation argument. This approximation is valid only when the effective number of parameters in the model is much smaller than the number of independent observations. In disease mapping, a typical application of DIC, this assumption does not hold and DIC under-penalizes more complex models. Another deviance-based loss function, derived from the same decision-theoretic framework, is applied to mixture models, which have previously been considered an unsuitable application for DIC.
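DIC and its effective-number-of-parameters penalty p_D are computed directly from posterior draws. A minimal sketch for a toy normal-mean model with an effectively flat prior (data and sample sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: y_i ~ N(theta, 1) with an effectively flat prior, so the posterior
# of theta is N(ybar, 1/n); we evaluate DIC from posterior draws of theta.
y = rng.normal(1.0, 1.0, size=50)
n = y.size
theta_draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=20000)

def deviance(theta):
    # -2 * log-likelihood under N(theta, 1)
    return float(np.sum((y - theta) ** 2) + n * np.log(2 * np.pi))

dbar = np.mean([deviance(t) for t in theta_draws])  # posterior mean deviance
dhat = deviance(theta_draws.mean())                 # deviance at the posterior mean
p_d = dbar - dhat   # effective number of parameters; ~1 for this one-parameter model
dic = dbar + p_d    # equivalently dhat + 2 * p_d
```

For this model p_D lands near 1, the true parameter count; the abstract's point is that when p_D is not small relative to the number of observations, as in disease mapping, the cross-validation approximation behind this penalty breaks down.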
Gibbs Variable Selection using BUGS
 Artificial Intelligence
, 1999
Abstract

Cited by 9 (0 self)
In this paper we discuss and present in detail the implementation of Gibbs variable selection as defined by Dellaportas et al. (2000, 2002) using the BUGS software (Spiegelhalter et al., 1996a,b,c). The specification of the likelihood, prior and pseudo-prior distributions of the parameters, as well as the prior term and model probabilities, is described in detail. Guidance is also provided for the calculation of the posterior probabilities within the BUGS environment when the number of models is limited. We illustrate the application of this methodology in a variety of problems including linear regression, log-linear and binomial response models.
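A minimal Python sketch of the Gibbs variable selection scheme the paper implements in BUGS, for a two-predictor linear regression with known error variance. The slab and pseudo-prior settings below are illustrative tuning choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative data: only the first of two predictors matters.
n, p = 100, 2
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, 0.0]) + rng.normal(0.0, 1.0, n)
sigma2 = 1.0          # error variance, assumed known for the sketch
tau2 = 10.0           # "slab" prior variance for an included coefficient
pm, ps2 = 0.0, 0.25   # pseudo-prior for an excluded coefficient (tuning choice)
prior_inc = 0.5       # prior inclusion probability

def log_norm(x, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

def log_lik(beta, gamma):
    return float(np.sum(log_norm(y, X @ (beta * gamma), sigma2)))

beta, gamma = np.zeros(p), np.ones(p)
inc_count = np.zeros(p)
n_iter, burn = 2000, 500
for it in range(n_iter):
    for j in range(p):
        if gamma[j] == 1:
            # Conjugate normal update for an included coefficient.
            r = y - X @ (beta * gamma) + X[:, j] * beta[j]
            v = 1.0 / (X[:, j] @ X[:, j] / sigma2 + 1.0 / tau2)
            beta[j] = rng.normal(v * (X[:, j] @ r / sigma2), np.sqrt(v))
        else:
            # An excluded coefficient is refreshed from its pseudo-prior.
            beta[j] = rng.normal(pm, np.sqrt(ps2))
        # Gibbs step for the inclusion indicator gamma[j].
        g1, g0 = gamma.copy(), gamma.copy()
        g1[j], g0[j] = 1, 0
        log_odds = (log_lik(beta, g1) + log_norm(beta[j], 0.0, tau2)
                    - log_lik(beta, g0) - log_norm(beta[j], pm, ps2)
                    + np.log(prior_inc / (1.0 - prior_inc)))
        gamma[j] = rng.random() < 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30, 30)))
    if it >= burn:
        inc_count += gamma
inc_prob = inc_count / (n_iter - burn)   # posterior inclusion probabilities
```

The pseudo-prior does not affect the posterior, only mixing: it keeps excluded coefficients in a region from which re-inclusion proposals are sensible, which is why its tuning gets detailed attention in the paper.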
Monte Carlo Methods on Bayesian Analysis of Constrained Parameter Problems with Normalizing Constants
 Biometrika
, 1998
Abstract

Cited by 8 (3 self)
Constraints on the parameters in a Bayesian hierarchical model typically make Bayesian computation and analysis complicated. As Gelfand, Smith and Lee (1992) remarked, it is almost impossible to sample from a posterior distribution when its density contains analytically intractable integrals (normalizing constants) that depend on the (hyper)parameters. Therefore, the Gibbs sampler or the Metropolis algorithm cannot be directly applied to such problems. In this paper, using the idea of "reweighting mixtures" of Geyer (1994), we develop alternative Monte Carlo based methods to determine properties of the desired Bayesian posterior distribution. Necessary theory and two illustrative examples are provided.
Keywords and Phrases: Bayesian computation; Bayesian hierarchical model; Gibbs sampler; Markov chain Monte Carlo; marginal posterior density estimation; posterior distribution; sensitivity of prior specification.
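The reweighting idea can be illustrated on a one-dimensional toy density whose normalizing constant is known analytically: a single sample drawn under one parameter value is reweighted to estimate the ratio of normalizing constants at another. This is the generic importance-sampling identity underlying such methods, not the full Geyer (1994) mixture scheme:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy unnormalized density q_t(x) = exp(-t * x^2) with known Z(t) = sqrt(pi/t).
# A sample drawn under t0 is reweighted to estimate Z(t1)/Z(t0), using the
# identity Z(t1)/Z(t0) = E_{t0}[ q_{t1}(x) / q_{t0}(x) ].
t0, t1 = 1.0, 1.5
x = rng.normal(0.0, np.sqrt(1.0 / (2.0 * t0)), size=500_000)  # draws from q_t0/Z(t0)

ratio_est = float(np.mean(np.exp(-(t1 - t0) * x ** 2)))
true_ratio = float(np.sqrt(t0 / t1))
```

Estimating such ratios is exactly what lets the intractable, hyperparameter-dependent normalizing constants be handled without evaluating them directly.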
Combining information from related regressions
 Journal of Agricultural, Biological, and Environmental Statistics
, 1997
Cited by 8 (0 self)
Monte Carlo EM With Importance Reweighting and Its Applications in Random Effects Models
, 1999
Abstract

Cited by 6 (0 self)
In this paper we propose a new Monte Carlo EM algorithm to compute maximum likelihood estimates in the context of random effects models. The algorithm involves the construction of efficient sampling distributions for the Monte Carlo implementation of the E-step, together with a reweighting procedure that allows repeated use of the same sample of random effects. In addition, we explore the use of stochastic approximations to speed up convergence once stability has been reached. Our algorithm is compared with that of McCulloch (1997). Extensions to more general problems are discussed.
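The reweighting step that lets one sample of random effects serve several E-steps can be sketched as self-normalized importance weighting. The variances below are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random effects u_i ~ N(0, s2). After an M-step moves the variance from s2_old
# to s2_new, the old sample is reused via importance weights instead of being
# re-simulated (values here are illustrative).
s2_old, s2_new = 1.0, 1.5
u = rng.normal(0.0, np.sqrt(s2_old), size=200_000)

def log_density(u, s2):
    return -0.5 * (np.log(2.0 * np.pi * s2) + u ** 2 / s2)

log_w = log_density(u, s2_new) - log_density(u, s2_old)
w = np.exp(log_w - log_w.max())
w /= w.sum()                      # self-normalized importance weights

est = float(np.sum(w * u ** 2))   # E[u^2] under s2_new; should be close to 1.5
```

Reusing the sample this way trades a little extra Monte Carlo variance for avoiding a fresh simulation at every EM iteration, which is the computational saving the abstract highlights.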
Bayesian Analysis of Factorial Experiments By Mixture Modelling
, 2000
Abstract

Cited by 5 (1 self)
In this paper we try our hands at it. One version of the classical theory of factorial experiments, going back to Fisher and further developed by Kempthorne (1955), completely avoids distributional assumptions, assuming only additivity, and uses randomisation to derive the standard tests of hypotheses about treatment effects. Here, we are interested in the more familiar classical approach via linear modelling and normal distribution theory. The corresponding Bayesian analysis has been developed mainly in the pioneering works of Box & Tiao (1973) and Lindley & Smith (1972). Box & Tiao (1973, Chapter 6) discuss Bayesian analysis of cross-classified designs, including fixed, random and mixed effects models. They point out that in a Bayesian approach the appropriate inference procedure for fixed and random effects "depends upon the nature of the prior distribution used to represent the behavior of the factors". They also show (Chapter 7) that shrinkage estimates of specific effects may result when a random effects model is assumed. Lindley & Smith (1972) use a hierarchically structured linear model built on multivariate normal components (special cases of the model are considered by Lindley, 1972 and Smith, 1973), with the focus on estimation of treatment effects. These are authoritative and attractive approaches, albeit with modest compromises to the Bayesian paradigm, in respect of the estimation of the variance components, necessitated by the computational limitations of the time. Nevertheless, the inference is almost entirely estimative: questions about the indistinguishability of factor levels, or more general hypotheses about contrasts, are answered indirectly through their joint posterior distribution, e.g. by checking whether the hypothesis falls in a highest posterior …