Results 11–20 of 93
Inference and Hierarchical Modeling in the Social Sciences
, 1995
Abstract

Cited by 26 (6 self)
In this paper I (1) examine three levels of inferential strength supported by typical social science data-gathering methods, and call for a greater degree of explicitness, when HMs and other models are applied, in identifying which level is appropriate; (2) reconsider the use of HMs in school effectiveness studies and meta-analysis from the perspective of causal inference; and (3) recommend the increased use of Gibbs sampling and other Markov chain Monte Carlo (MCMC) methods in the application of HMs in the social sciences, so that comparisons between MCMC and better-established fitting methods, including full or restricted maximum likelihood estimation based on the EM algorithm, Fisher scoring or iterative generalized least squares, may be more fully informed by empirical practice.
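The Gibbs sampling the abstract recommends can be sketched for a toy normal hierarchical model. This is our own illustration, not the paper's: both variance components are treated as known purely to keep the sampler to two conditional draws, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: J groups, n observations each; sigma2 (within-group)
# and tau2 (between-group) are fixed here, though a full analysis would
# sample them too.
J, n, sigma2, tau2 = 8, 20, 1.0, 0.5
true_theta = rng.normal(0.0, np.sqrt(tau2), J)
y = true_theta[:, None] + rng.normal(0.0, np.sqrt(sigma2), (J, n))
ybar = y.mean(axis=1)

draws = 2000
mu = 0.0
mu_draws = np.empty(draws)
for t in range(draws):
    # theta_j | mu: precision-weighted compromise between group mean and mu
    prec = n / sigma2 + 1.0 / tau2
    mean = (n / sigma2 * ybar + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # mu | theta: flat prior, so normal around the mean of the thetas
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
    mu_draws[t] = mu

# Discard the first 500 draws as burn-in before summarizing.
print(round(mu_draws[500:].mean(), 2))  # posterior mean of mu
```

With a flat prior on mu and equal group sizes, the posterior mean of mu sits at the average of the group means, so the chain's post-burn-in average should land close to `ybar.mean()`.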
Penalized loss functions for Bayesian model comparison
Abstract

Cited by 19 (1 self)
The deviance information criterion (DIC) is widely used for Bayesian model comparison, despite the lack of a clear theoretical foundation. DIC is shown to be an approximation to a penalized loss function based on the deviance, with a penalty derived from a cross-validation argument. This approximation is valid only when the effective number of parameters in the model is much smaller than the number of independent observations. In disease mapping, a typical application of DIC, this assumption does not hold and DIC under-penalizes more complex models. Another deviance-based loss function, derived from the same decision-theoretic framework, is applied to mixture models, which have previously been considered an unsuitable application for DIC.
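The quantities under discussion are easy to compute from posterior draws: the penalty pD is the posterior mean deviance minus the deviance at the posterior mean, and DIC adds the penalty twice. A minimal sketch of our own, assuming a one-parameter N(mu, 1) model so that pD should come out near 1:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: y_i ~ N(mu, 1) with a flat prior, so the
# posterior for mu is N(ybar, 1/n); values here are illustrative.
n = 50
y = rng.normal(2.0, 1.0, n)
mu_post = rng.normal(y.mean(), np.sqrt(1.0 / n), 5000)  # posterior draws

def deviance(mu):
    # -2 * log-likelihood under N(mu, 1)
    return n * np.log(2 * np.pi) + np.sum((y - mu) ** 2)

D_bar = np.mean([deviance(m) for m in mu_post])   # posterior mean deviance
D_hat = deviance(mu_post.mean())                  # deviance at posterior mean
p_D = D_bar - D_hat                               # effective no. of parameters
DIC = D_hat + 2 * p_D                             # equivalently D_bar + p_D

print(round(p_D, 2))  # should land near 1: one free parameter
```

The paper's point is precisely that this penalty stops behaving well when pD is not small relative to the number of independent observations, as in disease mapping.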
Partition Modelling
Abstract

Cited by 19 (5 self)
This chapter serves as an introduction to the use of partition models to estimate a spatial process z(x) over some p-dimensional region of interest X. Partition models can be useful modelling tools as, unlike standard spatial models (e.g. kriging), they allow the correlation structure between points to vary over the space of interest. Typically, the correlation between points is assumed to be a fixed function which is most likely to be parameterised by a few variables that can be estimated from the data (see, for example, Diggle, Tawn and Moyeed (1998)). Partition models avoid the need for pre-examination of the data to find a suitable correlation function to use. This removes the bias necessarily introduced by picking the correlation function and estimating its parameters using the same set of data. Spatial clusters are, by their nature, regions which are not representative of the entire space of interest.
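A deliberately tiny 1-D caricature of the idea: within each region of a partition, the process is summarized by its own level, with no correlation function shared across the whole space. (The chapter treats p-dimensional regions and places a prior over partitions, typically explored by reversible-jump MCMC; here we just fit one fixed, hypothetical partition by region means.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-D data with a level shift at x = 0.4.
x = rng.uniform(0, 1, 200)
z = np.where(x < 0.4, 1.0, -1.0) + rng.normal(0, 0.3, 200)

# One candidate partition: split points define regions; within each
# region the process is estimated by its own mean.
splits = [0.0, 0.4, 1.0]
fitted = np.empty_like(z)
for lo, hi in zip(splits[:-1], splits[1:]):
    mask = (x >= lo) & (x <= hi) if hi == splits[-1] else (x >= lo) & (x < hi)
    fitted[mask] = z[mask].mean()

print(round(fitted[x < 0.4].mean(), 1))  # region level, close to the true 1.0
```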
When did Bayesian inference become “Bayesian”?
 BAYESIAN ANALYSIS
, 2006
Abstract

Cited by 18 (1 self)
While Bayes’ theorem has a 250-year history, and the method of inverse probability that flowed from it dominated statistical thinking into the twentieth century, the adjective “Bayesian” was not part of the statistical lexicon until relatively recently. This paper provides an overview of key Bayesian developments, beginning with Bayes’ posthumously published 1763 paper and continuing up through approximately 1970, including the period of time when “Bayesian” emerged as the label of choice for those who advocated Bayesian methods.
Space-varying Regression Models: Specifications And Simulation
 COMPUTATIONAL STATISTICS & DATA ANALYSIS 42 (2003) 513–533
, 2003
Abstract

Cited by 17 (2 self)
Space-varying regression models are generalizations of the standard linear model where the regression coefficients are allowed to change in space. The spatial structure is specified by a multivariate extension of pairwise difference priors, thus enabling incorporation of neighboring structures and easy sampling schemes. Bayesian inference is performed by incorporation of a prior distribution for the hyperparameters. This approach leads to an intractable posterior distribution. Inference is approximated by drawing samples from the posterior distribution. Different sampling schemes are available and may be used in an MCMC algorithm. They basically differ in the way they handle blocks of regression coefficients. Approaches vary from sampling each location-specific vector of coefficients to complete elimination of all regression coefficients by analytical integration. These schemes are compared in terms of their computation, chain autocorrelation and resulting inference. Results are illustrated with simulated data and applied to a real dataset. Related prior specifications that can accommodate the spatial structure in different forms are also discussed. The paper concludes with a few general remarks.
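The pairwise difference prior mentioned in the abstract can be illustrated by its precision-matrix form. This is our own construction for a 1-D chain of neighboring regions (the paper's multivariate extension is more general): with Q = D - A built from the adjacency structure, the quadratic form beta' Q beta equals the sum of squared differences over neighboring pairs.

```python
import numpy as np

# Neighborhood structure: m regions on a chain, adjacent regions are
# neighbors (an illustrative choice, not the paper's application).
m = 5
A = np.zeros((m, m))
for i in range(m - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
Q = np.diag(A.sum(axis=1)) - A        # pairwise-difference (intrinsic CAR) precision

beta = np.array([0.0, 0.1, 0.3, 0.2, 0.4])
quad = beta @ Q @ beta                # quadratic form in the prior exponent
pairs = sum((beta[i] - beta[i + 1]) ** 2 for i in range(m - 1))
print(np.isclose(quad, pairs))  # True
```

Because each row of Q sums to zero, the prior penalizes only differences between neighbors, which is what lets the coefficients drift smoothly over space.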
Bayes and Empirical Bayes Estimation for the Chain Ladder Model
 ASTIN Bulletin
, 1990
Abstract

Cited by 16 (0 self)
The subject of predicting outstanding claims on a portfolio of general insurance policies is approached via the theory of hierarchical Bayesian linear models. This is particularly appropriate since the chain ladder technique can be expressed in the form of a linear model. The statistical methods which are applied allow the practitioner to use different modelling assumptions from those implied by a classical formulation, and to arrive at forecasts which have a greater degree of inherent stability. The results can also be used for other linear models. By using a statistical structure, a sound approach to the chain ladder technique can be derived. The Bayesian results allow the input of collateral information in a formal manner. Empirical Bayes results are derived which can be interpreted as credibility estimates. The statistical assumptions which are made in the modelling procedure are clearly set out and can be tested by the practitioner. The results based on the statistical theory form one part of the reserving procedure, and should be followed by expert interpretation and analysis. An illustration of the use of Bayesian and empirical Bayes estimation methods is given.
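For reference, the classical chain ladder technique that the paper recasts as a linear model amounts to column-ratio development factors applied forward. A minimal sketch with a small hypothetical run-off triangle (the figures are made up for illustration):

```python
import numpy as np

# Hypothetical 3x3 cumulative run-off triangle (rows: accident years,
# columns: development years); NaN marks the future cells to predict.
tri = np.array([
    [100.0, 150.0, 170.0],
    [110.0, 160.0, np.nan],
    [120.0, np.nan, np.nan],
])

# Development factors: ratio of column sums over the rows observed in
# both adjacent development years.
factors = []
for j in range(tri.shape[1] - 1):
    seen = ~np.isnan(tri[:, j + 1])
    factors.append(tri[seen, j + 1].sum() / tri[seen, j].sum())

# Complete the lower triangle by applying the factors forward.
for i in range(tri.shape[0]):
    for j in range(tri.shape[1] - 1):
        if np.isnan(tri[i, j + 1]):
            tri[i, j + 1] = tri[i, j] * factors[j]

print(tri[2, 2])  # completed ultimate claims for the newest accident year
```

Taking logarithms of such multiplicative factors is one standard route to the linear-model form the abstract refers to.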
Modeling Parameter Heterogeneity in Cross-Country Growth Regression Models
, 2003
Abstract

Cited by 15 (1 self)
Given the failure of the conventional linear Solow growth model to establish reliable results in the analysis of cross-country growth performance, this paper proposes a new framework using the concept of a hierarchy of time scales. By hierarchy of time scales, I mean that slower-moving variables, such as culture, play a major role in determining medium-moving variables, such as institutions, which in turn play a major role in determining faster-moving variables, such as the conventional determinants of economic growth. This approach provides a systematic way of thinking about the heterogeneity in cross-country growth performance. In the context of the Solow growth model the hierarchical approach suggests a local generalization of the Solow growth model in the form of a semiparametric varying-parameter model along the lines of Hastie and Tibshirani (1992). Using the varying coefficient model, this paper studies two examples. In the first example the parameters of the model vary according to initial human capital, while in the second they vary according to a measure of ethnic diversity. The results suggest that there exists substantial parameter heterogeneity in the cross-country growth process.
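A varying-coefficient fit of the kind cited (Hastie and Tibshirani, 1992) can be sketched as kernel-weighted least squares, with the slope re-estimated at each value of the modifier. Everything below is a synthetic illustration of ours, not the paper's data or specification:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: the slope of y on x varies smoothly with a modifier z
# (standing in for, say, initial human capital).
n = 400
z = rng.uniform(0, 1, n)
x = rng.normal(0, 1, n)
beta = 1.0 + 2.0 * z                  # true varying slope
y = beta * x + rng.normal(0, 0.2, n)

def local_slope(z0, h=0.1):
    """Slope at z0 from Gaussian-kernel-weighted least squares."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)
    X = np.column_stack([np.ones(n), x])
    WX = X * w[:, None]
    coef = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted normal equations
    return coef[1]

print(round(local_slope(0.5), 1))  # true slope at z = 0.5 is 2.0
```

Substantial spread in `local_slope` across values of the modifier is exactly the kind of parameter heterogeneity the abstract reports.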
Monte Carlo Methods on Bayesian Analysis of Constrained Parameter Problems with Normalizing Constants
 Biometrika
, 1998
Abstract

Cited by 10 (3 self)
Constraints on the parameters in a Bayesian hierarchical model typically make Bayesian computation and analysis complicated. As Gelfand, Smith and Lee (1992) remarked, it is almost impossible to sample from a posterior distribution when its density contains analytically intractable integrals (normalizing constants) that depend on the (hyper)parameters. Therefore, the Gibbs sampler or the Metropolis algorithm cannot be directly applied to such problems. In this paper, using the idea of "reweighting mixtures" of Geyer (1994), we develop alternative Monte Carlo based methods to determine properties of the desired Bayesian posterior distribution. Necessary theory and two illustrative examples are provided.
Keywords and phrases: Bayesian computation; Bayesian hierarchical model; Gibbs sampler; Markov chain Monte Carlo; marginal posterior density estimation; posterior distribution; sensitivity of prior specification.
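The generic idea behind such reweighting methods can be illustrated with self-normalized importance sampling, which works precisely because the unknown normalizing constant cancels in the weight ratio. A toy constrained target of our own (a standard normal truncated to the positive half-line), not one of the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(3)

# Target density, known only up to a normalizing constant:
# N(0, 1) restricted to theta > 0.
def target_unnorm(theta):
    return np.exp(-0.5 * theta ** 2) * (theta > 0)

# Tractable proposal covering the support: Exponential(1).
draws = rng.exponential(1.0, 100_000)
w = target_unnorm(draws) / np.exp(-draws)      # unnormalized weight ratio
w /= w.sum()                                   # self-normalize

post_mean = np.sum(w * draws)
print(round(post_mean, 2))  # true value is sqrt(2/pi), about 0.798
```

The missing constant appears in both numerator and denominator of the self-normalized estimate, so it never has to be computed, which is the property these methods exploit.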
Cross-classified and multiple membership structures in multilevel models: an introduction and review
, 2006
"... ..."
Monte Carlo EM With Importance Reweighting and Its Applications in Random Effects Models
, 1999
Abstract

Cited by 9 (0 self)
In this paper we propose a new Monte Carlo EM algorithm to compute maximum likelihood estimates in the context of random effects models. The algorithm involves the construction of efficient sampling distributions for the Monte Carlo implementation of the E-step, together with a reweighting procedure that allows repeated use of the same sample of random effects. In addition, we explore the use of stochastic approximations to speed up convergence once stability has been reached. Our algorithm is compared with that of McCulloch (1997). Extensions to more general problems are discussed.
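A minimal sketch in the spirit of a Monte Carlo E-step with importance reweighting, on a toy Poisson random-effects model of our own devising (not the authors' formulation): one sample of random effects is drawn once, under the starting parameter value, and reused at every EM iteration through weight ratios.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: y_i | b_i ~ Poisson(exp(b_i)), b_i ~ N(mu, 1); estimate mu.
n, K = 40, 4000
true_mu = 1.0
b_true = rng.normal(true_mu, 1.0, n)
y = rng.poisson(np.exp(b_true))

# Fixed sample of random effects, drawn once from an overdispersed
# N(0, 2^2) proposal and reused at every E-step.
b = rng.normal(0.0, 2.0, (n, K))
loglik = y[:, None] * b - np.exp(b)   # Poisson log-likelihood up to constants

mu = 0.0
for _ in range(30):
    # E-step: weights correct both for the likelihood and for having
    # sampled from N(0, 4) instead of the current prior N(mu, 1).
    logw = loglik - 0.5 * (b - mu) ** 2 + b ** 2 / 8.0
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    Eb = np.sum(w * b, axis=1)        # E[b_i | y_i, mu] per observation
    # M-step: maximizing Q(mu) gives the mean of the conditional means.
    mu = Eb.mean()

print(round(mu, 2))  # estimate of mu; the value used to simulate was 1.0
```

Reusing one sample this way avoids fresh simulation at every iteration, which is the computational saving the abstract describes; stochastic-approximation refinements would then polish the final iterates.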