Results 1-10 of 122
Bayesian Data Analysis, 1995
"... I actually own a copy of Harold Jeffreys’s Theory of Probability but have only read small bits of it, most recently over a decade ago to confirm that, indeed, Jeffreys was not too proud to use a classical chisquared pvalue when he wanted to check the misfit of a model to data (Gelman, Meng and Ste ..."
Abstract

Cited by 1230 (47 self)
Abstract: I actually own a copy of Harold Jeffreys’s Theory of Probability but have only read small bits of it, most recently over a decade ago to confirm that, indeed, Jeffreys was not too proud to use a classical chi-squared p-value when he wanted to check the misfit of a model to data (Gelman, Meng and Stern, 2006). I do, however, feel that it is important to understand where our probability models come from, and I welcome the opportunity to use the present article by Robert, Chopin and Rousseau as a platform for further discussion of foundational issues. In this brief discussion I will argue the following: (1) in thinking about prior distributions, we should go beyond Jeffreys’s principles and move toward weakly informative priors; (2) it is natural for those of us who work in the social and computational sciences to favor complex models, contra Jeffreys’s preference for simplicity; and (3) a key generalization of Jeffreys’s ideas is to explicitly include model checking in the process of data analysis.
Modeling changing dependency structure in multivariate time series
In International Conference on Machine Learning, 2007
"... We show how to apply the efficient Bayesian changepoint detection techniques of Fearnhead in the multivariate setting. We model the joint density of vectorvalued observations using undirected Gaussian graphical models, whose structure we estimate. We show how we can exactly compute the MAP segmenta ..."
Abstract

Cited by 31 (0 self)
Abstract: We show how to apply the efficient Bayesian changepoint detection techniques of Fearnhead in the multivariate setting. We model the joint density of vector-valued observations using undirected Gaussian graphical models, whose structure we estimate. We show how we can exactly compute the MAP segmentation, as well as how to draw perfect samples from the posterior over segmentations, simultaneously accounting for uncertainty about the number and location of changepoints, as well as uncertainty about the covariance structure. We illustrate the technique by applying it to financial data and to bee tracking data.
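As a rough sketch of the machinery the abstract invokes (the notation here follows Fearnhead's exact changepoint recursions and is an assumption, not quoted from the paper): for a segment y_{s:t} with parameters integrated out, define the segment marginal likelihood

  P(s,t) = \int p(y_{s:t} \mid \theta)\, \pi(\theta)\, d\theta,

and let Q(t) be the probability of y_{t:n} given a changepoint at t-1. A backward recursion of the form

  Q(t) = \sum_{s=t}^{n-1} P(t,s)\, g(s+1-t)\, Q(s+1) + P(t,n)\,\bigl(1 - G(n-t)\bigr),

with g the prior mass function on segment lengths and G its distribution function, supports exact MAP segmentation (replace the sum with a max) and perfect posterior sampling of changepoints. In the multivariate setting above, P(s,t) would be the marginal likelihood under the Gaussian graphical model prior.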
Mplus: Statistical Analysis with Latent Variables (Version 4.21) [Computer software], 2007
"... Chapter 3: Regression and path analysis 19 Chapter 4: Exploratory factor analysis 43 ..."
Abstract

Cited by 23 (0 self)
Abstract: Chapter 3: Regression and path analysis; Chapter 4: Exploratory factor analysis.
An Evaluation of Four Sacramento-San Joaquin River Delta Juvenile Salmon Survival Studies, 1991
"... conducted several multiyear releaserecovery experiments with codedwiretagged juvenile Chinook salmon. The objectives of the studies were (1) to estimate survival through the lower portions of the Sacramento and San Joaquin river systems, the California Delta, and (2) to quantify the factors affe ..."
Abstract

Cited by 16 (1 self)
Abstract: conducted several multi-year release-recovery experiments with coded-wire-tagged juvenile Chinook salmon. The objectives of the studies were (1) to estimate survival through the lower portions of the Sacramento and San Joaquin river systems and the California Delta, and (2) to quantify the factors affecting survival. Four of these studies, listed more or less by their historical start dates, are the Delta Cross Channel, Interior, Delta Action 8, and VAMP experiments. Delta Cross Channel: These studies focused on how the position of the Delta cross-channel (DCC) gate affected survival of outmigrating juvenile salmon. When the gates are open, water flow from the Sacramento River into the central Delta increases. The a priori hypothesis for these studies was that survival would be lower with the gates open, since the probability of entering the interior Delta would increase and the fish would thereby be more vulnerable to the water export pumps at the State Water Project (SWP) and at the federal Central Valley Project (CVP). Temporally paired releases were made above the DCC (near Courtland) and below the DCC (at Ryde), and recoveries were made at Chipps Island and in the ocean fisheries.
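The paired-release design described above supports a simple relative survival estimate; as a sketch (the recovery notation is hypothetical, not taken from the studies), with N fish released and r recovered at Chipps Island for each group,

  \hat{S}_{\mathrm{DCC\ reach}} = \frac{r_{\mathrm{Courtland}} / N_{\mathrm{Courtland}}}{r_{\mathrm{Ryde}} / N_{\mathrm{Ryde}}},

so that mortality and sampling effort downstream of Ryde, which are common to both release groups, cancel in the ratio.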
Default prior distributions and efficient posterior computation in Bayesian factor analysis
Journal of Computational and Graphical Statistics, 2009
"... Factor analytic models are widely used in social sciences. These models have also proven useful for sparse modeling of the covariance structure in multidimensional data. Normal prior distributions for factor loadings and inverse gamma prior distributions for residual variances are a popular choice b ..."
Abstract

Cited by 11 (4 self)
Abstract: Factor analytic models are widely used in the social sciences. These models have also proven useful for sparse modeling of the covariance structure in multidimensional data. Normal prior distributions for factor loadings and inverse-gamma prior distributions for residual variances are a popular choice because of their conditionally conjugate form. However, such prior distributions require elicitation of many hyperparameters and tend to result in poorly behaved Gibbs samplers. In addition, one must choose an informative specification, as high-variance prior distributions face problems due to impropriety of the posterior distribution. This article proposes a default, heavy-tailed prior distribution specification that is induced through parameter expansion while facilitating efficient posterior computation. We also develop an approach to allow uncertainty in the number of factors. The methods are illustrated through simulated examples and epidemiology and toxicology applications.
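For reference, the conditionally conjugate setup the abstract criticizes is, schematically (a standard formulation, not quoted from the article):

  y_i = \Lambda \eta_i + \epsilon_i, \qquad \eta_i \sim N_k(0, I), \qquad \epsilon_i \sim N_p(0, \Sigma), \quad \Sigma = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_p^2),

with normal priors \lambda_{jh} \sim N(0, \phi) on the free loadings and gamma priors \sigma_j^{-2} \sim \mathrm{Gamma}(a, b) on the residual precisions; each of \phi, a, b is a hyperparameter requiring elicitation, which is the burden the proposed default prior removes.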
Handling sparsity via the horseshoe
 Journal of Machine Learning Research, W&CP
"... This paper presents a general, fully Bayesian framework for sparse supervisedlearning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learni ..."
Abstract

Cited by 10 (1 self)
Abstract: This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g., the LASSO) and Student-t priors (e.g., the relevance vector machine). The advantages of the horseshoe are its robustness at handling unknown sparsity and large outlying signals. These properties are justified theoretically via a representation theorem and accompanied by comprehensive empirical experiments that compare its performance to benchmark alternatives.
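The horseshoe prior itself has the standard scale-mixture form (as introduced by Carvalho, Polson and Scott; stated here for orientation):

  \beta_i \mid \lambda_i, \tau \sim N(0, \lambda_i^2 \tau^2), \qquad \lambda_i \sim C^{+}(0, 1),

where C^{+}(0,1) is the standard half-Cauchy distribution. The heavy tail of \lambda_i leaves large signals essentially unshrunk, while the density's unbounded spike at zero shrinks noise aggressively.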
Responses to monetary policy shocks in the east and the west of Europe: a comparison, Center for Social and Economic Research 287
"... This paper compares impulse responses to monetary policy shocks in the euro area countries before the EMU and in the New Member States (NMS) from centraleastern Europe. We mitigate the smallsample problem, which is especially acute for the NMS, by using a Bayesian estimation that combines informati ..."
Abstract

Cited by 9 (0 self)
Abstract: This paper compares impulse responses to monetary policy shocks in the euro area countries before the EMU and in the New Member States (NMS) from central-eastern Europe. We mitigate the small-sample problem, which is especially acute for the NMS, by using a Bayesian estimation that combines information across countries. The impulse responses in the NMS are broadly similar to those in the euro area countries. There is some evidence that in the NMS, which have had higher and more volatile inflation, the Phillips curve is steeper than in the euro area countries. This finding is consistent with economic theory.
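Combining information across countries in this way typically amounts to an exchangeable hierarchical prior; as a sketch (notation assumed, not quoted from the paper), with \theta_c the vector of model coefficients for country c,

  \theta_c \sim N(\bar{\theta}, \Lambda), \qquad c = 1, \ldots, C,

so each country's small-sample estimate is shrunk toward the cross-country mean \bar{\theta}, with \Lambda governing how much pooling occurs.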
Why we (usually) don’t have to worry about multiple comparisons, 2008
"... The problem of multiple comparisons can disappear when viewed from a Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. These address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low g ..."
Abstract

Cited by 9 (3 self)
Abstract: The problem of multiple comparisons can disappear when viewed from a Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. These address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern. Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the p-values corresponding to intervals of fixed width). Multilevel estimates make comparisons more conservative, in the sense that intervals for comparisons are more likely to include zero; as a result, those comparisons that are made with confidence are more likely to be valid.
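The partial pooling mentioned above can be made concrete with the standard normal hierarchical model (a textbook formula, not quoted from the paper): for group means \bar{y}_j with sampling variance \sigma_j^2 and group-level distribution \theta_j \sim N(\mu, \tau^2), the multilevel estimate is

  \hat{\theta}_j = \frac{\bar{y}_j / \sigma_j^2 + \mu / \tau^2}{1/\sigma_j^2 + 1/\tau^2},

which shrinks each group toward \mu, most strongly when the group-level variation \tau^2 is small, exactly the setting where multiple comparisons are most worrisome.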
MCMC Methods for Multi-Response Generalized Linear Mixed Models: The MCMCglmm R Package
"... Generalized linear mixed models provide a flexible framework for modeling a range of data, although with nonGaussian response variables the likelihood cannot be obtained in closed form. Markov chain Monte Carlo methods solve this problem by sampling from a series of simpler conditional distribution ..."
Abstract

Cited by 9 (0 self)
Abstract: Generalized linear mixed models provide a flexible framework for modeling a range of data, although with non-Gaussian response variables the likelihood cannot be obtained in closed form. Markov chain Monte Carlo methods solve this problem by sampling from a series of simpler conditional distributions that can be evaluated. The R package MCMCglmm implements such an algorithm for a range of model-fitting problems. More than one response variable can be analysed simultaneously, and these variables are allowed to follow Gaussian, Poisson, multi(bi)nomial, exponential, zero-inflated and censored distributions. A range of variance structures are permitted for the random effects, including interactions with categorical or continuous variables (i.e., random regression), and more complicated variance structures that arise through shared ancestry, either through a pedigree or through a phylogeny. Missing values are permitted in the response variable(s), and data can be known up to some level of measurement error, as in meta-analysis. All simulation is done in C/C++ using the CSparse library for sparse linear systems. If you use the software, please cite this article, as published in the Journal of Statistical Software.
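The model class described can be summarized as follows (a generic GLMM statement for orientation, not the package's documentation): with a link function g,

  g\bigl(E[y \mid u]\bigr) = X\beta + Zu, \qquad u \sim N(0, G),

where the random-effect covariance G may encode interactions with covariates (random regression) or, for shared ancestry, G = \sigma_u^2 A with A the relationship matrix implied by a pedigree or phylogeny.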
Transformed and parameter-expanded Gibbs samplers for multilevel linear and generalized linear models, 2004
"... Hierarchical linear and generalized linear models can be fit using Gibbs samplers and Metropolis algorithms; these models, however, often have many parameters, and convergence of the seemingly most natural Gibbs and Metropolis algorithms can sometimes be slow. We examine solutions that involve repar ..."
Abstract

Cited by 8 (4 self)
Abstract: Hierarchical linear and generalized linear models can be fit using Gibbs samplers and Metropolis algorithms; these models, however, often have many parameters, and convergence of the seemingly most natural Gibbs and Metropolis algorithms can sometimes be slow. We examine solutions that involve reparameterization and overparameterization. We begin with parameter expansion using working parameters, a strategy developed for the EM algorithm by Meng and van Dyk (1997) and Liu, Rubin, and Wu (1998). This strategy can lead to algorithms that are much less susceptible to becoming stuck near zero values of the variance parameters than are more standard algorithms. Second, we consider a simple rotation of the regression coefficients based on an estimate of their posterior covariance matrix. This leads to a Gibbs algorithm based on updating the transformed parameters one at a time, or a Metropolis algorithm with vector jumps; either of these algorithms can perform much better (in terms of total CPU time) than the two standard algorithms: one-at-a-time updating of untransformed parameters or vector updating using a linear regression at each step. We present an innovative evaluation of the algorithms in terms of how quickly they can get away from remote areas of parameter space, along with some more standard evaluation of computation and convergence speeds. We illustrate our methods with examples from our applied work. Our ultimate goal is to develop a fast and reliable method for fitting a hierarchical linear model as easily as one can now fit a non-hierarchical model, and to increase understanding of Gibbs samplers for hierarchical models in general. Keywords: Bayesian computation, blessing of dimensionality, Markov chain Monte Carlo, multilevel modeling, mixed effects models, PX-EM algorithm, random effects regression, redundant
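The parameter-expansion idea can be illustrated on a simple hierarchical model (a sketch of the general strategy, not the paper's exact algorithm): instead of sampling directly from

  \alpha_j \sim N(\mu, \sigma_\alpha^2), \qquad j = 1, \ldots, J,

introduce a redundant working parameter \xi and write

  \alpha_j = \mu + \xi \eta_j, \qquad \eta_j \sim N(0, \sigma_\eta^2),

so that \sigma_\alpha = |\xi|\, \sigma_\eta. Gibbs updates of (\xi, \eta, \sigma_\eta) move freely even when \sigma_\alpha is near zero, where the unexpanded sampler tends to get stuck.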