Results 1–10 of 12
Bayesian covariance selection in generalized linear mixed models
Biometrics, 2006
"... SUMMARY. The generalized linear mixed model (GLMM), which extends the generalized linear model (GLM) to incorporate random effects characterizing heterogeneity among subjects, is widely used in analyzing correlated and longitudinal data. Although there is often interest in identifying the subset of ..."
Abstract

Cited by 8 (3 self)
SUMMARY. The generalized linear mixed model (GLMM), which extends the generalized linear model (GLM) to incorporate random effects characterizing heterogeneity among subjects, is widely used in analyzing correlated and longitudinal data. Although there is often interest in identifying the subset of predictors that have random effects, random effects selection can be challenging, particularly when outcome distributions are non-normal. This article proposes a fully Bayesian approach to the problem of simultaneous selection of fixed and random effects in GLMMs. Integrating out the random effects induces a covariance structure on the multivariate outcome data, and an important problem which we also consider is that of covariance selection. Our approach relies on variable selection-type mixture priors for the components in a special LDU decomposition of the random effects covariance. A stochastic search MCMC algorithm is developed, which relies on Gibbs sampling, with Taylor series expansions used to approximate intractable integrals. Simulated data examples are presented for different exponential family distributions, and the approach is applied to discrete survival data from a time-to-pregnancy study.
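For intuition, one common form of such a decomposition (a sketch in the spirit of the abstract; the exact parameterization in the paper may differ) writes the GLMM and the random effects covariance as

    g\{E(y_{ij} \mid b_i)\} = x_{ij}^\top \beta + z_{ij}^\top b_i, \qquad b_i \sim N_q(0, \Sigma), \qquad \Sigma = \Lambda \Gamma \Gamma^\top \Lambda,

where \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_q) with \lambda_k \ge 0 and \Gamma is lower triangular with unit diagonal. Setting \lambda_k = 0 deletes the k-th random effect, so mixture priors with point mass at zero on the \lambda_k (and on the free elements of \Gamma) perform random effects and covariance selection simultaneously.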
Bayesian structural learning and estimation in Gaussian graphical models
"... We propose a new stochastic search algorithm for Gaussian graphical models called the mode oriented stochastic search. Our algorithm relies on the existence of a method to accurately and efficiently approximate the marginal likelihood associated with a graphical model when it cannot be computed in c ..."
Abstract

Cited by 7 (2 self)
We propose a new stochastic search algorithm for Gaussian graphical models called the mode oriented stochastic search. Our algorithm relies on the existence of a method to accurately and efficiently approximate the marginal likelihood associated with a graphical model when it cannot be computed in closed form. To this end, we develop a new Laplace approximation method to the normalizing constant of a G-Wishart distribution. We show that combining the mode oriented stochastic search with our marginal likelihood estimation method leads to excellent results compared with other techniques discussed in the literature. We also describe how to perform inference through Bayesian model averaging based on the reduced set of graphical models identified. Finally, we give a novel stochastic search technique for multivariate regression models.
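For orientation, the generic device involved (a sketch; the paper's approximation is a refinement of this) is as follows. The G-Wishart density over precision matrices K constrained to the zero pattern of a graph G is proportional to |K|^{(\delta-2)/2} \exp\{-\mathrm{tr}(KD)/2\}, and a Laplace approximation to its normalizing constant expands the log integrand h(K) = \frac{\delta-2}{2}\log|K| - \frac{1}{2}\mathrm{tr}(KD) around its mode \hat K:

    I_G(\delta, D) = \int_{P_G} \exp\{h(K)\}\, dK \;\approx\; \exp\{h(\hat K)\}\,(2\pi)^{m/2}\,\big|{-}\nabla^2 h(\hat K)\big|^{-1/2},

where P_G is the cone of positive definite matrices with zeros dictated by G and m is the number of free entries of K under G.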
Posterior propriety and admissibility of hyperpriors in normal hierarchical models
The Annals of Statistics, 2005
"... Hierarchical modeling is wonderful and here to stay, but hyperparameter priors are often chosen in a casual fashion. Unfortunately, as the number of hyperparameters grows, the effects of casual choices can multiply, leading to considerably inferior performance. As an extreme, but not uncommon, examp ..."
Abstract

Cited by 6 (2 self)
Hierarchical modeling is wonderful and here to stay, but hyperparameter priors are often chosen in a casual fashion. Unfortunately, as the number of hyperparameters grows, the effects of casual choices can multiply, leading to considerably inferior performance. As an extreme, but not uncommon, example, use of the wrong hyperparameter priors can even lead to impropriety of the posterior. For exchangeable hierarchical multivariate normal models, we first determine when a standard class of hierarchical priors results in proper or improper posteriors. We next determine which elements of this class lead to admissible estimators of the mean under quadratic loss; such considerations provide one useful guideline for choice among hierarchical priors. Finally, computational issues with the resulting posterior distributions are addressed. The setting is the block multivariate normal situation (sometimes called the "matrix of means problem"), specified by a hierarchical Bayesian model of the kind sketched below.
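A schematic version of such a hierarchy (illustrative; not necessarily the authors' exact specification) is

    x_i \mid \theta_i \sim N_p(\theta_i, I), \qquad \theta_i \mid \beta, V \sim N_p(\beta, V), \qquad i = 1, \dots, m,

with a hyperprior such as \pi(\beta, V) \propto |V|^{-a}. The propriety question is then which exponents a, relative to the dimensions p and m, yield a proper posterior, and the admissibility question is which of the proper choices estimate the means well under quadratic loss.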
Bayesian covariance matrix estimation using a mixture of decomposable graphical models. Unpublished manuscript
2005
"... Summary. Estimating a covariance matrix efficiently and discovering its structure are important statistical problems with applications in many fields. This article takes a Bayesian approach to estimate the covariance matrix of Gaussian data. We use ideas from Gaussian graphical models and model sele ..."
Abstract

Cited by 5 (2 self)
Summary. Estimating a covariance matrix efficiently and discovering its structure are important statistical problems with applications in many fields. This article takes a Bayesian approach to estimate the covariance matrix of Gaussian data. We use ideas from Gaussian graphical models and model selection to construct a prior for the covariance matrix that is a mixture over all decomposable graphs, where a graph means the configuration of nonzero off-diagonal elements in the inverse of the covariance matrix. Our prior for the covariance matrix is such that the probability of each graph size is specified by the user and graphs of equal size are assigned equal probability. Most previous approaches assume that all graphs are equally probable. We give empirical results that show the prior that assigns equal probability over graph sizes outperforms the prior that assigns equal probability over all graphs, both in identifying the correct decomposable graph and in more efficiently estimating the covariance matrix. The advantage is greatest when the number of observations is small relative to the dimension of the covariance matrix. Our method requires the number of decomposable graphs for each graph size. We show how to estimate these numbers using simulation and that the simulation results agree with analytic results when such results are known. We also show how ...
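Schematically, the prior described here assigns, to each decomposable graph G with k edges,

    p(G) = \frac{\pi(k)}{N_k},

where \pi(k) is the user-specified probability of graph size k and N_k is the number of decomposable graphs with k edges, so that graphs of equal size receive equal probability. The usual uniform-over-graphs prior corresponds instead to \pi(k) \propto N_k, which implicitly favors the most numerous graph sizes; this normalization is also why the counts N_k must be estimated, here by simulation.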
Objective Priors for the Bivariate Normal Model with Multivariate Generalizations
2006
"... Study of the bivariate normal distribution raises the full range of issues involving objective Bayesian inference, including the different types of objective priors (e.g., Jeffreys, invariant, reference, matching), the different modes of inference (e.g., Bayesian, frequentist, fiducial), and the cri ..."
Abstract

Cited by 3 (2 self)
Study of the bivariate normal distribution raises the full range of issues involving objective Bayesian inference, including the different types of objective priors (e.g., Jeffreys, invariant, reference, matching), the different modes of inference (e.g., Bayesian, frequentist, fiducial), and the criteria involved in deciding on optimal objective priors (e.g., ease of computation, frequentist performance, marginalization paradoxes). Summary recommendations as to optimal objective priors are made for a variety of inferences involving the bivariate normal distribution. In the course of the investigation, a variety of surprising results were found, including the availability of objective priors that yield exact frequentist inferences for many functions of the bivariate normal parameters, including the correlation coefficient. Several generalizations to the multivariate normal distribution are given. Some key words: Reference priors, matching priors, Jeffreys priors, right-Haar prior, fiducial inference, frequentist coverage, marginalization paradox, rejection sampling, constructive posterior distributions.
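As a concrete instance of the kinds of priors being compared (standard textbook forms, not the paper's final recommendations), for the p-variate normal N_p(\mu, \Sigma) the joint Jeffreys prior and the independence-Jeffreys prior are

    \pi_J(\mu, \Sigma) \propto |\Sigma|^{-(p+2)/2}, \qquad \pi_{IJ}(\mu, \Sigma) \propto |\Sigma|^{-(p+1)/2},

which in the bivariate case p = 2 reduce to |\Sigma|^{-2} and |\Sigma|^{-3/2}. The investigation asks which such priors give, for example, accurate frequentist coverage for functions of the parameters such as the correlation coefficient \rho.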
Generating Valid 4 × 4 Correlation Matrices
2006
"... In this article, we provide an algorithm for generating valid 4 × 4 correlation matrices by creating bounds for the remaining three correlations that insure positive semidefiniteness once correlations between one variable and the other three are supplied. This is achieved by attending to the constra ..."
Abstract

Cited by 3 (1 self)
In this article, we provide an algorithm for generating valid 4 × 4 correlation matrices by creating bounds for the remaining three correlations that ensure positive semidefiniteness once correlations between one variable and the other three are supplied. This is achieved by attending to the constraints placed on the leading principal minor determinants. Prior work in this area is restricted to 3 × 3 matrices or provides a means to investigate whether a 4 × 4 matrix is valid but does not offer a method of construction. We do offer such a method and supply a computer program to deliver a 4 × 4 positive semidefinite correlation matrix.
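A minimal numerical sketch of the leading-principal-minor idea (an illustration only, not the authors' published algorithm or program): given r12, r13, r14, the 3 × 3 minors bound r23 and r24 in closed form, and the full determinant, which is quadratic in r34, brackets the last entry.

    import numpy as np

    def bound_3x3(r_ab, r_ac):
        # det of a 3x3 correlation minor is >= 0 iff r_bc lies in
        # [r_ab*r_ac - s, r_ab*r_ac + s], s = sqrt((1-r_ab^2)(1-r_ac^2)).
        s = np.sqrt((1 - r_ab**2) * (1 - r_ac**2))
        return r_ab * r_ac - s, r_ab * r_ac + s

    def bound_r34(r12, r13, r14, r23, r24):
        # det(R) is a quadratic a*x^2 + b*x + c in x = r34 with a < 0;
        # recover (a, b, c) from three evaluations; roots bracket r34.
        def det_at(x):
            R = np.array([[1.0, r12, r13, r14],
                          [r12, 1.0, r23, r24],
                          [r13, r23, 1.0, x],
                          [r14, r24, x, 1.0]])
            return np.linalg.det(R)
        c = det_at(0.0)
        a = (det_at(1.0) + det_at(-1.0)) / 2.0 - c
        b = (det_at(1.0) - det_at(-1.0)) / 2.0
        disc = b * b - 4.0 * a * c
        if disc < 0:
            raise ValueError("no valid r34; redraw r23 and r24")
        s = np.sqrt(disc)
        return max((-b + s) / (2 * a), -1.0), min((-b - s) / (2 * a), 1.0)

    rng = np.random.default_rng(0)
    r12, r13, r14 = 0.5, 0.3, -0.2                # supplied correlations
    r23 = rng.uniform(*bound_3x3(r12, r13))       # minor on vars {1,2,3}
    r24 = rng.uniform(*bound_3x3(r12, r14))       # minor on vars {1,2,4}
    r34 = rng.uniform(*bound_r34(r12, r13, r14, r23, r24))

Positive semidefiniteness of the assembled matrix can then be confirmed with np.linalg.eigvalsh; note that, unlike the paper's construction, this naive sequential draw can occasionally require redrawing r23 and r24.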
Covariance Estimation: The GLM and Regularization Perspectives
"... Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics. Its solution is of central importance in covariance estimation, particularly in the recent highdimensional data environment where enforcing the positivedefinit ..."
Abstract

Cited by 2 (0 self)
Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics. Its solution is of central importance in covariance estimation, particularly in the recent high-dimensional data environment where enforcing the positive-definiteness constraint could be computationally expensive. We provide a survey of the progress made in modeling covariance matrices from the perspectives of generalized linear models (GLM) or parsimony and use of covariates in low dimensions, regularization (shrinkage, sparsity) for high-dimensional data, and the role of various matrix factorizations. A viable and emerging regression-based setup which is suitable for both the GLM and the regularization approaches is to link a covariance matrix, its inverse or their factors to certain regression models and then solve the relevant (penalized) least squares problems. We point out several instances of this regression-based setup in the literature. A notable case is in the Gaussian graphical models where linear regressions with LASSO penalty are used to estimate the neighborhood of one node at a time (Meinshausen and Bühlmann, 2006). Some advantages ...
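To make the regression-based setup concrete, here is a minimal scikit-learn sketch of the Meinshausen and Bühlmann (2006) neighborhood idea; the function name and toy data are mine, and the original paper's penalty tuning and symmetrization rules differ from the cross-validated choice used here.

    import numpy as np
    from sklearn.linear_model import LassoCV

    def neighborhood(X, j):
        """Regress variable j on all other columns with a lasso penalty;
        columns with nonzero coefficients are j's estimated neighbors."""
        others = np.delete(np.arange(X.shape[1]), j)
        fit = LassoCV(cv=5).fit(X[:, others], X[:, j])
        return others[np.abs(fit.coef_) > 1e-8]

    # Toy data: columns 0 and 1 share a latent factor; column 2 is independent.
    rng = np.random.default_rng(1)
    z = rng.normal(size=(200, 1))
    X = np.hstack([z + 0.3 * rng.normal(size=(200, 1)),
                   z + 0.3 * rng.normal(size=(200, 1)),
                   rng.normal(size=(200, 1))])
    print(neighborhood(X, 0))  # typically: [1]

Repeating this for every node and combining the neighborhoods (by an AND or OR rule) yields a sparse estimate of the graph, that is, of the zero pattern of the inverse covariance matrix.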
Model-Based Variable Clustering with Application to ...
2004
"... Cluster analysis is the method to put objects into groups on the basis of the property of the objects obtained from the observed data. Most of current cluster analysis is used to classify observations. Motivated by applications to neurophysiology, we are interested in variable clustering, which clas ..."
Abstract
Cluster analysis puts objects into groups on the basis of properties of the objects obtained from the observed data. Most current cluster analysis is used to classify observations. Motivated by applications to neurophysiology, we are instead interested in variable clustering, which classifies variables according to their association across repeated observations. The association for variable clustering is typically measured as correlation. Even for scalar data, it is already nontrivial to estimate a correlation matrix from limited data; the problem becomes much harder for vector or function data. We propose a variable clustering model based on variation across replications. We start with scalar variables, then extend the model to vector data. By using a basis representation (as in James and Sugar, 2003), we are able to construct a functional variable clustering model. The models perform variable clustering and estimation simultaneously. Some simulation results using MCMC, as well as comparisons with existing models, are presented. We also apply the model to analyze neuronal data. We found that some neurons are generally associated regardless of movement direction and time, that the correlation pattern of some neurons switches between states of high correlation and near independence depending on the movement direction and time, and that some pairs of neurons are completely independent of each other in all movement directions and time periods.
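For intuition only, here is a simple correlation-based variable clustering in Python; this distance-based sketch is a stand-in for illustration, not the model-based MCMC approach the abstract proposes.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def cluster_variables(X, n_clusters):
        """Cluster the columns of X by their correlation across replications."""
        corr = np.corrcoef(X, rowvar=False)
        dist = 1.0 - np.abs(corr)          # strongly correlated -> close
        np.fill_diagonal(dist, 0.0)
        Z = linkage(squareform(dist, checks=False), method="average")
        return fcluster(Z, t=n_clusters, criterion="maxclust")

Pairs of variables whose correlation switches between conditions, as with the neurons described above, would land in different clusters when this is run separately on data from each condition.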
Unknown title
"... We study the role of partial autocorrelations in the reparameterization and parsimonious modeling of a covariance matrix. The work is motivated by and tries to mimic the phenomenal success of the partial autocorrelations function (PACF) in model formulation, removing the positivedefiniteness constra ..."
Abstract
We study the role of partial autocorrelations in the reparameterization and parsimonious modeling of a covariance matrix. The work is motivated by and tries to mimic the phenomenal success of the partial autocorrelation function (PACF) in model formulation, in removing the positive-definiteness constraint on the autocorrelation function of a stationary time series, and in reparameterizing the stationarity-invertibility domain of ARMA models. It turns out that once an order is fixed among the variables of a general random vector, these properties continue to hold; they follow from establishing a one-to-one correspondence between a correlation matrix and its associated matrix of partial autocorrelations. Connections between the latter and the parameters of the modified Cholesky decomposition of a covariance matrix are discussed. Graphical tools similar to partial correlograms for model formulation, and various priors based on the partial autocorrelations, are proposed. We develop frequentist/Bayesian procedures for modelling correlation matrices, illustrate them using a real dataset, and explore their properties via simulations.
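A sketch of the one-to-one correspondence in code (the recursion follows the standard construction, as in Joe, 2006; function and variable names are mine): given a matrix of partial autocorrelations with entries in (-1, 1) above the diagonal and a fixed variable order, the correlation matrix is recovered lag by lag.

    import numpy as np

    def pac_to_corr(P):
        """Map partial autocorrelations (entries of P above the diagonal,
        each in (-1, 1)) to a full correlation matrix for a fixed order."""
        d = P.shape[0]
        R = np.eye(d)
        for i in range(d - 1):             # lag 1: partials are correlations
            R[i, i + 1] = R[i + 1, i] = P[i, i + 1]
        for k in range(2, d):              # higher lags, one at a time
            for i in range(d - k):
                j = i + k
                mid = slice(i + 1, j)      # variables sitting between i and j
                R2inv = np.linalg.inv(R[mid, mid])
                r1, r3 = R[i, mid], R[j, mid]
                a = r1 @ R2inv @ r1
                c = r3 @ R2inv @ r3
                R[i, j] = R[j, i] = (r1 @ R2inv @ r3
                                     + P[i, j] * np.sqrt((1 - a) * (1 - c)))
        return R

Any choice of partials strictly inside (-1, 1) yields a positive-definite correlation matrix, which is precisely the constraint-removal property the abstract describes; np.linalg.eigvalsh(pac_to_corr(P)) confirms this on examples.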