Results 1–10 of 12
Dynamic matrix-variate graphical models
Bayesian Analysis
, 2007
Cited by 15 (2 self)
This paper introduces a novel class of Bayesian models for multivariate time series analysis based on a synthesis of dynamic linear models and graphical models. The models are then applied in the context of financial time series for predictive portfolio analysis providing a significant improvement in performance of optimal investment decisions.
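Since this entry builds on dynamic linear models, a minimal forward-filtering sketch may help; the local-level model and all parameter names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def local_level_filter(y, v_obs=1.0, w_state=0.5, m0=0.0, c0=10.0):
    """Forward filtering for the local-level DLM: y_t = theta_t + nu_t,
    theta_t = theta_{t-1} + omega_t, with known variances v_obs, w_state."""
    m, c = m0, c0
    means = []
    for yt in y:
        r = c + w_state          # prior variance of theta_t
        q = r + v_obs            # one-step forecast variance
        a = r / q                # adaptive (Kalman) gain
        m = m + a * (yt - m)     # posterior mean update
        c = r * (1 - a)         # posterior variance update
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.7, 100))   # latent random-walk level
y = truth + rng.normal(0, 1.0, 100)          # noisy observations
filtered = local_level_filter(y)
```

The multivariate models of the paper replace this scalar recursion with matrix-variate states whose variance structure is constrained by a graph.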
Objective Bayesian model selection in Gaussian graphical models
, 2007
Cited by 14 (3 self)
This paper presents a default model-selection procedure for Gaussian graphical models that involves two new developments. First, we develop a default version of the hyper-inverse Wishart prior for restricted covariance matrices, called the hyper-inverse Wishart g-prior, and show how it corresponds to the implied fractional prior for covariance selection using fractional Bayes factors. Second, we apply a class of priors that automatically handles the problem of multiple hypothesis testing implied by covariance selection. We demonstrate our methods on a variety of simulated examples, concluding with a real example analysing covariation in mutual-fund returns. These studies reveal that the combined use of a multiplicity-correction prior on graphs and fractional Bayes factors for computing marginal likelihoods yields better performance than existing Bayesian methods. Some key words: covariance selection; hyper-inverse Wishart distribution; fractional Bayes factors; Bayesian model selection; multiple hypothesis testing.
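For readers unfamiliar with the fractional Bayes factor device mentioned above, O'Hagan's generic construction (notation here is generic, not the paper's) is:

```latex
% Fractional marginal likelihood for model i, with fraction 0 < b < 1:
m_i^{b}(y) \;=\; \frac{\int \pi_i(\theta_i)\, p_i(y \mid \theta_i)\, d\theta_i}
                      {\int \pi_i(\theta_i)\, p_i(y \mid \theta_i)^{b}\, d\theta_i},
\qquad
B^{F}_{01} \;=\; \frac{m_0^{b}(y)}{m_1^{b}(y)}.
```

The fraction b of the likelihood plays the role of a training sample, converting a default (possibly improper) prior into a proper implied fractional prior; the abstract's point is that the hyper-inverse Wishart g-prior corresponds to exactly this implied prior.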
Bayesian structural learning and estimation in Gaussian graphical models
Cited by 7 (2 self)
We propose a new stochastic search algorithm for Gaussian graphical models called the mode oriented stochastic search. Our algorithm relies on the existence of a method to accurately and efficiently approximate the marginal likelihood associated with a graphical model when it cannot be computed in closed form. To this end, we develop a new Laplace approximation method to the normalizing constant of a G-Wishart distribution. We show that combining the mode oriented stochastic search with our marginal likelihood estimation method leads to excellent results relative to other techniques discussed in the literature. We also describe how to perform inference through Bayesian model averaging based on the reduced set of graphical models identified. Finally, we give a novel stochastic search technique for multivariate regression models.
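The Laplace device referred to above can be recalled with a generic sketch: this is the textbook approximation applied to a toy Gaussian integrand, not the paper's G-Wishart-specific derivation, and all function names are illustrative.

```python
import numpy as np

def laplace_log_integral(log_f, grad, hess, x0, steps=50, lr=0.1):
    """Approximate log of integral of exp(log_f(x)) dx by Laplace's method:
    find the mode x*, then log I ~ log_f(x*) + (d/2)log(2*pi) - (1/2)log|H|,
    where H = -Hessian of log_f at x*. Mode found by plain gradient ascent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + lr * grad(x)
    H = -hess(x)
    d = x.size
    sign, logdet = np.linalg.slogdet(H)
    return log_f(x) + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

# Sanity check on a Gaussian integrand, where Laplace is exact:
# integral of exp(-x'Ax/2) dx = (2*pi)^{d/2} |A|^{-1/2}
A = np.array([[2.0, 0.3], [0.3, 1.0]])
log_f = lambda x: -0.5 * x @ A @ x
grad = lambda x: -A @ x
hess = lambda x: -A
approx = laplace_log_integral(log_f, grad, hess, x0=[1.0, -1.0])
exact = np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(A)[1]
```

For a non-Gaussian integrand, such as the G-Wishart normalizing constant of the paper, the approximation error no longer vanishes, which is why the accuracy of the mode and Hessian computation matters.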
Bayesian analysis of matrix normal graphical models
 Biometrika
, 2009
Cited by 6 (3 self)
We develop Bayesian analysis of matrix-variate normal data with conditional independence graphical structuring of the characterising variance matrix parameters. This leads to fully Bayesian analysis of matrix normal graphical models, including discussion of novel prior specifications, the resulting problems of posterior computation addressed using Markov chain Monte Carlo methods, and graphical model assessment that involves approximate evaluation of marginal likelihood functions under specified graphical models. Modelling and inference for spatial/image data via a novel class of Markov random fields that arise as natural examples of matrix normal graphical models is discussed. This is complemented by the development of a broad class of dynamic models for matrix-variate time series within which stochastic elements defining time series errors and structural changes over time are subject to graphical model structuring. Three examples illustrate these developments and highlight questions of graphical model uncertainty and comparison in matrix data contexts.
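The sampling building block for such models, a matrix-variate normal draw with separable row and column covariances, can be sketched with the standard construction; the particular U and V below are arbitrary illustrative choices.

```python
import numpy as np

def rmatrix_normal(M, U, V, rng):
    """Draw X ~ MN(M, U, V): X = M + L_U Z L_V', where U = L_U L_U' is the
    row covariance and V = L_V L_V' the column covariance.  Equivalently,
    vec(X) ~ N(vec(M), V kron U)."""
    Lu = np.linalg.cholesky(U)
    Lv = np.linalg.cholesky(V)
    Z = rng.standard_normal(M.shape)
    return M + Lu @ Z @ Lv.T

rng = np.random.default_rng(1)
U = np.array([[1.0, 0.5], [0.5, 2.0]])   # 2x2 row covariance (illustrative)
V = np.eye(3)                            # 3x3 column covariance (illustrative)
draws = np.stack([rmatrix_normal(np.zeros((2, 3)), U, V, rng)
                  for _ in range(20000)])
# Since E[X X'] = tr(V) U, the scaled empirical second moment recovers U:
emp_U = np.einsum('nij,nkj->ik', draws, draws) / (draws.shape[0] * 3)
```

The paper's contribution is to place graphical (conditional independence) structure on U and V themselves, rather than leaving them unrestricted as here.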
Geometric Representations of Hypergraphs for Prior Specification and Posterior Sampling
Cited by 2 (2 self)
A parametrization of hypergraphs based on the geometry of points in R^d is developed. Informative prior distributions on hypergraphs are induced through this parametrization by priors on point configurations via spatial processes. This prior specification is used to infer conditional independence models or Markov structure of multivariate distributions. Specifically, we can recover both the junction tree factorization as well as the hyper Markov law. This approach offers greater control on the distribution of graph features than Erdős-Rényi random graphs, supports inference of factorizations that cannot be retrieved by a graph alone, and leads to new Metropolis-Hastings Markov chain Monte Carlo algorithms with both local and global moves in graph space. We illustrate the utility of this parametrization and prior specification using simulations.
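As a simplified illustration of graphs induced by point configurations, consider a plain random geometric graph in R^d (edges for point pairs within radius r); this is only a stand-in for, not the paper's, hypergraph parametrization.

```python
import numpy as np

def geometric_graph(points, r):
    """Edge {i,j} present iff ||x_i - x_j|| <= r; returns a boolean adjacency
    matrix.  Perturbing points gives local moves in graph space, while
    changing r gives global moves -- the intuition behind the MCMC moves."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (d <= r) & ~np.eye(len(points), dtype=bool)

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(30, 2))     # 30 points in the unit square
A_sparse = geometric_graph(pts, r=0.1)    # small radius: sparse graph
A_dense = geometric_graph(pts, r=0.9)     # large radius: near-complete graph
```

A prior on the point process then induces a prior on graphs, with graph features (degree, clustering) controlled through the spatial law rather than independent edge probabilities.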
Sparse covariance estimation in heterogeneous samples
, 2011
Cited by 1 (0 self)
Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogeneous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era.
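A heavily simplified finite stand-in for this idea, two-group clustering followed by cluster-specific precision estimates, might look like the sketch below; the ridge penalty and partial-correlation thresholding are illustrative shortcuts, not the paper's infinite-mixture machinery.

```python
import numpy as np

def cluster_precisions(X, iters=20, ridge=0.1, thresh=0.1):
    """Toy two-component stand-in for a mixture of Gaussian graphical models:
    2-means clustering, then a ridge-regularised precision matrix per cluster;
    an 'edge' is a partial correlation above a threshold in magnitude."""
    far = np.argmax(((X - X[0]) ** 2).sum(1))
    centers = X[[0, far]]                 # deterministic, well-separated init
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(2)])
    graphs = []
    for j in range(2):
        Xc = X[labels == j] - centers[j]
        S = Xc.T @ Xc / len(Xc) + ridge * np.eye(X.shape[1])
        K = np.linalg.inv(S)              # regularised precision estimate
        D = np.sqrt(np.diag(K))
        graphs.append(np.abs(K / np.outer(D, D)) > thresh)
    return labels, graphs

rng = np.random.default_rng(4)
# Cluster 1: variables 0 and 1 strongly dependent; cluster 2: independent.
X1 = rng.multivariate_normal([0, 0, 0], [[1, .8, 0], [.8, 1, 0], [0, 0, 1]], 200)
X2 = rng.multivariate_normal([5, 5, 5], np.eye(3), 200)
labels, graphs = cluster_precisions(np.vstack([X1, X2]))
```

The point carried over from the paper is only that each recovered cluster gets its own conditional independence structure; the Bayesian nonparametric treatment additionally infers the number of clusters and the graphs jointly.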
Autoregressive models for variance matrices: stationary inverse Wishart processes
We introduce and explore a new class of stationary time series models for variance matrices based on a constructive definition exploiting inverse Wishart distribution theory. The main class of models explored is a novel class of stationary, first-order autoregressive (AR) processes on the cone of positive semidefinite matrices. Aspects of the theory and structure of these new models for multivariate “volatility” processes are described in detail and exemplified. We then develop approaches to model fitting via Bayesian simulation-based computations, creating a custom filtering method that relies on an efficient innovations sampler. An example is then provided in analysis of a multivariate electroencephalogram (EEG) time series in neurological studies. We conclude by discussing potential further developments of higher-order AR models and a number of connections with prior approaches.
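For intuition only, a generic first-order recursion that keeps a variance-matrix path on the positive semidefinite cone can be simulated as below; this Wishart-innovation scheme is a well-known stand-in, not the paper's inverse Wishart construction, and all parameter choices are illustrative.

```python
import numpy as np

def psd_ar1(T, B, S_innov, df, rng):
    """Simulate Sigma_t = B Sigma_{t-1} B' + W_t with W_t ~ Wishart(df,
    S_innov/df).  Each step maps a PSD matrix to a PSD matrix, so every
    Sigma_t along the path remains positive semidefinite by construction."""
    p = B.shape[0]
    L = np.linalg.cholesky(S_innov / df)
    Sigma = np.eye(p)
    path = []
    for _ in range(T):
        Z = rng.standard_normal((p, df))
        W = L @ Z @ Z.T @ L.T            # Wishart(df, S_innov/df) draw
        Sigma = B @ Sigma @ B.T + W
        path.append(Sigma)
    return path

rng = np.random.default_rng(5)
B = 0.7 * np.eye(2)                      # contraction keeps the process stable
path = psd_ar1(200, B, S_innov=np.eye(2), df=10, rng=rng)
eigs = np.array([np.linalg.eigvalsh(S) for S in path])
```

The paper's construction instead obtains stationarity and tractable conditionals from inverse Wishart theory, which is what makes its custom filtering and innovations sampling feasible.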
Thesis Proposal: Nonparametric Hyper Markov Priors
, 2008
Markov distributions are used to describe multivariate data with conditional independence structure. Applications of Markov distributions arise in many fields including demography, flood prediction, and telecommunications. A hyper Markov law is a distribution over the space of all Markov distributions; such laws have been used as prior distributions for various types of graphical models. Dirichlet processes have also been used to specify priors in a nonparametric form. I have developed a family of nonparametric hyper Markov laws that I call hyper Dirichlet processes, which combine the separate ideas of hyper Markov laws and nonparametric prior processes. In my thesis, I propose to describe these distributions and their properties, and to apply them to specific problems. For example, I define a hyper Markov mixture of Gaussians and use it in the form of a hyper Markov prior to provide a nonparametric way to mix graphical Gaussian distributions.
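The Dirichlet-process ingredient can be recalled via Sethuraman's stick-breaking construction, truncated here; the Gaussian base measure is an arbitrary illustrative choice, unrelated to the thesis's hyper Markov setting.

```python
import numpy as np

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated stick-breaking: v_k ~ Beta(1, alpha) and
    w_k = v_k * prod_{j<k} (1 - v_j).  The weights sum to just under 1,
    with the (negligible, for large n_atoms) remainder in the tail."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    return v * np.cumprod(np.concatenate([[1.0], 1 - v[:-1]]))

rng = np.random.default_rng(6)
w = stick_breaking_weights(alpha=2.0, n_atoms=100, rng=rng)
atoms = rng.standard_normal(100)   # illustrative N(0,1) base-measure draws
# A (truncated) DP realisation is the random discrete measure
# sum_k w_k * delta_{atoms[k]}.
```

A hyper Dirichlet process, as proposed in the thesis, would replace the scalar atoms with Markov distributions, so that each atom carries a full conditional independence model.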
S. Banerjee and S. Ghosal
Biometrika
, 2013
We consider Bayesian estimation of a p × p precision matrix, when p can be much larger than the available sample size n. It is well known that consistent estimation in such ultra-high dimensional situations requires regularisation such as banding, tapering or thresholding. We consider a banding structure in the model and induce a prior distribution on a banded precision matrix through a Gaussian graphical model, where an edge is present only when two vertices are within a given distance. We show that under a very mild growth condition and a proper choice of the order of graph, the posterior distribution based on the graphical model is consistent in the L∞-operator norm uniformly over a class of precision matrices, even if the true precision matrix may not have a banded structure. Along the way to the proof, we also establish that the maximum likelihood estimator (MLE) is consistent under the same set of conditions, which is of independent interest. The consistency and the convergence rate of the precision matrix given the data are also studied. We also conduct a simulation study to compare finite sample performance of the Bayes estimator and the MLE based on the graphical model with that obtained by using a banding operation on the sample covariance matrix. Some key words: precision matrix, G-Wishart, posterior consistency, convergence rate.
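The banding operation used as the comparison estimator can be sketched directly as the usual k-banding operator applied to a sample covariance; this is the generic operator, not the paper's Bayes procedure, and the dimensions are chosen only to mimic the p > n regime.

```python
import numpy as np

def band_matrix(S, k):
    """k-banding operator: zero out entries with |i - j| > k.  In graph
    terms this keeps an edge only when two vertices are within distance k,
    matching the banded graphical structure described in the abstract."""
    p = S.shape[0]
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= k, S, 0.0)

rng = np.random.default_rng(7)
n, p = 50, 100                     # p larger than n: the ultra-high-dim regime
X = rng.standard_normal((n, p))
S = X.T @ X / n                    # singular sample covariance (rank <= n)
S_band = band_matrix(S, k=3)       # banded, hence regularised, estimate
```

The paper's alternative induces the band through a prior on a Gaussian graphical model rather than by hard-thresholding the sample covariance, and studies posterior consistency of the resulting estimate.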