Results 1-10 of 20
A SINful approach to Gaussian graphical model selection
 Journal of Statistical Planning and Inference
Cited by 25 (5 self)
Multivariate Gaussian graphical models are defined in terms of Markov properties, i.e., conditional independences associated with the underlying graph. Thus, model selection can be performed by testing these conditional independences, which are equivalent to specified zeroes among certain (partial) correlation coefficients. For concentration graphs, covariance graphs, acyclic directed graphs, and chain graphs (both LWF and AMP), we apply Fisher's z-transformation, Šidák's correlation inequality, and Holm's step-down procedure to simultaneously test the multiple hypotheses obtained from the Markov properties. This leads to a simple method for model selection that controls the overall error rate for incorrect edge inclusion. In practice, we advocate partitioning the simultaneous p-values into three disjoint sets: a significant set S, an indeterminate set I, and a non-significant set N. Our SIN model selection method then selects two graphs: a graph whose edges correspond to the union of S and I, and a more conservative graph whose edges correspond to S only. Prior information about the presence and/or absence of particular edges can be incorporated readily.
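The testing pipeline this abstract describes (partial correlations, Fisher's z-transform, simultaneous p-values, S/I/N partition) can be sketched for a concentration graph. The function name and thresholds below are illustrative assumptions, and only the Šidák adjustment is applied (Holm's step-down is omitted for brevity); this is not the authors' implementation.

```python
import numpy as np
from scipy import stats

def sin_selection(X, lower=0.05, upper=0.25):
    """Illustrative sketch of SIN-style selection for a concentration
    graph.  Tests H0: rho_ij.rest = 0 for every pair (i, j) via
    Fisher's z-transform, Sidak-adjusts the p-values, and partitions
    the edges into significant (S), indeterminate (I), and
    non-significant (N) sets.  Thresholds are hypothetical."""
    n, p = X.shape
    # Partial correlations from the scaled inverse sample covariance.
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)

    m = p * (p - 1) // 2          # number of simultaneous hypotheses
    S, I, N = [], [], []
    for i in range(p):
        for j in range(i + 1, p):
            # Fisher z-transform; p - 2 variables are conditioned on.
            z = np.sqrt(n - (p - 2) - 3) * np.arctanh(pcorr[i, j])
            p_raw = 2 * stats.norm.sf(abs(z))
            p_adj = 1 - (1 - p_raw) ** m   # Sidak simultaneous p-value
            (S if p_adj < lower else I if p_adj < upper else N).append((i, j))
    return S, I, N

# Toy data: the second variable is almost a copy of the first,
# the third is independent noise.
rng = np.random.default_rng(0)
x1, x3 = rng.normal(size=500), rng.normal(size=500)
X = np.column_stack([x1, x1 + 0.1 * rng.normal(size=500), x3])
S, I, N = sin_selection(X)   # the edge between variables 0 and 1 lands in S
```

The two selected graphs are then the edge sets S ∪ I and, more conservatively, S alone.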
Iterative conditional fitting for Gaussian ancestral graph models
 In M. Chickering and J. Halpern (Eds.), Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence
, 2004
Cited by 18 (6 self)
Ancestral graph models, introduced by Richardson and Spirtes (2002), generalize both Markov random fields and Bayesian networks to a class of graphs with a global Markov property that is closed under conditioning and marginalization. By design, ancestral graphs encode precisely the conditional independence structures that can arise from Bayesian networks with selection and unobserved (hidden/latent) variables. Thus, ancestral graph models provide a potentially very useful framework for exploratory model selection when unobserved variables might be involved in the data-generating process but no particular hidden structure can be specified. In this paper, we present the Iterative Conditional Fitting (ICF) algorithm for maximum likelihood estimation in Gaussian ancestral graph models. The name reflects that in each step of the procedure a conditional distribution is estimated, subject to constraints, while a marginal distribution is held fixed. This approach is dual to the well-known Iterative Proportional Fitting algorithm, in which marginal distributions are fitted while conditional distributions are held fixed.
Markov equivalence for ancestral graphs
, 2004
Cited by 16 (5 self)
Ancestral graph models can encode conditional independence relations that arise in directed acyclic graph (DAG) models with latent and selection variables. However, for any ancestral graph, there may be several other graphs to which it is Markov equivalent. We state and prove conditions under which two maximal ancestral graphs are Markov equivalent to each other, thereby extending analogous results for DAGs given by other authors. University of Washington Technical Report No. 466.
A new algorithm for maximum likelihood estimation in Gaussian graphical models for marginal independence
 In U. Kjærulff and C. Meek (Eds.), Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence
, 2003
Cited by 15 (7 self)
Graphical models with bidirected edges (↔) represent marginal independence: the absence of an edge between two vertices indicates that the corresponding variables are marginally independent. In this paper, we consider maximum likelihood estimation in the case of continuous variables with a Gaussian joint distribution, sometimes termed a covariance graph model. We present a new fitting algorithm which exploits standard regression techniques and establish its convergence properties. Moreover, we contrast our procedure with existing estimation algorithms.
Covariance Chains
 Bernoulli
, 2006
Cited by 12 (8 self)
Covariance matrices which can be arranged in tridiagonal form are called covariance chains. They are used to clarify some issues of parameter equivalence and of independence equivalence for linear models in which a set of latent variables influences a set of observed variables. For this purpose, orthogonal decompositions for covariance chains are derived first in explicit form. Covariance chains are also contrasted to concentration chains, for which estimation is explicit and simple. For this purpose, maximum-likelihood equations are derived first for exponential families when some parameters satisfy zero value constraints. From these equations explicit estimates are obtained, which are asymptotically efficient, and they are applied to covariance chains. Simulation results confirm the satisfactory behaviour of the explicit covariance chain estimates also in moderate-size samples.
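As a minimal numerical illustration of the object in question (the entries below are arbitrary): a tridiagonal covariance matrix is a covariance chain, its zero entries encode marginal independences, and its inverse is generally dense, so a covariance chain is typically not also a concentration chain.

```python
import numpy as np

# A hypothetical 4-variable covariance chain: tridiagonal covariance.
Sigma = np.array([
    [1.0, 0.5, 0.0, 0.0],
    [0.5, 1.0, 0.4, 0.0],
    [0.0, 0.4, 1.0, 0.3],
    [0.0, 0.0, 0.3, 1.0],
])

# Symmetric and strictly diagonally dominant, hence positive definite,
# so this is a valid Gaussian covariance; the zeros say, e.g.,
# that variables 1 and 3 are marginally independent.
K = np.linalg.inv(Sigma)
print(np.round(K, 3))   # the concentration matrix is dense
```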
Maximum Likelihood Estimation in Gaussian AMP Chain Graph Models and Gaussian Ancestral Graph Models
, 2004
Cited by 12 (8 self)
The AMP Markov property is a recently proposed alternative Markov property for chain graphs. In the case of continuous variables with a joint multivariate Gaussian distribution, it is the AMP rather than the earlier introduced LWF Markov property that is coherent with data-generation by natural block-recursive regressions. In this paper, we show that maximum likelihood estimates in Gaussian AMP chain graph models can be obtained by combining generalized least squares and iterative proportional fitting into an iterative algorithm. In an appendix, we give useful convergence results for iterative partial maximization algorithms that apply in particular to the described algorithm. Key words: AMP chain graph, graphical model, iterative partial maximization, multivariate normal distribution, maximum likelihood estimation
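The iterative proportional fitting ingredient mentioned in this abstract is classical and can be shown in isolation for an undirected (concentration-graph) Gaussian model. The sketch below follows the textbook clique-marginal update; it is not the paper's combined GLS/IPF chain-graph algorithm.

```python
import numpy as np

def gaussian_ipf(S, cliques, sweeps=200):
    """Textbook iterative proportional fitting for a Gaussian
    concentration-graph model.  Each clique update forces the fitted
    marginal covariance on clique C to equal the sample marginal
    S[C, C], while leaving entries of the concentration matrix K
    outside the block C x C untouched."""
    K = np.eye(S.shape[0])
    for _ in range(sweeps):
        for C in cliques:
            idx = np.ix_(C, C)
            Sigma = np.linalg.inv(K)      # current fitted covariance
            K[idx] += np.linalg.inv(S[idx]) - np.linalg.inv(Sigma[idx])
    return K

# Undirected path 0 - 1 - 2 (edge 0-2 absent); hypothetical sample covariance.
S = np.array([[2.0, 1.0, 0.5],
              [1.0, 2.0, 1.0],
              [0.5, 1.0, 2.0]])
K = gaussian_ipf(S, cliques=[[0, 1], [1, 2]])   # K[0, 2] stays exactly 0
```

At convergence the fitted covariance inv(K) matches S on every clique, while K keeps its zeros over the missing edges.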
Graphical methods for efficient likelihood inference in Gaussian covariance models
 Journal of Machine Learning
, 2008
Cited by 8 (3 self)
In graphical modelling, a bidirected graph encodes marginal independences among random variables that are identified with the vertices of the graph. We show how to transform a bidirected graph into a maximal ancestral graph that (i) represents the same independence structure as the original bidirected graph, and (ii) minimizes the number of arrowheads among all ancestral graphs satisfying (i). Here the number of arrowheads of an ancestral graph is the number of directed edges plus twice the number of bidirected edges. In Gaussian models, this construction can be used for more efficient iterative maximization of the likelihood function and to determine when maximum likelihood estimates are equal to empirical counterparts.
Counting and locating the solutions of polynomial systems of maximum likelihood equations, II: The Behrens-Fisher problem
, 2006
Cited by 7 (0 self)
Let µ be a p-dimensional vector, and let Σ1 and Σ2 be p × p positive definite covariance matrices. On being given random samples of sizes N1 and N2 from independent multivariate normal populations Np(µ, Σ1) and Np(µ, Σ2), respectively, the Behrens-Fisher problem is to solve the likelihood equations for estimating the unknown parameters µ, Σ1, and Σ2. It is well-known that the likelihood equations cannot be solved explicitly, and this has led to many different approaches to the Behrens-Fisher problem, with a commensurately large number of publications on the topic. We prove that for N1, N2 > p, there are, almost surely, exactly 3^p real or complex solutions of the likelihood equations. We propose a new iterative algorithm for solving the system of likelihood equations. For the case in which p = 2, we utilize Monte Carlo simulation to estimate how frequently a typical Behrens-Fisher problem is likely to have multiple real solutions; we find that multiple real solutions occur infrequently.
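The likelihood equations referred to here admit a natural alternating scheme: hold µ fixed to update each Σk, then update µ by precision-weighted averaging of the two sample means. The sketch below is one such fixed-point iteration under standard assumptions; it is not necessarily the new algorithm proposed in the paper.

```python
import numpy as np

def behrens_fisher_mle(X1, X2, tol=1e-10, max_iter=500):
    """Hypothetical fixed-point iteration for the Behrens-Fisher
    likelihood equations.  Alternates
        Sigma_k = S_k + (xbar_k - mu)(xbar_k - mu)^T,  k = 1, 2,
        mu = (N1 P1 + N2 P2)^{-1} (N1 P1 xbar1 + N2 P2 xbar2),
    where P_k = Sigma_k^{-1} and S_k is the (biased) MLE sample
    covariance of sample k."""
    N1, N2 = len(X1), len(X2)
    xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False, bias=True)
    S2 = np.cov(X2, rowvar=False, bias=True)
    mu = (N1 * xbar1 + N2 * xbar2) / (N1 + N2)   # pooled-mean start
    for _ in range(max_iter):
        d1, d2 = xbar1 - mu, xbar2 - mu
        Sig1 = S1 + np.outer(d1, d1)
        Sig2 = S2 + np.outer(d2, d2)
        P1, P2 = np.linalg.inv(Sig1), np.linalg.inv(Sig2)
        mu_new = np.linalg.solve(N1 * P1 + N2 * P2,
                                 N1 * P1 @ xbar1 + N2 * P2 @ xbar2)
        done = np.max(np.abs(mu_new - mu)) < tol
        mu = mu_new
        if done:
            break
    return mu, Sig1, Sig2

# Hypothetical data with N1, N2 > p = 2 and unequal covariances.
rng = np.random.default_rng(1)
X1 = rng.normal(size=(60, 2))
X2 = 0.5 + 2.0 * rng.normal(size=(40, 2))
mu, Sig1, Sig2 = behrens_fisher_mle(X1, X2)
```

This iteration finds one real solution; the paper's point is that the full polynomial system almost surely has 3^p solutions over the complex numbers, of which a simple scheme like this recovers only one.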