Results 1–10 of 32
Ancestral Graph Markov Models
2002
Abstract

Cited by 76 (18 self)
This paper introduces a class of graphical independence models that is closed under marginalization and conditioning but that contains all DAG independence models. This class of graphs, called maximal ancestral graphs, has two attractive features: there is at most one edge between each pair of vertices; every missing edge corresponds to an independence relation. These features lead to a simple parameterization of the corresponding set of distributions in the Gaussian case.
A SINful approach to Gaussian graphical model selection
 Journal of Statistical Planning and Inference
Abstract

Cited by 25 (5 self)
Multivariate Gaussian graphical models are defined in terms of Markov properties, i.e., conditional independences associated with the underlying graph. Thus, model selection can be performed by testing these conditional independences, which are equivalent to specified zeroes among certain (partial) correlation coefficients. For concentration graphs, covariance graphs, acyclic directed graphs, and chain graphs (both LWF and AMP), we apply Fisher’s z-transformation, Šidák’s correlation inequality, and Holm’s step-down procedure to simultaneously test the multiple hypotheses obtained from the Markov properties. This leads to a simple method for model selection that controls the overall error rate for incorrect edge inclusion. In practice, we advocate partitioning the simultaneous p-values into three disjoint sets: a significant set S, an indeterminate set I, and a non-significant set N. Our SIN model selection method then selects two graphs: a graph whose edges correspond to the union of S and I, and a more conservative graph whose edges correspond to S only. Prior information about the presence and/or absence of particular edges can be incorporated readily.
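The pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper’s exact procedure: the function names, the two-sided normal p-value, and the thresholds `alpha` and `beta` for the S/I/N split are all hypothetical choices for this sketch.

```python
import math

def fisher_z_pvalue(r, n, k=0):
    # Fisher z-transform of a (partial) correlation r estimated from n
    # samples while conditioning on k variables; two-sided normal p-value.
    z = math.sqrt(n - k - 3) * math.atanh(r)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def sin_partition(pvals, alpha=0.05, beta=0.25):
    # Sidak-adjust the m raw p-values (p_sim = 1 - (1 - p)^m) and split
    # the edges into significant (S), indeterminate (I), and
    # non-significant (N) sets; thresholds here are illustrative.
    m = len(pvals)
    out = {"S": [], "I": [], "N": []}
    for edge, p in pvals.items():
        p_sim = 1 - (1 - p) ** m
        if p_sim <= alpha:
            out["S"].append(edge)
        elif p_sim <= beta:
            out["I"].append(edge)
        else:
            out["N"].append(edge)
    return out
```

The edges of the two selected graphs would then be `S + I` (liberal) and `S` alone (conservative).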
A new algorithm for maximum likelihood estimation in Gaussian graphical models for marginal independence
 In U. Kjærulff and C. Meek (Eds.), Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence
2003
Abstract

Cited by 15 (7 self)
Graphical models with bidirected edges (↔) represent marginal independence: the absence of an edge between two vertices indicates that the corresponding variables are marginally independent. In this paper, we consider maximum likelihood estimation in the case of continuous variables with a Gaussian joint distribution, sometimes termed a covariance graph model. We present a new fitting algorithm which exploits standard regression techniques, and we establish its convergence properties. Moreover, we contrast our procedure with existing estimation algorithms.
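As a minimal illustration of what a covariance graph model asserts (not the paper’s fitting algorithm, and with hypothetical numbers): missing edges constrain the corresponding entries of the covariance matrix Σ to zero, while the concentration matrix Σ⁻¹ is generally unconstrained there.

```python
# Toy covariance graph on vertices {0, 1, 2} with edges 0-1 and 1-2:
# the missing edge 0-2 forces the marginal covariance sigma[0][2] = 0,
# i.e. X0 and X2 are marginally independent under a Gaussian model.
sigma = [[1.0, 0.5, 0.0],
         [0.5, 1.0, 0.3],
         [0.0, 0.3, 1.0]]

def inv3(S):
    # Inverse of a 3x3 matrix via the adjugate (cofactor) formula.
    a, b, c = S[0]; d, e, f = S[1]; g, h, i = S[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

# Concentration matrix: its (0, 2) entry is nonzero here, so X0 and X2
# are dependent given X1 -- the opposite pattern to a concentration graph.
K = inv3(sigma)
```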
Partial inversion for linear systems and partial closure of independence graphs
 BIT, Numer. Math
Abstract

Cited by 14 (11 self)
We introduce and study a calculus for real-valued square matrices, called partial inversion, and an associated calculus for binary square matrices. The first, applied to systems of recursive linear equations, generates new sets of parameters for different types of statistical joint response models. The corresponding generating graphs are directed and acyclic. The second calculus, applied to matrix representations of independence graphs, gives chain graphs induced by such a generating graph. Chain graphs are more complex independence graphs associated with recursive joint response models. Missing edges in independence graphs coincide with structurally zero parameters in linear systems. A wide range of consequences of an assumed independence structure can be derived by partial closure, but computationally efficient algorithms still need to be developed for applications to very large graphs.
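One step of a partial-inversion-style operator can be sketched as a sweep-like pivot on a single index; the scalar form and the sign convention (negating the pivot row) are assumptions of this sketch and may differ from the paper’s calculus. Applying the step to every index in turn recovers the full matrix inverse, which is one of the operator’s characteristic properties.

```python
def partial_invert(M, k):
    # Pivot a square matrix (list of lists) on index k: in the linear
    # system M x = y this swaps the roles of x_k and y_k.  Requires a
    # nonzero pivot M[k][k].  Sign convention assumed for this sketch.
    n = len(M)
    p = M[k][k]
    row = M[k][:]
    col = [M[i][k] for i in range(n)]
    # Schur-complement update on the off-pivot block: m_ij - m_ik m_kj / m_kk
    N = [[M[i][j] - col[i] * row[j] / p for j in range(n)] for i in range(n)]
    for j in range(n):
        N[k][j] = -row[j] / p   # negated, rescaled pivot row
    for i in range(n):
        N[i][k] = col[i] / p    # rescaled pivot column
    N[k][k] = 1.0 / p
    return N
```

For a 2x2 matrix, pivoting first on index 0 and then on index 1 yields exactly the ordinary inverse.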
Cumulative distribution networks and the derivative-sum-product algorithm
Abstract

Cited by 12 (6 self)
We introduce a new type of graphical model called a ‘cumulative distribution network’ (CDN), which expresses a joint cumulative distribution as a product of local functions. Each local function can be viewed as providing evidence about possible orderings, or rankings, of variables. Interestingly, we find that the conditional independence properties of CDNs are quite different from those of other graphical models. We also describe a message-passing algorithm that efficiently computes conditional cumulative distributions. Due to the unique independence properties of the CDN, these messages do not in general have a one-to-one correspondence with the messages exchanged in standard algorithms such as belief propagation. We demonstrate the application of CDNs to structured ranking learning using a previously studied multiplayer gaming dataset.
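A toy instance of the factorized-CDF idea may help fix intuition. The local functions below are hypothetical, and the mixed partial derivative (which turns a CDF into a density) is taken numerically here; the derivative-sum-product algorithm organizes exactly this differentiation as message passing on the CDN graph.

```python
import math

def ncdf(t):
    # Standard normal CDF, used as a convenient monotone local function.
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

# Toy CDN on (x, y) with two local functions sharing y:
# F(x, y) = f1(x, y) * f2(y).  Each factor is nondecreasing and tends
# to 1, so the product is a valid joint CDF in this example.
def f1(x, y):
    return ncdf(x) * ncdf(y)      # hypothetical local function
def f2(y):
    return ncdf(y + 1)            # hypothetical local function

def F(x, y):
    return f1(x, y) * f2(y)

def density(x, y, h=1e-4):
    # Joint density = mixed partial d^2 F / dx dy, by central differences.
    return (F(x + h, y + h) - F(x + h, y - h)
            - F(x - h, y + h) + F(x - h, y - h)) / (4 * h * h)
```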
Multiple testing and error control in Gaussian graphical model selection
 Statistical Science
Abstract

Cited by 12 (2 self)
Graphical models provide a framework for exploration of multivariate dependence patterns. The connection between graph and statistical model is made by identifying the vertices of the graph with the observed variables and translating the pattern of edges in the graph into a pattern of conditional independences that is imposed on the variables’ joint distribution. Focusing on Gaussian models, we review classical graphical models. For these models the defining conditional independences are equivalent to the vanishing of certain (partial) correlation coefficients associated with individual edges that are absent from the graph. Hence, Gaussian graphical model selection can be performed by multiple testing of hypotheses about vanishing (partial) correlation coefficients. We show and exemplify how this approach allows one to perform model selection while controlling error rates for incorrect edge inclusion. Key words and phrases: acyclic directed graph, Bayesian network, bidirected graph, chain graph, concentration graph, covariance graph, DAG, graphical model, multiple testing, undirected graph.
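A standard ingredient for controlling the error rate in such multiple-testing-based selection is Holm’s step-down procedure, sketched below; the function name and example values are illustrative, not taken from the paper.

```python
def holm_stepdown(pvals, alpha=0.05):
    # Holm's step-down procedure: visit p-values in ascending order and
    # reject the i-th smallest while p_(i) <= alpha / (m - i); stop at the
    # first failure.  Controls the family-wise error rate at level alpha.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject
```

In the graphical-model setting, each p-value tests the vanishing of the (partial) correlation for one candidate edge, and the rejected hypotheses correspond to edges included in the selected graph.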
Discrete chain graph models
 Bernoulli
2009
Abstract

Cited by 11 (1 self)
The statistical literature discusses different types of Markov properties for chain graphs that lead to four possible classes of chain graph Markov models. The different models are rather well understood when the observations are continuous and multivariate normal, and it is also known that one model class, referred to as models of LWF (Lauritzen–Wermuth–Frydenberg) or block concentration type, yields discrete models for categorical data that are smooth. This paper considers the structural properties of the discrete models based on the three alternative Markov properties. It is shown by example that two of the alternative Markov properties can lead to nonsmooth models. The remaining model class, which can be viewed as a discrete version of multivariate regressions, is proven to comprise only smooth models. The proof employs a simple change of coordinates that also reveals that the model’s likelihood function is unimodal if the chain components of the graph are complete sets.
Graphical methods for efficient likelihood inference in Gaussian covariance models
 Journal of Machine Learning
2008
Abstract

Cited by 8 (3 self)
In graphical modelling, a bidirected graph encodes marginal independences among random variables that are identified with the vertices of the graph. We show how to transform a bidirected graph into a maximal ancestral graph that (i) represents the same independence structure as the original bidirected graph, and (ii) minimizes the number of arrowheads among all ancestral graphs satisfying (i). Here the number of arrowheads of an ancestral graph is the number of directed edges plus twice the number of bidirected edges. In Gaussian models, this construction can be used for more efficient iterative maximization of the likelihood function and to determine when maximum likelihood estimates are equal to their empirical counterparts.
Causal Reasoning in Graphical Time Series Models
Abstract

Cited by 8 (2 self)
We propose a definition of causality for time series in terms of the effect of an intervention in one component of a multivariate time series on another component at some later point in time. Conditions for identifiability, comparable to the back-door and front-door criteria, are presented and can also be verified graphically. Computation of the causal effect is derived and illustrated for the linear case.
The hidden life of latent variables: Bayesian learning with mixed graph models
2008
Abstract

Cited by 7 (3 self)
Directed acyclic graphs (DAGs) have been widely used as a representation of conditional independence in machine learning and statistics. Moreover, hidden or latent variables are often an important component of graphical models. However, DAG models suffer from an important limitation: the family of DAGs is not closed under marginalization of hidden variables. This means that in general we cannot use a DAG to represent the independencies over a subset of variables in a larger DAG. Directed mixed graphs (DMGs) are a representation that includes DAGs as a special case, and overcomes this limitation. This paper introduces algorithms for performing Bayesian inference in Gaussian and probit DMG models. An important requirement for inference is the characterization of the distribution over parameters of the models. We introduce a new distribution for covariance matrices of Gaussian DMGs. We discuss and illustrate how several Bayesian machine learning tasks can benefit from the principle presented here: the power to model dependencies that are generated from hidden variables, but without necessarily modelling such variables explicitly.