P.: A transformational characterization of Markov equivalence for directed acyclic graphs with latent variables. In: Proc. of the 21st Conference on Uncertainty in Artificial Intelligence (UAI), 2005
Abstract

Cited by 3 (1 self)
The conditional independence relations present in a data set usually admit multiple causal explanations — typically represented by directed graphs — which are Markov equivalent in that they entail the same conditional independence relations among the observed variables. Markov equivalence between directed acyclic graphs (DAGs) has been characterized in various ways, each of which has been found useful for certain purposes. In particular, Chickering’s transformational characterization is useful in deriving properties shared by Markov equivalent DAGs and, with a certain generalization, is needed to justify a search procedure over Markov equivalence classes known as the GES algorithm. Markov equivalence between DAGs with latent variables has also been characterized, in the spirit of Verma and Pearl (1990), via maximal ancestral graphs (MAGs), which can represent the observable conditional independence relations as well as some causal features of DAG models with latent variables. However, no characterization of Markov equivalent MAGs is yet available that is analogous to the transformational characterization for Markov equivalent DAGs. The main contribution of the current paper is to establish such a characterization for directed MAGs, which we expect will have uses similar to those Chickering’s characterization has for DAGs.
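For DAGs without latent variables, the Markov equivalence the abstract invokes has a simple graphical test due to Verma and Pearl (1990): two DAGs are Markov equivalent iff they share the same skeleton and the same v-structures. A minimal pure-Python sketch of that test (the edge-set representation and function names are illustrative, not taken from the paper):

```python
# Sketch: Markov equivalence of DAGs (no latents) via the Verma-Pearl
# criterion: same skeleton and same v-structures. A DAG is a set of
# directed edges (tail, head).

def skeleton(edges):
    """Undirected adjacencies: one frozenset {a, b} per directed edge."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Unshielded colliders a -> c <- b with a and b non-adjacent."""
    parents = {}
    for a, b in edges:
        parents.setdefault(b, set()).add(a)
    adj = skeleton(edges)
    vs = set()
    for c, ps in parents.items():
        for a in ps:
            for b in ps:
                if a < b and frozenset((a, b)) not in adj:
                    vs.add((a, c, b))
    return vs

def markov_equivalent(g1, g2):
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

# a -> b -> c and a <- b <- c both entail only (a independent of c given b),
# so they are Markov equivalent; the collider a -> b <- c is not.
chain    = {("a", "b"), ("b", "c")}
reverse  = {("b", "a"), ("c", "b")}
collider = {("a", "b"), ("c", "b")}
print(markov_equivalent(chain, reverse))   # True
print(markov_equivalent(chain, collider))  # False
```

Chickering’s transformational characterization refines this static criterion: it shows any two equivalent DAGs are connected by a sequence of single covered-edge reversals, which is the property the paper extends to MAGs.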
Maximum likelihood fitting of acyclic directed mixed graphs to binary data
In: Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI), 2010
Learning Linear Bayesian Networks with Latent Variables
Abstract

Cited by 1 (0 self)
This work considers the problem of learning linear Bayesian networks when some of the variables are unobserved. Identifiability and efficient recovery from low-order observable moments are established under a novel graphical constraint. The constraint concerns the expansion properties of the underlying directed acyclic graph (DAG) between observed and unobserved variables in the network, and it is satisfied by many natural families of DAGs, including multilevel DAGs, DAGs with effective depth one, and certain families of polytrees.
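The flavor of such an expansion constraint can be illustrated with a brute-force Hall-type check on the bipartite latent-to-observed part of the graph. This is a simplified stand-in, not the paper's condition: the paper requires expansion by a structural margin, whereas the sketch below only tests that every set of latents has at least as many observed neighbors, and all variable names are hypothetical.

```python
from itertools import combinations

# Sketch: brute-force Hall-type expansion check on a bipartite
# latent -> observed neighborhood map. Exponential in the number of
# latents, so for illustration only.

def expands(neighbors):
    """neighbors: dict mapping each latent node to its set of observed children.
    Returns True iff every nonempty set S of latents satisfies |N(S)| >= |S|."""
    latents = list(neighbors)
    for k in range(1, len(latents) + 1):
        for subset in combinations(latents, k):
            reached = set().union(*(neighbors[h] for h in subset))
            if len(reached) < len(subset):
                return False
    return True

# Two latents with partially overlapping children expand; two latents
# sharing a single observed child do not.
good = {"h1": {"x1", "x2"}, "h2": {"x2", "x3"}}
bad  = {"h1": {"x1"}, "h2": {"x1"}}
print(expands(good))  # True
print(expands(bad))   # False
```

Intuitively, without enough distinct observed descendants per set of latents, different latent structures become observationally indistinguishable, which is why identifiability results need some expansion assumption.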
A Characterization of Markov Equivalence Classes for Directed Acyclic Graphs with Latent Variables
Abstract
Different directed acyclic graphs (DAGs) may be Markov equivalent in the sense that they entail the same conditional independence relations among the observed variables. Meek (1995) characterizes Markov equivalence classes for DAGs (with no latent variables) by presenting a set of orientation rules that can correctly identify all arrow orientations shared by all DAGs in a Markov equivalence class, given a member of that class. For DAG models with latent variables, maximal ancestral graphs (MAGs) provide a neat representation that facilitates model search. Earlier work (Ali et al. 2005) identified a set of orientation rules sufficient to construct all arrowheads common to a Markov equivalence class of MAGs. In this paper, we provide extra rules sufficient to construct all common tails as well. The result is a set of orientation rules that is sound and complete for identifying the commonalities across a Markov equivalence class of MAGs, which is particularly useful for causal inference.
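As a concrete illustration of what an orientation rule looks like, here is a sketch of Meek's first rule for the latent-free DAG case: if a -> b is directed and b - c is undirected with a and c non-adjacent, orient b -> c, since the alternative c -> b would create a new v-structure. The MAG rules discussed in the paper are analogous in spirit but differ in detail; the data representation below is an assumption for illustration.

```python
# Sketch: Meek's rule R1 (1995), applied to a fixed point on a partially
# directed graph. Directed edges are (tail, head) pairs; undirected edges
# are frozensets {u, v}.

def apply_meek_r1(directed, undirected):
    directed, undirected = set(directed), set(undirected)
    changed = True
    while changed:
        changed = False
        for a, b in list(directed):
            for edge in list(undirected):
                if edge not in undirected or b not in edge:
                    continue
                c = next(x for x in edge if x != b)
                adjacent = (frozenset((a, c)) in undirected
                            or (a, c) in directed or (c, a) in directed)
                if c != a and not adjacent:
                    # Orienting c -> b would create a v-structure a -> b <- c,
                    # so every DAG in the class must have b -> c.
                    undirected.discard(edge)
                    directed.add((b, c))
                    changed = True
    return directed, undirected

# a -> b plus undirected b - c, with a and c non-adjacent: R1 fires.
d, u = apply_meek_r1({("a", "b")}, {frozenset(("b", "c"))})
print(sorted(d))  # [('a', 'b'), ('b', 'c')]
print(u)          # set()
```

Meek's full procedure iterates four such rules to closure; the paper's contribution is the analogous (and larger) rule set needed for MAGs, where both arrowheads and tails must be oriented.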