Results 1 - 3 of 3
A theoretical study of Y structures for causal discovery
Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2006
Abstract

Cited by 5 (3 self)
Causal discovery from observational data in the presence of unobserved variables is challenging. Identification of so-called Y substructures is a sufficient condition for ascertaining some causal relations in the large-sample limit, without the assumption of no hidden common causes. An example of a Y substructure is A → C, B → C, C → D. This paper describes the first asymptotically reliable and computationally feasible score-based search for discrete Y structures that does not assume that there are no unobserved common causes. It applies to any parameterization of a directed acyclic graph (DAG) whose scores have the property that any DAG that can represent the distribution beats any DAG that cannot, and, of two DAGs that both represent the distribution, the one with fewer parameters wins. In this framework there is no need to assign scores to causal structures with unobserved common causes. The paper also describes how the existence of a Y structure shows the presence of an unconfounded causal relation, without assuming that there are no hidden common causes.
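The abstract's example can be made concrete. Below is a minimal sketch (not the paper's algorithm; the dictionary representation and function name are illustrative) that checks whether four nodes of a DAG form a Y substructure A → C, B → C, C → D. Following the usual Y-structure conditions, it also requires that A and B are non-adjacent and are not parents of D, which is what lets the C → D relation be read as unconfounded:

```python
# Illustrative sketch: detect the Y substructure A -> C, B -> C, C -> D
# in a DAG represented as a dict mapping each node to its set of parents.

def is_y_structure(parents, a, b, c, d):
    """Return True if (a, b, c, d) form a Y substructure in the DAG."""
    def adjacent(x, y):
        return x in parents.get(y, set()) or y in parents.get(x, set())

    return (
        a in parents[c] and b in parents[c]              # A -> C and B -> C
        and c in parents[d]                              # C -> D
        and not adjacent(a, b)                           # A, B non-adjacent
        and a not in parents[d] and b not in parents[d]  # A, B not parents of D
    )

dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(is_y_structure(dag, "A", "B", "C", "D"))  # True for this example
```

With an added edge A → B the check fails, since A and B must be non-adjacent for the structure to rule out a hidden common cause of C and D.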
Causal Discovery Algorithms based on Y Structures
Abstract

Cited by 2 (0 self)
Discovering relationships of the form "A causally influences B" is valuable in different fields of study. These relationships are also referred to as "cause and effect" relationships, where A represents the cause and B denotes …
Bayesian Algorithms for Causal Data Mining
Abstract
We present two Bayesian algorithms, CDB and CDH, for discovering unconfounded cause-and-effect relationships from observational data without assuming causal sufficiency, which precludes hidden common causes for the observed variables. The CDB algorithm first estimates the Markov blanket of a node X using a Bayesian greedy search method and then applies Bayesian scoring methods to discriminate the parents and children of X. Using the sets of parents and children, CDB constructs a global Bayesian network and outputs the causal effects of a node X based on the identification of Y arcs. Recall that if a node X has two parent nodes A, B and a child node C such that there is no arc between A and B, and A, B are not parents of C, then the arc from X to C is called a Y arc. The CDH algorithm uses the MMPC algorithm to estimate the union of parents and children of a target node X; its subsequent steps are similar to those of CDB. We evaluated the CDB and CDH algorithms empirically on simulated data from four different Bayesian networks. We also present comparative results based on the identification of Y structures and Y arcs from the output of the PC, MMHC, and FCI algorithms. The results appear promising for mining causal relationships that are unconfounded by hidden variables from observational data.
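The Y-arc definition in the abstract lends itself to a direct enumeration. The sketch below (an illustrative reading of the stated definition, not the CDB/CDH implementation) scans a network given as parent sets and returns every arc X → C where X has two non-adjacent parents A, B that are not themselves parents of C:

```python
# Illustrative sketch: enumerate Y arcs in a network given as a dict
# mapping each node to its set of parents, per the definition above.
from itertools import combinations

def y_arcs(parents):
    """Return the set of Y arcs (X, C) in the network."""
    # Build the child map by inverting the parent sets.
    children = {}
    for node, ps in parents.items():
        for p in ps:
            children.setdefault(p, set()).add(node)

    def adjacent(u, v):
        return u in parents.get(v, set()) or v in parents.get(u, set())

    arcs = set()
    for x, ps in parents.items():
        for a, b in combinations(ps, 2):
            if adjacent(a, b):
                continue  # A and B must have no arc between them
            for c in children.get(x, set()):
                if a not in parents[c] and b not in parents[c]:
                    arcs.add((x, c))  # X -> C is a Y arc
    return arcs
```

For the four-node network A → C, B → C, C → D this returns the single Y arc (C, D); adding an arc between A and B removes it.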