Results 11–20 of 34
Combining experiments to discover linear cyclic models with latent variables
In AISTATS, 2010
Cited by 5 (2 self)
We present an algorithm to infer causal relations between a set of measured variables on the basis of experiments on these variables. The algorithm assumes that the causal relations are linear, but is otherwise completely general: it provides consistent estimates when the true causal structure contains feedback loops and latent variables, while the experiments can involve surgical or ‘soft’ interventions on one or multiple variables at a time. The algorithm is ‘online’ in the sense that it combines the results from any set of available experiments, can incorporate background knowledge, and resolves conflicts that arise from combining results from different experiments. In addition, we provide a necessary and sufficient condition that (i) determines when the algorithm can uniquely return the true graph, and (ii) can be used to select the next best experiment until this condition is satisfied. We demonstrate the method by applying it to simulated data and the flow cytometry data of Sachs et al. (2005).
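The equilibrium semantics of such linear cyclic models can be sketched in a few lines. The following is an illustration only, not the paper's algorithm: the variable names, coefficients, and sample sizes are invented. Samples are drawn at the equilibrium x = (I − B)⁻¹(e + c), and a ‘soft’ intervention adds an exogenous shift c without severing any incoming edges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-variable linear cyclic model x = B x + e with a
# feedback loop x1 -> x2 -> x3 -> x1 (coefficients invented).
B = np.array([[0.0, 0.0, 0.4],
              [0.5, 0.0, 0.0],
              [0.0, 0.3, 0.0]])

def sample(n, soft_shift=None):
    """Draw equilibrium samples x = (I - B)^{-1} (e + c).

    A 'soft' intervention adds an exogenous shift c to selected
    variables without cutting their incoming edges.
    """
    e = rng.normal(size=(n, 3))
    if soft_shift is not None:
        e = e + soft_shift
    return e @ np.linalg.inv(np.eye(3) - B).T

obs = sample(10_000)                                    # observational regime
intv = sample(10_000, soft_shift=np.array([2.0, 0.0, 0.0]))

# A soft intervention on x1 shifts the equilibrium mean of the whole
# loop, because x1 feeds forward into x2 and x3.
print(obs.mean(axis=0).round(2))
print(intv.mean(axis=0).round(2))
```

Because the spectral radius of B is well below one here, the equilibrium exists and the intervened means solve x = Bx + c directly.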
Learning causal Bayesian networks from observations and experiments: A decision theoretic approach
In Modeling Decisions in Artificial Intelligence, LNCS, 2006
Cited by 3 (3 self)
We discuss a decision-theoretic approach to learning causal Bayesian networks from observational data and experiments. We use the information in observational data to learn a completed partially directed acyclic graph using a structure-learning technique, and try to discover the directions of the remaining edges by means of experiments. We show that our approach allows a causal Bayesian network to be learned optimally with respect to a number of decision criteria. Our method makes it possible to assign costs to each experiment and each measurement. We introduce an algorithm that actively adds the results of experiments so that arcs can be directed during learning. A numerical example is given as a demonstration of the techniques.
Informative interventions
2006
Cited by 3 (1 self)
Causal discovery programs have met with considerable enthusiasm in the AI and data mining communities. Amongst philosophers they have met a more mixed response, with numerous skeptics pointing out weaknesses in their assumptions. Some criticize the reliance upon faithfulness (the idea that every causal connection will result in probabilistic dependence), since the true model may in fact be unfaithful (Cartwright, 2001). Despite a common, self-imposed restriction to observational data in causal discovery, the intervention account of causality (Pearl, 2000; Spirtes et al., 2000) suggests that the inclusion of intervention data may alleviate this concern. Korb and Nyberg (2006) established that, for linear networks, even underwhelming interventions (that never overwhelm other influences) have sufficient power to overcome unfaithfulness and go beyond the limits of observational data to identify the true model. Here we extend those results to discrete networks, which present the added difficulty that they can be unfaithful along a single path (as noted, e.g., by Hitchcock, 2001). In doing so, we illustrate both unfaithful chains and unfaithful collisions, give mathematical criteria for such interactions, make some recommendations for diagnosing unfaithfulness and designing informative interventions, and finally demonstrate the power of both one and N − 1 underwhelming interventions.
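Path cancellation of the kind this abstract describes is easy to reproduce numerically. The sketch below is illustrative only (coefficients and variable names are invented, and for simplicity it uses a surgical rather than an underwhelming intervention): a direct X → Z effect is tuned to exactly cancel the indirect X → Y → Z path, so observational data show no X–Z correlation, while randomizing Y breaks the cancellation and exposes the direct edge.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical unfaithful linear model: X -> Y -> Z plus a direct
# X -> Z edge with coefficient -a*b, chosen so the two paths cancel.
a, b = 0.8, 0.5
x = rng.normal(size=n)
y = a * x + rng.normal(size=n)
z = b * y - a * b * x + rng.normal(size=n)

corr_obs = np.corrcoef(x, z)[0, 1]   # ~ 0 despite X being a cause of Z

# Surgically intervening on Y (randomizing it) cuts the indirect
# path; the cancellation disappears and the X -> Z edge shows up.
y_do = rng.normal(size=n)
z_do = b * y_do - a * b * x + rng.normal(size=n)
corr_do = np.corrcoef(x, z_do)[0, 1]

print(round(corr_obs, 3), round(corr_do, 3))
```

Under the intervention, the observed correlation reflects only the direct coefficient −a·b, which is exactly the dependence that faithfulness-based observational search would have missed.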
Causal discovery for linear cyclic models with latent variables
Cited by 3 (2 self)
We consider the problem of identifying the causal relationships among a set of variables in the presence of both feedback loops and unmeasured confounders. This is a challenging task which, for full identification, typically requires the use of randomized experiments. For linear systems, Eberhardt et al. (2010) recently provided a procedure for integrating data from several experiments, and gave a corresponding, but demanding, identifiability condition. In this paper we (i) characterize the underdetermination of the model when the identifiability condition is not fully satisfied, (ii) show that their algorithm is complete with regard to the search space and the assumptions, and (iii) extend the procedure to incorporate the common assumption of faithfulness, and any prior knowledge. The resulting method typically resolves much additional structure and often yields full identification with many fewer experiments. We demonstrate our procedure using simulated data, and apply it to the protein signaling dataset of Sachs et al. (2005).
Sufficient condition for pooling data from different distributions
In First Symposium on Philosophy, History, and Methodology of Error, 2006
Cited by 2 (0 self)
We consider the problems arising from using sequences of experiments to discover the causal structure among a set of variables, none of which is known ahead of time to be an “outcome”. In particular, we present various approaches to resolving conflicts in the experimental results that arise from sampling variability in the experiments. We provide a sufficient condition that allows for pooling of data from experiments with different joint distributions over the variables. Satisfaction of the condition allows for more powerful independence tests that may resolve some of the conflicts in the experimental results. The pooling condition has its own problems, but should – due to its generality – be informative to techniques for meta-analysis.
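The pooling idea can be illustrated with a toy linear example; the names, coefficients, and sample sizes below are invented, and this is a sketch of the general intuition rather than the paper's condition itself. Two experiments induce different marginal distributions over X but share the conditional P(Y | X), so their samples can be pooled when estimating that mechanism, yielding a more powerful estimate than either experiment alone.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical experiments with different distributions over X
# (X randomized with different variances) but an invariant mechanism
# Y = 1.5*X + noise shared by both regimes.
def run_experiment(n, x_scale):
    x = rng.normal(scale=x_scale, size=n)
    y = 1.5 * x + rng.normal(size=n)
    return x, y

x1, y1 = run_experiment(500, 1.0)
x2, y2 = run_experiment(500, 3.0)

x_pool = np.concatenate([x1, x2])
y_pool = np.concatenate([y1, y2])

# The OLS slope from the pooled sample consistently estimates the
# shared coefficient, even though the pooled marginal of X is a
# mixture that matches neither experiment.
slope = np.cov(x_pool, y_pool)[0, 1] / np.var(x_pool, ddof=1)
print(round(slope, 2))
```

Pooling the raw marginals of X here would be meaningless; it is the invariance of the conditional mechanism that licenses combining the samples.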
Causal models, value of intervention, and search for opportunities
In Proceedings of the First European Workshop on Probabilistic Graphical Models (PGM’02), 2002
Cited by 1 (1 self)
While algorithms for influence diagrams allow for computing the optimal settings of decision variables, they offer no guidance in the generation of decision variables, arguably a critical stage of decision making. A decision maker confronted with a complex system may not know which variables are best manipulated to achieve a desired objective. We introduce the concept of search for opportunities, which amounts to identifying the set of decision variables and computing their optimal settings, given an objective expressed by a utility function. Search for opportunities is built on the value of intervention in causal models. Key words: causal models, value of intervention, search for opportunities, causal Bayesian networks, structural equation models.
Learning linear cyclic causal models with latent variables. Submitted. Available online from the authors’ homepages
2012
Cited by 1 (1 self)
Identifying cause-effect relationships between variables of interest is a central problem in science. Given a set of experiments, we describe a procedure that identifies linear models that may contain cycles and latent variables. We provide a detailed description of the model family, full proofs of the necessary and sufficient conditions for identifiability, a search algorithm that is complete, and a discussion of what can be done when the identifiability conditions are not satisfied. The algorithm is comprehensively tested in simulations, comparing it to competing algorithms in the literature. Furthermore, we adapt the procedure to the problem of cellular network inference, applying it to the biologically realistic data of the DREAM challenges. The paper provides a full theoretical foundation for the causal discovery procedure first presented by Eberhardt et al. (2010) and Hyttinen et al. (2010).
Using a Dynamic Bayesian Network to Learn Genetic Interactions
In this paper, we evaluate a method for analyzing microarray data. The method is an attempt to learn regulatory interactions between genes from gene expression data. It is based on a Bayesian network, which is a mathematical tool for modeling conditional independences between stochastic variables. We review the dynamic nature of interacting genes, and explain how to model them using such a network.
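The dynamic idea can be sketched with a minimal two-gene example; the genes, coefficients, and lag structure below are invented for illustration and are not the paper's model. Expression at time t depends on expression at t−1, and lagged correlations reflect the direction of regulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy first-order model for two hypothetical genes: gene B's
# expression at time t depends on gene A's expression at t-1,
# while A evolves autonomously.
T = 5_000
a = np.zeros(T)
b = np.zeros(T)
for t in range(1, T):
    a[t] = 0.8 * a[t - 1] + rng.normal(scale=0.5)
    b[t] = 0.6 * a[t - 1] + rng.normal(scale=0.5)

# The lagged correlation between A(t-1) and B(t) reflects the
# regulatory edge; B does not feed back into A, so the reverse
# lag is comparatively weak.
lag_fwd = np.corrcoef(a[:-1], b[1:])[0, 1]
lag_rev = np.corrcoef(b[:-1], a[1:])[0, 1]
print(round(lag_fwd, 2), round(lag_rev, 2))
```

A dynamic Bayesian network generalizes this by learning which lagged dependencies are present from the time-series expression data rather than assuming them.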
Unsupervised Active Learning in Large Domains
2002
Active learning is a powerful approach to analyzing data effectively. We show that the feasibility of active learning depends crucially on the choice of the measure with respect to which the query is being optimized. The standard information gain, for example, does not permit an accurate evaluation with a small committee, a representative subset of the model space. We propose a surrogate measure requiring only a small committee and discuss the properties of this new measure.
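A small-committee query strategy of the general kind this abstract discusses can be sketched as follows. This is an illustration of query-by-committee disagreement, not the paper's surrogate measure; the classifier, data, and committee size are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit(x, y):
    # Least-squares line through (x, +/-1 labels) as a toy classifier.
    X = np.column_stack([x, np.ones(len(x))])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Labeled data from a hypothetical threshold concept at x = 0.1.
x_lab = rng.uniform(-1, 1, size=20)
y_lab = np.where(x_lab > 0.1, 1.0, -1.0)

# A small committee trained on bootstrap resamples of the labels.
committee = []
for _ in range(5):
    idx = rng.integers(0, len(x_lab), size=len(x_lab))
    committee.append(fit(x_lab[idx], y_lab[idx]))

# Query the pool point on which the committee's votes are most split.
pool = np.linspace(-1, 1, 201)
votes = np.array([np.where(w[0] * pool + w[1] > 0, 1.0, -1.0)
                  for w in committee])
p_plus = (votes > 0).mean(axis=0)
disagreement = np.minimum(p_plus, 1 - p_plus)   # minority-vote fraction
query = pool[int(np.argmax(disagreement))]
print(round(float(query), 2))
```

In practice the committee would be retrained after each labeled query; the point here is only that committee disagreement, rather than raw information gain over the full model space, drives the choice of query.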