Results 1–10 of 101
Nonlinear causal discovery with additive noise models
Cited by 79 (31 self)
Abstract: The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data, linear acyclic causal models with additive noise are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. In this contribution we show that the basic linear framework can in fact be generalized to nonlinear models. In this extended framework, nonlinearities in the data-generating process are a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified. In addition to theoretical results, we present simulations and some simple real-data experiments illustrating the identification power provided by nonlinearities.
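The identification recipe the abstract describes — fit a flexible regression in each direction and keep the direction whose residuals are independent of the input — can be sketched as a toy example. This is illustrative only, not the authors' implementation: the HSIC dependence measure with fixed kernel widths and the polynomial regressor are stand-ins.

```python
import numpy as np

def hsic(a, b):
    """Biased HSIC estimate with RBF kernels; larger = more dependent."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    H = np.eye(n) - 1.0 / n                          # centering matrix
    K = np.exp(-(a[:, None] - a[None, :]) ** 2 / 2)  # Gram matrix of a
    L = np.exp(-(b[:, None] - b[None, :]) ** 2 / 2)  # Gram matrix of b
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
y = x ** 3 + 0.1 * rng.uniform(-1, 1, 300)  # ground truth: x causes y

# Regress in both directions and compare residual/input dependence.
res_fwd = y - np.polyval(np.polyfit(x, y, 5), x)  # model y = f(x) + noise
res_bwd = x - np.polyval(np.polyfit(y, x, 5), y)  # model x = g(y) + noise

direction = "x->y" if hsic(x, res_fwd) < hsic(y, res_bwd) else "y->x"
print(direction)
```

In the causal direction the residuals carry no information about the input, while no additive noise model fits the reverse direction well — exactly the asymmetry the paper exploits.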
Estimating high-dimensional intervention effects from observational data
 The Annals of Statistics
, 2009
Cited by 35 (7 self)
Abstract: We assume that we have observational data generated from an unknown underlying directed acyclic graph (DAG) model. A DAG is typically not identifiable from observational data, but it is possible to consistently estimate the equivalence class of a DAG. Moreover, for any given DAG, causal effects can be estimated using intervention calculus. In this paper, we combine these two parts: for each DAG in the estimated equivalence class, we use intervention calculus to estimate the causal effects of the covariates on the response. This yields a collection of estimated causal effects for each covariate. We show that the distinct values in this set can be consistently estimated by an algorithm that uses only local information of the graph. This local approach is computationally fast and feasible in high-dimensional problems. We propose to use summary measures of the set of possible causal effects to determine variable importance. In particular, we use the minimum absolute value of this set, since it is a lower bound on the size of the causal effect. We demonstrate the merits of our methods in a simulation study and on a data set about riboflavin production.
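The two ingredients of the abstract — estimating an effect by covariate adjustment for a given DAG, and summarizing the set of effects across candidate adjustment sets by its minimum absolute value — can be illustrated on a toy linear SEM. Variable names and the OLS estimator are illustrative choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)                      # common parent of x and y
x = 0.8 * z + rng.normal(size=n)            # pa(x) = {z}
y = 1.5 * x + 2.0 * z + rng.normal(size=n)  # true causal effect of x: 1.5

def ols_coef(target, *covs):
    """Coefficient of the first covariate in an OLS regression."""
    X = np.column_stack([np.ones(len(target)), *covs])
    return np.linalg.lstsq(X, target, rcond=None)[0][1]

# Intervention calculus for a known DAG: regress y on x and pa(x).
naive    = ols_coef(y, x)     # no adjustment: confounded by z
adjusted = ols_coef(y, x, z)  # adjusted for pa(x): consistent for 1.5

# IDA-style summary: when the DAG is known only up to its equivalence
# class, each candidate parent set yields one estimate; the minimum
# absolute value is a conservative lower bound on the effect size.
effects = [naive, adjusted]
lower_bound = min(abs(e) for e in effects)
```

The confounded estimate overshoots the true effect, so the minimum over candidate adjustment sets never overstates it.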
Regression by dependence minimization and its application to causal inference in additive noise models
, 2009
Identifiability of causal graphs using functional models
 In UAI
, 2011
Cited by 23 (7 self)
Abstract: This work addresses the following question: under what assumptions on the data-generating process can one infer the causal graph from the joint distribution? The approach taken by conditional-independence-based causal discovery methods rests on two assumptions: the Markov condition and faithfulness. It has been shown that under these assumptions the causal graph can be identified up to Markov equivalence (some arrows remain undirected) using methods like the PC algorithm. In this work we propose an alternative by defining Identifiable Functional Model Classes (IFMOCs). As our main theorem, we prove that if the data-generating process belongs to an IFMOC, one can identify the complete causal graph. To the best of our knowledge, this is the first identifiability result of this kind that is not limited to linear functional relationships. We discuss how the IFMOC assumption relates to the Markov and faithfulness assumptions and explain why we believe that the IFMOC assumption can be tested more easily on given data. We further provide a practical algorithm that recovers the causal graph from finite data; experiments on simulated data support the theoretical findings.
Boredom: A Review
 Human Factors
, 1981
Cited by 22 (0 self)
Abstract: Edward Jenner, who discovered that it is possible to vaccinate against smallpox using material from cowpox, is rightly the man who started the science of immunology. Over the passage of time, however, many of the details surrounding his astounding discovery have been lost or forgotten. The environment in which Jenner worked as a country physician, and the state of medicine and society at the time, are also difficult to appreciate today; it is worth recalling that patients were still being bled to relieve them of evil humors. Accordingly, this review details Jenner's discovery and attempts to place it in historical context. The vaccine that Jenner used, which decreased the prevalence of smallpox worldwide in his own time and was later used to eradicate smallpox altogether, is also discussed in light of recent data.
Estimation of causal effects using linear nonGaussian causal models with hidden variables
DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model
 Journal of Machine Learning Research
Dependence minimizing regression with model selection for nonlinear causal inference under non-Gaussian noise
 Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-2010)
, 2010
Cited by 13 (10 self)
Abstract: The discovery of nonlinear causal relationships under additive non-Gaussian noise models has attracted considerable attention recently because of their high flexibility. In this paper, we propose a novel causal inference algorithm called least-squares independence regression (LSIR). LSIR learns the additive noise model through minimization of an estimator of the squared-loss mutual information between inputs and residuals. A notable advantage of LSIR over existing approaches is that tuning parameters such as the kernel width and the regularization parameter can be naturally optimized by cross-validation, allowing us to avoid overfitting in a data-dependent fashion. Through experiments with real-world datasets, we show that LSIR compares favorably with a state-of-the-art causal inference method.
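A stripped-down version of the idea — choosing regression parameters that minimize a dependence measure between inputs and residuals, rather than the squared error — might look like the sketch below. HSIC replaces the paper's squared-loss mutual information and fixed kernel widths replace the cross-validated ones, so this is an illustration of the principle, not LSIR itself.

```python
import numpy as np
from scipy.optimize import minimize

def hsic(a, b):
    """Biased HSIC estimate with RBF kernels; larger = more dependent."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    H = np.eye(n) - 1.0 / n
    K = np.exp(-(a[:, None] - a[None, :]) ** 2 / 2)
    L = np.exp(-(b[:, None] - b[None, :]) ** 2 / 2)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 150)
y = np.sin(np.pi * x) + 0.2 * rng.exponential(size=150)  # skewed noise

def residual(theta):
    return y - np.polyval(theta, x)  # cubic regression model

# Start from the least-squares fit, then refine the parameters so the
# residuals become as independent of the input as possible.
theta0 = np.polyfit(x, y, 3)
opt = minimize(lambda t: hsic(x, residual(t)), theta0, method="Nelder-Mead")

print(hsic(x, residual(opt.x)) < hsic(x, y))  # residuals far less dependent
```

The dependence objective, unlike squared error, is what makes the residual-independence test in the abstract meaningful for the fitted model.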
Nonlinear directed acyclic structure learning with weakly additive noise models
, 2009
Identifiability of Gaussian structural equation models with same error variances. Available at arXiv:1205.2536
, 2012
Cited by 12 (3 self)
Abstract: We consider structural equation models in which variables can be written as a function of their parents and noise terms, which are assumed to be jointly independent. Corresponding to each structural equation model there is a directed acyclic graph describing the relationships between the variables. In Gaussian structural equation models with linear functions, the graph can be identified from the joint distribution only up to Markov equivalence classes, assuming faithfulness. In this work, we prove full identifiability if all noise variables have the same variance: the directed acyclic graph can be recovered from the joint Gaussian distribution. Our result has direct implications for causal inference: if the data follow a Gaussian structural equation model with equal error variances, and assuming that all variables are observed, the causal structure can be inferred from observational data alone. We propose a statistical method and an algorithm that exploit our theoretical findings.
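The identifiability phenomenon can be seen in a two-variable toy example: under the true direction the implied error variances of the two structural equations agree, while under the reversed direction they do not. The code below is an illustration under the paper's assumptions (linear Gaussian SEM with equal error variances), not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(size=n)            # exogenous: error variance 1
y = 0.8 * x + rng.normal(size=n)  # truth: x -> y, error variance also 1

def implied_error_variances(cause, effect):
    """Error variances of the two equations in the DAG cause -> effect."""
    b = np.cov(cause, effect)[0, 1] / np.var(cause)
    return np.var(cause), np.var(effect - b * cause)

v_true = implied_error_variances(x, y)  # candidate DAG x -> y
v_rev  = implied_error_variances(y, x)  # candidate DAG y -> x

# Under x -> y both error variances are ~1; under y -> x they differ
# (~1.64 vs ~0.61), so the equal-variance assumption picks out x -> y.
spread = lambda v: abs(v[0] - v[1])
print(spread(v_true) < spread(v_rev))
```

Both directions fit the joint Gaussian distribution equally well; it is only the equal-variance constraint that breaks the Markov-equivalence tie.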