Results 1–6 of 6
Learning the structure of linear latent variable models
Journal of Machine Learning Research, 2006
Abstract

Cited by 41 (13 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we ...
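As a rough illustration of the kind of constraint such procedures exploit (a sketch, not the authors' algorithm): a single latent common cause of four recorded variables implies vanishing tetrad differences among their covariances. A minimal simulation in Python, with all loadings assumed for illustration:

```python
import random

random.seed(0)
n = 100_000
b = [1.0, 0.8, 1.2, 0.6]       # assumed loadings of one latent cause L on X1..X4
rows = []
for _ in range(n):
    L = random.gauss(0, 1)      # the single unrecorded common cause
    rows.append([bi * L + random.gauss(0, 1) for bi in b])

def cov(i, j):
    mi = sum(r[i] for r in rows) / n
    mj = sum(r[j] for r in rows) / n
    return sum((r[i] - mi) * (r[j] - mj) for r in rows) / n

# One latent common cause implies the tetrad constraints
#   cov(X1,X2)*cov(X3,X4) = cov(X1,X3)*cov(X2,X4) = cov(X1,X4)*cov(X2,X3),
# so the sample tetrad differences should be near zero.
t1 = cov(0, 1) * cov(2, 3) - cov(0, 2) * cov(1, 3)
t2 = cov(0, 1) * cov(2, 3) - cov(0, 3) * cov(1, 2)
print(abs(t1) < 0.05, abs(t2) < 0.05)
```

A search procedure of the kind described uses tests of such constraints to cluster recorded variables under common latent causes.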
Inequality constraints in causal models with hidden variables
In Proceedings of the Twenty-Second Annual Conference on Uncertainty in Artificial Intelligence (UAI-06), 2006
Abstract

Cited by 7 (4 self)
We present a class of inequality constraints on the set of distributions induced by local interventions on variables governed by a causal Bayesian network in which some of the variables remain unmeasured. We derive bounds on causal effects that are not directly measured in randomized experiments, and we derive instrumental-inequality-type constraints on nonexperimental distributions. The results have applications in testing causal models with observational or experimental data.
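For binary variables, one well-known constraint of this kind is Pearl's instrumental inequality: for an instrument Z, treatment X, and outcome Y, every x must satisfy sum over y of max over z of P(X=x, Y=y | Z=z) <= 1. A small sketch checking it; the two conditional distributions below are invented for illustration:

```python
def satisfies_instrumental_inequality(p):
    # p[(x, y, z)] = P(X=x, Y=y | Z=z) for binary X, Y, Z.
    # Pearl's instrumental inequality: for each x,
    #   sum_y max_z P(X=x, Y=y | Z=z) <= 1.
    for x in (0, 1):
        total = sum(max(p[(x, y, z)] for z in (0, 1)) for y in (0, 1))
        if total > 1 + 1e-12:
            return False
    return True

# Hypothetical conditional distributions (each z-slice sums to 1):
ok = {(0, 0, 0): 0.4, (0, 1, 0): 0.1, (1, 0, 0): 0.3, (1, 1, 0): 0.2,
      (0, 0, 1): 0.35, (0, 1, 1): 0.15, (1, 0, 1): 0.25, (1, 1, 1): 0.25}
bad = {(0, 0, 0): 0.7, (0, 1, 0): 0.0, (1, 0, 0): 0.0, (1, 1, 0): 0.3,
       (0, 0, 1): 0.0, (0, 1, 1): 0.7, (1, 0, 1): 0.3, (1, 1, 1): 0.0}
print(satisfies_instrumental_inequality(ok), satisfies_instrumental_inequality(bad))
```

A distribution violating the inequality, like `bad`, cannot have arisen from the instrumental-variable model, which is how such constraints make hidden-variable causal models testable.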
Identifying causal effects with computer algebra
In Proceedings of the Twenty-Sixth Annual Conference on Uncertainty in Artificial Intelligence (UAI-10), 2010
Abstract

Cited by 3 (0 self)
The longstanding identification problem for causal effects in graphical models has many partial results but lacks a systematic study. We show how computer algebra can be used either to prove that a causal effect is identifiable or generically identifiable, or to show that the effect is not generically identifiable. We report the results of our computations for linear structural equation models, determining precisely which causal effects are generically identifiable for all graphs on three and four vertices.
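To illustrate what identifiability means in a linear SEM (a toy sketch, not the paper's computer-algebra procedure): in the instrumental-variable graph Z -> X -> Y with a hidden confounder U of X and Y, the effect of X on Y is identified as cov(Z,Y)/cov(Z,X), even though the regression of Y on X is biased. All coefficients below are assumed for illustration:

```python
import random

random.seed(1)
n = 200_000
lam = 1.5                       # assumed true causal effect of X on Y
zs, xs, ys = [], [], []
for _ in range(n):
    z = random.gauss(0, 1)      # instrument: affects Y only through X
    u = random.gauss(0, 1)      # hidden confounder of X and Y
    x = 0.9 * z + u + random.gauss(0, 1)
    y = lam * x + 2.0 * u + random.gauss(0, 1)
    zs.append(z)
    xs.append(x)
    ys.append(y)

def cov(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

# Solving the covariance equations of this graph gives the effect as a
# rational function of observed covariances: lam = cov(Z,Y) / cov(Z,X).
est = cov(zs, ys) / cov(zs, xs)
print(abs(est - lam) < 0.05)
```

Computer algebra automates exactly this step, deciding for which graphs the covariance equations admit such a (generic) solution for the target effect.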
Polynomial constraints in causal Bayesian networks
In Proceedings of the Twenty-Third Annual Conference on Uncertainty in Artificial Intelligence (UAI-07), 2007
Abstract

Cited by 2 (1 self)
We use the implicitization procedure to generate polynomial equality constraints on the set of distributions induced by local interventions on variables governed by a causal Bayesian network with hidden variables. We show how we may reduce the complexity of the implicitization problem and make the problem tractable in certain causal Bayesian networks. We also show some preliminary results on the algebraic structure of polynomial constraints. The results have applications in distinguishing between causal models and in testing causal models with combined observational and experimental data.
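As a toy illustration of implicitization in general (not the causal-network computation in the paper): eliminating the parameter t from the map (x, y) = (t^2, t^3), for example with a Groebner basis, yields the implicit polynomial constraint y^2 - x^3 = 0, which every parameterized point must satisfy. A quick exact-arithmetic check:

```python
from fractions import Fraction

# The parameterization (x, y) = (t^2, t^3) describes a curve; implicitization
# eliminates t and returns the defining polynomial y^2 - x^3.
# Every point produced by the parameterization satisfies that constraint.
for k in range(-6, 7):
    t = Fraction(k, 3)
    x, y = t ** 2, t ** 3
    assert y ** 2 - x ** 3 == 0
print("implicit constraint y^2 - x^3 = 0 holds on all sampled points")
```

The paper applies the same idea with the model parameters of a hidden-variable causal Bayesian network playing the role of t, so the resulting polynomials constrain the observable and interventional distributions.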
Automatic derivation of probabilistic inference rules
In Proc. PAKDD Pacific-Asia Conf.
Abstract

Cited by 2 (0 self)
A probabilistic inference rule is a general rule that provides bounds on a target probability given constraints on a number of input probabilities; for example, from bounds on the probabilities of two events one can infer bounds on the probability of their conjunction. Rules of this kind have been studied extensively as a deduction method for propositional probabilistic logics. Many different rules have been proposed, and their validity proved, often with substantial effort. Building on previous work by T. Hailperin, in this paper we show that probabilistic inference rules can be derived automatically: given the input constraints and the target probability, one can automatically derive the optimal bounds on the target probability as a functional expression in the parameters of the input constraints.
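A classical instance of such a rule are the Frechet bounds for a conjunction, which are the optimal bounds on P(A and B) given only P(A) = a and P(B) = b. A minimal sketch of the resulting functional expression:

```python
def conjunction_bounds(a, b):
    """Optimal (Frechet) bounds on P(A and B) given P(A) = a and P(B) = b."""
    return max(0.0, a + b - 1.0), min(a, b)

lo, hi = conjunction_bounds(0.8, 0.7)
print(lo, hi)   # 0.5 0.7
```

Automatic derivation, in the sense of the paper, means producing closed-form expressions like `max(0, a + b - 1)` and `min(a, b)` directly from the input constraints and the target, rather than proving each rule by hand.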
Conditional independences in Gaussian vectors and rings of polynomials
2002
Abstract

Cited by 1 (0 self)
Inference among the conditional independences in nondegenerate Gaussian vectors is studied by algebraic techniques. A general method to prove implications involving the conditional independences is presented. The method relies on computation of a Groebner basis. Examples of the implications are discussed.
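The algebraic criterion underlying this approach can be stated concretely: in a nondegenerate Gaussian vector, X is conditionally independent of Y given a scalar Z exactly when the polynomial s_xy * s_zz - s_xz * s_yz in the covariance entries vanishes. A hand-worked sketch for the assumed toy model X = Z + e1, Y = Z + e2, checked in exact arithmetic rather than with a Groebner basis:

```python
from fractions import Fraction as F

# Covariance entries for (X, Y, Z) with Z standard normal, X = Z + e1,
# Y = Z + e2, where e1, e2 are independent unit-variance noises (assumed).
S = {('x', 'x'): F(2), ('y', 'y'): F(2), ('z', 'z'): F(1),
     ('x', 'y'): F(1), ('x', 'z'): F(1), ('y', 'z'): F(1)}

def s(a, b):
    # Symmetric lookup into the covariance table.
    return S[(a, b)] if (a, b) in S else S[(b, a)]

# For a nondegenerate Gaussian vector, X _||_ Y | Z (scalar Z) holds iff
# the polynomial s_xy * s_zz - s_xz * s_yz vanishes.
ci_given_z = s('x', 'y') * s('z', 'z') - s('x', 'z') * s('y', 'z') == 0
marginal_ind = s('x', 'y') == 0
print(ci_given_z, marginal_ind)
```

Here X and Y are conditionally independent given Z but marginally dependent; implications among whole sets of such polynomial conditions are what a Groebner-basis computation can verify in general.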