Results 1–10 of 18
A theory of causal learning in children: Causal maps and Bayes nets
 Psychological Review, 2004
"... The authors outline a cognitive and computational account of causal learning in children. They propose that children use specialized cognitive systems that allow them to recover an accurate “causal map ” of the world: an abstract, coherent, learned representation of the causal relations among events ..."
Abstract

Cited by 157 (33 self)
 Add to MetaCart
The authors outline a cognitive and computational account of causal learning in children. They propose that children use specialized cognitive systems that allow them to recover an accurate “causal map” of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously understood in terms of the formalism of directed graphical causal models, or Bayes nets. Children’s causal learning and inference may involve computations similar to those for learning causal Bayes nets and for predicting with them. Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
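The directed-graphical-model formalism mentioned above can be illustrated with a toy two-variable network; a minimal sketch with made-up probabilities (not the authors' model), showing how predicting from an intervention differs from predicting from an observation:

```python
# Toy causal Bayes net C -> E with illustrative CPT numbers.
p_c = 0.5                                  # P(C = true)
p_e_given_c = {True: 0.9, False: 0.1}      # P(E = true | C)

def p_effect_observational():
    # Marginal P(E), summing over the unobserved cause.
    return p_c * p_e_given_c[True] + (1 - p_c) * p_e_given_c[False]

def p_effect_do(c):
    # do(C = c) severs C's incoming edges; C is a root here,
    # so the intervention simply fixes its value.
    return p_e_given_c[c]
```

With these numbers, observing nothing gives P(E) = 0.5, while intervening to set the cause gives P(E | do(C = true)) = 0.9.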
Structure and Strength in Causal Induction
"... We present a framework for the rational analysis of elemental causal induction – learning about the existence of a relationship between a single cause and effect – based upon causal graphical models. This framework makes precise the distinction between causal structure and causal strength: the diffe ..."
Abstract

Cited by 89 (30 self)
 Add to MetaCart
We present a framework for the rational analysis of elemental causal induction – learning about the existence of a relationship between a single cause and effect – based upon causal graphical models. This framework makes precise the distinction between causal structure and causal strength: the difference between asking whether a causal relationship exists and asking how strong that causal relationship might be. We show that two leading rational models of elemental causal induction, ∆P and causal power, both estimate causal strength, and introduce a new rational model, causal support, that assesses causal structure. Causal support predicts several key phenomena of causal induction that cannot be accounted for by other rational models, which we explore through a series of experiments. These phenomena include the complex interaction between ∆P and the base-rate probability of the effect in the absence of the cause, sample size effects, inferences from incomplete contingency tables, and causal learning from rates. Causal support also provides a better account of a number of existing datasets than either ∆P or causal power.
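The two strength estimates contrasted in this abstract can be computed directly from a 2×2 contingency table; a minimal sketch (variable names are mine, and causal support, which requires integrating over strengths, is not shown):

```python
def strength_estimates(n_ce, n_c, n_e, n_neither):
    """Return (delta_p, causal_power) from contingency counts.
    n_ce: cause present, effect present; n_c: cause present, no effect;
    n_e: cause absent, effect present; n_neither: cause absent, no effect."""
    p_e_c = n_ce / (n_ce + n_c)          # P(e | c)
    p_e_nc = n_e / (n_e + n_neither)     # P(e | ~c)
    delta_p = p_e_c - p_e_nc
    # Generative causal power (Cheng, 1997): delta_p rescaled by the
    # headroom the effect's base rate leaves for the cause to act in.
    power = delta_p / (1.0 - p_e_nc)
    return delta_p, power
```

For example, counts (6, 2, 2, 6) give ∆P = 0.75 − 0.25 = 0.5 but a causal power of 0.5 / 0.75 ≈ 0.67, illustrating how the two models diverge when the base rate is nonzero.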
Assessing interactive causal influence
 Psychological Review
"... The discovery of conjunctive causes—factors that act in concert to produce or prevent an effect—has been explained by purely covariational theories. Such theories assume that concomitant variations in observable events directly license causal inferences, without postulating the existence of unobserv ..."
Abstract

Cited by 25 (6 self)
 Add to MetaCart
The discovery of conjunctive causes—factors that act in concert to produce or prevent an effect—has been explained by purely covariational theories. Such theories assume that concomitant variations in observable events directly license causal inferences, without postulating the existence of unobservable causal relations. This article discusses problems with these theories, proposes a causal-power theory that overcomes the problems, and reports empirical evidence favoring the new theory. Unlike earlier models, the new theory derives (a) the conditions under which covariation implies conjunctive causation and (b) functions relating observable events to unobservable conjunctive causal strength. This psychological theory, which concerns simple cases involving 2 binary candidate causes and a binary effect, raises questions about normative statistics for testing causal hypotheses regarding categorical data resulting from discrete variables.
Bayesian generic priors for causal learning
 Psychological Review, 2008
"... The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high ..."
Abstract

Cited by 21 (0 self)
 Add to MetaCart
The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
Predictions and causal estimations are not supported by the same associative structure
 The Quarterly Journal of Experimental Psychology, 2007
"... ..."
Intuitive theories as grammars for causal inference
 In A. Gopnik & L. Schulz (Eds.), Causal learning: Psychology, philosophy, and computation, 2007
"... This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recogniz ..."
Abstract

Cited by 13 (7 self)
 Add to MetaCart
This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing
The Rescorla-Wagner algorithm and Maximum Likelihood estimation of causal parameters
 NIPS, 2004
"... This paper analyzes generalization of the classic RescorlaWagner (RW) learning algorithm and studies their relationship to Maximum Likelihood estimation of causal parameters. We prove that the parameters of two popular causal models, ∆P and P C, can be learnt by the same generalized linear Rescorl ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
This paper analyzes generalizations of the classic Rescorla-Wagner (RW) learning algorithm and studies their relationship to Maximum Likelihood estimation of causal parameters. We prove that the parameters of two popular causal models, ∆P and PC, can be learnt by the same generalized linear Rescorla-Wagner (GLRW) algorithm provided that genericity conditions apply. We characterize the fixed points of these GLRW algorithms and calculate the fluctuations about them, assuming that the input is a set of i.i.d. samples from a fixed (unknown) distribution. We describe how to determine convergence conditions and calculate convergence rates for the GLRW algorithms under these conditions.
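The classic RW rule that these generalized algorithms build on can be sketched in a few lines; a minimal version with the learning rates collapsed into a single alpha (the paper's GLRW variants replace the linear prediction below with other link functions):

```python
def rescorla_wagner(trials, n_cues, alpha=0.1):
    """Classic Rescorla-Wagner delta rule.
    trials: iterable of (cue_tuple, outcome) pairs, entries in {0, 1}."""
    v = [0.0] * n_cues                     # associative strengths
    for cues, outcome in trials:
        # Linear prediction: summed strength of the active cues.
        prediction = sum(v[i] for i in range(n_cues) if cues[i])
        error = outcome - prediction       # prediction error
        for i in range(n_cues):
            if cues[i]:
                v[i] += alpha * error      # update active cues only
    return v
```

With a single cue always followed by the outcome, the strength converges toward 1, the fixed point at which the prediction error vanishes.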
Augmented Rescorla-Wagner and maximum likelihood estimation
, 2006
"... We show that linear generalizations of RescorlaWagner can perform Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the agumented model of Rescorla. Our results in ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
We show that linear generalizations of Rescorla-Wagner can perform Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables to deal with conjunctions of causes, similar to the augmented model of Rescorla. Our results involve genericity assumptions on the distributions of causes. If these assumptions are violated, for example for the Cheng causal power theory, then we show that a linear Rescorla-Wagner algorithm can estimate the parameters of the model up to a nonlinear transformation. Moreover, a nonlinear Rescorla-Wagner algorithm is able to estimate the parameters directly to within arbitrary accuracy. Previous results can be used to determine convergence and to estimate convergence rates.
Bayesian models of judgments of causal strength: A comparison
 In D. S. McNamara & G. Trafton (Eds.), Proceedings of the Twenty-Ninth Annual Conference of the Cognitive Science Society, 2007
"... We formulate four alternative Bayesian models of causal strength judgments, and compare their predictions to two sets of human data. The models were derived by factorially varying the causal generating function for integrating multiple causes (based on either the power PC theory or the ΔP rule) and ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
We formulate four alternative Bayesian models of causal strength judgments, and compare their predictions to two sets of human data. The models were derived by factorially varying the causal generating function for integrating multiple causes (based on either the power PC theory or the ΔP rule) and priors on strengths (favoring necessary and sufficient (NS) causes, or uniform). The models based on the causal generating function derived from the power PC theory provided much better fits than those based on the function derived from the ΔP rule. The models that included NS priors were able to account for subtle asymmetries between strength judgments for generative and preventive causes. These results complement previous model comparisons for judgments of causal structure (Lu et al., 2006).
Bayes and blickets: Effects of knowledge on causal induction in children and adults
"... People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal acc ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal account of how this knowledge can be used and acquired, based on analyzing causal induction as Bayesian inference. Five studies explored the predictions of this account with adults and 4-year-olds, using tasks in which participants learned about the causal properties of a set of objects. The studies varied the two factors that our Bayesian approach predicted should be relevant to causal induction: the prior probability with which causal relations exist, and the assumption of a deterministic or a probabilistic relation between cause and effect. Adults’ judgments (Experiments 1, 2, and 4) were in close correspondence with the quantitative predictions of the model, and children’s judgments (Experiments 3 and 5) agreed qualitatively with this account.
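The interplay of prior probability and determinism described in this abstract can be illustrated with a single Bayesian update; a minimal sketch (function and parameter names are mine, not the paper's full model):

```python
def posterior_cause(prior, p_e_if_cause, p_e_if_not):
    """Posterior probability that an object is causal after observing
    the effect once, by one application of Bayes' rule."""
    num = prior * p_e_if_cause
    return num / (num + (1 - prior) * p_e_if_not)
```

Under a deterministic assumption (the effect never occurs without a cause), a single observation is decisive: `posterior_cause(0.3, 1.0, 0.0)` returns 1.0. Under a noisy assumption such as `p_e_if_not=0.1`, the same observation yields a posterior of only about 0.81, which is one way prior knowledge about mechanisms shapes inference from sparse data.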