Results 1–10 of 15
A theory of causal learning in children: Causal maps and Bayes nets
Psychological Review, 2004
Abstract

Cited by 157 (33 self)
The authors outline a cognitive and computational account of causal learning in children. They propose that children use specialized cognitive systems that allow them to recover an accurate “causal map” of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously understood in terms of the formalism of directed graphical causal models, or Bayes nets. Children’s causal learning and inference may involve computations similar to those for learning causal Bayes nets and for predicting with them. Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
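The distinction a causal map supports, between merely observing an event and intervening on it, can be illustrated with a minimal one-link Bayes net. This is an illustrative sketch, not the authors' model; all parameter values here are assumed.

```python
# One-link causal Bayes net C -> E, contrasting observation ("seeing")
# with intervention ("doing", i.e. graph surgery on C).
# All probabilities below are made-up illustrative values.

p_c = 0.3          # prior probability the cause is present (assumed)
p_e_given_c = 0.9  # P(effect | cause present)
p_e_given_nc = 0.1 # P(effect | cause absent)

def p_effect_observational():
    # P(E=1) by marginalizing over the cause
    return p_c * p_e_given_c + (1 - p_c) * p_e_given_nc

def p_effect_do_c(c):
    # Under do(C=c) the cause is set exogenously: the prior on C is cut
    return p_e_given_c if c else p_e_given_nc

def p_cause_given_effect():
    # Diagnostic inference P(C=1 | E=1) via Bayes' rule
    return p_c * p_e_given_c / p_effect_observational()
```

Intervening to set the cause present makes the effect more likely than passively observing, and seeing the effect raises the probability of the cause.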
Theory-based causal induction
2003
Abstract

Cited by 33 (14 self)
Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge—identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge—the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships—and show how they provide the constraints that people need to induce useful causal models from sparse data.
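The core inductive computation described here—scoring whether a causal link exists by comparing marginal likelihoods under alternative graph structures—can be sketched as follows. The noisy-OR generating function, the uniform priors over strengths, and the grid integration are simplifying assumptions for illustration, not the paper's full framework.

```python
# Bayesian structure comparison for "does C cause E?".
# Graph 1: noisy-OR link C -> E plus an always-present background cause.
# Graph 0: background cause only.
# Marginal likelihoods are approximated on a grid (uniform strength priors).

def noisy_or(c, w_c, w_b):
    # P(E=1 | C=c) with causal strength w_c and background strength w_b
    return 1 - (1 - w_b) * (1 - w_c) ** c

def likelihood(data, w_c, w_b):
    # data: list of (c, e) trials with binary values
    p = 1.0
    for c, e in data:
        pe = noisy_or(c, w_c, w_b)
        p *= pe if e else (1 - pe)
    return p

def marginal_likelihood(data, causal_link, n=21):
    grid = [i / (n - 1) for i in range(n)]
    if causal_link:
        vals = [likelihood(data, wc, wb) for wc in grid for wb in grid]
    else:
        vals = [likelihood(data, 0.0, wb) for wb in grid]
    return sum(vals) / len(vals)  # average = integral under uniform prior

def posterior_link(data, prior_link=0.5):
    m1 = marginal_likelihood(data, True)
    m0 = marginal_likelihood(data, False)
    return prior_link * m1 / (prior_link * m1 + (1 - prior_link) * m0)
```

With data in which the effect always follows the cause and never occurs otherwise, the posterior probability of the causal link is close to 1.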
Locally Bayesian Learning with Applications to Retrospective Revaluation and Highlighting
Psychological Review, 2006
Abstract

Cited by 26 (7 self)
A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to backpropagate the target data to interior modules, such that an interior component’s target is the input to the next component that maximizes the probability of the next component’s target. Each layer then does locally Bayesian learning. The approach assumes online trial-by-trial learning. The resulting parameter updating is not globally Bayesian but can better capture human behavior. The approach is implemented for an associative learning model that first maps inputs to attentionally filtered inputs and then maps attentionally filtered inputs to outputs. The Bayesian updating allows the associative model to exhibit retrospective revaluation effects such as backward blocking and unovershadowing, which have been challenging for associative learning models. The backpropagation of target values to attention allows the model to show trial-order effects, including highlighting and differences in magnitude of forward and backward blocking, which have been challenging for Bayesian learning models.
Bayesian models of cognition
Abstract

Cited by 23 (1 self)
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty.
Bayesian generic priors for causal learning
Psychological Review, 2008
Abstract

Cited by 21 (0 self)
The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading non-normative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
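One way to make a "sparse and strong" preference concrete is an exponential prior that rewards a strong candidate cause and penalizes a strong background cause, combined with a noisy-OR likelihood. The prior form and the value of `ALPHA` below are illustrative assumptions, not the paper's exact specification.

```python
import math

ALPHA = 5.0  # prior strength (assumed value)

def ss_prior(w_c, w_b):
    # Unnormalized "sparse and strong" prior: prefers a strong candidate
    # cause (w_c near 1) and a weak background cause (w_b near 0).
    return math.exp(-ALPHA * (1 - w_c)) * math.exp(-ALPHA * w_b)

def posterior_mean_strength(data, n=51):
    # Posterior mean causal strength under the SS prior and a noisy-OR
    # likelihood, approximated on a grid; data is a list of (c, e) trials.
    grid = [i / (n - 1) for i in range(n)]
    num = den = 0.0
    for w_c in grid:
        for w_b in grid:
            like = 1.0
            for c, e in data:
                pe = 1 - (1 - w_b) * (1 - w_c) ** c
                like *= pe if e else (1 - pe)
            w = like * ss_prior(w_c, w_b)
            num += w * w_c
            den += w
    return num / den
```

For data that strongly implicate the cause, the posterior mean strength is pulled toward 1 both by the data and by the prior's preference for strong causes.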
Bayesian approaches to associative learning: From passive to active learning
Learning & Behavior, 2008
Abstract

Cited by 18 (7 self)
Traditional associationist models represent an organism’s knowledge state by a single strength of association on each associative link. Bayesian models instead represent knowledge by a distribution of graded degrees of belief over a range of candidate hypotheses. Many traditional associationist models assume that the learner is passive, adjusting strengths of association only in reaction to stimuli delivered by the environment. Bayesian models, on the other hand, can describe how the learner should actively probe the environment to learn optimally. The first part of this article reviews two Bayesian accounts of backward blocking, a phenomenon that is challenging for many traditional theories. The broad Bayesian framework, in which these models reside, is also selectively reviewed. The second part focuses on two formalizations of optimal active learning: maximizing either the expected information gain or the probability gain. New analyses of optimal active learning by a Kalman filter and by a noisy-logic gate show that these two Bayesian models make different predictions for some environments. The Kalman filter predictions are disconfirmed in at least one case. Bayesian formalizations of learning are a revolutionary advance over traditional approaches. Bayesian models assume that the learner maintains multiple candidate hypotheses with differing degrees of belief, unlike traditional models.
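The Kalman-filter account mentioned here can be sketched for two cues: compound training induces negative covariance between the cue weights, so later elemental trials with cue A alone pull cue B's weight down—backward blocking. This is a minimal sketch along the lines of such accounts; parameter values are assumed.

```python
# Kalman-filter associative learner over two cues (A, B).
# State: Gaussian belief over the weight vector (mean, 2x2 covariance).
# Scalar outcome r is predicted as the weighted sum of present cues.

def kalman_step(mean, cov, x, r, obs_noise=0.1):
    # x: cue-presence vector [xA, xB]; r: observed outcome
    pred = mean[0] * x[0] + mean[1] * x[1]
    # innovation variance: x^T cov x + observation noise
    cx = [cov[0][0] * x[0] + cov[0][1] * x[1],
          cov[1][0] * x[0] + cov[1][1] * x[1]]
    s = x[0] * cx[0] + x[1] * cx[1] + obs_noise
    # Kalman gain: cov x / s
    k = [cx[0] / s, cx[1] / s]
    err = r - pred
    new_mean = [mean[0] + k[0] * err, mean[1] + k[1] * err]
    # covariance update: cov - k (x^T cov)
    xc = [x[0] * cov[0][0] + x[1] * cov[1][0],
          x[0] * cov[0][1] + x[1] * cov[1][1]]
    new_cov = [[cov[0][0] - k[0] * xc[0], cov[0][1] - k[0] * xc[1]],
               [cov[1][0] - k[1] * xc[0], cov[1][1] - k[1] * xc[1]]]
    return new_mean, new_cov
```

Training AB+ then A+ from a vague prior (zero means, identity covariance) reproduces backward blocking: B's mean weight falls during the A+ phase even though B is never presented.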
Constraint-Based Human Causal Learning
In Proceedings of the Sixth International Conference on Cognitive Modeling, 2005
Abstract

Cited by 4 (1 self)
Much of human cognition and activity depends on causal beliefs and reasoning. In psychological research on human causal learning and inference, we usually suppose that we have a set of binary potential causes, C1, …, Cn, and a known binary effect, E, all typically present–absent values of a property or event. The differentiation into potential causes and effect is made on the basis of external factors, including prior knowledge or temporal information. Given these variables, people are then asked to infer the existence and strength of causal relationships between the Ci’s and E from observed data in one of several formats (serially, as a list, or in a summary). The standard measure of people’s causal beliefs is a rating of some proxy for causal influence, where a zero rating indicates no causal relationship. The exact probe question varies between studies.
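The constraint-based idea—using (conditional) independence tests to license or rule out causal links—can be sketched with raw frequency comparisons standing in for a proper significance test. The tolerance, the helper names, and the toy data are illustrative assumptions, not the paper's algorithm.

```python
# Data: list of (c1, c2, e) tuples with binary values.
# A link Ci -> E is supported if Ci and E are dependent both marginally
# and after conditioning on the other candidate cause.

def _p_e(rows):
    # P(E=1) within a set of rows (effect is the last element)
    return sum(d[-1] for d in rows) / len(rows)

def dependent(data, i, tol=0.05):
    # Marginal dependence: compare P(E | Ci=1) with P(E | Ci=0)
    on = [d for d in data if d[i] == 1]
    off = [d for d in data if d[i] == 0]
    if not on or not off:
        return False
    return abs(_p_e(on) - _p_e(off)) > tol

def dependent_given(data, i, j, tol=0.05):
    # Does the Ci-E dependence survive conditioning on Cj?
    for v in (0, 1):
        stratum = [d for d in data if d[j] == v]
        on = [d for d in stratum if d[i] == 1]
        off = [d for d in stratum if d[i] == 0]
        if on and off and abs(_p_e(on) - _p_e(off)) > tol:
            return True
    return False
```

In a toy world where C1 drives E and C2 is merely correlated with C1, C2 looks dependent on E marginally, but the dependence vanishes once C1 is conditioned on—so only the C1 link survives the constraints.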
Scientific Coherence and the Fusion of Experimental Results
Abstract

Cited by 2 (1 self)
A pervasive feature of the sciences, particularly the applied sciences, is an experimental focus on a few (often only one) possible causal connections. At the same time, scientists often advance and apply relatively broad models that incorporate many different causal mechanisms. We are naturally led to ask whether there are normative rules for integrating multiple local experimental conclusions into models covering many additional variables. In this paper, we provide a positive answer to this question by developing several inference rules that use local causal models to place constraints on the integrated model, given quite general assumptions. We also demonstrate the practical value of these rules by applying them to a case study from ecology.

1 Experimental scope in applied sciences
Total photosynthetic material has increased globally in recent years (though with local decreases), and one might naturally wonder why. In a recent paper in Science, Nemani et al. ([2003]) focused on some of the potential causes of global vegetation growth during the past 20 years. Their analysis focused on only four variables: growing season average temperature, vapor pressure deficit, solar radiation, and net primary production (photosynthetic material). Their study considered only a limited variable set because of (a) the global scale of their analysis, and (b) the relatively long study period (18 years). Despite this limited scope (in terms of variables), their study gives substantial support to the hypothesis that the first three variables are causes of the last, and helps to clarify the functional form of those dependencies. At the same time, they explicitly note that there are many causally relevant variables that were ignored in their study, such as vegetation
Bayesian models of judgments of causal strength: A comparison
In D. S. McNamara & G. Trafton (Eds.), Proceedings of the Twenty-Ninth Annual Conference of the Cognitive Science Society, 2007
Abstract

Cited by 2 (1 self)
We formulate four alternative Bayesian models of causal strength judgments, and compare their predictions to two sets of human data. The models were derived by factorially varying the causal generating function for integrating multiple causes (based on either the power PC theory or the ΔP rule) and priors on strengths (favoring necessary and sufficient (NS) causes, or uniform). The models based on the causal generating function derived from the power PC theory provided much better fits than those based on the function derived from the ΔP rule. The models that included NS priors were able to account for subtle asymmetries between strength judgments for generative and preventive causes. These results complement previous model comparisons for judgments of causal structure (Lu et al., 2006).
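For a single candidate cause, the two generating functions under comparison reduce to simple contingency-table formulas: ΔP is the raw difference in effect probability, and generative causal power (power PC theory, Cheng 1997) rescales ΔP by how much room the background leaves for the cause to act. The counts below are made up for illustration.

```python
# Inputs: n_ce effects out of n_c cause-present trials,
#         n_e_nc effects out of n_nc cause-absent trials.

def delta_p(n_ce, n_c, n_e_nc, n_nc):
    # DeltaP = P(e | c) - P(e | ~c)
    return n_ce / n_c - n_e_nc / n_nc

def causal_power(n_ce, n_c, n_e_nc, n_nc):
    # Generative causal power: w = DeltaP / (1 - P(e | ~c))
    base = n_e_nc / n_nc
    return (n_ce / n_c - base) / (1 - base)
```

With 16/20 effects when the cause is present and 4/20 when absent, ΔP = 0.8 − 0.2 = 0.6 while causal power = 0.6 / 0.8 = 0.75: power attributes more strength to the cause because the background already produces the effect on some trials.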
Bayes and blickets: Effects of knowledge on causal induction in children and adults
Abstract

Cited by 2 (1 self)
People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal account of how this knowledge can be used and acquired, based on analyzing causal induction as Bayesian inference. Five studies explored the predictions of this account with adults and 4-year-olds, using tasks in which participants learned about the causal properties of a set of objects. The studies varied the two factors that our Bayesian approach predicted should be relevant to causal induction: the prior probability with which causal relations exist, and the assumption of a deterministic or a probabilistic relation between cause and effect. Adults’ judgments (Experiments 1, 2, and 4) were in close correspondence with the quantitative predictions of the model, and children’s judgments (Experiments 3 and 5) agreed qualitatively with this account.
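The Bayesian analysis assumed here can be sketched for two objects and a deterministic "blicket detector": each object is a blicket with prior probability p, the machine activates iff at least one blicket is on it, and inference enumerates the hypotheses consistent with the observed trials. The value of p and the trial encoding are illustrative assumptions.

```python
from itertools import product

def posterior_b_is_blicket(trials, p=0.3):
    # trials: list of (objects_on_machine, activated) pairs, with objects
    # encoded as a set like {"A"} or {"A", "B"} and activated as 0/1.
    num = den = 0.0
    for a_blicket, b_blicket in product([0, 1], repeat=2):
        prior = (p if a_blicket else 1 - p) * (p if b_blicket else 1 - p)
        # Deterministic detector: activates iff a blicket is on it
        consistent = all(
            activated == int(("A" in objs and a_blicket)
                             or ("B" in objs and b_blicket))
            for objs, activated in trials
        )
        if consistent:
            den += prior
            if b_blicket:
                num += prior
    return num / den
```

After seeing A and B activate the machine together, B is more likely than its prior to be a blicket; adding a trial in which A alone activates the machine sends B back to its prior—a Bayesian rendering of backward blocking, with low priors making B look inert.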