Results 1–10 of 28
Theory-based causal induction
2003
Abstract

Cited by 33 (14 self)
Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge—identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge—the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships—and show how they provide the constraints that people need to induce useful causal models from sparse data.
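The domain-general statistical inference at the heart of this framework can be sketched in a few lines. The sketch below is our own illustration, not the paper's code: it computes a "causal support" score by comparing the marginal likelihood of a graph that includes a cause→effect link against one without it, using a noisy-OR parameterization, uniform priors over strengths, and hypothetical contingency counts.

```python
import numpy as np

def noisy_or(w0, w1, c):
    """P(effect) with background strength w0 and, if c == 1, causal strength w1."""
    return w0 + c * w1 * (1.0 - w0)

def log_marginal(counts, with_link, grid=101):
    """Grid-approximate log marginal likelihood of contingency data.

    counts = (effects with cause, trials with cause,
              effects without cause, trials without cause).
    """
    e_c, n_c, e_nc, n_nc = counts
    w0 = np.linspace(0.01, 0.99, grid)
    w1 = np.linspace(0.01, 0.99, grid) if with_link else np.array([0.0])
    W0, W1 = np.meshgrid(w0, w1, indexing="ij")
    p_c, p_nc = noisy_or(W0, W1, 1), noisy_or(W0, W1, 0)
    ll = (e_c * np.log(p_c) + (n_c - e_c) * np.log(1 - p_c)
          + e_nc * np.log(p_nc) + (n_nc - e_nc) * np.log(1 - p_nc))
    m = ll.max()
    return m + np.log(np.exp(ll - m).mean())  # mean over grid = uniform prior

# Hypothetical data: effect on 8/10 trials with the cause present, 2/10 absent.
counts = (8, 10, 2, 10)
support = log_marginal(counts, True) - log_marginal(counts, False)
```

Positive support means the data favor the graph containing the causal link; in the framework described above, an abstract causal theory supplies exactly the ontology, plausibility, and functional-form assumptions that this sketch hard-codes.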
Bayesian generic priors for causal learning
Psychological Review, 2008
Abstract

Cited by 21 (0 self)
The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading non-normative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
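A minimal sketch of the idea, with illustrative numbers and a simplified prior (not the authors' exact formulation): a "sparse and strong" prior that favors a weak background cause and a strong candidate cause, combined with Cheng's noisy-OR generating function, then updated on hypothetical contingency data.

```python
import numpy as np

# Illustrative SS-style prior: weak background (w0 near 0), strong cause
# (w1 near 1). alpha is a hypothetical prior-strength parameter.
alpha = 5.0
w = np.linspace(0.01, 0.99, 99)
W0, W1 = np.meshgrid(w, w, indexing="ij")

prior = np.exp(-alpha * W0 - alpha * (1.0 - W1))
prior /= prior.sum()

# Hypothetical contingency data: effect on 7/10 trials with the cause, 3/10 without.
e_c, n_c, e_nc, n_nc = 7, 10, 3, 10
p_c = W0 + W1 - W0 * W1        # noisy-OR: P(e | cause present)
p_nc = W0                      # P(e | cause absent)
lik = (p_c ** e_c * (1 - p_c) ** (n_c - e_c)
       * p_nc ** e_nc * (1 - p_nc) ** (n_nc - e_nc))

post = prior * lik
post /= post.sum()
w1_mean = (post * W1).sum()    # posterior mean strength of the candidate cause
```

Relative to a uniform prior, the SS-style prior pulls strength estimates toward strong causes over weak backgrounds, which is the qualitative behavior the abstract attributes to human judgments.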
Combining causal and similarity-based reasoning
NIPS, 2006
Abstract

Cited by 12 (3 self)
Everyday inductive reasoning draws on many kinds of knowledge, including knowledge about relationships between properties and knowledge about relationships between objects. Previous accounts of inductive reasoning generally focus on just one kind of knowledge: models of causal reasoning often focus on relationships between properties, and models of similarity-based reasoning often focus on similarity relationships between objects. We present a Bayesian model of inductive reasoning that incorporates both kinds of knowledge, and show that it accounts well for human inferences about the properties of biological species.
The Role of Causality in Judgment Under Uncertainty
Abstract

Cited by 9 (0 self)
Leading accounts of judgment under uncertainty evaluate performance within purely statistical frameworks, holding people to the standards of classical Bayesian (Tversky & Kahneman, 1974) or frequentist (Gigerenzer & Hoffrage, 1995) norms. We argue that these frameworks have limited ability to explain the success and flexibility of people's real-world judgments, and propose an alternative normative framework based on Bayesian inferences over causal models. Deviations from traditional norms of judgment, such as "base-rate neglect", may then be explained in terms of a mismatch between the statistics given to people and the causal models they intuitively construct to support probabilistic reasoning. Four experiments show that when a clear mapping can be established from given statistics to the parameters of an intuitive causal model, people are more likely to use the statistics appropriately, and that when the classical and causal Bayesian norms differ in their prescriptions, people's judgments are more consistent with causal Bayesian norms.
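The base-rate effect mentioned above can be made concrete with a worked example (our own illustration, with standard textbook numbers): a two-node causal model, disease → positive test, whose parameters map directly onto the given statistics.

```python
# Illustrative numbers for the classic base-rate problem,
# expressed as a two-node causal model: disease -> positive test.
base_rate = 0.01        # P(disease)
sensitivity = 0.80      # P(positive | disease)
false_pos = 0.096       # P(positive | no disease)

# Bayes' rule over the causal model:
p_positive = sensitivity * base_rate + false_pos * (1 - base_rate)
p_disease_given_positive = sensitivity * base_rate / p_positive
# ~ 0.078, far below the high estimates that "base-rate neglect" produces
```

When people instead answer with something near the sensitivity (0.80), the causal-model account attributes the error to a mismatch between the given statistics and the intuitive causal model, not to an inability to reason probabilistically.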
Can Being Scared Cause Tummy Aches? Naive Theories, Ambiguous Evidence, and Preschoolers' Causal Inferences
Abstract

Cited by 6 (0 self)
Causal learning requires integrating constraints provided by domain-specific theories with domain-general statistical learning. In order to investigate the interaction between these factors, the authors presented preschoolers with stories pitting their existing theories against statistical evidence. Each child heard 2 stories in which 2 candidate causes co-occurred with an effect. Evidence was presented in the …
Learning the form of causal relationships using hierarchical Bayesian models
Cognitive Science, 2010
Abstract

Cited by 4 (1 self)
People learn quickly when reasoning about causal relationships, making inferences from limited data and avoiding spurious inferences. Efficient learning depends on abstract knowledge, which is often domain or context specific, and much of it must be learned. While such knowledge effects are well documented, little is known about exactly how we acquire knowledge that constrains learning. This work focuses on knowledge of the functional form of causal relationships; there are many kinds of relationships that can apply between causes and their effects, and knowledge of the form such a relationship takes is important in order to quickly identify the real causes of an observed effect. We developed a hierarchical Bayesian model of the acquisition of knowledge of the functional form of causal relationships and tested it in five experimental studies, considering disjunctive and conjunctive relationships, failure rates, and cross-domain effects. The Bayesian model accurately predicted human judgments and outperformed several alternative models.
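Inference over functional form can be sketched in a few lines (our own illustration, not the paper's model, with hypothetical data and a fixed, softened strength parameter): compare the likelihood of the same trials under a disjunctive form (either cause suffices) and a conjunctive form (both causes required).

```python
def likelihood(data, form, w=0.9):
    """P(data | functional form). data holds (c1, c2, effect) binary trials.

    'OR': either cause suffices; 'AND': both are required. w is a
    hypothetical success rate; probabilities are softened to avoid
    zero-likelihood trials under deterministic forms.
    """
    p_total = 1.0
    for c1, c2, e in data:
        active = (c1 or c2) if form == "OR" else (c1 and c2)
        p_e = w if active else 0.0
        p_e = min(max(p_e, 0.01), 0.99)   # soften determinism slightly
        p_total *= p_e if e else 1.0 - p_e
    return p_total

# Hypothetical trials where a single cause produces the effect -> disjunctive.
data = [(1, 0, 1), (0, 1, 1), (1, 1, 1), (0, 0, 0)]
l_or, l_and = likelihood(data, "OR"), likelihood(data, "AND")
post_or = l_or / (l_or + l_and)   # posterior on 'OR', equal priors over forms
```

The hierarchical step described in the abstract goes one level further: it learns a distribution over such forms across contexts, so that new causal systems in a familiar domain start with the right functional-form expectations.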
Seeking Confirmation Is Rational for Deterministic Hypotheses
Abstract

Cited by 3 (1 self)
The tendency to test outcomes that are predicted by our current theory (the confirmation bias) is one of the best-known biases of human decision making. We prove that the confirmation bias is an optimal strategy for testing hypotheses when those hypotheses are deterministic, each making a single prediction about the next event in a sequence. Our proof applies to two normative standards commonly used for evaluating hypothesis testing: maximizing expected information gain and maximizing the probability of falsifying the current hypothesis. This analysis rests on two assumptions: (a) that people predict the next event in a sequence in a way that is consistent with Bayesian inference; and (b) that when testing hypotheses, people test the hypothesis to which they assign highest posterior probability. We present four behavioral experiments that support these assumptions, showing that a simple Bayesian model can capture people's predictions about numerical sequences (Experiments 1 and 2), and that we can alter the hypotheses that people choose to test by manipulating the prior probability of those hypotheses (Experiments 3 and 4).
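The first normative standard, expected information gain, can be computed directly for deterministic hypotheses. The sketch below is our own illustration with hypothetical numbers: three deterministic rules each predict a single next value, and testing the favored rule's prediction yields the larger expected entropy reduction.

```python
import math

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_info_gain(probs, predictions, test_value):
    """Expected entropy reduction from asking whether the next item equals
    test_value, for deterministic hypotheses that each predict one item."""
    h0 = entropy(probs)
    p_yes = sum(p for p, v in zip(probs, predictions) if v == test_value)
    exp_post = 0.0
    for match in (True, False):
        p_out = p_yes if match else 1.0 - p_yes
        if p_out == 0.0:
            continue
        # Renormalized posterior over the hypotheses consistent with the outcome.
        post = [p / p_out for p, v in zip(probs, predictions)
                if (v == test_value) == match]
        exp_post += p_out * entropy(post)
    return h0 - exp_post

# Hypothetical rules predicting next values 8, 16, 32 with posteriors 0.6, 0.2, 0.2.
probs, preds = [0.6, 0.2, 0.2], [8, 16, 32]
gain_confirm = expected_info_gain(probs, preds, 8)    # test the favored prediction
gain_rival = expected_info_gain(probs, preds, 16)     # test a rival's prediction
```

Here the confirmatory test is the more informative one, matching the paper's claim that positive testing is optimal in the deterministic case.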
A Rational Analysis of Confirmation with Deterministic Hypotheses
Abstract

Cited by 2 (0 self)
Whether scientists test their hypotheses as they ought to has interested both cognitive psychologists and philosophers of science. Classic analyses of hypothesis testing assume that people should pick the test with the largest probability of falsifying their current hypothesis, while experiments have shown that people tend to select tests consistent with that hypothesis. Using two different normative standards, we prove that seeking evidence predicted by your current hypothesis is optimal when the hypotheses in question are deterministic and other reasonable assumptions hold. We test this account with two experiments using a sequential prediction task, in which people guess the next number in a sequence. Experiment 1 shows that people’s predictions can be captured by a simple Bayesian model. Experiment 2 manipulates people’s beliefs about the probabilities of different hypotheses, and shows that they confirm whichever hypothesis they are led to believe is most likely.
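The classic falsificationist standard mentioned above can itself favor confirmatory tests when hypotheses are deterministic. A small sketch with hypothetical numbers (our own illustration): each hypothesis predicts a single next value, and the test that maximizes the probability of falsifying the current favorite is the favorite's own prediction.

```python
# Hypothetical deterministic hypotheses about the next value in a sequence,
# with their posterior probabilities; the current favorite predicts 8.
probs = {8: 0.6, 16: 0.2, 32: 0.2}

def p_falsify_current(test_value, current_pred=8):
    """Probability a yes/no test of test_value falsifies the current hypothesis.

    Testing the favorite's own prediction falsifies it on a "no" answer;
    testing a rival's prediction falsifies the favorite on a "yes" answer.
    """
    if test_value == current_pred:
        return 1.0 - probs[current_pred]
    return probs[test_value]

best_test = max(probs, key=p_falsify_current)
```

Since the rivals' probabilities sum to exactly the favorite's chance of being wrong, no single rival prediction can exceed the falsification probability of the confirmatory test, which is the intuition behind the proof.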
Bayesian models of judgments of causal strength: A comparison
In D. S. McNamara & G. Trafton (Eds.), Proceedings of the Twenty-Ninth Annual Conference of the Cognitive Science Society, 2007
Abstract

Cited by 2 (1 self)
We formulate four alternative Bayesian models of causal strength judgments, and compare their predictions to two sets of human data. The models were derived by factorially varying the causal generating function for integrating multiple causes (based on either the power PC theory or the ΔP rule) and priors on strengths (favoring necessary and sufficient (NS) causes, or uniform). The models based on the causal generating function derived from the power PC theory provided much better fits than those based on the function derived from the ΔP rule. The models that included NS priors were able to account for subtle asymmetries between strength judgments for generative and preventive causes. These results complement previous model comparisons for judgments of causal structure (Lu et al., 2006).
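The two causal generating functions being compared differ in how a background cause and a candidate cause combine. A minimal sketch (our own illustration) for a generative cause with background strength w0 and causal strength w1:

```python
def p_effect_powerpc(w0, w1, cause_present):
    """Noisy-OR generating function (power PC theory): independent influences."""
    return w0 + w1 - w0 * w1 if cause_present else w0

def p_effect_deltap(w0, w1, cause_present):
    """Linear generating function (ΔP-based): additive, clipped to [0, 1]."""
    return min(w0 + w1, 1.0) if cause_present else w0

# With w0 = 0.5 and w1 = 0.8 the two functions diverge sharply:
a = p_effect_powerpc(0.5, 0.8, True)   # noisy-OR stays below 1
b = p_effect_deltap(0.5, 0.8, True)    # linear form saturates at 1
```

The linear form saturates whenever w0 + w1 exceeds 1, so it assigns probability 1 to the effect and cannot explain occasional failures; this is one reason the noisy-OR models fit the human data better.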
Heuristics in Covariation-based Induction of Causal Models: Sufficiency and Necessity Priors
Abstract

Cited by 1 (0 self)
Our main goal in the present set of studies was to revisit the question of whether people are capable of inducing causal models from covariation data alone, without further cues such as temporal order. In the literature there has been a debate between bottom-up and top-down theories of causal learning. Whereas top-down theorists claim that covariation information plays no role, or only a secondary role, in structure induction, bottom-up theories, such as causal Bayes net theory, assert that people are capable of inducing structure from conditional dependence and independence information alone. Our three experiments suggest that both positions are wrong. In simple three-variable domains people are indeed often capable of reliably picking the right model. However, this can be achieved by simple heuristics that do not require complex statistics.
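The kind of covariation signature that makes structure induction possible at all can be illustrated with a small simulation (our own sketch, with hypothetical parameters): among three-variable structures, the collider X → Y ← Z is identifiable from observational data because X and Z are marginally independent but become dependent given Y ("explaining away").

```python
import random

random.seed(0)  # deterministic illustration

# Sample from a collider X -> Y <- Z with a noisy-OR link to Y.
n = 20000
data = []
for _ in range(n):
    x = random.random() < 0.5                         # exogenous cause X
    z = random.random() < 0.5                         # exogenous cause Z
    y = random.random() < (0.9 if (x or z) else 0.1)  # collider Y
    data.append((x, z, y))

# Marginal check: P(X | Z) should be close to P(X) -> independent.
p_x = sum(1 for x, z, y in data if x) / n
n_z = sum(1 for x, z, y in data if z)
p_x_given_z = sum(1 for x, z, y in data if x and z) / n_z

# Conditional check: given Y, learning Z lowers the probability of X.
rows_y = [(x, z) for x, z, y in data if y]
p_x_given_y = sum(1 for x, z in rows_y if x) / len(rows_y)
n_zy = sum(1 for x, z in rows_y if z)
p_x_given_zy = sum(1 for x, z in rows_y if x and z) / n_zy
```

A full causal Bayes net learner exploits exactly these dependence patterns; the heuristics proposed above reach similar conclusions in simple domains without computing any such statistics.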