Results 1–10 of 11
Theory-based causal induction
, 2003
Abstract

Cited by 33 (14 self)
Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge: identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge (the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships) and show how they provide the constraints that people need to induce useful causal models from sparse data.
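The abstract above frames causal induction as domain-general Bayesian inference over candidate causal structures, constrained by prior knowledge such as a noisy-OR functional form. A minimal sketch of that idea, comparing a graph in which a candidate cause influences the effect against a background-only graph; the counts, grid, and uniform priors here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def log_marginal(counts, graph1, grid=np.linspace(0.01, 0.99, 99)):
    """Grid-approximate log P(data | graph) with uniform priors on strengths.

    counts = (effects with cause, trials with cause,
              effects without cause, trials without cause).
    """
    n_e_c, n_c, n_e_nc, n_nc = counts
    if graph1:
        # Graph 1: noisy-OR of background strength w0 and causal strength w1.
        w0, w1 = np.meshgrid(grid, grid)
        p_c = w0 + w1 - w0 * w1          # P(effect | cause present)
    else:
        # Graph 0: the candidate cause has no influence.
        w0 = grid
        p_c = w0
    p_nc = w0                             # P(effect | cause absent) = background
    loglik = (n_e_c * np.log(p_c) + (n_c - n_e_c) * np.log1p(-p_c)
              + n_e_nc * np.log(p_nc) + (n_nc - n_e_nc) * np.log1p(-p_nc))
    # Averaging over the grid approximates marginalizing out the strengths.
    return np.log(np.mean(np.exp(loglik)))

# Hypothetical data: effect occurs 7/8 times with the cause, 1/8 without.
counts = (7, 8, 1, 8)
support = log_marginal(counts, graph1=True) - log_marginal(counts, graph1=False)
```

A positive `support` value favors the causal graph; strong covariation between the candidate cause and the effect yields exactly that.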
Bayesian models of cognition
Abstract

Cited by 22 (1 self)
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty.
Two proposals for causal grammar
 In A. Gopnik & L. Schulz (Eds.), Causal learning: Psychology, philosophy, and computation, 2007
Abstract

Cited by 14 (8 self)
In the previous chapter (Tenenbaum, Griffiths, & Niyogi, this volume), we introduced a framework for thinking about the structure, function, and acquisition of intuitive theories inspired by an analogy to the research program of generative grammar in linguistics. We argued that a principal function for intuitive theories, just as for grammars for natural
Can Being Scared Cause Tummy Aches? Naive Theories, Ambiguous Evidence, and Preschoolers’ Causal Inferences
Abstract

Cited by 6 (0 self)
Causal learning requires integrating constraints provided by domain-specific theories with domain-general statistical learning. In order to investigate the interaction between these factors, the authors presented preschoolers with stories pitting their existing theories against statistical evidence. Each child heard 2 stories in which 2 candidate causes co-occurred with an effect. Evidence was presented in the
Learning the form of causal relationships using hierarchical Bayesian models
 Cognitive Science, 2010
Abstract

Cited by 4 (1 self)
People learn quickly when reasoning about causal relationships, making inferences from limited data and avoiding spurious inferences. Efficient learning depends on abstract knowledge, which is often domain or context specific, and much of it must be learned. While such knowledge effects are well documented, little is known about exactly how we acquire knowledge that constrains learning. This work focuses on knowledge of the functional form of causal relationships; there are many kinds of relationships that can apply between causes and their effects, and knowledge of the form such a relationship takes is important in order to quickly identify the real causes of an observed effect. We developed a hierarchical Bayesian model of the acquisition of knowledge of the functional form of causal relationships and tested it in five experimental studies, considering disjunctive and conjunctive relationships, failure rates, and cross-domain effects. The Bayesian model accurately predicted human judgments and outperformed several alternative models.
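The disjunctive and conjunctive relationships this abstract refers to are commonly formalized as noisy-OR and noisy-AND functional forms. A minimal sketch of the two forms (the weights and function names are illustrative assumptions, not the paper's notation):

```python
def noisy_or(ws, present):
    """Disjunctive form: each active cause can independently produce
    the effect, so the effect fails only if every active cause fails."""
    p_fail = 1.0
    for w, c in zip(ws, present):
        if c:
            p_fail *= (1.0 - w)
    return 1.0 - p_fail

def noisy_and(ws, present):
    """Conjunctive form: the effect requires every cause to be present
    and to succeed, so any absent cause drives the probability to zero."""
    p = 1.0
    for w, c in zip(ws, present):
        p *= w if c else 0.0
    return p

p_or = noisy_or([0.8, 0.8], [1, 1])   # either cause alone suffices
p_and = noisy_and([0.8, 0.8], [1, 1]) # both causes must succeed together
```

With both causes present and strength 0.8, the disjunctive form gives 0.96 while the conjunctive form gives 0.64; with one cause absent they give 0.8 and 0.0, which is why data can discriminate the two forms.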
Seeking Confirmation Is Rational for Deterministic Hypotheses
Abstract

Cited by 3 (1 self)
The tendency to test outcomes that are predicted by our current theory (the confirmation bias) is one of the best-known biases of human decision making. We prove that the confirmation bias is an optimal strategy for testing hypotheses when those hypotheses are deterministic, each making a single prediction about the next event in a sequence. Our proof applies for two normative standards commonly used for evaluating hypothesis testing: maximizing expected information gain and maximizing the probability of falsifying the current hypothesis. This analysis rests on two assumptions: (a) that people predict the next event in a sequence in a way that is consistent with Bayesian inference; and (b) when testing hypotheses, people test the hypothesis to which they assign highest posterior probability. We present four behavioral experiments that support these assumptions, showing that a simple Bayesian model can capture people’s predictions about numerical sequences (Experiments 1 and 2), and that we can alter the hypotheses that people choose to test by manipulating the prior probability of those hypotheses (Experiments 3 and 4).
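Assumption (a), Bayesian prediction over deterministic sequence hypotheses, can be sketched in a few lines. The tiny hypothesis space below is a made-up illustration, not the stimuli from the experiments:

```python
from fractions import Fraction

# Hypothetical deterministic hypotheses about a number sequence; each maps
# a position index to the element it predicts there.
hypotheses = {
    "evens": lambda n: 2 * (n + 1),           # 2, 4, 6, ...
    "powers_of_two": lambda n: 2 ** (n + 1),  # 2, 4, 8, ...
    "constant_two": lambda n: 2,              # 2, 2, 2, ...
}

def posterior(observed):
    """Uniform prior; a deterministic hypothesis has likelihood 1 if it
    reproduces the observed sequence exactly, and 0 otherwise."""
    consistent = [name for name, f in hypotheses.items()
                  if all(f(i) == x for i, x in enumerate(observed))]
    return {name: Fraction(1, len(consistent)) for name in consistent}

print(posterior([2, 4]))     # "evens" and "powers_of_two" both remain
print(posterior([2, 4, 6]))  # only "evens" survives
```

Once a single hypothesis dominates the posterior, testing the outcome it predicts (the confirmatory test) is what the paper's proof concerns.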
Address for correspondence:
Abstract
In multiple‐cue learning people acquire information about cue‐outcome relations and combine these into predictions or judgments. Previous studies claim that people can achieve high levels of performance without explicit knowledge of the task structure or insight into their own judgment policies. It has also been argued that people use a variety of suboptimal strategies to solve such tasks. In two experiments we re‐examined these conclusions by introducing novel measures of task knowledge and self‐insight, and using ‘rolling regression’ methods to analyze individual learning. Participants successfully learned a four‐cue probabilistic environment and showed accurate knowledge of both the task structure and their own judgment processes. Learning analyses suggested that the apparent use of suboptimal strategies emerges from the incremental tracking of statistical contingencies in the environment. These findings have wide repercussions for the study of multiple‐cue learning in both normal and patient populations.
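‘Rolling regression’ here means refitting a regression over a moving window of trials so that cue weights can be tracked as learning unfolds. A minimal sketch on simulated data (the environment weights, window size, and noise level are illustrative assumptions, not the experiments' parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_cues = 120, 4
true_w = np.array([0.6, 0.3, 0.1, 0.0])   # hypothetical environment weights
X = rng.normal(size=(n_trials, n_cues))   # cue values on each trial
y = X @ true_w + rng.normal(scale=0.3, size=n_trials)  # noisy outcomes

def rolling_weights(X, y, window=40):
    """Refit ordinary least squares over a sliding window of trials,
    yielding one estimated cue-weight vector per window position."""
    out = []
    for t in range(window, len(y) + 1):
        Xw, yw = X[t - window:t], y[t - window:t]
        w, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        out.append(w)
    return np.array(out)

W = rolling_weights(X, y)  # shape: (n_trials - window + 1, n_cues)
```

Plotting the rows of `W` against trial number shows how a (simulated) judge's policy converges toward the environment's weights, which is the kind of individual learning curve the paper analyzes.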
Elements of a rational framework for continuous-time causal induction
 Michael Pacer
Abstract
Temporal information plays a major role in human causal inference. We present a rational framework for causal induction from events that take place in continuous time. We define a set of desiderata for such a framework and outline a strategy for satisfying these desiderata using continuous-time stochastic processes. We develop two specific models within this framework, illustrating how it can be used to capture both generative and preventative causal relationships as well as delays between cause and effect. We evaluate one model through a new behavioral experiment, and the other through a comparison to existing data.
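One common way to realize such continuous-time stochastic processes is a rate-based point process, where a generative cause raises the rate of effect events and a preventative cause lowers it. A minimal sketch assuming a Poisson process with a piecewise-constant rate (this parameterization is an illustrative assumption, not the paper's two models):

```python
import math

def loglik(event_times, t_end, base_rate, cause_time, cause_effect):
    """Log-likelihood of effect events on [0, t_end] under a rate that is
    base_rate before cause_time and base_rate * cause_effect afterwards
    (cause_effect > 1: generative; < 1: preventative; = 1: no influence)."""
    rate = lambda t: base_rate * (cause_effect if t >= cause_time else 1.0)
    # Poisson log-likelihood: sum of log-rates at events minus the
    # integral of the rate over the observation window.
    integral = (base_rate * cause_time
                + base_rate * cause_effect * (t_end - cause_time))
    return sum(math.log(rate(t)) for t in event_times) - integral

# Events clustered after the cause at t = 5 favor a generative relationship.
times = [5.1, 5.5, 6.0, 6.8]
gen = loglik(times, 10.0, 0.2, 5.0, 3.0)   # generative hypothesis
null = loglik(times, 10.0, 0.2, 5.0, 1.0)  # no-influence hypothesis
```

Comparing such log-likelihoods across hypotheses is the basic inference step; delays between cause and effect would enter through a richer rate function.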