Results 1 - 10 of 17
Theory-based causal induction
, 2003
Cited by 56 (19 self)
Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge—identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge—the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships—and show how they provide the constraints that people need to induce useful causal models from sparse data.
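The framework's core idea, causal induction as Bayesian inference constrained by a theory's assumed functional form, can be illustrated with a toy sketch (not the paper's implementation, and with hypothetical contingency counts): comparing a graph with no cause-effect link against one with a noisy-OR link.

```python
import math

# Toy sketch of theory-based causal induction (not the paper's implementation):
# compare a graph with no C -> E link against one with a noisy-OR link,
# given hypothetical contingency counts.
data = dict(e_cause=7, n_cause=8,      # effect occurred 7 of 8 times with the cause
            e_nocause=2, n_nocause=8)  # and 2 of 8 times without it

def binom_loglik(k, n, p):
    p = min(max(p, 1e-9), 1 - 1e-9)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def evidence_no_link(d, grid=50):
    # Graph 0: a single background rate w0 explains both conditions.
    total = sum(math.exp(binom_loglik(d["e_cause"], d["n_cause"], i / grid) +
                         binom_loglik(d["e_nocause"], d["n_nocause"], i / grid))
                for i in range(1, grid))
    return total / (grid - 1)   # crude grid approximation to the integral

def evidence_link(d, grid=50):
    # Graph 1: noisy-OR of background w0 and causal strength w1,
    # the "functional form" supplied by the abstract theory.
    total = 0.0
    for i in range(1, grid):
        for j in range(1, grid):
            w0, w1 = i / grid, j / grid
            p_with = w0 + w1 - w0 * w1  # noisy-OR combination
            total += math.exp(binom_loglik(d["e_cause"], d["n_cause"], p_with) +
                              binom_loglik(d["e_nocause"], d["n_nocause"], w0))
    return total / ((grid - 1) ** 2)

support = math.log(evidence_link(data)) - math.log(evidence_no_link(data))
print(f"log evidence ratio for the causal link: {support:.2f}")
```

With these counts the link hypothesis fits far better, so the log evidence ratio comes out positive; sparser or more ambiguous data would shrink it toward zero.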
Learning causal schemata
- In Proceedings of the Twenty-Ninth Annual Conference of the Cognitive Science Society
, 2007
Cited by 19 (12 self)
Causal inferences about sparsely observed objects are often supported by causal schemata, or systems of abstract causal knowledge. We present a hierarchical Bayesian framework that learns simple causal schemata given only raw data as input. Given a set of objects and observations of causal events involving some of these objects, our framework simultaneously discovers the causal type of each object, the causal powers of these types, the characteristic features of these types, and the characteristic interactions between these types. Previous behavioral studies confirm that humans are able to discover causal schemata, and we show that our framework accounts for data collected by Lien and Cheng and by Shanks and Darby.
Learning to learn causal models
Cited by 15 (1 self)
(Show Context)
Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.
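How a learned schema accelerates inference about a new object can be sketched with a simplified stand-in for the authors' hierarchical model (all counts hypothetical): observations of familiar category members are pooled into a Beta prior, so a single trial with a new object already yields a confident estimate of its causal power.

```python
# Simplified stand-in for the authors' hierarchical model (hypothetical counts):
# observations of familiar objects of one category are pooled into a Beta
# "schema", which lets a single trial with a new object yield a confident
# estimate of its causal power.
old_objects = [(9, 10), (8, 10), (10, 10)]    # (activations, trials) per object

a0 = 1 + sum(s for s, n in old_objects)       # Beta pseudo-counts pooled from
b0 = 1 + sum(n - s for s, n in old_objects)   # the category, plus a flat Beta(1,1)

s_new, n_new = 1, 1   # the new object activated the machine on its one trial
with_schema = (a0 + s_new) / (a0 + b0 + n_new)   # posterior mean under the schema
flat_prior = (1 + s_new) / (2 + n_new)           # posterior mean with no schema
print(f"estimate with schema: {with_schema:.2f}, without: {flat_prior:.2f}")
```

After one positive trial, the schema-based estimate is already close to the category's high activation rate, whereas the flat-prior estimate remains near chance; this is the "accelerated learning" pattern the abstract describes.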
Structured correlation from the causal background
- In V. Sloutsky, B. Love & K. McRae (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 303–308). Mahwah, NJ: Erlbaum
, 2008
Cited by 7 (3 self)
Previous research has cast doubt on whether the Markov condition is a default assumption of human causal reasoning, as causal Bayes net approaches suggest. Human subjects often seem to violate the Markov condition in common-cause reasoning tasks. While this might be treated as evidence that humans are inefficient causal reasoners, we propose that the underlying human intuitions reflect abstract causal knowledge that is sensitive to a great deal of contextual information: knowledge of the "causal background". In this paper, we introduce a hierarchical Bayesian model of causal background knowledge which explains Markov violations and makes additional, more fine-grained predictions on the basis of causally relevant category membership. We confirm these predictions using an experimental paradigm which extends that used in previous studies of "Markov violation."
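The Markov condition at issue can be made concrete with a small sketch (hypothetical probabilities): in a common-cause network with C causing both E1 and E2, the effects are independent once C is known (screened off), even though they are correlated marginally.

```python
# Hypothetical common-cause network C -> E1 and C -> E2: the Markov condition
# implies the effects are independent given C, though they covary marginally.
pC = 0.5
pE_given = {True: 0.9, False: 0.1}   # P(effect | C), same CPT for E1 and E2

def joint(c, e1, e2):
    pc = pC if c else 1 - pC
    p1 = pE_given[c] if e1 else 1 - pE_given[c]
    p2 = pE_given[c] if e2 else 1 - pE_given[c]
    return pc * p1 * p2

def prob(pred):
    return sum(joint(c, e1, e2)
               for c in (False, True)
               for e1 in (False, True)
               for e2 in (False, True)
               if pred(c, e1, e2))

# Screening off: P(E2 | C, E1) equals P(E2 | C).
p_e2_c = prob(lambda c, e1, e2: c and e2) / prob(lambda c, e1, e2: c)
p_e2_c_e1 = (prob(lambda c, e1, e2: c and e1 and e2) /
             prob(lambda c, e1, e2: c and e1))
# Marginal dependence: P(E2 | E1) exceeds P(E2).
p_e2 = prob(lambda c, e1, e2: e2)
p_e2_e1 = prob(lambda c, e1, e2: e1 and e2) / prob(lambda c, e1, e2: e1)
print(p_e2_c, p_e2_c_e1)   # equal: E1 adds nothing once C is known
print(p_e2, p_e2_e1)       # unequal: E1 is informative when C is unobserved
```

Human "Markov violations" amount to treating P(E2 | C, E1) as different from P(E2 | C), which the abstract attributes to background knowledge beyond the three explicit variables.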
A transitivity heuristic of probabilistic causal reasoning
- In Proceedings of the 31st Annual Conference of the Cognitive Science Society
, 2009
Cited by 2 (2 self)
In deterministic causal chains, the relations "A causes B" and "B causes C" imply that "A causes C". However, this is not necessarily the case for probabilistic causal relationships: A may probabilistically cause B, and B may probabilistically cause C, yet A may fail to probabilistically cause C, and may even make ¬C more likely. The normal transitive inference is only valid when the Markov condition holds, a key feature of the Bayes net formalism. However, it has been objected that the Markov assumption need not hold in the real world. In our studies we examined how people reason about causal chains that do not obey the Markov condition. Three experiments involving causal reasoning within causal chains provide evidence that transitive reasoning seems to hold psychologically, even when it is objectively not valid. Whereas related research has shown that learners assume the Markov condition in causal chains in the absence of contradictory data, we here demonstrate the use of this assumption for situations in which participants were directly confronted with evidence contradicting the Markov condition. The results suggest a causal transitivity heuristic resulting from chaining individual causal links into mental causal models that obey the Markov condition.
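The failure of transitivity under a Markov violation can be checked numerically. The probabilities below are hypothetical, chosen so that each individual link is generative while the chain as a whole is not.

```python
# Hypothetical chain A -> B -> C violating the Markov condition: P(C | B, A)
# depends on A. Each link is generative, yet A makes C *less* likely, so the
# transitive inference from A to C fails.
pA = 0.5
pB = {True: 0.8, False: 0.2}                    # P(B | A)
pC = {(True, True): 0.3, (True, False): 0.9,    # P(C | B, A): the dependence
      (False, True): 0.0, (False, False): 0.2}  # on A breaks the Markov condition

def joint(a, b, c):
    pa = pA if a else 1 - pA
    pb = pB[a] if b else 1 - pB[a]
    pc = pC[(b, a)] if c else 1 - pC[(b, a)]
    return pa * pb * pc

def cond(target, given):
    states = [(a, b, c) for a in (False, True)
              for b in (False, True) for c in (False, True)]
    num = sum(joint(*s) for s in states if target(*s) and given(*s))
    den = sum(joint(*s) for s in states if given(*s))
    return num / den

b_given_a  = cond(lambda a, b, c: b, lambda a, b, c: a)
b_given_na = cond(lambda a, b, c: b, lambda a, b, c: not a)
c_given_b  = cond(lambda a, b, c: c, lambda a, b, c: b)
c_given_nb = cond(lambda a, b, c: c, lambda a, b, c: not b)
c_given_a  = cond(lambda a, b, c: c, lambda a, b, c: a)
c_given_na = cond(lambda a, b, c: c, lambda a, b, c: not a)
print(b_given_a, b_given_na)   # A raises B (0.80 vs 0.20)
print(c_given_b, c_given_nb)   # B raises C (0.42 vs 0.16)
print(c_given_a, c_given_na)   # yet A lowers C (0.24 vs 0.34)
```

A reasoner applying the transitivity heuristic would chain the two positive links and conclude that A raises the probability of C; exact enumeration shows the opposite, which is the kind of evidence participants were confronted with in these studies.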
Category Transfer in Sequential Causal Learning: The Unbroken Mechanism Hypothesis
Cited by 2 (2 self)
The goal of the present set of studies is to explore the boundary conditions of category transfer in causal learning. Previous research has shown that people are capable of inducing categories based on causal learning input, and they often transfer these categories to new causal learning tasks. However, occasionally learners abandon the learned categories and induce new ones. Whereas previously it has been argued that transfer is only observed with essentialist categories in which the hidden properties are causally relevant for the target effect in the transfer relation, we here propose an alternative explanation, the unbroken mechanism hypothesis. This hypothesis claims that categories are transferred from a previously learned causal relation to a new causal relation when learners assume a causal mechanism linking the two relations that is continuous and unbroken. The findings of two causal learning experiments support the unbroken mechanism hypothesis.
Novelty and Inductive Generalization in Human Reinforcement Learning
, 2014
Cited by 1 (0 self)
In reinforcement learning (RL), a decision maker searching for the most rewarding option is often faced with the question: What is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: How can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and we describe an equivalence between the Bayesian model and temporal difference learning algorithms that have been proposed as models of RL in humans and animals. According to our view, the search for the best option is guided by abstract knowledge about the relationships between different options in an environment, resulting in greater search efficiency compared to traditional RL algorithms previously applied to human cognition. In two behavioral experiments, we test several predictions of our model, providing evidence that humans learn and exploit structured inductive knowledge to make predictions about novel options. In light of this model, we suggest a new interpretation of dopaminergic responses to novelty.
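The central idea, generalizing from familiar options to a never-tried one, can be sketched with a simple empirical-Bayes stand-in for the hierarchical model (rewards are hypothetical): the novel option's initial value estimate is the group-level mean learned from the environment, not an arbitrary initialization.

```python
import statistics

# Hypothetical rewards from options already tried in one environment.
observed = {
    "option_1": [4.8, 5.2, 5.0],
    "option_2": [5.5, 5.1, 5.3],
    "option_3": [4.6, 5.0, 4.9],
}
option_means = [statistics.mean(r) for r in observed.values()]
group_mean = statistics.mean(option_means)   # abstract, environment-level knowledge

# A never-tried option has no data of its own, so its initial value estimate
# falls back on the group-level mean rather than the zero (or arbitrary)
# initialization of a standard TD learner.
novel_estimate = group_mean
print(f"prior value estimate for a novel option: {novel_estimate:.2f}")
```

This is the mechanism behind the claimed search efficiency: a novel option in a rewarding environment starts with a high value estimate and so is worth exploring, without any trial-and-error on that option itself.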