Results 1–10 of 16
Theory-based causal induction
, 2003
Abstract

Cited by 37 (15 self)
Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge—identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge—the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships—and show how they provide the constraints that people need to induce useful causal models from sparse data.
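The framework this abstract describes, domain-general statistical inference guided by a domain-specific prior over causal structures, can be illustrated with a toy Bayesian model comparison. This is a minimal sketch with made-up contingency counts and stand-in parameter values, not the paper's actual model:

```python
import math

# Toy Bayesian comparison of two causal structures, in the spirit of the
# framework above: domain-general inference, with a "theory" supplying the
# prior plausibility and functional form. All numbers (contingency counts
# and rate parameters) are invented for illustration.

def log_binom_lik(k, n, p):
    """Log-likelihood of k successes in n trials under rate p
    (binomial coefficients cancel in the ratio, so they are omitted)."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Illustrative data: effect on 8/10 trials with the candidate cause present,
# 1/10 trials with it absent.
# H1: the cause raises the effect's probability (assumed rates 0.8 vs 0.1).
log_lik_h1 = log_binom_lik(8, 10, 0.8) + log_binom_lik(1, 10, 0.1)
# H0: no causal link (a single assumed base rate of 0.45 in both conditions).
log_lik_h0 = log_binom_lik(8, 10, 0.45) + log_binom_lik(1, 10, 0.45)

# With equal structure priors, the posterior log-odds equal the
# log-likelihood ratio; positive values favor the causal structure.
log_odds = log_lik_h1 - log_lik_h0
```

Here the data strongly favor H1, showing how even sparse data can identify structure once the theory constrains the candidate hypotheses and their functional forms.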
Bayesian models of cognition
Abstract

Cited by 26 (1 self)
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty
Bayesian approaches to associative learning: From passive to active learning
 Learning & Behavior
, 2008
Abstract

Cited by 20 (7 self)
Traditional associationist models represent an organism’s knowledge state by a single strength of association on each associative link. Bayesian models instead represent knowledge by a distribution of graded degrees of belief over a range of candidate hypotheses. Many traditional associationist models assume that the learner is passive, adjusting strengths of association only in reaction to stimuli delivered by the environment. Bayesian models, on the other hand, can describe how the learner should actively probe the environment to learn optimally. The first part of this article reviews two Bayesian accounts of backward blocking, a phenomenon that is challenging for many traditional theories. The broad Bayesian framework, in which these models reside, is also selectively reviewed. The second part focuses on two formalizations of optimal active learning: maximizing either the expected information gain or the probability gain. New analyses of optimal active learning by a Kalman filter and by a noisy-logic gate show that these two Bayesian models make different predictions for some environments. The Kalman filter predictions are disconfirmed in at least one case. Bayesian formalizations of learning are a revolutionary advance over traditional approaches. Bayesian models assume that the learner maintains multiple candidate hypotheses with differing degrees of belief, unlike traditional
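The contrast drawn above, a single association strength per link versus a graded posterior over candidate hypotheses, can be sketched in a few lines. This is a hypothetical illustration, not any of the reviewed models; the hypotheses and likelihood values are invented:

```python
import numpy as np

# A Bayesian learner maintains a posterior over candidate hypotheses,
# unlike an associative learner that tracks one strength per link.

def bayes_update(prior, likelihoods):
    """Posterior over hypotheses after one trial, given the per-hypothesis
    likelihood of what was observed."""
    unnorm = prior * likelihoods
    return unnorm / unnorm.sum()

# Three hypotheses, e.g. "cue A causes the outcome", "cue B does", "neither",
# starting from a uniform prior.
prior = np.full(3, 1 / 3)
# One trial whose outcome is twice as likely if hypothesis 0 is true.
posterior = bayes_update(prior, np.array([0.8, 0.4, 0.4]))
```

Repeating the update trial by trial concentrates belief on the best-supported hypothesis, which is what lets such models retrospectively revalue a cue, as in backward blocking, where a single-strength model cannot.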
Learning a Theory of Causality
Abstract

Cited by 7 (5 self)
The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework, and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality, and a range of alternatives, in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned—an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence, and find that a collection of simple “perceptual input analyzers” can help to bootstrap abstract knowledge. Together these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality, but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion. Preprint June 2010—to appear in Psych. Review.
From universal laws of cognition to specific cognitive models
, 2008
Abstract

Cited by 6 (0 self)
The remarkable successes of the physical sciences have been built on highly general quantitative laws, which serve as the basis for understanding an enormous variety of specific physical systems. How far is it possible to construct universal principles in the cognitive sciences, in terms of which specific aspects of perception, memory, or decision making might be modelled? Following Shepard (e.g., 1987), it is argued that some universal principles may be attainable in cognitive science. Here we propose two examples: the simplicity principle, which states that the cognitive system prefers patterns that provide simpler explanations of available data; and the scale-invariance principle, which states that many cognitive phenomena are independent of the scale of relevant underlying physical variables, such as time, space, luminance, or sound pressure. We illustrate how principles may be combined to explain specific cognitive processes by using these principles to derive SIMPLE, a formal model of memory for serial order (Brown, Neath & Chater, in press), and briefly mention some extensions to models of identification and categorization. We also consider the scope and limitations of universal laws in cognitive science.
Towards A Computational Model of the Self-Attribution of Agency
 In: Proc. of the 24th Intern. Conf. on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE’11. Lecture Notes in AI
, 2011
Abstract

Cited by 4 (0 self)
In this paper, a first step towards a computational model of the self-attribution of agency is presented, based on Wegner’s theory of apparent mental causation. A model to compute a feeling of doing based on first-order Bayesian network theory is introduced that incorporates the main contributing factors to the formation of such a feeling. The main contribution of this paper is the presentation of a formal and precise model that can be used to further test Wegner’s theory against quantitative experimental data.
Can Being Scared Cause Tummy Aches? Naive Theories, Ambiguous Evidence, and Preschoolers’ Causal Inferences
Abstract

Cited by 4 (0 self)
Causal learning requires integrating constraints provided by domain-specific theories with domain-general statistical learning. In order to investigate the interaction between these factors, the authors presented preschoolers with stories pitting their existing theories against statistical evidence. Each child heard 2 stories in which 2 candidate causes co-occurred with an effect. Evidence was presented in the
Compositionality in rational analysis: Grammar-based induction for concept learning
 In M. Oaksford & N. Chater (Eds.), The
, 2007
Abstract

Cited by 2 (2 self)
Rational analysis attempts to explain aspects of human cognition as an adaptive response to the environment (Marr, 1982; Anderson, 1990; Chater, Tenenbaum, & Yuille, 2006). The dominant approach to rational analysis today takes an ecologically reasonable specification of a problem facing an organism, given in statistical terms, then seeks an optimal solution, usually using Bayesian methods. This approach has proven very successful in cognitive science; it has predicted perceptual phenomena (Geisler & Kersten, 2002; Feldman, 2001), illuminated puzzling effects in reasoning (Chater & Oaksford, 1999; Griffiths & Tenenbaum, 2006), and, especially, explained how human learning can succeed despite sparse input and endemic uncertainty (Tenenbaum, 1999; Tenenbaum & Griffiths, 2001). However, there were earlier notions of the “rational” analysis of cognition that
Approximating Solution Structure
Abstract

Cited by 2 (2 self)
Approximations can aim at having close to optimal value or, alternatively, they can aim at structurally resembling an optimal solution. Whereas value-approximation has been extensively studied by complexity theorists over the last three decades, structural-approximation has not yet been defined, let alone studied. However, structural-approximation is theoretically no less interesting, and has important applications in cognitive science. Building on analogies with existing value-approximation algorithms and classes, we develop a general framework for analyzing structural (in)approximability. We identify dissociations between solution value and solution structure, and generate a list of open problems that may stimulate future research.
Training a Bayesian: Three-and-a-half-year-olds’ Reasoning about Ambiguous Evidence
Abstract

Cited by 1 (1 self)
Previous work has demonstrated the importance of both naïve theories and statistical evidence to children’s causal reasoning. In particular, four-year-olds can use statistical evidence to update their beliefs. However, the story is more complex for three-year-olds. Although three-and-a-half-year-olds perform as well as four-year-olds when statistical evidence is theory-neutral, several studies suggest that they do not learn from statistical evidence when a statistically likely cause is inconsistent with their prior beliefs (e.g., Schulz et al., 2007). There are at least two possible explanations for younger children’s failure to use statistical data to update their beliefs: one (the Information Processing account) suggests that younger children have a fragile ability to reason about statistical evidence; the other (a Prior Knowledge account) suggests that in some domains, younger children have stronger prior beliefs and thus require more evidence before belief revision is rational. To distinguish these accounts, we conducted a two-week training study with three-and-a-half-year-olds. Children participated in an Information Processing Training condition, a Prior Belief Training condition, or a Control condition. Relative to the Control condition, children in the Prior Belief Training condition, but not children in the Information Processing Training condition, showed an overall improvement in their ability to reason about theory-violating evidence. This suggests that at least some developmental differences in statistical reasoning tasks may be due to younger children’s stronger prior beliefs.