Results 1 – 10 of 11
Theory-based causal induction
2003
Cited by 33 (14 self)

Abstract
Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge—identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge—the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships—and show how they provide the constraints that people need to induce useful causal models from sparse data.
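The kind of computation this framework describes can be made concrete for the simplest case: deciding whether a single candidate cause influences an effect, with the functional form of the relationship (here a noisy-OR over a background cause) supplied by the abstract theory. The sketch below is illustrative, not the paper's implementation; the grid approximation and uniform priors over causal strengths are assumptions made for this example.

```python
import numpy as np

def marginal_likelihood(data, with_cause, grid=101):
    """Grid-approximate P(data | graph) for contingency data.

    data: list of (c, e) pairs -- cause present?, effect present?
    with_cause: Graph 1 (background + cause combined by noisy-OR) if True,
                else Graph 0 (background cause only).
    """
    w = np.linspace(0.005, 0.995, grid)
    if with_cause:
        w0, w1 = np.meshgrid(w, w)          # uniform prior over both strengths
    else:
        w0, w1 = w, np.zeros_like(w)        # the candidate cause has no influence
    like = np.ones_like(w0)
    for c, e in data:
        p = 1 - (1 - w0) * (1 - w1 * c)     # noisy-OR: P(e=1 | c, w0, w1)
        like = like * (p if e else 1 - p)
    return like.mean()                      # integrate out the strength parameters

def causal_support(data):
    """Log marginal-likelihood ratio favoring the cause -> effect link."""
    return np.log(marginal_likelihood(data, True) /
                  marginal_likelihood(data, False))

# Effect is frequent with the cause and rare without it:
covarying = [(1, 1)] * 8 + [(1, 0)] * 2 + [(0, 1)] * 1 + [(0, 0)] * 9
# Effect is unrelated to the cause:
independent = [(1, 1)] * 5 + [(1, 0)] * 5 + [(0, 1)] * 5 + [(0, 0)] * 5
```

Averaging the likelihood over the parameter grid builds in an automatic Occam penalty: the two-parameter graph wins only when the data genuinely covary, which is why the ratio, not the best fit, is compared.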
Bayesian models of cognition
Cited by 23 (1 self)

Abstract
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty
Bayesian approaches to associative learning: From passive to active learning
Learning & Behavior, 2008
Cited by 18 (7 self)

Abstract
Traditional associationist models represent an organism’s knowledge state by a single strength of association on each associative link. Bayesian models instead represent knowledge by a distribution of graded degrees of belief over a range of candidate hypotheses. Many traditional associationist models assume that the learner is passive, adjusting strengths of association only in reaction to stimuli delivered by the environment. Bayesian models, on the other hand, can describe how the learner should actively probe the environment to learn optimally. The first part of this article reviews two Bayesian accounts of backward blocking, a phenomenon that is challenging for many traditional theories. The broad Bayesian framework, in which these models reside, is also selectively reviewed. The second part focuses on two formalizations of optimal active learning: maximizing either the expected information gain or the probability gain. New analyses of optimal active learning by a Kalman filter and by a noisy-logic gate show that these two Bayesian models make different predictions for some environments. The Kalman filter predictions are disconfirmed in at least one case. Bayesian formalizations of learning are a revolutionary advance over traditional approaches. Bayesian models assume that the learner maintains multiple candidate hypotheses with differing degrees of belief, unlike traditional
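The first of the two active-learning objectives, expected information gain, can be stated compactly for the simplest case: two candidate hypotheses and a probe with a binary outcome. This is a generic sketch of that objective, not code from the article; the probe likelihoods below are invented for illustration.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a belief P(H1) = p over two hypotheses."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior, p_out_h1, p_out_h0):
    """Expected entropy reduction about H from one binary probe.

    prior:     current belief P(H1)
    p_out_h1:  P(outcome = 1 | H1) for this probe
    p_out_h0:  P(outcome = 1 | H0)
    """
    p_out = prior * p_out_h1 + (1 - prior) * p_out_h0        # predictive probability
    post1 = prior * p_out_h1 / p_out if p_out > 0 else prior # posterior after outcome 1
    post0 = prior * (1 - p_out_h1) / (1 - p_out) if p_out < 1 else prior
    expected_post_entropy = p_out * entropy(post1) + (1 - p_out) * entropy(post0)
    return entropy(prior) - expected_post_entropy

# A diagnostic probe beats a weakly diagnostic one, which beats a useless one:
gains = [expected_info_gain(0.5, a, b)
         for a, b in [(0.9, 0.1), (0.6, 0.4), (0.5, 0.5)]]
```

An optimal active learner evaluates this quantity for every available probe and chooses the maximizer; the probability-gain criterion mentioned in the abstract replaces entropy with the probability of a correct guess, and the two can rank probes differently.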
Can Being Scared Cause Tummy Aches? Naive Theories, Ambiguous Evidence, and Preschoolers’ Causal Inferences
Cited by 6 (0 self)

Abstract
Causal learning requires integrating constraints provided by domain-specific theories with domain-general statistical learning. In order to investigate the interaction between these factors, the authors presented preschoolers with stories pitting their existing theories against statistical evidence. Each child heard 2 stories in which 2 candidate causes co-occurred with an effect. Evidence was presented in the
Learning a Theory of Causality
Cited by 6 (5 self)

Abstract
The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework, and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality, and a range of alternatives, in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned—an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence, and find that a collection of simple “perceptual input analyzers” can help to bootstrap abstract knowledge. Together these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality, but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion. Preprint June 2010—to appear in Psych. Review.
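The "blessing of abstraction" can be illustrated with a toy hierarchical comparison: many sparsely observed systems jointly inform one abstract parameter, so the abstract level can be pinned down before any single system is. The two candidate "theories" and the data below are invented for this sketch and are far simpler than the paper's relational theories.

```python
import math

# Hypothetical domain: 10 causal systems, each observed only 4 times.
# Each observation is 1 if the effect followed its cause. Two abstract
# theories disagree about the functional form: under theory A causes are
# strong (P(effect) = 0.9); under theory B they are weak (P(effect) = 0.1).
systems = [[1, 1, 1, 0]] * 10            # 3 of 4 successes in every system

def log_odds_A_vs_B(observations):
    """Log posterior odds for theory A over theory B (equal priors)."""
    p_a, p_b = 0.9, 0.1
    ll_a = sum(math.log(p_a if o else 1 - p_a) for o in observations)
    ll_b = sum(math.log(p_b if o else 1 - p_b) for o in observations)
    return ll_a - ll_b

# Abstract inference pools evidence across all 40 observations ...
pooled = log_odds_A_vs_B([o for s in systems for o in s])
# ... while a learner of any one specific system sees only 4 data points.
single = log_odds_A_vs_B(systems[0])
```

Because every system's data bears on the shared abstract parameter, the pooled log odds grow ten times as fast as any single system's, which is the effect the abstract names: the general theory settles before the specific models do.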
Towards A Computational Model of the Self-Attribution of Agency
In: Proc. of the 24th Intern. Conf. on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE’11, Lecture Notes in AI, 2011
Cited by 5 (0 self)

Abstract
In this paper, a first step towards a computational model of the self-attribution of agency is presented, based on Wegner’s theory of apparent mental causation. A model to compute a feeling of doing based on first-order Bayesian network theory is introduced that incorporates the main contributing factors to the formation of such a feeling. The main contribution of this paper is the presentation of a formal and precise model that can be used to further test Wegner’s theory against quantitative experimental data.
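Wegner's account attributes the feeling of doing to three cues: priority (the thought preceded the action), consistency (the thought matched the action), and exclusivity (no rival cause was present). A naive-Bayes toy combination of these cues, much simpler than the paper's first-order Bayesian network, might look like the following; all conditional probabilities are made-up placeholders, not values from the paper.

```python
def feeling_of_doing(priority, consistency, exclusivity, prior=0.5):
    """Posterior P(self-caused | cues) under a naive-Bayes cue combination.

    Each argument is a bool: was that cue observed? The likelihoods are
    illustrative assumptions only.
    """
    # (P(cue observed | self-caused), P(cue observed | other-caused))
    cpt = {
        "priority":    (0.90, 0.30),
        "consistency": (0.85, 0.25),
        "exclusivity": (0.80, 0.40),
    }
    cues = {"priority": priority, "consistency": consistency,
            "exclusivity": exclusivity}
    like_self, like_other = prior, 1 - prior
    for name, present in cues.items():
        p_self, p_other = cpt[name]
        like_self *= p_self if present else 1 - p_self
        like_other *= p_other if present else 1 - p_other
    return like_self / (like_self + like_other)

strong = feeling_of_doing(True, True, True)    # all three cues present
weak = feeling_of_doing(False, False, False)   # none present
```

A graded posterior like this is what makes the model testable against quantitative data: varying one cue at a time yields predicted shifts in reported agency.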
From universal laws of cognition to specific cognitive models
34 – 215535 Deliverable 1.1.1, 2008
Cited by 4 (0 self)

Abstract
The remarkable successes of the physical sciences have been built on highly general quantitative laws, which serve as the basis for understanding an enormous variety of specific physical systems. How far is it possible to construct universal principles in the cognitive sciences, in terms of which specific aspects of perception, memory, or decision making might be modelled? Following Shepard (e.g., 1987), it is argued that some universal principles may be attainable in cognitive science. Here we propose two examples: The simplicity principle (which states that the cognitive system prefers patterns that provide simpler explanations of available data); and the scale-invariance principle, which states that many cognitive phenomena are independent of the scale of relevant underlying physical variables, such as time, space, luminance, or sound pressure. We illustrate how principles may be combined to explain specific cognitive processes by using these principles to derive SIMPLE, a formal model of memory for serial order (Brown, Neath & Chater, in press), and briefly mention some extensions to models of identification and categorization. We also consider the scope and limitations of universal laws in cognitive science.
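The core of SIMPLE fits in a few lines: items sit at log-transformed temporal distances (which is where scale-invariance enters), and an item's retrievability is its self-similarity relative to its summed similarity to all items. This sketch shows only that core; the published model adds response thresholds and parameter fitting, and the timing values here are invented.

```python
import math

def simple_recall(times, c=2.0):
    """Relative discriminability of each item under the core of SIMPLE.

    times: temporal distance (seconds) from each studied item to recall,
           oldest item first. c scales how sharply similarity decays.
    """
    logs = [math.log(t) for t in times]          # log-transformed positions

    def sim(a, b):
        return math.exp(-c * abs(a - b))         # similarity decays with distance

    # Each item's self-similarity divided by its similarity to everything:
    return [sim(li, li) / sum(sim(li, lj) for lj in logs) for li in logs]

# Five items studied 1 s apart, recalled 5 s after the last item:
probs = simple_recall([9, 8, 7, 6, 5])
```

Because log spacing compresses older items together, the most recent item is most isolated and hence most discriminable, and the list edges beat the middle: the model produces recency and a smaller primacy advantage with no extra machinery.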
Approximating Solution Structure
Cited by 2 (2 self)

Abstract
Approximations can aim at having close to optimal value or, alternatively, they can aim at structurally resembling an optimal solution. Whereas value-approximation has been extensively studied by complexity theorists over the last three decades, structural-approximation has not yet been defined, let alone studied. However, structural-approximation is theoretically no less interesting, and has important applications in cognitive science. Building on analogies with existing value-approximation algorithms and classes, we develop a general framework for analyzing structural (in)approximability. We identify dissociations between solution value and solution structure, and generate a list of open problems that may stimulate future research.
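One way to see the dissociation between value and structure is a problem with several optimal solutions: they are indistinguishable by value yet differ in almost every component, so matching the optimal value says little about matching its structure. A small subset-sum illustration (the problem instance is invented for this sketch):

```python
from itertools import product

def subset_sum_solutions(weights, target):
    """All 0/1 selection vectors whose chosen weights sum exactly to target."""
    sols = []
    for bits in product((0, 1), repeat=len(weights)):
        if sum(w for w, b in zip(weights, bits) if b) == target:
            sols.append(bits)
    return sols

def hamming(a, b):
    """Structural distance: number of positions where two selections differ."""
    return sum(x != y for x, y in zip(a, b))

sols = subset_sum_solutions([5, 4, 3, 2, 1], 9)
# All of these are value-optimal, yet some pairs are structurally far apart:
max_d = max(hamming(a, b) for a in sols for b in sols)
```

Here {5, 4}, {5, 3, 1}, and {4, 3, 2} all hit the target of 9, so a value-approximation guarantee is satisfied by solutions that disagree on four of five components, which is the kind of gap a theory of structural approximation would have to measure.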
Compositionality in rational analysis: Grammar-based induction for concept learning
In M. Oaksford & N. Chater (Eds.), The …, 2007
Cited by 2 (2 self)

Abstract
Rational analysis attempts to explain aspects of human cognition as an adaptive response to the environment (Marr, 1982; Anderson, 1990; Chater, Tenenbaum, & Yuille, 2006). The dominant approach to rational analysis today takes an ecologically reasonable specification of a problem facing an organism, given in statistical terms, then seeks an optimal solution, usually using Bayesian methods. This approach has proven very successful in cognitive science; it has predicted perceptual phenomena (Geisler & Kersten, 2002; Feldman, 2001), illuminated puzzling effects in reasoning (Chater & Oaksford, 1999; Griffiths & Tenenbaum, 2006), and, especially, explained how human learning can succeed despite sparse input and endemic uncertainty (Tenenbaum, 1999; Tenenbaum & Griffiths, 2001). However, there were earlier notions of the “rational” analysis of cognition that
Learning and Restructuring Causal Concepts by … (© 2010 Eric Gregory Taylor)
Abstract
studies of concept learning in adults address the learning of novel concepts, but much of learning involves the updating and restructuring of familiar concepts. Research on conceptual change explores this issue directly but differs greatly from the formal approach of the adult learning studies. This paper bridges these two areas to advance our knowledge of the mechanisms underlying concept restructuring. The main idea behind this approach is that concepts are built on causal-explanatory knowledge, and hence, models of causal induction may help to clarify the mechanisms of the restructuring process. A new paradigm is presented to study the learning and revising of causal networks. Experiments 1 and 2 showed that learners’ prior beliefs about the causal relations in a domain affected their hypotheses as they began to infer the correct causes. First, when the prior learning suggested evidence against some of the incorrect causes, this helped learners to focus on the correct causes later in learning. Second, the prior causal beliefs were difficult to give up, and they biased learners away from the correct causes that competed to explain the same effects. Experiment 3 showed that learning by intervention, as opposed to observation, affected the concept restructuring process in different ways, depending on what interventions were chosen and by whom. People choosing their own