Results 1–10 of 20
Bayesian models of cognition
Abstract

Cited by 53 (2 self)
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty.
Theory-based causal induction
 In
, 2003
Abstract

Cited by 49 (16 self)
Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge—identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge—the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships—and show how they provide the constraints that people need to induce useful causal models from sparse data.
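The kind of statistical inference over causal structure this abstract describes can be sketched as a Bayesian comparison of two graphs: one with a link from the candidate cause to the effect and one without, combined via a noisy-OR functional form. This is a toy sketch only; the contingency counts, the coarse parameter grid, and the uniform priors are our own illustrative assumptions, not the paper's actual analysis.

```python
import math

# Hypothetical contingency data: counts of effect given cause present / absent.
n_c, k_c = 20, 15   # cause present: 15 of 20 trials show the effect
n_b, k_b = 20, 5    # cause absent:  5 of 20 trials show the effect (background)

def log_lik_graph0(w_b):
    # Graph 0: the effect depends only on a background cause with strength w_b.
    return ((k_c + k_b) * math.log(w_b)
            + (n_c - k_c + n_b - k_b) * math.log(1 - w_b))

def log_lik_graph1(w_b, w_c):
    # Graph 1: noisy-OR of background (w_b) and candidate cause (w_c).
    p1 = w_b + w_c - w_b * w_c   # P(effect | cause present)
    return (k_c * math.log(p1) + (n_c - k_c) * math.log(1 - p1)
            + k_b * math.log(w_b) + (n_b - k_b) * math.log(1 - w_b))

# Marginalize over parameters on a coarse grid (uniform priors, assumed).
grid = [i / 20 for i in range(1, 20)]
m0 = sum(math.exp(log_lik_graph0(wb)) for wb in grid) / len(grid)
m1 = sum(math.exp(log_lik_graph1(wb, wc)) for wb in grid for wc in grid) / len(grid) ** 2
support = math.log(m1 / m0)   # log marginal-likelihood ratio for a causal link
print(support > 0)  # True: the data favor the graph with a causal link
```

The grid marginalization stands in for a proper integral over parameter priors; the point is only that structure comparison, not parameter estimation, drives the judgment.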
Bayesian approaches to associative learning: From passive to active learning
 Learning & Behavior
, 2008
Abstract

Cited by 27 (7 self)
Traditional associationist models represent an organism’s knowledge state by a single strength of association on each associative link. Bayesian models instead represent knowledge by a distribution of graded degrees of belief over a range of candidate hypotheses. Many traditional associationist models assume that the learner is passive, adjusting strengths of association only in reaction to stimuli delivered by the environment. Bayesian models, on the other hand, can describe how the learner should actively probe the environment to learn optimally. The first part of this article reviews two Bayesian accounts of backward blocking, a phenomenon that is challenging for many traditional theories. The broad Bayesian framework, in which these models reside, is also selectively reviewed. The second part focuses on two formalizations of optimal active learning: maximizing either the expected information gain or the probability gain. New analyses of optimal active learning by a Kalman filter and by a noisy-logic gate show that these two Bayesian models make different predictions for some environments. The Kalman filter predictions are disconfirmed in at least one case. Bayesian formalizations of learning are a revolutionary advance over traditional approaches. Bayesian models assume that the learner maintains multiple candidate hypotheses with differing degrees of belief, unlike traditional
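The expected-information-gain criterion mentioned in this abstract can be illustrated with a minimal sketch: a learner holding a posterior over candidate hypotheses scores a query by the expected reduction in entropy of that posterior. The hypothesis space, the cue names, and the deterministic outcome model below are all illustrative assumptions, not the article's actual analyses.

```python
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# Hypothetical setup: two candidate causes (A, B) of an outcome; four
# hypotheses about which of them are real. Uniform prior beliefs.
hyps = ["neither", "A only", "B only", "both"]
prior = {h: 0.25 for h in hyps}

def p_effect(h, query):
    # query: which cue is presented alone. Deterministic for simplicity.
    if query == "A":
        return 1.0 if h in ("A only", "both") else 0.0
    return 1.0 if h in ("B only", "both") else 0.0

def expected_info_gain(prior, query):
    gain = 0.0
    for outcome in (1, 0):
        # Likelihood of the outcome under each hypothesis, then Bayes' rule.
        like = {h: p_effect(h, query) if outcome else 1 - p_effect(h, query)
                for h in hyps}
        p_out = sum(prior[h] * like[h] for h in hyps)
        if p_out == 0:
            continue
        post = {h: prior[h] * like[h] / p_out for h in hyps}
        gain += p_out * (entropy(prior) - entropy(post))
    return gain

print(expected_info_gain(prior, "A"))  # 1.0: testing cue A alone yields one bit
```

Probability gain would replace the entropy difference with the expected increase in the probability of the most likely hypothesis; as the abstract notes, the two criteria can disagree.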
Learning a Theory of Causality
Abstract

Cited by 17 (7 self)
The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework, and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality, and a range of alternatives, in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned—an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence, and find that a collection of simple “perceptual input analyzers” can help to bootstrap abstract knowledge. Together these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality, but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion. Preprint June 2010—to appear in Psych. Review.
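The "blessing of abstraction" effect can be sketched with a toy hierarchical model: an abstract theory constrains the parameters of many specific systems, so evidence pooled across systems pins down the abstract level while each specific parameter stays uncertain. The two-theory hierarchy, the parameter grid, and the data below are our own illustrative assumptions, not the paper's relational-logic formulation.

```python
import math

# Hypothetical hierarchy: an abstract "theory" says whether causes in a
# domain tend to be strong or weak; each system i has its own strength
# theta_i, with only 2 observed trials per system.
grid = [0.1, 0.3, 0.5, 0.7, 0.9]

def theta_prior(theory, theta):
    # Under "strong", high strengths are more plausible, and vice versa.
    weights = {0.1: 1, 0.3: 2, 0.5: 4, 0.7: 8, 0.9: 16}
    w = weights[theta] if theory == "strong" else weights[round(1 - theta, 1)]
    return w / 31  # 1 + 2 + 4 + 8 + 16 = 31

# Illustrative sparse data: successes out of 2 trials for each of 10 systems.
data = [2, 2, 1, 2, 2, 1, 2, 1, 2, 2]

def system_marginal(theory, k):
    # P(k successes in 2 trials | theory), integrating out theta_i.
    nck = {0: 1, 1: 2, 2: 1}[k]
    return sum(theta_prior(theory, th) * nck * th ** k * (1 - th) ** (2 - k)
               for th in grid)

# The posterior over the abstract theory pools evidence across all systems.
scores = {t: 0.5 * math.prod(system_marginal(t, k) for k in data)
          for t in ("strong", "weak")}
post_strong = scores["strong"] / sum(scores.values())

# A single system's theta, by contrast, stays uncertain after 2 trials
# (uniform prior over the grid, both observed trials successes):
post_theta = [th ** 2 for th in grid]
best_theta = max(post_theta) / sum(post_theta)

print(post_strong > 0.9, best_theta < 0.5)  # abstract resolved before specific
```

With 2 trials per system no individual strength is settled, yet ten such fragments jointly make the abstract theory nearly certain.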
From universal laws of cognition to specific cognitive models
, 2008
Abstract

Cited by 8 (0 self)
The remarkable successes of the physical sciences have been built on highly general quantitative laws, which serve as the basis for understanding an enormous variety of specific physical systems. How far is it possible to construct universal principles in the cognitive sciences, in terms of which specific aspects of perception, memory, or decision making might be modelled? Following Shepard (e.g., 1987), it is argued that some universal principles may be attainable in cognitive science. Here we propose two examples: the simplicity principle (which states that the cognitive system prefers patterns that provide simpler explanations of available data); and the scale-invariance principle, which states that many cognitive phenomena are independent of the scale of relevant underlying physical variables, such as time, space, luminance, or sound pressure. We illustrate how principles may be combined to explain specific cognitive processes by using these principles to derive SIMPLE, a formal model of memory for serial order (Brown, Neath & Chater, in press), and briefly mention some extensions to models of identification and categorization. We also consider the scope and limitations of universal laws in cognitive science.
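The way scale invariance enters a SIMPLE-style model can be sketched in a few lines: items live on a logarithmic temporal dimension, so only ratios of item ages matter, and an item's recall prospects track its discriminability from its neighbors. The parameter values and the reduction of recall to raw discriminability below are illustrative assumptions, not the published model's fitted form.

```python
import math

# Minimal sketch of a SIMPLE-style computation (illustrative parameters).
c = 10.0                      # distinctiveness parameter (assumed value)
retention = 10.0              # seconds between end of list and recall (assumed)
# Ages of 5 list items at recall; position 0 was presented first, so oldest.
times = [retention + k for k in range(5, 0, -1)]

def similarity(ti, tj):
    # Scale invariance via the log transform: only the ratio ti/tj matters.
    return math.exp(-c * abs(math.log(ti) - math.log(tj)))

def discriminability(i):
    # An item is recallable to the extent it is confusable with nothing else.
    return 1.0 / sum(similarity(times[i], tj) for tj in times)

curve = [round(discriminability(i), 3) for i in range(5)]
print(curve)  # bowed serial position curve: edge items are most distinct
```

Doubling every age leaves the curve unchanged, which is the scale-invariance property the abstract highlights.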
Can Being Scared Cause Tummy Aches? Naive Theories, Ambiguous Evidence, and Preschoolers’ Causal Inferences
Abstract

Cited by 8 (1 self)
Causal learning requires integrating constraints provided by domain-specific theories with domain-general statistical learning. In order to investigate the interaction between these factors, the authors presented preschoolers with stories pitting their existing theories against statistical evidence. Each child heard 2 stories in which 2 candidate causes co-occurred with an effect. Evidence was presented in the
Towards A Computational Model of the Self-Attribution of Agency
 In: Proc. of the 24th Intern. Conf. on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE’11. Lecture Notes in AI
, 2011
Abstract

Cited by 6 (0 self)
In this paper, a first step towards a computational model of the self-attribution of agency is presented, based on Wegner’s theory of apparent mental causation. A model to compute a feeling of doing based on first-order Bayesian network theory is introduced that incorporates the main contributing factors to the formation of such a feeling. The main contribution of this paper is the presentation of a formal and precise model that can be used to further test Wegner’s theory against quantitative experimental data.
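The flavor of such a model can be sketched as a small Bayesian computation over Wegner's three principles of apparent mental causation: priority, consistency, and exclusivity. The network structure, the conditional-independence assumption, and every probability below are hypothetical, chosen only to illustrate the inference, and are not the paper's actual model.

```python
p_self = 0.5  # assumed prior that my own intention caused the action

# P(cue | cause), for the three cues, assumed conditionally independent.
cues = {
    # cue: (P(cue | self-caused), P(cue | externally caused))
    "priority":    (0.9, 0.3),   # thought occurred just before the action
    "consistency": (0.9, 0.2),   # thought content matches the action
    "exclusivity": (0.8, 0.4),   # no salient alternative cause present
}

def feeling_of_doing(observed):
    # observed: dict cue -> bool. Posterior P(self-caused | cues) by Bayes.
    like_self, like_ext = p_self, 1 - p_self
    for cue, present in observed.items():
        ps, pe = cues[cue]
        like_self *= ps if present else 1 - ps
        like_ext *= pe if present else 1 - pe
    return like_self / (like_self + like_ext)

all_cues = {c: True for c in cues}
no_priority = {**all_cues, "priority": False}
print(round(feeling_of_doing(all_cues), 3))    # strong feeling of agency
print(round(feeling_of_doing(no_priority), 3)) # weakened when priority fails
```

A first-order network as in the paper would additionally quantify over thoughts and actions; this propositional sketch shows only the direction of the inference.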
Approximating Solution Structure
Abstract

Cited by 6 (2 self)
Approximations can aim at having close to optimal value or, alternatively, they can aim at structurally resembling an optimal solution. Whereas value-approximation has been extensively studied by complexity theorists over the last three decades, structural-approximation has not yet been defined, let alone studied. However, structural-approximation is theoretically no less interesting, and has important applications in cognitive science. Building on analogies with existing value-approximation algorithms and classes, we develop a general framework for analyzing structural (in)approximability. We identify dissociations between solution value and solution structure, and generate a list of open problems that may stimulate future research.
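The dissociation between solution value and solution structure is easy to exhibit concretely. The example below is our own illustration, not one from the paper: on an even cycle, vertex cover has two solutions of identical optimal value that share no vertices at all, so matching the optimal value guarantees nothing about structurally resembling an optimal solution.

```python
from itertools import combinations

# The 6-cycle: vertices 0..5, edges between consecutive vertices.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]

def is_cover(s):
    # A vertex cover touches every edge.
    return all(u in s or v in s for u, v in edges)

# Brute-force the minimum vertex cover size, then all optimal covers.
best = min(k for k in range(n + 1)
           if any(is_cover(set(c)) for c in combinations(range(n), k)))
optima = [set(c) for c in combinations(range(n), best) if is_cover(set(c))]

a, b = {0, 2, 4}, {1, 3, 5}
print(a in optima and b in optima)  # True: both have optimal value 3
print(a & b)                        # set(): zero structural overlap
```

A value-approximation guarantee would count either cover as perfect, while any reasonable structural metric (here, vertex overlap) rates them maximally dissimilar.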
Rethinking the role of resubsumption in conceptual change
 Educational Psychologist
, 2009
Abstract

Cited by 2 (2 self)
Why is conceptual change difficult yet possible? Ohlsson (2009/this issue) proposes that the answer can be found in the dynamics of resubsumption, or the process by which a domain of experience is resubsumed under an intuitive theory originally constructed to explain some other domain of experience. Here, it is argued that conceptual change is difficult in two distinct senses—that is, difficult to initiate and difficult to complete—and that Ohlsson’s proposal addresses the latter but not the former. The implications of this argument for how conceptual change might be best facilitated in the science classroom are discussed as well. In a classic study by McCloskey, Caramazza, and Green (1980), college undergraduates were asked to draw the trajectory of a ball shot through a curved tube resting on a flat surface. Although most participants had taken one or more physics courses prior to the study, many still drew physically impossible trajectories—that is, trajectories in which the ball continued to travel in a curved motion after exiting the tube. This intuition is inconsistent not only with the way objects actually move but also with the Newtonian principles these students had presumably learned in their prior coursework. From where do such misconceptions arise? Why do such misconceptions persist in the face of contrary experience and instruction? And how might such misconceptions be eliminated? These are the questions at the heart of science education research, both in the physical sciences (Clement, 1982; Halloun & Hestenes, 1985; Vosniadou & Brewer, 1992) and the
Compositionality in rational analysis: Grammar-based induction for concept learning
 In M. Oaksford & N. Chater (Eds.), The
, 2007
Abstract

Cited by 2 (2 self)
Rational analysis attempts to explain aspects of human cognition as an adaptive response to the environment (Marr, 1982; Anderson, 1990; Chater, Tenenbaum, & Yuille, 2006). The dominant approach to rational analysis today takes an ecologically reasonable specification of a problem facing an organism, given in statistical terms, then seeks an optimal solution, usually using Bayesian methods. This approach has proven very successful in cognitive science; it has predicted perceptual phenomena (Geisler & Kersten, 2002; Feldman, 2001), illuminated puzzling effects in reasoning (Chater & Oaksford, 1999; Griffiths & Tenenbaum, 2006), and, especially, explained how human learning can succeed despite sparse input and endemic uncertainty (Tenenbaum, 1999; Tenenbaum & Griffiths, 2001). However, there were earlier notions of the “rational” analysis of cognition that
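The grammar-based induction named in this entry's title can be sketched in miniature: hypotheses are formulas generated by a small grammar, a simplicity prior penalizes longer formulas, and a size-principle likelihood favors tighter extensions. The grammar (conjunctions of feature literals), the feature names, and the example data below are our illustrative assumptions, not the chapter's actual grammar.

```python
from itertools import product

# Objects are binary feature vectors over three features.
features = ["round", "red", "large"]   # names are illustrative only
objects = list(product([0, 1], repeat=3))

def extension(literals):
    # literals: dict feature-index -> required value.
    return [o for o in objects if all(o[i] == v for i, v in literals.items())]

# Toy grammar: every conjunction of 0-3 feature literals.
hyps = []
for idxs in [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]:
    for vals in product([0, 1], repeat=len(idxs)):
        hyps.append(dict(zip(idxs, vals)))

examples = [(1, 1, 0), (1, 1, 1)]      # observed positive examples

def score(h):
    ext = extension(h)
    if any(e not in ext for e in examples):
        return 0.0                                    # inconsistent with data
    prior = 2.0 ** -len(h)                            # simplicity prior
    return prior * (1.0 / len(ext)) ** len(examples)  # size principle

scores = {tuple(sorted(h.items())): score(h) for h in hyps}
best = max(scores, key=scores.get)
print(best)  # ((0, 1), (1, 1)): "round and red", the tightest simple concept
```

The compositional prior is what distinguishes this from a flat hypothesis space: longer formulas are not forbidden, merely taxed, so the data decide when added structure pays for itself.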