Results 1–10 of 21
A Bayesian framework for word segmentation: Exploring the effects of context
 In 46th Annual Meeting of the ACL
, 2009
Abstract

Cited by 50 (11 self)
Since the experiments of Saffran et al. (1996a), there has been a great deal of interest in the question of how statistical regularities in the speech stream might be used by infants to begin to identify individual words. In this work, we use computational modeling to explore the effects of different assumptions the learner might make regarding the nature of words – in particular, how these assumptions affect the kinds of words that are segmented from a corpus of transcribed child-directed speech. We develop several models within a Bayesian ideal observer framework, and use them to examine the consequences of assuming either that words are independent units, or units that help to predict other units. We show through empirical and theoretical results that the assumption of independence causes the learner to undersegment the corpus, with many two- and three-word sequences (e.g. what’s that, do you, in the house) misidentified as individual words. In contrast, when the learner assumes that words are predictive, the resulting segmentation is far more accurate. These results indicate that taking context into account is important for a statistical word segmentation strategy to be successful, and raise the possibility that even young infants may be able to exploit more subtle statistical patterns than have usually been considered.
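The contrast between the two assumptions can be sketched with a toy scoring example (our illustration, not the paper’s model; all probabilities are hypothetical): a unigram learner that treats words as independent can prefer the fused form, while a bigram learner that lets words predict their neighbours recovers the two-word parse.

```python
import math

# Toy illustration (not the paper's model): score two candidate segmentations
# of the utterance "whatsthat" under unigram vs. bigram assumptions, using
# hypothetical probabilities for the lexical items involved.

# Hypothetical unigram probabilities: the fused form is frequent in
# child-directed speech, so a unigram learner assigns it high mass.
p_uni = {"whatsthat": 0.05, "whats": 0.03, "that": 0.04}

# Hypothetical bigram probabilities: "that" is highly predictable after "whats".
p_big = {("<s>", "whatsthat"): 0.01, ("<s>", "whats"): 0.03, ("whats", "that"): 0.5}

def unigram_score(words):
    # Words are independent units: multiply their marginal probabilities.
    return math.prod(p_uni[w] for w in words)

def bigram_score(words):
    # Words predict their successors: multiply conditional probabilities.
    ctx = "<s>"
    score = 1.0
    for w in words:
        score *= p_big[(ctx, w)]
        ctx = w
    return score

# Unigram: the fused word wins (0.05 > 0.03 * 0.04), i.e. undersegmentation.
assert unigram_score(["whatsthat"]) > unigram_score(["whats", "that"])
# Bigram: the predictability of "that" after "whats" favors the two-word parse.
assert bigram_score(["whats", "that"]) > bigram_score(["whatsthat"])
```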
Attention in learning
 Current Directions in Psychological Science
, 2003
Abstract

Cited by 36 (9 self)
explaining many phenomena in learning. The mechanism of selective attention in learning is also well motivated by its ability to minimize proactive interference and enhance generalization, thereby accelerating learning. Therefore, not only does the mechanism help explain behavioral phenomena, it makes sense that it should have evolved (Kruschke & Hullinger, 2010). The phrase “learned selective attention” denotes three qualities. First, “attention” means the amplification or attenuation of the processing of stimuli. Second, “selective” refers to differentially amplifying and/or attenuating a subset of the components of the stimulus. This selectivity within a stimulus is different from attenuating or amplifying all aspects of a stimulus simultaneously (cf. Larrauri & Schmajuk, 2008). Third, “learned” denotes the idea that the allocation of selective processing is retained for future use. The allocation may be context sensitive, so that attention is allocated differently in different contexts. There are many phenomena in human and animal learning that suggest the involvement of learned selective attention. The first part of this chapter briefly reviews some of those phenomena. The emphasis of the chapter is not the empirical phenomena, however. Instead, the focus is on a collection of models that formally express theories of learned attention. These models will be surveyed subsequently.

Phenomena suggestive of selective attention in learning

There are many phenomena in human and animal learning that suggest that learning involves allocating attention to informative cues, while ignoring uninformative cues. The following subsections indicate the benefits of selective allocation of attention, and illustrate the benefits with particular findings.
Rational approximations to rational models: Alternative algorithms for category learning
Abstract

Cited by 24 (4 self)
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of “rational process models” that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson’s (1990, 1991) Rational Model of Categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose two alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure …
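As a rough sketch of the kind of algorithm involved (our illustration, not the authors’ implementation; the concentration parameter and the binary-feature likelihood are assumptions), a collapsed Gibbs sweep over cluster assignments in a Chinese-restaurant-process mixture looks like:

```python
import random

random.seed(0)

ALPHA = 1.0  # CRP concentration parameter (hypothetical value)

def likelihood(item, members):
    # P(item | cluster): Laplace-smoothed per-feature Bernoulli estimates
    # from the cluster's current members (empty cluster -> uniform 1/2).
    p, n = 1.0, len(members)
    for d, x in enumerate(item):
        ones = sum(m[d] for m in members)
        p_one = (ones + 1) / (n + 2)
        p *= p_one if x == 1 else (1 - p_one)
    return p

def gibbs_sweep(items, z):
    # Resample each assignment given all the others (one collapsed Gibbs sweep).
    for i, item in enumerate(items):
        others = [z[j] for j in range(len(items)) if j != i]
        labels, weights = [], []
        for c in sorted(set(others)):
            members = [items[j] for j in range(len(items)) if j != i and z[j] == c]
            labels.append(c)
            # Existing cluster: weight = cluster size * likelihood (CRP prior).
            weights.append(len(members) * likelihood(item, members))
        # Brand-new cluster: weight = ALPHA * prior-predictive likelihood.
        labels.append(max(others) + 1)
        weights.append(ALPHA * likelihood(item, []))
        z[i] = random.choices(labels, weights=weights)[0]
    return z

# Two pairs of opposite binary items; the sampler favors grouping each pair.
items = [(1, 1, 0), (1, 1, 0), (0, 0, 1), (0, 0, 1)]
z = [0, 0, 0, 0]  # start with everything in one cluster
for _ in range(20):
    z = gibbs_sweep(items, z)
```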
Bayesian approaches to associative learning: From passive to active learning
 Learning & Behavior
, 2008
Abstract

Cited by 18 (7 self)
Traditional associationist models represent an organism’s knowledge state by a single strength of association on each associative link. Bayesian models instead represent knowledge by a distribution of graded degrees of belief over a range of candidate hypotheses. Many traditional associationist models assume that the learner is passive, adjusting strengths of association only in reaction to stimuli delivered by the environment. Bayesian models, on the other hand, can describe how the learner should actively probe the environment to learn optimally. The first part of this article reviews two Bayesian accounts of backward blocking, a phenomenon that is challenging for many traditional theories. The broad Bayesian framework, in which these models reside, is also selectively reviewed. The second part focuses on two formalizations of optimal active learning: maximizing either the expected information gain or the probability gain. New analyses of optimal active learning by a Kalman filter and by a noisy-logic gate show that these two Bayesian models make different predictions for some environments. The Kalman filter predictions are disconfirmed in at least one case. Bayesian formalizations of learning are a revolutionary advance over traditional approaches. Bayesian models assume that the learner maintains multiple candidate hypotheses with differing degrees of belief, unlike traditional …
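The Kalman-filter account of backward blocking can be sketched as follows (a minimal illustration under hypothetical noise parameters, not the article’s analysis): compound training induces a negative posterior covariance between cue weights, so later training on one cue alone revises belief about the absent cue downward.

```python
import numpy as np

# Minimal sketch (our illustration): a Kalman-filter learner over two cues,
# reproducing backward blocking. Phase 1: compound AB -> reward. Phase 2:
# A alone -> reward. All parameter values below are hypothetical.

w = np.zeros(2)       # posterior mean of associative weights [A, B]
P = np.eye(2)         # posterior covariance over the weights
Q = 0.01 * np.eye(2)  # diffusion (process) noise
R = 0.1               # observation noise variance

def kalman_step(x, r):
    global w, P
    P_pred = P + Q
    k = P_pred @ x / (x @ P_pred @ x + R)   # Kalman gain
    w = w + k * (r - w @ x)                 # move mean toward prediction error
    P = P_pred - np.outer(k, x) @ P_pred    # shrink covariance along cue x

# Phase 1: AB+ trials create a negative covariance between w_A and w_B.
for _ in range(10):
    kalman_step(np.array([1.0, 1.0]), 1.0)

wB_after_phase1 = w[1]

# Phase 2: A+ trials. B is absent, yet the belief about w_B drops, because
# the negative covariance channels A's prediction error onto B's weight.
for _ in range(10):
    kalman_step(np.array([1.0, 0.0]), 1.0)

assert w[1] < wB_after_phase1  # backward blocking of cue B
```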
Performing Bayesian inference with exemplar models
 In Proceedings of the Thirtieth Annual Conference of the Cognitive Science
, 2008
Abstract

Cited by 13 (7 self)
Probabilistic models have recently received much attention as accounts of human cognition. However, previous work has focused on formulating the abstract problems behind cognitive tasks and their probabilistic solutions, rather than considering mechanisms that could implement these solutions. Exemplar models are a successful class of psychological process models that use an inventory of stored examples to solve problems such as identification, categorization and function learning. We show that exemplar models can be interpreted as a sophisticated form of Monte Carlo approximation known as importance sampling, and thus provide a way to perform approximate Bayesian inference. Simulations of Bayesian inference in speech perception and concept learning show that exemplar models can account for human performance with only a few exemplars, for both simple and relatively complex prior distributions. Thus, we show that exemplar models provide a possible mechanism for implementing Bayesian inference.
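The exemplar-as-importance-sampling idea can be sketched in a few lines (our illustration with an assumed Gaussian prior and likelihood, not the paper’s simulations): exemplars drawn from the prior, weighted by their similarity to the observation, approximate a posterior expectation.

```python
import math
import random

random.seed(1)

# Minimal sketch (our illustration): exemplar models as importance sampling.
# Stored exemplars x_i drawn from the prior act as samples; weighting each by
# its similarity to the observed stimulus (the likelihood) approximates the
# posterior expectation E[x | y] -- here, a denoised percept of a 1-D stimulus.

def similarity(y, x, noise=0.5):
    # Gaussian likelihood of observation y given stored exemplar x.
    return math.exp(-((y - x) ** 2) / (2 * noise ** 2))

# Hypothetical exemplar store: samples from a standard normal prior.
exemplars = [random.gauss(0.0, 1.0) for _ in range(1000)]

def posterior_mean(y):
    # Importance-sampling estimate: similarity-weighted average of exemplars.
    weights = [similarity(y, x) for x in exemplars]
    return sum(w * x for w, x in zip(weights, exemplars)) / sum(weights)

# With prior N(0,1) and noise sd 0.5, the exact posterior mean for y = 1.0
# is y / (1 + 0.25) = 0.8; the exemplar estimate should land close to that.
est = posterior_mean(1.0)
assert abs(est - 0.8) < 0.15
```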
Semirational Models of Conditioning: The Case of Trial Order
, 2007
Abstract

Cited by 4 (2 self)
Bayesian treatments of animal conditioning start from a generative model that specifies precisely a set of assumptions about the structure of the learning task. Optimal rules for learning are direct mathematical consequences of these assumptions. In terms of Marr’s (1982) levels of analyses, the main task at the computational level …
Learning to Selectively Attend
Abstract

Cited by 1 (1 self)
How is reinforcement learning possible in a high-dimensional world? Without making any assumptions about the structure of the state space, the amount of data required to effectively learn a value function grows exponentially with the state space’s dimensionality. However, humans learn to solve high-dimensional problems much more rapidly than would be expected under this scenario. This suggests that humans employ inductive biases to guide (and accelerate) their learning. Here we propose one particular bias—sparsity—that ameliorates the computational challenges posed by high-dimensional state spaces, and present experimental evidence that humans can exploit sparsity information when it is available. Keywords: reinforcement learning; attention; Bayes.
A cognitive and computational basis for designing
Abstract

Cited by 1 (0 self)
This paper presents a set of concepts that provides the cognitive and computational foundations for answering fundamental questions about designerly behaviour. A design agent should be dynamic and able to handle changes in the external representations that describe its designs and changes in how it is guided by its past experiences. Such a view of agency is afforded by situated design computing and can represent and explain much designerly behaviour. It can model how a designer can commence designing before all the requirements have been specified, how two designers presented with the same specifications produce different designs, how the same designer later confronted with the same requirements produces a different design to the previous one, and how a designer can change their design trajectory during the activity of designing. This view is built upon three foundational concepts: knowledge grounded in interaction, constructive memory, and situatedness. The concepts build on notions of memory that are often traced back to Dewey and Bartlett, although we use contemporary descriptions. Memory is not understood primarily as allowing for the retrieval of an object from a data store by knowing its physical location; rather, it guides an experience in a fashion similar to how past experiences progressed, while recognising that this is so. The paper explicates experiences and situations and discusses implementation requirements.
Toward a descriptive cognitive model of human learning
 journal homepage: www.elsevier.com/locate/neucom