Results 1–10 of 22
Attention in learning
Current Directions in Psychological Science, 2003
Abstract

Cited by 37 (9 self)
explaining many phenomena in learning. The mechanism of selective attention in learning is also well motivated by its ability to minimize proactive interference and enhance generalization, thereby accelerating learning. Therefore, not only does the mechanism help explain behavioral phenomena, it makes sense that it should have evolved (Kruschke & Hullinger, 2010). The phrase “learned selective attention” denotes three qualities. First, “attention” means the amplification or attenuation of the processing of stimuli. Second, “selective” refers to differentially amplifying and/or attenuating a subset of the components of the stimulus. This selectivity within a stimulus is different from attenuating or amplifying all aspects of a stimulus simultaneously (cf. Larrauri & Schmajuk, 2008). Third, “learned” denotes the idea that the allocation of selective processing is retained for future use. The allocation may be context sensitive, so that attention is allocated differently in different contexts. There are many phenomena in human and animal learning that suggest the involvement of learned selective attention. The first part of this chapter briefly reviews some of those phenomena. The emphasis of the chapter is not the empirical phenomena, however. Instead, the focus is on a collection of models that formally express theories of learned attention. These models will be surveyed subsequently.
Phenomena suggestive of selective attention in learning
There are many phenomena in human and animal learning that suggest that learning involves allocating attention to informative cues, while ignoring uninformative cues. The following subsections indicate the benefits of selective allocation of attention, and illustrate the benefits with particular findings.
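The "amplification or attenuation" idea in this abstract can be sketched as a delta-rule learner with multiplicative attention gains on each cue. This is a toy illustration in the spirit of the models the chapter surveys, not Kruschke's actual model; the learning rates, update rules, and trial set below are assumptions for the sake of the sketch:

```python
import random

def train(trials, lr_w=0.1, lr_a=0.05, epochs=200, seed=0):
    """Delta-rule learner with learned multiplicative attention gains.

    Prediction: y = sum_i alpha_i * x_i * w_i.
    Both the associative weights w and the attention gains alpha are
    adjusted by gradient descent on the squared prediction error.
    """
    rng = random.Random(seed)
    n = len(trials[0][0])
    w = [0.0] * n
    alpha = [1.0] * n          # start with equal attention to every cue
    for _ in range(epochs):
        x, r = rng.choice(trials)
        y = sum(a * xi * wi for a, xi, wi in zip(alpha, x, w))
        err = r - y
        for i in range(n):
            # gradient of err^2 w.r.t. w_i is -2 * err * alpha_i * x_i
            w[i] += lr_w * err * alpha[i] * x[i]
            # gradient of err^2 w.r.t. alpha_i is -2 * err * x_i * w_i
            alpha[i] += lr_a * err * x[i] * w[i]
    return w, alpha

# Cue 0 perfectly predicts the outcome; cue 1 is uninformative.
trials = [((1, 1), 1.0), ((1, 0), 1.0), ((0, 1), 0.0)]
w, alpha = train(trials)
```

After training, the gain on the informative cue exceeds the gain on the uninformative one: attention has been selectively learned, in the sense the abstract defines.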
Bayesian models of cognition
Abstract

Cited by 23 (1 self)
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty.
Bayesian approaches to associative learning: From passive to active learning
Learning & Behavior, 2008
Abstract

Cited by 18 (7 self)
Traditional associationist models represent an organism’s knowledge state by a single strength of association on each associative link. Bayesian models instead represent knowledge by a distribution of graded degrees of belief over a range of candidate hypotheses. Many traditional associationist models assume that the learner is passive, adjusting strengths of association only in reaction to stimuli delivered by the environment. Bayesian models, on the other hand, can describe how the learner should actively probe the environment to learn optimally. The first part of this article reviews two Bayesian accounts of backward blocking, a phenomenon that is challenging for many traditional theories. The broad Bayesian framework, in which these models reside, is also selectively reviewed. The second part focuses on two formalizations of optimal active learning: maximizing either the expected information gain or the probability gain. New analyses of optimal active learning by a Kalman filter and by a noisy-logic gate show that these two Bayesian models make different predictions for some environments. The Kalman filter predictions are disconfirmed in at least one case. Bayesian formalizations of learning are a revolutionary advance over traditional approaches. Bayesian models assume that the learner maintains multiple candidate hypotheses with differing degrees of belief, unlike traditional models.
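The backward-blocking phenomenon mentioned in this abstract can be sketched with a two-cue Kalman filter over associative weights. This is a minimal illustration of the general idea, not the article's analysis; the prior variance, noise level, and trial counts are assumptions chosen for the sketch:

```python
# Belief about the two cue weights is a Gaussian N(m, S); each trial is a
# standard Kalman-filter update given cues x and reward r.

def kalman_step(m, S, x, r, R=0.1):
    """One trial: cue vector x, reward r, observation-noise variance R."""
    Sx = [S[0][0] * x[0] + S[0][1] * x[1],
          S[1][0] * x[0] + S[1][1] * x[1]]
    var = x[0] * Sx[0] + x[1] * Sx[1] + R        # predicted reward variance
    err = r - (m[0] * x[0] + m[1] * x[1])        # prediction error
    k = [Sx[0] / var, Sx[1] / var]               # Kalman gain
    m = [m[0] + k[0] * err, m[1] + k[1] * err]
    S = [[S[i][j] - k[i] * Sx[j] for j in range(2)] for i in range(2)]
    return m, S

m, S = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]      # broad prior over weights
for _ in range(20):                               # phase 1: A+B -> reward
    m, S = kalman_step(m, S, (1, 1), 1.0)
wB_phase1 = m[1]
for _ in range(20):                               # phase 2: A alone -> reward
    m, S = kalman_step(m, S, (1, 0), 1.0)
wB_phase2 = m[1]
# Backward blocking: the estimate for cue B falls even though B never
# reappears, because phase 1 left the two weights negatively correlated.
```

A single association strength per link cannot produce this retrospective revision; the full covariance of the belief distribution is what carries the information.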
Semirational Models of Conditioning: The Case of Trial Order
2007
Abstract

Cited by 4 (2 self)
Bayesian treatments of animal conditioning start from a generative model that specifies precisely a set of assumptions about the structure of the learning task. Optimal rules for learning are direct mathematical consequences of these assumptions. In terms of Marr’s (1982) levels of analysis, the main task at the computational level …
A taxonomy of inductive problems
Cognitive Science Society, 2009
Abstract

Cited by 3 (1 self)
Inductive inferences about objects, properties, categories, relations, and labels have been studied for many years but there are few attempts to chart the range of inductive problems that humans are able to solve. We present a taxonomy that includes more than thirty inductive problems. The taxonomy helps to clarify the relationships between familiar problems such as identification, stimulus generalization, and categorization, and introduces several novel problems including property identification and object discovery.
Optimal decisions for contrast discrimination
Abstract

Cited by 1 (0 self)
Contrast discrimination functions for simple gratings famously look like a dipper. Discrimination thresholds are lower than detection thresholds for moderate pedestal contrasts, and the rate of growth of thresholds as the pedestal contrast gets larger typically lies between the values implied by two popular treatments of noise. Here, we suggest a new normative treatment of the dipper, showing how it emerges from Bayesian inference based on the responses of a population of orientation-tuned units. Our central assumption concerns the noise corrupting the outputs of these units as a function of the contrast: We suggest that it has the shape of a hinge. We show the match to the psychophysical data and discuss the neurobiological and statistical rationales for this form of noise. Finally, we relate our model to other major accounts of contrast discrimination.
An Action Selection Calculus
Abstract
This paper describes a unifying framework for five highly influential but disparate theories of natural learning and behavioral action selection. These theories are normally considered independently, with their own experimental procedures and results. The framework presented builds on a structure of connection types, propagation rules and learning rules, which are used in combination to integrate results from each theory into a whole. These connection types and rules form the Action Selection Calculus. The Calculus will be used to discuss the areas of genuine difference between the factor theories and to identify areas where there is overlap and where apparently disparate findings have a common source. The discussion is illustrated with exemplar experimental procedures. The paper focuses on predictive or anticipatory properties inherent in these action selection and learning theories, and uses the Dynamic Expectancy Model and its computer implementation SRS/E as a mechanism to conduct this discussion.
Novelty and Inductive Generalization in Human Reinforcement Learning
Abstract
What is the value of an action that has never been tried before? One way to frame this question is as an inductive problem: how can I generalize my previous experience with one set of actions to a novel action? We show how hierarchical Bayesian inference can be used to solve this problem, and describe an equivalence between the Bayesian model and temporal difference learning algorithms that have been proposed as models of human reinforcement learning. In two experiments we test several predictions of this model, providing behavioral evidence that humans learn and exploit structured inductive knowledge to make predictions about novel actions. We suggest a new interpretation of dopaminergic responses to novelty in light of this model.
Keywords: reinforcement learning, Bayesian inference, exploration, exploitation
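The inductive idea in this abstract can be sketched in a few lines: learn values for familiar actions with a standard delta-rule (TD) update, then let a novel action inherit the group-level mean as its prior value instead of starting from zero. This is a toy illustration of the hierarchical-generalization idea, not the paper's actual model; the action names, reward values, and learning rate are assumptions:

```python
def td_update(q, action, reward, lr=0.2):
    """Delta-rule / TD(0) value update for a one-step task."""
    q[action] += lr * (reward - q[action])

# Learn values for three familiar actions whose rewards are all high.
q = {"a1": 0.0, "a2": 0.0, "a3": 0.0}
rewards = {"a1": 0.9, "a2": 0.8, "a3": 1.0}
for _ in range(100):
    for a, r in rewards.items():
        td_update(q, a, r)

# Hierarchical generalization: a never-tried action inherits the mean of
# the familiar actions' learned values as its initial estimate.
group_mean = sum(q.values()) / len(q)
q["novel"] = group_mean
```

A flat learner would initialize the novel action at zero; the hierarchical prior instead transfers the structure learned from related actions.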
Explicit Bayesian Reasoning with Frequencies, Probabilities, and Surprisals
Abstract
To explore human deviations from Bayes’ rule in numerically explicit problems, prior and likelihood probabilities or frequencies are manipulated and their effects on posterior probabilities or surprisals are measured. Results show that people use both priors and likelihoods in Bayesian directions, but the effect of likelihood information is stronger than that of prior information. Use of frequency information and surprisal measures increases deviations from Bayesian predictions. There is evidence that people do compute something like the standardizing marginal data term when asked for probability estimates, but not when asked for surprisal ratings.
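The two quantities this abstract manipulates and measures, a Bayesian posterior with its "standardizing marginal data term" in the denominator, and a surprisal, can be written out explicitly. The numbers below are hypothetical, not from the article:

```python
import math

def posterior(prior, likelihood, likelihood_alt):
    """Bayes' rule for a binary hypothesis: P(H | D)."""
    # The marginal P(D) is the "standardizing marginal data term".
    marginal = prior * likelihood + (1 - prior) * likelihood_alt
    return prior * likelihood / marginal

def surprisal_bits(p):
    """Surprisal of an event with probability p, in bits."""
    return -math.log2(p)

# Hypothetical numerically explicit problem: a rare condition (prior 1%)
# and a test with 90% hit rate and 5% false-positive rate.
p = posterior(prior=0.01, likelihood=0.90, likelihood_alt=0.05)
# p ≈ 0.154 — far below the likelihood, because the prior is so low.
```

Neglecting the marginal (answering with the likelihood alone) is exactly the kind of deviation from Bayes' rule such experiments probe.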