Results 1–10 of 55
Toward a method of selecting among computational models of cognition
Psychological Review, 2002
Cited by 74 (4 self)
The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. Computational models of cognition are increasingly being advanced as explanations of behavior. The success of this line of inquiry depends on the development of robust methods to guide the evaluation and selection of these models. This article introduces a method of selecting among mathematical models of cognition known as minimum description length, which provides an intuitive and theoretically well-grounded understanding of why one model should be chosen. A central but elusive concept in model selection, complexity, can also be derived with the method. The adequacy of the method is demonstrated in 3 areas of cognitive modeling: psychophysics, information integration, and categorization. How should one choose among competing theoretical explanations of data? This question is at the heart of the scientific enterprise, regardless of whether verbal models are being tested in an experimental setting or computational models are being evaluated in simulations. A number of criteria have been proposed to assist in this endeavor, summarized nicely by Jacobs and Grainger
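The trade-off at the heart of MDL can be illustrated with a simple two-part-code approximation (a sketch only: the article develops the more refined normalized-maximum-likelihood form, and the polynomial models, noise level, and seed below are illustrative, not from the paper):

```python
import numpy as np

def description_length(y, y_hat, k):
    """Two-part-code approximation to MDL: a data-fit cost
    (n/2) ln(RSS/n) plus a complexity penalty (k/2) ln n."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)  # data from a linear law

# Candidate models: a line (k = 2) vs a cubic (k = 4). The cubic always
# fits at least as well, but it pays a larger complexity penalty.
dl = {}
for k_params, deg in [(2, 1), (4, 3)]:
    coef = np.polyfit(x, y, deg)
    dl[deg] = description_length(y, np.polyval(coef, x), k_params)
# The model with the smaller total description length is preferred.
```

The point of the sketch is that complexity enters the selection criterion explicitly, rather than being judged informally.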
Evidence accumulation in decision making: Unifying the “take the best” and the “rational” models
Psychonomic Bulletin & Review, 2004
Cited by 31 (2 self)
An evidence accumulation model of forced-choice decision making is proposed to unify the fast and frugal take the best (TTB) model and the alternative rational (RAT) model with which it is usually contrasted. The basic idea is to treat the TTB model as a sequential-sampling process that terminates as soon as any evidence in favor of a decision is found and the rational approach as a sequential-sampling process that terminates only when all available information has been assessed. The unified TTB and RAT models were tested in an experiment in which participants learned to make correct judgments for a set of real-world stimuli on the basis of feedback, and were then asked to make additional judgments without feedback for cases in which the TTB and the rational models made different predictions. The results show that there was strong intra-participant consistency in the use of either the TTB or the rational model but large inter-participant differences in which model was used. The unified model is shown to be able to capture the differences in decision making across participants in an interpretable way and is preferred by the minimum description length model selection criterion. A simple but pervasive type of decision requires choosing which of two alternatives has the greater (or the lesser) value on some variable of interest. Examples of
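The two strategies being unified can be sketched in a few lines (the cue values below are hypothetical, not the paper's stimuli): TTB stops at the first cue that discriminates, while the rational strategy tallies every cue before deciding.

```python
def take_the_best(cues, a, b):
    """Search cues in validity order; decide on the first cue that
    discriminates between the options; guess if none does."""
    for cue in cues:
        va, vb = cue(a), cue(b)
        if va != vb:
            return a if va > vb else b
    return None  # guess

def rational(cues, a, b):
    """Exhaustive strategy: sum evidence across all cues, then decide."""
    score = sum(cue(a) - cue(b) for cue in cues)
    return a if score > 0 else b if score < 0 else None

# Hypothetical binary cues for two options, ordered by validity.
features = {'X': (1, 0, 1), 'Y': (0, 1, 1)}
cues = [lambda o, i=i: features[o][i] for i in range(3)]

ttb_choice = take_the_best(cues, 'X', 'Y')  # decided by the first cue alone
rat_choice = rational(cues, 'X', 'Y')       # tally ties 2 vs. 2, so it guesses
```

Viewing both as sequential-sampling processes with different stopping rules, as the abstract proposes, reduces the contrast between them to a termination criterion.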
A temporal ratio model of memory
Psychological Review, 2007
Cited by 29 (2 self)
A model of memory retrieval is described. The model embodies 4 main claims: (a) temporal memory—traces of items are represented in memory partly in terms of their temporal distance from the present; (b) scale-similarity—similar mechanisms govern retrieval from memory over many different timescales; (c) local distinctiveness—performance on a range of memory tasks is determined by interference from near psychological neighbors; and (d) interference-based forgetting—all memory loss is due to interference and not trace decay. The model is applied to data on free recall and serial recall. The account emphasizes qualitative similarity in the retrieval principles involved in memory performance at all timescales, contrary to models that emphasize distinctions between short-term and long-term memory.
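The local-distinctiveness claim can be sketched as follows (a minimal caricature: the decay parameter and presentation times are illustrative, and the published model includes further components such as response thresholds and omissions):

```python
import math

def retrievability(distances, c=1.5):
    """Local-distinctiveness sketch: an item's retrievability is its
    self-similarity (1.0) relative to summed similarity to all items,
    with similarity decaying exponentially with distance in log time."""
    logs = [math.log(t) for t in distances]
    out = []
    for li in logs:
        total = sum(math.exp(-c * abs(li - lj)) for lj in logs)
        out.append(1.0 / total)
    return out

# Items presented 5, 4, 3, 2, and 1 s ago: on a log scale the most
# recent item is the most isolated from its neighbours, so this
# ratio-based account produces a recency advantage.
p = retrievability([5, 4, 3, 2, 1])
```

Because only ratios of temporal distances matter on the log scale, the same mechanism applies unchanged across timescales, which is the scale-similarity claim.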
Locally Bayesian Learning with Applications to Retrospective Revaluation and Highlighting
Psychological Review, 2006
Cited by 26 (7 self)
A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to backpropagate the target data to interior modules, such that an interior component’s target is the input to the next component that maximizes the probability of the next component’s target. Each layer then does locally Bayesian learning. The approach assumes online trial-by-trial learning. The resulting parameter updating is not globally Bayesian but can better capture human behavior. The approach is implemented for an associative learning model that first maps inputs to attentionally filtered inputs and then maps attentionally filtered inputs to outputs. The Bayesian updating allows the associative model to exhibit retrospective revaluation effects such as backward blocking and unovershadowing, which have been challenging for associative learning models. The backpropagation of target values to attention allows the model to show trial-order effects, including highlighting and differences in magnitude of forward and backward blocking, which have been challenging for Bayesian learning models.
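The local update at a single component can be sketched as follows (a sketch only: the full scheme backpropagates targets through an attention layer first, while this shows just one component doing trial-by-trial Bayesian learning over a hypothetical discrete grid of weight vectors with a Gaussian likelihood):

```python
import itertools, math

# Discrete posterior over candidate weight vectors for one component.
hypotheses = list(itertools.product([-1, 0, 1], repeat=2))
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def likelihood(h, x, target, noise=1.0):
    """Gaussian likelihood of the target given weights h and input x."""
    out = sum(w * xi for w, xi in zip(h, x))
    return math.exp(-((out - target) ** 2) / (2 * noise ** 2))

def update(posterior, x, target):
    """One locally Bayesian trial: posterior ∝ prior × likelihood."""
    post = {h: p * likelihood(h, x, target) for h, p in posterior.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Trials consistent with weights (1, 0): the first cue predicts the outcome.
for x, t in [((1, 0), 1), ((0, 1), 0), ((1, 1), 1)]:
    posterior = update(posterior, x, t)

best = max(posterior, key=posterior.get)
```

Chaining such components, each with its own local posterior and a backpropagated target, is what makes the overall scheme non-globally-Bayesian.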
Comparing prototype-based and exemplar-based accounts of category learning and attentional allocation
Journal of Experimental Psychology: Learning, Memory, and Cognition, 2002
Cited by 24 (1 self)
Exemplar theory was motivated by research that often used D. L. Medin and M. M. Schaffer’s (1978) 5/4 stimulus set. The exemplar model has seemed to fit categorization data from this stimulus set better than a prototype model can. Moreover, the exemplar model alone predicts a qualitative aspect of performance that participants sometimes show. In 2 experiments, the authors reexamined these findings. In both experiments, a prototype model fit participants’ performance profiles better than an exemplar model did when comparable prototype and exemplar models were used. Moreover, even when participants showed the qualitative aspect of performance, the exemplar model explained it by making implausible assumptions about human attention and effort in categorization tasks. An independent assay of participants’ attentional strategies suggested that the description the exemplar model offers in such cases is incorrect. A review of 30 uses of the 5/4 stimulus set in the literature reinforces this suggestion. Humans’ categorization processes are a central topic in cognitive psychology. One prominent theory—prototype theory—assumes that categories are represented by a central tendency that is abstracted from a person’s experience with a category’s exemplars
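The two model classes being compared can be sketched in a few lines (the similarity parameter, equal attention weights, and stimuli below are illustrative placeholders, not the 5/4 set or the paper's fitted models):

```python
import math

def sim(x, y, c=2.0, w=None):
    """Exponential similarity over a weighted city-block distance,
    the standard form in this literature."""
    w = w or [1.0 / len(x)] * len(x)
    d = sum(wi * abs(a - b) for wi, a, b in zip(w, x, y))
    return math.exp(-c * d)

def exemplar_evidence(probe, exemplars):
    """Exemplar (GCM-style) evidence: summed similarity of the probe
    to every stored category member."""
    return sum(sim(probe, e) for e in exemplars)

def prototype_evidence(probe, exemplars):
    """Prototype evidence: similarity of the probe to the category's
    central tendency (featurewise mean of the exemplars)."""
    proto = [sum(col) / len(exemplars) for col in zip(*exemplars)]
    return sim(probe, proto)

cat_a = [(1, 1, 1, 0), (1, 0, 1, 1), (1, 1, 0, 1)]  # illustrative stimuli
probe = (1, 1, 1, 1)
```

The representational difference is exactly the one at issue in the abstract: the exemplar account stores every training item, while the prototype account abstracts a single central tendency.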
Bayesian models of cognition
Cited by 23 (1 self)
For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty
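The formal account referred to here rests on Bayes' rule; a minimal sketch (the coin hypotheses and numbers are illustrative, not from the chapter):

```python
def bayes_update(prior, likelihoods):
    """Bayes' rule over a discrete hypothesis space:
    posterior ∝ prior × likelihood, normalized over hypotheses."""
    post = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Toy example: which of two coins produced an observed head?
prior = {'fair': 0.5, 'biased': 0.5}
posterior = bayes_update(prior, {'fair': 0.5, 'biased': 0.9})
```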
Separating perceptual processes from decisional processes in identification and categorization
Perception & Psychophysics, 2001
Cited by 21 (17 self)
Four observers completed perceptual matching, identification, and categorization tasks using separable-dimension stimuli. A unified quantitative approach relating perceptual matching, identification, and categorization was proposed and tested. The approach derives from general recognition theory (Ashby & Townsend, 1986) and provides a powerful method for quantifying the separate influences of perceptual processes and decisional processes within and across tasks. Good accounts of the identification data were obtained from an initial perceptual representation derived from perceptual matching. The same perceptual representation provided a good account of the categorization data, except when selective attention to one stimulus dimension was required. Selective attention altered the perceptual representation by decreasing the perceptual variance along the attended dimension. These findings suggest that a complete understanding of identification and categorization performance requires an understanding of perceptual and decisional processes. Implications for other psychological tasks are discussed. An important goal of psychological inquiry is to understand how behavior is influenced by environmental stimulation and the task at hand. Information about the environment
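The perceptual/decisional separation can be caricatured in one dimension (a sketch only: general recognition theory is multivariate with decision bounds, and the means, variances, and criterion here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def respond(stimulus_mean, perceptual_sd, bound, n_trials=5000):
    """Percepts are Gaussian around the stimulus (perceptual process);
    responses come from a criterion placed on the percept (decisional
    process). The two parameter sets can be estimated separately."""
    percepts = rng.normal(stimulus_mean, perceptual_sd, n_trials)
    return float(np.mean(percepts > bound))  # P(respond "category B")

# Selective attention modelled, as in the abstract's finding, as reduced
# perceptual variance along the attended dimension; the bound is unchanged.
p_unattended = respond(1.0, 1.0, bound=0.5)
p_attended = respond(1.0, 0.5, bound=0.5)
```

Holding the decisional bound fixed while the perceptual variance changes is what lets the framework attribute an effect to perception rather than decision.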
The Generalized Universal Law of Generalization
Journal of Mathematical Psychology, 2001
Cited by 19 (4 self)
It has been argued by Shepard that there is a robust psychological law that relates the distance between a pair of items in psychological space and the probability that they will be confused with each other. Specifically, the probability of confusion is a negative exponential function of the distance between the pair of items. In experimental contexts, distance is typically defined in terms of a multidimensional Euclidean space, but this assumption seems unlikely to hold for complex stimuli. We show that, nonetheless, the Universal Law of Generalization can be derived in the more complex setting of arbitrary stimuli, using a much more universal measure of distance. This universal distance is defined as the length of the shortest program that transforms the representations of the two items of interest into one another: the algorithmic information distance. It is universal in the sense that it minorizes every computable distance: it is the smallest computable distance. We show ...
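The algorithmic information distance itself is uncomputable, but a standard computable proxy is the normalized compression distance; a sketch using zlib (the strings and decay rate are illustrative, and zlib only approximates ideal compression):

```python
import math, zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    (uncomputable) algorithmic information distance."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def confusion_probability(x, y, scale=1.0):
    """Shepard's law with a compression-based distance: confusability
    falls off as a negative exponential of distance."""
    return math.exp(-scale * ncd(x, y))

a = b"abcabcabcabcabcabc"
near = b"abcabcabcabcabcabX"   # one symbol away from a
far = b"qwertyuiopasdfghjk"    # unrelated string
```

Items that compress well together are close under this distance, so near variants are predicted to be confused far more often than unrelated items.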
Assessing model mimicry using the parametric bootstrap
Journal of Mathematical Psychology, 2004
Cited by 19 (3 self)
We present a general sampling procedure to quantify model mimicry, defined as the ability of a model to account for data generated by a competing model. This sampling procedure, called the parametric bootstrap cross-fitting method (PBCM; cf. Williams (J. R. Statist. Soc. B 32 (1970) 350; Biometrics 26 (1970) 23)), generates distributions of differences in goodness-of-fit expected under each of the competing models. In the data-informed version of the PBCM, the generating models have specific parameter values obtained by fitting the experimental data under consideration. The data-informed difference distributions can be compared to the observed difference in goodness-of-fit to allow a quantification of model adequacy. In the data-uninformed version of the PBCM, the generating models have a relatively broad range of parameter values based on prior knowledge. Application of both the data-informed and the data-uninformed PBCM is illustrated with several examples. © 2003 Elsevier Inc. All rights reserved.
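The data-informed loop can be sketched as follows (a sketch only: the nested polynomial models, fixed noise level, and bootstrap size are illustrative; in a full application the noise would be estimated from each model's fit):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_rss(x, y, deg):
    """Least-squares polynomial fit; return predictions and residual
    sum of squares (the goodness-of-fit measure used here)."""
    coef = np.polyfit(x, y, deg)
    pred = np.polyval(coef, x)
    return pred, float(np.sum((y - pred) ** 2))

def pbcm(x, y, deg_a=1, deg_b=2, n_boot=200, sigma=0.5):
    """Data-informed PBCM sketch: each model generates synthetic data
    from its fit to the observed data; both models are refit to every
    sample, and the goodness-of-fit differences are collected."""
    pred_a, _ = fit_rss(x, y, deg_a)
    pred_b, _ = fit_rss(x, y, deg_b)
    diffs = {}
    for name, gen_pred in [('A', pred_a), ('B', pred_b)]:
        d = []
        for _ in range(n_boot):
            y_sim = gen_pred + rng.normal(0.0, sigma, x.size)
            _, rss_a = fit_rss(x, y_sim, deg_a)
            _, rss_b = fit_rss(x, y_sim, deg_b)
            d.append(rss_a - rss_b)  # > 0 favours model B
        diffs[name] = np.array(d)
    return diffs

x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, x.size)  # observed data
diffs = pbcm(x, y)
```

Comparing the observed goodness-of-fit difference against these two generated distributions, rather than against zero, is what corrects for one model's ability to mimic the other.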