Results 1–10 of 281
A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge
 Psychological Review
, 1997
Abstract

Cited by 1108 (9 self)
How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched. Prologue "How much do we know at any time? Much more, or so I believe, than we know we know!" —Agatha Christie, The Moving Finger A typical American seventh grader knows the meaning of
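The dimension-reduction step the abstract describes can be sketched with a truncated SVD of a tiny, hypothetical term-by-context count matrix (the toy data and k=2 are illustrative; the paper works with large corpora and roughly 300 dimensions):

```python
import numpy as np

# Hypothetical toy term-by-context co-occurrence counts.
# Rows: "dog", "cat", "car"; columns: four short contexts.
X = np.array([
    [2.0, 1.0, 0.0, 0.0],  # "dog"
    [1.0, 2.0, 0.0, 0.0],  # "cat"
    [0.0, 0.0, 2.0, 1.0],  # "car"
])

# Truncated SVD: keep only the top k latent dimensions.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
term_vectors = U[:, :k] * s[:k]  # terms in the reduced latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that appear in similar contexts end up close in the latent
# space, even when they never co-occur directly.
print(cosine(term_vectors[0], term_vectors[1]))  # dog vs. cat: high
print(cosine(term_vectors[0], term_vectors[2]))  # dog vs. car: near zero
```

This is the inductive mechanism at its smallest: the latent space compresses shared context patterns into proximity.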
The adaptive nature of human categorization
 Psychological Review
, 1991
Abstract

Cited by 216 (2 self)
A rational model of human categorization behavior is presented that assumes that categorization reflects the derivation of optimal estimates of the probability of unseen features of objects. A Bayesian analysis is performed of what optimal estimations would be if categories formed a disjoint partitioning of the object space and if features were independently displayed within a category. This Bayesian analysis is placed within an incremental categorization algorithm. The resulting rational model accounts for effects of central tendency of categories, effects of specific instances, learning of linearly nonseparable categories, effects of category labels, extraction of basic level categories, base-rate effects, probability matching in categorization, and trial-by-trial learning functions. Although the rational model considers just 1 level of categorization, it is shown how predictions can be enhanced by considering higher and lower levels. Considering prediction at the lower, individual level allows integration of this rational analysis of categorization with the earlier rational analysis of memory (Anderson & Milson, 1989). Anderson (1990) presented a rational analysis of human cognition. The term rational derives from similar "rational-man" analyses in economics. Rational analyses in other fields are sometimes called adaptationist analyses. Basically, they are efforts to explain the behavior in some domain on the assumption that the behavior is optimized with respect to some criteria of adaptive importance. This article begins with a general characterization of how one develops a rational theory of a particular cognitive phenomenon. Then I present the basic theory of categorization developed in Anderson (1990) and review the applications from that book. Since the writing of the book, the theory has been greatly extended and applied to many new phenomena. Most of this article describes these new developments and applications.
A Rational Analysis Several theorists have promoted the idea that psychologists might understand human behavior by assuming it is adapted to the environment (e.g., Brunswik, 1956; Campbell, 1974; Gib
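The core prediction rule the abstract states (the probability of an unseen feature is averaged over categories, weighted by each category's posterior) can be sketched numerically. All counts and the posterior below are made-up illustrations, not the paper's derivation, which builds the posterior incrementally:

```python
import numpy as np

# Rational prediction of an unseen feature j:
#   P(j | F) = sum_k P(k | F) * P(j | k)

# Hypothetical counts: two categories, observed feature frequencies.
n_k = np.array([8.0, 2.0])          # objects seen in each category
feat_count = np.array([7.0, 0.0])   # how many displayed feature j

# Assumed posterior over categories given the probe's observed features.
post_k = np.array([0.7, 0.3])

# Within-category feature probability with Laplace smoothing,
# standing in for the model's Bayesian estimate.
p_feat_given_k = (feat_count + 1.0) / (n_k + 2.0)

p_feat = float(post_k @ p_feat_given_k)
print(round(p_feat, 3))
```

Averaging over categories, rather than committing to the single best one, is what makes the estimate "optimal" in the model's sense.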
Empirical Evaluation of Dissimilarity Measures for Color and Texture
, 1999
Abstract

Cited by 192 (6 self)
This paper empirically compares nine image dissimilarity measures that are based on distributions of color and texture features, summarizing over 1,000 CPU hours of computational experiments. Ground truth is collected via a novel random sampling scheme for color, and via an image partitioning method for texture. Quantitative performance evaluations are given for classification, image retrieval, and segmentation tasks, and for a wide variety of dissimilarity measures. It is demonstrated how the selection of a measure, based on large-scale evaluation, substantially improves the quality of classification, retrieval, and unsupervised segmentation of color and texture images.
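Three representative histogram dissimilarity measures of the kind such comparisons cover can be sketched on toy normalized histograms (the specific bin values are illustrative, and these definitions are common variants rather than the paper's exact formulations):

```python
import numpy as np

# Two hypothetical normalized color histograms.
h1 = np.array([0.4, 0.3, 0.2, 0.1])
h2 = np.array([0.1, 0.2, 0.3, 0.4])

def l1(p, q):
    # L1 (city-block) distance between histograms.
    return float(np.abs(p - q).sum())

def chi_square(p, q):
    # Chi-square statistic against the mean histogram.
    m = (p + q) / 2.0
    return float(((p - m) ** 2 / m).sum())

def jeffrey(p, q):
    # Jeffrey divergence: a symmetrized KL divergence.
    m = (p + q) / 2.0
    return float((p * np.log(p / m) + q * np.log(q / m)).sum())

for name, d in [("L1", l1), ("chi^2", chi_square), ("Jeffrey", jeffrey)]:
    print(name, round(d(h1, h2), 4))
```

All three are zero for identical histograms and grow with bin-wise disagreement; the paper's point is that which one you pick measurably changes retrieval and segmentation quality.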
Comparing decision bound and exemplar models of categorization
 Perception and Psychophysics
, 1993
Abstract

Cited by 150 (87 self)
The performance of a decision bound model of categorization (Ashby, 1992a; Ashby & Maddox, in press) is compared with the performance of two exemplar models. The first is the generalized context model (e.g., Nosofsky, 1986, 1992) and the second is a recently proposed deterministic exemplar model (Ashby & Maddox, in press), which contains the generalized context model as a special case. When the exemplars from each category were normally distributed and the optimal decision bound was linear, the deterministic exemplar model and the decision bound model provided roughly equivalent accounts of the data. When the optimal decision bound was nonlinear, the decision bound model provided a more accurate account of the data than did either exemplar model. When applied to categorization data collected by Nosofsky (1986, 1989), in which the category exemplars are not normally distributed, the decision bound model provided excellent accounts of the data, in many cases significantly outperforming the exemplar models. The decision bound model was found to be especially successful when (1) single-subject analyses were performed, (2) each subject was given relatively extensive training, and (3) the subject's performance was characterized by complex suboptimalities. These results support the hypothesis that the decision bound is of fundamental importance in predicting asymptotic categorization performance.
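The generalized context model that serves as the baseline exemplar model here can be sketched in a few lines: summed similarity to each category's exemplars, fed through the Luce choice rule (exemplar coordinates and the sensitivity parameter c below are made-up):

```python
import numpy as np

# Hypothetical stored exemplars for two categories (2-D stimulus space).
A = np.array([[0.0, 0.0], [0.2, 0.1]])
B = np.array([[1.0, 1.0], [0.9, 0.8]])
c = 2.0  # sensitivity: how fast similarity decays with distance

def summed_sim(probe, exemplars):
    d = np.linalg.norm(exemplars - probe, axis=1)  # Euclidean distances
    return float(np.exp(-c * d).sum())             # exponential similarity

probe = np.array([0.1, 0.1])
sA, sB = summed_sim(probe, A), summed_sim(probe, B)
p_A = sA / (sA + sB)  # Luce choice rule: relative summed similarity
print(round(p_A, 3))
```

A decision bound model would instead carve the space with an explicit (possibly nonlinear) boundary and respond deterministically by side; the paper's comparison turns on which account fits single subjects after extensive training.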
An exemplar-based random walk model of speeded classification
 Psychological Review
, 1997
Abstract

Cited by 140 (30 self)
The authors propose and test an exemplar-based random walk model for predicting response times in tasks of speeded, multidimensional perceptual classification. The model combines elements of R. M. Nosofsky's (1986) generalized context model of categorization and G. D. Logan's (1988) instance-based model of automaticity. In the model, exemplars race among one another to be retrieved from memory, with rates determined by their similarity to test items. The retrieved exemplars provide incremental information that enters into a random walk process for making classification decisions. The model correctly predicts effects of within- and between-categories similarity, individual-object familiarity, and extended practice on classification response times. It also builds bridges between the domains of categorization and automaticity. Models of multidimensional perceptual classification have grown increasingly powerful and sophisticated in recent years, providing detailed quantitative accounts of patterns of classification learning, transfer, and generalization (e.g., Anderson,
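The race-then-walk mechanism can be sketched as a simple simulation: each step retrieves an exemplar with probability proportional to its summed similarity to the probe, and a counter moves toward one category's threshold; step count stands in for response time. All parameter values are illustrative, not the paper's fitted ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def ebrw_trial(sim_A, sim_B, threshold=5):
    """One trial: return (response, number of walk steps)."""
    # Retrieval race: an A exemplar wins with probability
    # proportional to summed similarity to category A.
    p_A = sim_A / (sim_A + sim_B)
    counter, steps = 0, 0
    while abs(counter) < threshold:
        counter += 1 if rng.random() < p_A else -1
        steps += 1
    return ("A" if counter > 0 else "B"), steps

# A probe much more similar to A's exemplars: fast, reliable "A".
resp, rt = ebrw_trial(sim_A=3.0, sim_B=1.0)
print(resp, rt)
```

Because ambiguous probes yield retrieval probabilities near 0.5, the walk wanders longer before hitting a threshold, which is how the model ties similarity structure to response time.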
Bayesian color constancy
 Journal of the Optical Society of America A
, 1997
Abstract

Cited by 138 (18 self)
The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor responses. Second, we construct prior distributions that describe the probability that particular illuminants and surfaces exist in the world. Given a set of photosensor responses, we can then use Bayes’s rule to compute the posterior distribution for the illuminants and the surfaces in the scene. There are two widely used methods for obtaining a single best estimate from a posterior distribution. These are maximum a posteriori (MAP) and minimum mean-squared-error (MMSE) estimation. We argue that neither is appropriate for perception problems. We describe a new estimator, which we call the maximum local mass (MLM) estimate, that integrates local probability density. The new method uses an optimality criterion that is appropriate for perception tasks: It finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We develop the MLM estimator for the color-constancy problem in which flat matte surfaces are uniformly illuminated. In simulations we show that the MLM method performs better than the MAP estimator and better than a number of standard color-constancy algorithms. We note conditions under which even the optimal estimator produces poor estimates: when the spectral properties of the surfaces in the scene are biased. © 1997 Optical Society of America
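The prior-times-likelihood step of the setup can be sketched with a toy discrete posterior over a few candidate illuminants; here only the MAP readout is shown, and every number (gains, prior, noise level) is a made-up illustration rather than the paper's model, whose MLM estimator instead integrates local probability mass:

```python
import numpy as np

illuminants = np.array([0.8, 1.0, 1.2])  # candidate illuminant gains
prior = np.array([0.2, 0.6, 0.2])        # prior over illuminants
surface = 0.5                            # assumed surface reflectance
observed = 0.52                          # photosensor response
sigma = 0.05                             # observation noise

# Likelihood: Gaussian noise around (illuminant gain * reflectance).
lik = np.exp(-((observed - illuminants * surface) ** 2) / (2 * sigma**2))

# Bayes's rule: posterior over illuminants, up to normalization.
posterior = prior * lik
posterior /= posterior.sum()

map_est = illuminants[np.argmax(posterior)]
print(map_est, np.round(posterior, 3))
```

The paper's argument is that picking the posterior peak (MAP) or mean (MMSE) can be misleading for perception; MLM favors the answer with the most probability mass in its neighborhood.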
Exemplar-based accounts of relations between classification, recognition, and typicality
 Journal of Experimental Psychology: Learning, Memory, and Cognition
, 1988
Abstract

Cited by 108 (15 self)
Previously published sets of classification and old-new recognition memory data are reanalyzed within the framework of an exemplar-based generalization model. The key assumption in the model is that, whereas classification decisions are based on the similarity of a probe to exemplars of a target category relative to exemplars of contrast categories, recognition decisions are based on overall summed similarity of a probe to all exemplars. The summed-similarity decision rule is shown to be consistent with a wide variety of recognition memory data obtained in classification learning situations and may provide a unified approach to understanding relations between categorization and recognition. Recently, there has been an upsurge of interest among categorization researchers in exploring relations between classification learning and old-new recognition memory. This interest has been fueled by the exemplar view of category representation, which holds that people base classification decisions on similarity comparisons with stored exemplars (Hintzman, 1986b; Medin & Schaffer, 1978; Nosofsky, 1986).
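The model's key contrast — a relative rule for classification versus a summed rule for recognition — can be sketched directly; exemplar coordinates and the sensitivity c are illustrative:

```python
import numpy as np

# Hypothetical stored exemplars for two categories.
A = np.array([[0.0, 0.0], [0.1, 0.2]])
B = np.array([[1.0, 1.0], [0.9, 1.1]])
c = 1.5  # similarity decay rate

def summed_sim(probe, exemplars):
    d = np.linalg.norm(exemplars - probe, axis=1)
    return float(np.exp(-c * d).sum())

probe = np.array([0.05, 0.1])
sA, sB = summed_sim(probe, A), summed_sim(probe, B)

# Classification: similarity to the target category RELATIVE to contrasts.
p_classify_A = sA / (sA + sB)

# Recognition: overall SUMMED similarity to all exemplars = familiarity.
familiarity = sA + sB
print(round(p_classify_A, 3), round(familiarity, 3))
```

A probe can thus be classified confidently yet feel unfamiliar (low summed similarity), or vice versa, which is how one exemplar store accounts for both tasks.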
Prototypes in the mist: The early epochs of category learning
 Journal of Experimental Psychology: Learning, Memory, & Cognition
, 1998
Abstract

Cited by 104 (3 self)
processes. However, research has focused on small, poorly differentiated categories and on task-final performances—both may highlight exemplar strategies. Thus, we evaluated participants' categorization strategies and standard categorization models at successive stages in the learning of smaller, less differentiated categories and larger, more differentiated categories. In the former case, the exemplar model dominated even early in learning. In the latter case, the prototype model had a strong early advantage that gave way slowly. Alternative models, and even the behavior of individual parameters within models, suggest a psychological transition from prototype-based to exemplar-based processing during category learning and show that different category structures produce different trajectories of learning through the larger space of strategies. Categorizing objects into psychological equivalence classes is a basic cognitive task. Descriptions of categorization long favored a generalized prototype principle (Homa,
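The two model classes being pitted against each other can be sketched side by side on toy data: a prototype rule compares the probe to category centroids, while an exemplar rule sums similarity over every stored item (data and c are illustrative):

```python
import numpy as np

# Hypothetical training exemplars for two categories.
A = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]])
B = np.array([[1.0, 1.0], [0.8, 1.0], [1.0, 0.8]])
c = 2.0
probe = np.array([0.1, 0.1])

def sim(d):
    return np.exp(-c * d)  # exponential similarity decay

# Prototype rule: similarity to each category's mean.
proto_A = sim(np.linalg.norm(probe - A.mean(axis=0)))
proto_B = sim(np.linalg.norm(probe - B.mean(axis=0)))
p_proto_A = float(proto_A / (proto_A + proto_B))

# Exemplar rule: summed similarity to each category's members.
ex_A = sim(np.linalg.norm(A - probe, axis=1)).sum()
ex_B = sim(np.linalg.norm(B - probe, axis=1)).sum()
p_ex_A = float(ex_A / (ex_A + ex_B))

print(round(p_proto_A, 3), round(p_ex_A, 3))
```

The two rules agree on central probes like this one; they diverge on atypical items near old exemplars, which is what lets learning-stage data discriminate them.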
SUSTAIN: A network model of category learning
 Psychological Review
, 2004
Abstract

Cited by 102 (13 self)
SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into
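The recruitment-on-surprise idea can be caricatured in a few lines: assign each item to its nearest cluster, and recruit a new cluster only when that cluster's label disagrees (the "bat is a mammal" case). The matching rule and update rate below are made-up simplifications, not SUSTAIN's actual attention-weighted equations:

```python
import numpy as np

def fit_clusters(items, labels):
    clusters = []  # list of (center, label) pairs
    for x, y in zip(items, labels):
        if clusters:
            dists = [np.linalg.norm(x - c) for c, _ in clusters]
            best = int(np.argmin(dists))
            if clusters[best][1] == y:
                # Expected outcome: nudge the winning cluster toward x.
                center, lab = clusters[best]
                clusters[best] = (center + 0.5 * (x - center), lab)
                continue
        # Surprising event (or first item): recruit a new cluster.
        clusters.append((x.copy(), y))
    return clusters

items = [np.array([0.0, 0.0]), np.array([0.1, 0.0]),
         np.array([1.0, 1.0])]   # the last item is the surprise
labels = ["bird", "bird", "mammal"]
print(len(fit_clusters(items, labels)))  # clusters recruited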