Results 1 - 10 of 21
The spatial coding model of visual word identification - Psychological Review, 2010
Abstract - Cited by 44 (3 self)
Visual word identification requires readers to code the identity and order of the letters in a word and match this code against previously learned codes. Current models of this lexical matching process posit context-specific letter codes in which letter representations are tied to either specific serial positions or specific local contexts (e.g., letter clusters). The spatial coding model described here adopts a different approach to letter position coding and lexical matching based on context-independent letter representations. In this model, letter position is coded dynamically, with a scheme called spatial coding. Lexical matching is achieved via a method called superposition matching, in which input codes and learned codes are matched on the basis of the relative positions of their common letters. Simulations of the model illustrate its ability to explain a broad range of results from the masked form priming literature, as well as to capture benchmark findings from the unprimed lexical decision task.
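The superposition-matching idea can be illustrated with a toy similarity function (a simplification for illustration only, not the published model's equations; the function name and decay scheme are hypothetical): each shared letter contributes a signal that decays with the offset between its positions in the two strings, so order is coded relatively rather than by absolute slot.

```python
def superposition_match(input_word, stored_word):
    """Toy relative-position matcher: each shared letter adds a signal
    that falls off with the offset between its positions in the two
    strings, rather than requiring an exact slot-by-slot match."""
    n = len(stored_word)
    score = 0.0
    used = set()  # each stored letter token can be matched only once
    for i, letter in enumerate(input_word):
        for j, stored_letter in enumerate(stored_word):
            if j not in used and letter == stored_letter:
                score += max(0.0, 1.0 - abs(i - j) / n)
                used.add(j)
                break
    return score / n
```

Even this crude scheme reproduces one benchmark contrast from the masked form priming literature: a transposed-letter prime ("jugde") matches its target better than a double-substitution prime ("junpe").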
Holographic String Encoding, 2010
Abstract - Cited by 6 (0 self)
In this article, we apply a special case of holographic representations to letter position coding. We translate different well-known schemes into this format, which uses distributed representations and supports constituent structure. We show that in addition to these brain-like characteristics, performance on a standard benchmark of behavioral effects improves in the holographic format relative to the standard localist one. This notably occurs because of emergent properties of holographic codes, such as transposition and edge effects, for which we give formal demonstrations. Finally, we outline the limits of the approach as well as its possible future extensions.
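The binding operation behind such codes can be sketched with a generic holographic reduced representation (HRR) construction: random letter and position vectors are bound by circular convolution and superposed into one trace. This is a textbook HRR recipe under assumed random vectors, not the paper's specific encoding scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # dimensionality of the distributed code

def rand_vec():
    # Random vectors with expected unit norm.
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def cconv(a, b):
    # Circular convolution via FFT: the HRR binding operator.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

letters = {c: rand_vec() for c in "abcdefghijklmnopqrstuvwxyz"}
positions = [rand_vec() for _ in range(10)]

def encode(word):
    # Bind each letter to its position marker, then superpose the bindings.
    return sum(cconv(letters[c], positions[i]) for i, c in enumerate(word))

def sim(u, v):
    # Cosine similarity between two holographic codes.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Because "judge" and "jugde" share three of five letter-position bindings, their codes come out far more similar than those of unrelated words: a transposition effect emerging directly from the superposition.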
Emergence in cognitive science - Topics in Cognitive Science, 2010
Abstract - Cited by 6 (1 self)
The study of human intelligence was once dominated by symbolic approaches, but over the last 30 years an alternative approach has arisen. Symbols and processes that operate on them are often seen today as approximate characterizations of the emergent consequences of sub- or nonsymbolic processes, and a wide range of constructs in cognitive science can be understood as emergents. These include representational constructs (units, structures, rules), architectural constructs (central executive, declarative memory), and developmental processes and outcomes (stages, sensitive periods, neurocognitive modules, developmental disorders). The greatest achievements of human cognition may be largely emergent phenomena. It remains a challenge for the future to learn more about how these greatest achievements arise and to emulate them in artificial systems.
Locating object knowledge in the brain: Comments on Bowers's (2009) ..., 2010
Abstract - Cited by 5 (0 self)
According to Bowers (2009), the finding that there are neurons with highly selective responses to familiar stimuli supports theories positing localist representations over approaches positing the type of distributed representations typically found in parallel distributed processing (PDP) models. However, his conclusions derive from an overly narrow view of the range of possible distributed representations and of the role that PDP models can play in exploring their properties. Although it is true that current distributed theories face challenges in accounting for both neural and behavioral data, the proposed localist account—to the extent that it is articulated at all—runs into more fundamental difficulties. Central to these difficulties is the problem of specifying the set of entities a localist unit represents.
Optimization and Quantization in Gradient Symbol Systems: A Framework for Integrating the Continuous and the Discrete in Cognition
Abstract - Cited by 4 (3 self)
Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland & Rumelhart, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to ...
Incorporating Rapid Neocortical Learning of New Schema-Consistent Information Into Complementary Learning Systems Theory, 2013
Abstract - Cited by 4 (1 self)
The complementary learning systems theory of the roles of hippocampus and neocortex (McClelland, McNaughton, & O’Reilly, 1995) holds that the rapid integration of arbitrary new information into neocortical structures is avoided to prevent catastrophic interference with structured knowledge representations stored in synaptic connections among neocortical neurons. Recent studies (Tse et al., 2007, 2011) showed that neocortical circuits can rapidly acquire new associations that are consistent with prior knowledge. The findings challenge the complementary learning systems theory as previously presented. However, new simulations extending those reported in McClelland et al. (1995) show that new information that is consistent with knowledge previously acquired by a putatively cortexlike artificial neural network can be learned rapidly and without interfering with existing knowledge; it is when inconsistent new knowledge is acquired quickly that catastrophic interference ensues. Several important features of the findings of Tse et al. (2007, 2011) are captured in these simulations, indicating that the neural network model used in McClelland et al. has characteristics in common with neocortical learning mechanisms. An additional simulation generalizes beyond the network model previously used, showing ...
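The interference contrast at the heart of this argument can be reproduced in a deliberately minimal linear network (a toy sketch, not the McClelland et al. simulations; the dimensions and learning rates are arbitrary choices): once the network has learned a "schema" mapping, training a single schema-consistent item in isolation leaves old knowledge intact, while training an inconsistent item overwrites it.

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 4))  # the environment's "schema" mapping
X_old = np.eye(4)                 # old items: orthonormal inputs
Y_old = W_true @ X_old            # their schema-consistent outputs

def old_error(W):
    # Mean squared error on the previously learned items.
    return float(np.mean((W @ X_old - Y_old) ** 2))

def train_item(W, x, y, lr=0.5, steps=100):
    # Gradient descent on the single new item only (focused learning).
    W = W.copy()
    for _ in range(steps):
        W -= lr * 2.0 * np.outer(W @ x - y, x)  # grad of ||Wx - y||^2
    return W

# Learn the old items first; with orthonormal inputs the batch gradient
# step reduces to pulling W toward W_true.
W = np.zeros((3, 4))
for _ in range(200):
    W -= 0.25 * (W - W_true)

x_new = rng.normal(size=4)
x_new /= np.linalg.norm(x_new)

W_consistent = train_item(W, x_new, W_true @ x_new)     # fits the schema
W_inconsistent = train_item(W, x_new, -W_true @ x_new)  # contradicts it
```

The error on the old items barely moves after the consistent item but jumps after the inconsistent one: in this sketch, catastrophic interference follows from inconsistency with prior structure, not from fast learning per se.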
Spreading activation in an attractor network with latching dynamics: Automatic semantic priming revisited - Cognitive Science, 2012
Abstract - Cited by 2 (0 self)
Localist models of spreading activation (SA) and models assuming distributed representations offer very different takes on semantic priming, a widely investigated paradigm in word recognition and semantic memory research. In this study, we implemented SA in an attractor neural network model with distributed representations and created a unified framework for the two approaches. Our models assume a synaptic depression mechanism leading to autonomous transitions between encoded memory patterns (latching dynamics), which account for the major characteristics of automatic semantic priming in humans. Using computer simulations, we demonstrated how findings that challenged attractor-based networks in the past, such as mediated and asymmetric priming, are a natural consequence of our present model's dynamics. Puzzling results regarding backward priming were also given a straightforward explanation. In addition, the current model addresses some of the differences between semantic and associative relatedness and explains how these differences interact with stimulus onset asynchrony in priming experiments.
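As background, the attractor component of such models can be sketched as a standard Hopfield network doing pattern completion (a generic textbook construction; the synaptic depression mechanism that produces latching transitions is deliberately left out of this sketch).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
patterns = rng.choice([-1, 1], size=(2, N))  # two stored memory patterns

# Hebbian weight matrix with no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(state, steps=5):
    # Synchronous sign updates; the state settles into a stored attractor.
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Corrupt the first memory and let the attractor dynamics clean it up.
probe = patterns[0].copy()
probe[:10] *= -1
recovered = recall(probe)
```

In the paper's model, a depression variable on recently active synapses would then destabilize the attractor just reached, letting the state "latch" onward to a related pattern; that autonomous transition is what carries spreading activation, including mediated priming.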
Emergence of wordlikeness in the mental lexicon: Language, population, and task effects in visual word recognition, 2013
Outline of a new approach to the nature of mind
Abstract
I propose a new approach to the constitutive problem of psychology ‘what is mind?’ The first section introduces modifications of the received scope, methodology, and evaluation criteria of unified theories of cognition in accordance with the requirements of evolutionary compatibility and of a mature science. The second section outlines the proposed theory. Its first part provides empirically verifiable conditions delineating the class of meaningful neural formations and modifies accordingly the traditional conceptions of meaning, concept and thinking. This analysis is part of a theory of communication in terms of inter-level systems of primitives that proposes the communication-understanding principle as a psychological invariance. It unifies a substantial amount of research by systematizing the notions of meaning, thinking, concept, belief, communication, and understanding and leads to a minimum vocabulary for this core system of mental phenomena. Its second part argues that written human language is the key characteristic of the artificially natural human mind. Overall, the theory both supports Darwin’s continuity hypothesis and proposes that the mental gap is within our own species.
Recognizing Sights, Smells, and Sounds with Gnostic, 2013
Abstract
Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of “gnostic” neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski’s theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being ...