Results 1–10 of 80
Processing Capacity Defined by Relational Complexity: Implications for Comparative, Developmental, and Cognitive Psychology
, 1989
"... It is argued that working memory limitations are best defined in terms of the complexity of relations that can be processed in parallel. Relational complexity is related to processing loads in problem solving, and discriminates between higher animal species, as well as between children of differen ..."
Abstract

Cited by 176 (15 self)
 Add to MetaCart
It is argued that working memory limitations are best defined in terms of the complexity of relations that can be processed in parallel. Relational complexity is related to processing loads in problem solving, and discriminates between higher animal species, as well as between children of different ages. Complexity is defined by the number of dimensions, or sources of variation, that are related. A unary relation has one argument and one source of variation, because its argument can be instantiated in only one way at a time. A binary relation has two arguments, and two sources of variation, because two argument instantiations are possible at once. Similarly, a ternary relation is three dimensional, a quaternary relation is four dimensional, and so on. Dimensionality is related to number of chunks, because both attributes on dimensions and chunks are independent units of information of arbitrary size. Empirical studies of working memory limitations indicate a soft limit which corresponds to processing one quaternary relation in parallel. More complex concepts are processed by segmentation or conceptual chunking. Segmentation entails breaking tasks into components which do not exceed processing capacity, and which are processed serially. Conceptual chunking entails "collapsing" representations to reduce their dimensionality and consequently their processing load, but at the cost of making some relational information inaccessible. Parallel distributed processing implementations of relational representations show that relations with more arguments entail a higher computational cost, which corresponds to empirical observations of higher processing loads in humans. Empirical evidence is presented that relational complexity discriminates between higher species...
Approximating the Semantics of Logic Programs by Recurrent Neural Networks
"... In [18] we have shown how to construct a 3layered recurrent neural network that computes the fixed point of the meaning function TP of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first order case. We define a no ..."
Abstract

Cited by 62 (10 self)
 Add to MetaCart
In [18] we have shown how to construct a 3-layered recurrent neural network that computes the fixed point of the meaning function TP of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first-order case. We define a notion of approximation for interpretations and prove that there exists a 3-layered feed-forward neural network that approximates the calculation of TP for a given first-order acyclic logic program P with an injective level mapping arbitrarily well. Extending the feed-forward network by recurrent connections we obtain a recurrent neural network whose iteration approximates the fixed point of TP. This result is proven by taking advantage of the fact that for acyclic logic programs the function TP is a contraction mapping on a complete metric space defined by the interpretations of the program. Mapping this space to the metric space ℝ with Euclidean distance, a real-valued function fP can be defined which corresponds to TP and is continuous as well as a contraction. Consequently it can be approximated by an appropriately chosen class of feed-forward neural networks.
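The fixed-point construction underlying this abstract can be illustrated in the propositional case: T_P maps an interpretation to the set of clause heads whose bodies it satisfies, and iterating from the empty interpretation reaches the least fixed point. The sketch below is a minimal illustration of that operator, not the paper's network construction; the program and atom names are hypothetical.

```python
# A minimal sketch of the immediate-consequence operator T_P for a
# propositional program, iterated to its least fixed point.  The
# program and atom names below are hypothetical.

# A program is a list of clauses (head, body): the head holds
# whenever every atom in the body holds.  A fact has an empty body.
program = [
    ("p", []),
    ("q", ["p"]),
    ("r", ["p", "q"]),
]

def t_p(interpretation):
    """One application of T_P: all heads whose bodies are satisfied."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

# Iterate from the empty interpretation until the fixed point is reached.
current = set()
while (nxt := t_p(current)) != current:
    current = nxt

print(sorted(current))  # ['p', 'q', 'r']
```

For acyclic programs this iteration converges, which is the contraction property the paper exploits when mapping interpretations into ℝ.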
From Words to Understanding
 COMPUTING WITH LARGE RANDOM PATTERNS
"... As was discussed in section 22, language is central to a correct understanding of the mind. Compositional analytic models perform well in the domain and subject area they are developed for, but any extension is difficult and the models have incomplete psychological veracity. Here we explore how to c ..."
Abstract

Cited by 58 (16 self)
 Add to MetaCart
As was discussed in section 22, language is central to a correct understanding of the mind. Compositional analytic models perform well in the domain and subject area they are developed for, but any extension is difficult and the models have incomplete psychological veracity. Here we explore how to compute representations of meaning based on a lower level of abstraction and how to use the models for tasks that require some form of language understanding.
Vector symbolic architectures answer Jackendoff’s challenges for cognitive neuroscience
 International Conference on Cognitive Science
, 2003
"... Jackendoff (2002) posed four challenges that linguistic combinatoriality and rules of language present to theories of brain function. The essence of these problems is the question of how to neurally instantiate the rapid construction and transformation of the compositional structures that are typica ..."
Abstract

Cited by 26 (2 self)
 Add to MetaCart
(Show Context)
Jackendoff (2002) posed four challenges that linguistic combinatoriality and rules of language present to theories of brain function. The essence of these problems is the question of how to neurally instantiate the rapid construction and transformation of the compositional structures that are typically taken to be the domain of symbolic processing. He contended that typical connectionist approaches fail to meet these challenges and that the dialogue between linguistic theory and cognitive neuroscience will be relatively unproductive until the importance of these problems is widely recognised and the challenges answered by some technical innovation in connectionist modelling. This paper claims that a little-known family of connectionist models (Vector Symbolic Architectures) are able to meet Jackendoff's challenges.
Principles for an Integrated Connectionist/Symbolic Theory of Higher Cognition
, 1992
"... The main claim of this paper is that connectionism offers cognitive science a number of excellent opportunities for turning methodological, theoretical. and metatheoretica! schisms into powerfnl integrationsopportunities for forging constructive synergy out of the destructive interference whic ..."
Abstract

Cited by 26 (4 self)
 Add to MetaCart
The main claim of this paper is that connectionism offers cognitive science a number of excellent opportunities for turning methodological, theoretical, and metatheoretical schisms into powerful integrations: opportunities for forging constructive synergy out of the destructive interference which plagues the field. The paper begins with an analysis of the rifts in the field and what it would take to overcome them. We argue that while connectionism has often contributed to the deepening of these schisms, it is nonetheless possible to turn this trend around: possible for connectionism to play a central role in a unification of cognitive science. Essential to this process is the development of strong theoretical principles founded (in part) on connectionist computation; a main goal of this paper is to demonstrate that such principles are indeed within the reach of a connectionist-grounded theory of cognition. The enterprise rests on a willingness to entertain, analyze, and extend characterizations of cognitive problems, and hypothesized solutions, which are deliberately overly simple and general, in order to discover the insights they can offer through the mathematical analyses which this simplicity and generality make possible. In this ...
Binary Spatter-Coding of Ordered K-Tuples
 In
, 1996
"... Information with structure is traditionally organized into records with fields. For example, a medical record consisting of name, sex, age, and weight might look like (Joe, male, 66, 77). What 77 stands for is determined by its location in the record, so that this is an example of local representati ..."
Abstract

Cited by 23 (4 self)
 Add to MetaCart
Information with structure is traditionally organized into records with fields. For example, a medical record consisting of name, sex, age, and weight might look like (Joe, male, 66, 77). What 77 stands for is determined by its location in the record, so that this is an example of local representation. The brain's wiring, and robustness under local damage, speak for the importance of distributed representations. The Holographic Reduced Representation (HRR) of Plate is a prime example based on real or complex vectors. This paper describes how spatter coding leads to binary HRRs, how the fields of a record are encoded into a single long binary word without fields, and how they are extracted from such a word.
1 Introduction
Nested compositional structure is fundamental to high-level mental functions, such as language and analogy. Accordingly, modeling these functions with neural nets requires that the structures be represented in a form suitable for neural nets.
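The field-less encoding described here can be sketched with Kanerva-style binary spatter codes: binding is bitwise XOR (its own inverse) and superposition is a bitwise majority vote. This is a minimal illustration rather than the paper's exact construction; the record fields, codebooks, and word length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # length of the random binary words

def randvec():
    """A random dense binary word."""
    return rng.integers(0, 2, N, dtype=np.uint8)

def bind(a, b):
    """Binding by bitwise XOR; XOR is its own inverse."""
    return a ^ b

def bundle(vecs):
    """Superpose words by a bitwise majority vote (ties broken to 0)."""
    return (np.sum(vecs, axis=0, dtype=np.int64) > len(vecs) / 2).astype(np.uint8)

def hamming(a, b):
    """Normalised Hamming distance."""
    return float(np.mean(a != b))

# Hypothetical codebooks for roles and fillers.
roles = {r: randvec() for r in ("name", "sex", "age", "weight")}
fillers = {f: randvec() for f in ("Joe", "male", "66", "77")}

# Encode the record (Joe, male, 66, 77) as one field-less binary word.
record = bundle([bind(roles[r], fillers[f])
                 for r, f in zip(roles, ("Joe", "male", "66", "77"))])

# Extraction: XOR with a role gives a noisy filler, which a nearest-
# neighbour clean-up against the filler codebook resolves.
probe = bind(record, roles["weight"])
best = min(fillers, key=lambda f: hamming(probe, fillers[f]))  # '77'
```

Because XOR is its own inverse, probing the record with a role vector recovers the corresponding filler up to noise introduced by the majority vote; with long words the correct filler remains much closer than any distractor.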
Multiplicative Binding, Representation Operators and Analogy
"... This paper introduces a novel implementation of the bind() operator that is simple, can be efficiently implemented, and highlights the relationship between retrieval queries and analogical mapping. A frame of role/filler bindings can easily be represented using bind() and bundle(). However, typical ..."
Abstract

Cited by 19 (5 self)
 Add to MetaCart
This paper introduces a novel implementation of the bind() operator that is simple, can be efficiently implemented, and highlights the relationship between retrieval queries and analogical mapping. A frame of role/filler bindings can easily be represented using bind() and bundle(). However, typical binding systems are unable to adequately represent multiple frames and arbitrary nested compositional structures. A novel family of representational operators (called braid()) is introduced to address these problems. Other binding systems make the strong assumption that the roles and fillers are disjoint in order to avoid ambiguities inherent in their representational idioms. The braid() operator can be used to avoid this assumption. The new representational idiom suggests how the cognitive processes of bottom-up and top-down object recognition might be implemented. These processes depend on analogical mapping to integrate disjoint representations and drive perceptual search.
Analogical Inference by Systematic Substitution
Analogical inference depends on systematic substitution of the components of compositional structures (Gentner, 1983; Halford et al., 1994; Holyoak & Thagard, 1989).
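A generic multiplicative binding scheme in the spirit of this abstract can be sketched with dense bipolar vectors, where bind() is the elementwise product (its own inverse) and bundle() is elementwise summation. This is an illustrative stand-in, not the paper's braid() operator; the frame, roles, and fillers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # dimensionality of the random vectors

def randvec():
    """A dense random bipolar vector with entries in {-1, +1}."""
    return rng.choice([-1, 1], N)

def bind(a, b):
    """Multiplicative binding: elementwise product, its own inverse."""
    return a * b

def bundle(vecs):
    """Superpose several vectors by elementwise summation."""
    return np.sum(vecs, axis=0)

def sim(a, b):
    """Normalised dot product as a similarity measure."""
    return a @ b / np.sqrt((a @ a) * (b @ b))

# Hypothetical frame loves(John, Mary) as a bundle of role/filler bindings.
roles = {r: randvec() for r in ("pred", "agent", "patient")}
fillers = {f: randvec() for f in ("loves", "John", "Mary")}
frame = bundle([bind(roles["pred"], fillers["loves"]),
                bind(roles["agent"], fillers["John"]),
                bind(roles["patient"], fillers["Mary"])])

# Retrieval query: unbinding with a role vector recovers a noisy filler,
# which a nearest-neighbour clean-up resolves.
probe = bind(frame, roles["agent"])
best = max(fillers, key=lambda f: sim(probe, fillers[f]))  # 'John'

# Systematic substitution: multiplying by bind(John, Bill) maps every
# occurrence of John in the frame to Bill, approximately.
bill = randvec()
mapped = bind(frame, bind(fillers["John"], bill))
probe2 = bind(mapped, roles["agent"])   # now decodes to Bill
```

The substitution step illustrates the link between retrieval queries and analogical mapping: because binding is its own inverse and distributes over superposition, a single product with a mapping vector substitutes one component for another throughout the structure.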
Connectionist sentence processing in perspective
 Cognitive Science
, 1999
"... The emphasis in the connectionist sentenceprocessing literature on distributed representation and emergence of grammar from such systems can easily obscure the often close relations between connectionist and symbolist systems. This paper argues that the Simple Recurrent Network (SRN) models propose ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
The emphasis in the connectionist sentence-processing literature on distributed representation and emergence of grammar from such systems can easily obscure the often close relations between connectionist and symbolist systems. This paper argues that the Simple Recurrent Network (SRN) models proposed by Jordan (1989) and Elman (1990) are more directly related to stochastic Part-of-Speech (POS) taggers than to parsers or grammars as such, while auto-associative memory models of the kind pioneered by Longuet-Higgins, Willshaw, Pollack and others may be useful for grammar induction from a network-based conceptual structure as well as for structure-building. These observations suggest some interesting new directions for specifically connectionist sentence-processing research, including more efficient representations for finite-state machines, and acquisition devices based on a distinctively connectionist basis for grounded symbolist conceptual structure.
Analogy Retrieval and Processing With Distributed Vector Representations
, 1998
"... : Holographic Reduced Representations (HRRs) are a method for encoding nested relational structures in fixed width vector representations. HRRs encode relational structures as vector representations in such a way that the superficial similarity of the vectors reflects both superficial and structural ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
Holographic Reduced Representations (HRRs) are a method for encoding nested relational structures in fixed-width vector representations. HRRs encode relational structures as vector representations in such a way that the superficial similarity of the vectors reflects both superficial and structural similarity of the relational structures. HRRs also support a number of operations that could be very useful in psychological models of human analogy processing: fast estimation of superficial and structural similarity via a vector dot-product; finding corresponding objects in two structures; and chunking of vector representations. Although similarity assessment and discovery of corresponding objects both theoretically take exponential time to perform fully and accurately, with HRRs one can obtain approximate solutions in constant time. The accuracy of these operations with HRRs mirrors patterns of human performance on analog retrieval and processing tasks.
Keywords: neural networks, distributed representations, binding, analogy, analog retrieval, structure, chunking, systematicity
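Plate's circular-convolution binding can be sketched in a few lines: bind role and filler by circular convolution (computed via the FFT), superpose bindings by addition, and decode with the approximate inverse (a cyclic involution of the cue), cleaning up by nearest neighbour under the dot product. The relation encoded and the vector dimensionality below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2048  # vector dimensionality

def randvec():
    """An HRR element: i.i.d. normal entries with variance 1/N."""
    return rng.normal(0, 1 / np.sqrt(N), N)

def bind(a, b):
    """Binding by circular convolution, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    """Decode by convolving with the involution of the cue,
    the approximate inverse under circular convolution."""
    return bind(trace, np.roll(cue[::-1], 1))

def sim(a, b):
    """Vector dot-product similarity, normalised."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical relation bite(dog, man) as a sum of role/filler bindings.
roles = {r: randvec() for r in ("agent", "object")}
fillers = {f: randvec() for f in ("dog", "man")}
trace = (bind(roles["agent"], fillers["dog"])
         + bind(roles["object"], fillers["man"]))

# Querying a role yields a noisy filler; the dot product against a
# clean-up memory of fillers finds the best match in constant time.
probe = unbind(trace, roles["agent"])
best = max(fillers, key=lambda f: sim(probe, fillers[f]))  # 'dog'
```

The dot product used for clean-up is the same fast similarity estimate the abstract appeals to for analog retrieval: it compares two fixed-width vectors in time independent of the structural depth they encode.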
Connectionist Variable Binding
 Expert Systems
, 2000
"... : Variable binding has long been a challenge to connectionists. Attempts to perform variable binding using localist and distributed connectionist representations are discussed, and problems inherent in each type of representation are outlined. Keywords: Neural networks, localist representations, dis ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
Variable binding has long been a challenge to connectionists. Attempts to perform variable binding using localist and distributed connectionist representations are discussed, and problems inherent in each type of representation are outlined.
Keywords: Neural networks, localist representations, distributed representations, systematicity, variable binding, unification, inference.
1. Introduction
In recent years research on intelligent systems has split into two paradigms, the classical symbolic artificial intelligence (AI) paradigm and the connectionist AI paradigm. Researchers working in symbolic AI maintain that the correct level at which to model intelligent systems (including the human mind) is that of the symbol. The symbol is an entity in a computational system that can have arbitrary designations and is used to refer to an entity in the outside world, and the main assumptions on which this paradigm rests were coherently outlined by Newell and Simon (Newell & Simon, 1980) und...