Results 1–10 of 66
Optimality Theory: Constraint interaction in Generative Grammar
, 1993
"... ~ ROA Version, 8/2002. Essentially identical to the Tech Report, with new pagination (but the same footnote and example numbering); correction of typos, oversights & outright errors; improved typography; and occasional smallscale clarificatory rewordings. Citation should include reference to t ..."
Cited by 2092 (42 self)
ROA Version, 8/2002. Essentially identical to the Tech Report, with new pagination (but the same footnote and example numbering); correction of typos, oversights & outright errors; improved typography; and occasional small-scale clarificatory rewordings. Citation should include reference to this version.
Toward a connectionist model of recursion in human linguistic performance
 Cognitive Science
, 1999
"... Naturally occurring speech contains only a limited amount of complex recursive structure, and this is reflected in the empirically documented difficulties that people experience when processing such structures. We present a connectionist model of human performance in processing recursive language st ..."
Cited by 167 (20 self)
Naturally occurring speech contains only a limited amount of complex recursive structure, and this is reflected in the empirically documented difficulties that people experience when processing such structures. We present a connectionist model of human performance in processing recursive language structures. The model is trained on simple artificial languages. We find that the qualitative performance profile of the model matches human behavior, both on the relative difficulty of center-embedding and cross-dependency, and between the processing of these complex recursive structures and right-branching recursive constructions. We analyze how these differences in performance are reflected in the internal representations of the model by performing discriminant analyses on these representations both before and after training. Furthermore, we show how a network trained to process recursive structures can also generate such structures in a probabilistic fashion. This work suggests a novel explanation of people’s limited recursive performance, without assuming the existence of a mentally represented competence grammar allowing unbounded recursion.
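The class of model the abstract describes is a simple recurrent network trained to predict the next symbol of an artificial language. The sketch below is an illustrative Elman-style SRN in NumPy, not the paper's actual architecture: the toy "a a b b #" corpus, the layer sizes, and the one-step truncated backpropagation are all assumptions made to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy language over {a, b, #}: repeated "a a b b #" sequences, one-hot coded.
SYMS = ["a", "b", "#"]
corpus = [0, 0, 1, 1, 2] * 20

def one_hot(i, n=3):
    v = np.zeros(n)
    v[i] = 1.0
    return v

H = 8  # hidden units (illustrative size)
Wxh = rng.normal(0, 0.5, (H, 3))   # input -> hidden
Whh = rng.normal(0, 0.5, (H, H))   # hidden -> hidden (the recurrence)
Why = rng.normal(0, 0.5, (3, H))   # hidden -> next-symbol prediction

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run_epoch(lr=0.1):
    """One pass over the corpus with 1-step truncated backprop;
    returns the mean next-symbol cross-entropy."""
    global Wxh, Whh, Why
    h = np.zeros(H)
    total = 0.0
    for t in range(len(corpus) - 1):
        x, y = one_hot(corpus[t]), one_hot(corpus[t + 1])
        h_prev = h
        h = np.tanh(Wxh @ x + Whh @ h_prev)
        p = softmax(Why @ h)
        total += -np.log(p[corpus[t + 1]])
        dz = p - y                         # softmax + cross-entropy gradient
        dh = (Why.T @ dz) * (1 - h ** 2)   # backprop through tanh
        Why -= lr * np.outer(dz, h)
        Wxh -= lr * np.outer(dh, x)
        Whh -= lr * np.outer(dh, h_prev)
    return total / (len(corpus) - 1)

losses = [run_epoch() for _ in range(30)]
```

With deeper embeddings and full backpropagation through time, the same setup is the kind of model whose performance profile the abstract compares against human data on center-embedded versus right-branching structures.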
Processing Capacity Defined by Relational Complexity: Implications for Comparative, Developmental, and Cognitive Psychology
, 1989
"... It is argued that working memory limitations are best defined in terms of the complexity of relations that can be processed in parallel. Relational complexity is related to processing loads in problem solving, and discriminates between higher animal species, as well as between children of differen ..."
Cited by 151 (10 self)
It is argued that working memory limitations are best defined in terms of the complexity of relations that can be processed in parallel. Relational complexity is related to processing loads in problem solving, and discriminates between higher animal species, as well as between children of different ages. Complexity is defined by the number of dimensions, or sources of variation, that are related. A unary relation has one argument and one source of variation, because its argument can be instantiated in only one way at a time. A binary relation has two arguments, and two sources of variation, because two argument instantiations are possible at once. Similarly, a ternary relation is three dimensional, a quaternary relation is four dimensional, and so on. Dimensionality is related to number of chunks, because both attributes on dimensions and chunks are independent units of information of arbitrary size. Empirical studies of working memory limitations indicate a soft limit which corresponds to processing one quaternary relation in parallel. More complex concepts are processed by segmentation or conceptual chunking. Segmentation entails breaking tasks into components which do not exceed processing capacity, and which are processed serially. Conceptual chunking entails "collapsing" representations to reduce their dimensionality and consequently their processing load, but at the cost of making some relational information inaccessible. Parallel distributed processing implementations of relational representations show that relations with more arguments entail a higher computational cost, which corresponds to empirical observations of higher processing loads in humans. Empirical evidence is presented that relational complexity discriminates between higher species...
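The metric the abstract defines is simple enough to state in code: complexity is the number of related argument slots, and conceptual chunking collapses several slots into one opaque unit, reducing dimensionality at the cost of access to the collapsed arguments. The tuple encoding and the `chunk` helper below are my own illustrative assumptions, not the paper's formalism.

```python
# A relation is a tuple: (name, arg1, arg2, ...).

def complexity(relation):
    """Relational complexity = number of argument slots (dimensions)."""
    _name, *args = relation
    return len(args)

def chunk(relation, collapse):
    """Conceptual chunking: merge the argument slots at the given indices
    into one opaque chunk. Dimensionality drops, but the merged arguments
    are no longer accessible as separate slots."""
    name, *args = relation
    kept = [a for i, a in enumerate(args) if i not in collapse]
    merged = tuple(args[i] for i in sorted(collapse))
    return (name, *kept, merged)

binary = ("taller", "Tom", "Bill")             # two sources of variation
quaternary = ("proportion", "2", "4", "6", "12")  # at the paper's soft limit
chunked = chunk(quaternary, {2, 3})            # collapse two slots into one
```

Segmentation, the other strategy the abstract names, would instead split a task into sub-relations processed serially, each below the quaternary limit.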
Syntactic Transformations on Distributed Representations
 Connection Science
, 1990
"... There has been much interest in the possibility of connectionist models whose representations can be endowed with compositional structure, and a variety of such models have been proposed. These models typically use distributed representations that arise from the functional composition of constituent ..."
Cited by 131 (3 self)
There has been much interest in the possibility of connectionist models whose representations can be endowed with compositional structure, and a variety of such models have been proposed. These models typically use distributed representations that arise from the functional composition of constituent parts. Functional composition and decomposition alone, however, yield only an implementation of classical symbolic theories. This paper explores the possibility of moving beyond implementation by exploiting holistic structure-sensitive operations on distributed representations. An experiment is performed using Pollack’s Recursive Auto-Associative Memory (RAAM). RAAM is used to construct distributed representations of syntactically structured sentences. A feedforward network is then trained to operate directly on these representations, modeling syntactic transformations of the represented sentences. Successful training and generalization are obtained, demonstrating that the implicit structure present in these representations can be used for a kind of structure-sensitive processing unique to the connectionist domain.
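The core of a RAAM is an autoencoder whose encoder maps two child vectors to one parent vector of the same width, so trees can be composed recursively into fixed-width representations. The NumPy sketch below shows that mechanism only; the sizes, random training pairs, and learning details are illustrative assumptions, not Pollack's original setup or the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 6  # width of every representation, terminal or composed

We = rng.normal(0, 0.3, (D, 2 * D))   # encoder: [left; right] -> parent
Wd = rng.normal(0, 0.3, (2 * D, D))   # decoder: parent -> [left; right]

def encode(left, right):
    return np.tanh(We @ np.concatenate([left, right]))

def decode(parent):
    out = np.tanh(Wd @ parent)
    return out[:D], out[D:]

def train_step(pairs, lr=0.05):
    """One gradient-descent pass training the autoencoder to reconstruct
    child pairs; returns mean squared reconstruction error."""
    global We, Wd
    total = 0.0
    for left, right in pairs:
        x = np.concatenate([left, right])
        h = np.tanh(We @ x)            # compressed parent
        y = np.tanh(Wd @ h)            # reconstructed children
        err = y - x
        total += float(err @ err)
        dy = 2 * err * (1 - y ** 2)
        dh = (Wd.T @ dy) * (1 - h ** 2)
        Wd -= lr * np.outer(dy, h)
        We -= lr * np.outer(dh, x)
    return total / len(pairs)

pairs = [(rng.uniform(-0.5, 0.5, D), rng.uniform(-0.5, 0.5, D))
         for _ in range(40)]
losses = [train_step(pairs) for _ in range(200)]

# Recursive composition: the whole tree ((a b) c) becomes one fixed-width
# vector, which a separate feedforward net could then transform
# holistically -- the paper's "structure-sensitive" operation.
a, b = pairs[0]
c = pairs[1][0]
tree = encode(encode(a, b), c)
```

The paper's experiment then trains a second feedforward network to map such encoded sentences to encodings of their transformed counterparts, without ever decomposing them.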
Natural Language Processing with Modular PDP Networks and Distributed Lexicon
 Cognitive Science
, 1991
"... An approach to connectionist natural language processing is proposed, which is based on hierarchically organized modular Parallel Distributed Processing (PDP) networks and a central lexicon of distributed input/output representations. The modules communicate using these representations, which are gl ..."
Cited by 91 (14 self)
An approach to connectionist natural language processing is proposed, which is based on hierarchically organized modular Parallel Distributed Processing (PDP) networks and a central lexicon of distributed input/output representations. The modules communicate using these representations, which are global and publicly available in the system. The representations are developed automatically by all networks while they are learning their processing tasks. The resulting representations reflect the regularities in the subtasks, which facilitates robust processing in the face of noise and damage, supports improved generalization, and provides expectations about possible contexts. The lexicon can be extended by cloning new instances of the items, that is, by generating a number of items with known processing properties and distinct identities. This technique combinatorially increases the processing power of the system. The recurrent FGREP module, together with a central lexicon, is used as a ba...
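The "cloning" idea can be pictured as giving new lexicon entries the learned content part of an existing item's distributed representation plus a distinct identity part. The split into content/ID fields and the sizes below are my own illustrative assumptions about the scheme, not the system's actual code.

```python
import numpy as np

rng = np.random.default_rng(2)
CONTENT, ID = 6, 2  # representation = learned content field + identity field

# A tiny central lexicon of distributed representations.
lexicon = {"man": np.concatenate([rng.uniform(0, 1, CONTENT), np.zeros(ID)])}

def clone(lexicon, word, n):
    """Create n new instances sharing `word`'s content part but carrying
    distinct binary ID patterns (supports up to 2**ID - 1 clones), so each
    has known processing properties and a distinct identity."""
    base = lexicon[word][:CONTENT]
    for i in range(n):
        ident = np.array([((i + 1) >> bit) & 1 for bit in range(ID)], float)
        lexicon[f"{word}{i}"] = np.concatenate([base, ident])

clone(lexicon, "man", 3)
```

Because every clone behaves like the original in each module while remaining distinguishable, the number of usable items grows combinatorially with the ID field, as the abstract notes.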
A Recurrent Neural Network That Learns to Count
 Connection Science
, 1999
"... ..."
(Show Context)
Logic Programs and Connectionist Networks
 Journal of Applied Logic
, 2004
"... One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic p ..."
Cited by 60 (20 self)
One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic programs can be computed by feedforward connectionist networks, and that the same semantic operators for first-order normal logic programs can be approximated by feedforward connectionist networks. Turning the networks into recurrent ones also allows one to approximate the models associated with the semantic operators. Our methods depend on a well-known theorem of Funahashi, and necessitate the study of when Funahashi's theorem can be applied, and also of what means of approximation are appropriate and significant.
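The propositional case can be made concrete with the immediate consequence operator T_P and the standard construction (in the Hölldobler/Kalinke style this line of work builds on) that computes it with one hidden threshold unit per clause. The sample program and encoding below are illustrative assumptions, not taken from the paper.

```python
# A clause is (head, positive_body, negative_body);
# e.g. ("q", [], ["r"]) encodes  q <- not r.
program = [
    ("p", ["q"], []),        # p <- q
    ("q", [], ["r"]),        # q <- not r
    ("r", ["p", "q"], []),   # r <- p, q
]
atoms = ["p", "q", "r"]

def tp(program, interp):
    """Immediate consequence operator T_P: the heads of all clauses whose
    body is true in the interpretation `interp` (a set of atoms)."""
    return {head for head, pos, neg in program
            if all(a in interp for a in pos)
            and all(a not in interp for a in neg)}

def tp_network(program, atoms, interp):
    """The same operator as a feedforward net of threshold units: hidden
    unit j gets weight +1 from each positive body literal of clause j and
    -1 from each negative one, with threshold |pos|; an output unit fires
    iff some clause with its head fired."""
    x = {a: (1 if a in interp else 0) for a in atoms}
    fired = []
    for head, pos, neg in program:
        s = sum(x[a] for a in pos) - sum(x[a] for a in neg)
        fired.append(s >= len(pos))   # body satisfied iff s reaches |pos|
    return {head for (head, _, _), f in zip(program, fired) if f}

# "Turning the network into a recurrent one": feed the output back in,
# i.e. iterate T_P from the empty interpretation.
interp = set()
trace = []
for _ in range(4):
    interp = tp_network(program, atoms, interp)
    trace.append(interp)
```

The first-order case in the paper replaces this exact computation with approximation of the (generally non-computable) operator by sigmoidal networks, which is where Funahashi's universal-approximation theorem enters.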
Infinite Languages, Finite Minds: Connectionism, Learning and Linguistic Structure
, 1994
"... ..."
(Show Context)
Exploring the symbolic/subsymbolic continuum: A case study of RAAM
 The Symbolic and Connectionist Paradigms: Closing the Gap
, 1992
"... It is di cult to clearly de ne the symbolic and subsymbolic paradigms; each is usually described by its tendencies rather than any one de nitive property. Symbolic processing is generally characterized by hardcoded, explicit rules operating on discrete, static tokens, while subsymbolic processing i ..."
Cited by 43 (5 self)
It is difficult to clearly define the symbolic and subsymbolic paradigms; each is usually described by its tendencies rather than any one definitive property. Symbolic processing is generally characterized by hard-coded, explicit rules operating on discrete, static tokens, while subsymbolic processing is associated with learned, fuzzy constraints affecting continuous, ...
The Acquisition of Lexical Semantics for Spatial Terms: A Connectionist Model of Perceptual Categories
, 1992
"... This thesis describes a connectionist model which learns to perceive spatial events and relations in simple movies of 2dimensional objects, so as to name the events and relations as a speaker of a particular natural language would. Thus, the model learns perceptually grounded semantics for natura ..."
Cited by 42 (2 self)
This thesis describes a connectionist model which learns to perceive spatial events and relations in simple movies of 2-dimensional objects, so as to name the events and relations as a speaker of a particular natural language would. Thus, the model learns perceptually grounded semantics for natural language spatial terms. Natural languages differ, sometimes dramatically, in the ways in which they structure space. The aim here has been to have the model be able to perform this learning task for terms from any natural language, and to have learning take place in the absence of explicit negative evidence, in order to rule out ad hoc solutions and to approximate the conditions under which children learn. The central focus of this thesis is a...