Results 1–10 of 106
Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions
 Psychological Review
, 1990
Abstract

Cited by 102 (4 self)
Multilayer connectionist models of memory based on the encoder model using the backpropagation learning rule are evaluated. The models are applied to standard recognition memory procedures in which items are studied sequentially and then tested for retention. Sequential learning in these models leads to 2 major problems. First, well-learned information is forgotten rapidly as new information is learned. Second, discrimination between studied items and new items either decreases or is nonmonotonic as a function of learning. To address these problems, manipulations of the network within the multilayer model and several variants of the multilayer model were examined, including a model with prelearned memory and a context model, but none solved the problems. The problems discussed place limitations on connectionist models applied to human memory and to tasks in which the information to be learned is not all available during learning. The first stage of the connectionist revolution in psychology is reaching maturity and perhaps drawing to an end. This stage has been concerned with the exploration of classes of models, and the criteria that have been used to evaluate the success of an application have been necessarily loose. In the early stages ...
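The rapid-forgetting problem described above can be illustrated with a minimal sketch (a single delta-rule unit, not the paper's multilayer encoder model): the unit first masters pattern A, then sequential training on an overlapping pattern B pulls the learned response to A away from its target. The patterns, targets, and learning rate here are illustrative choices.

```python
def train(w, x, target, lr=0.5, steps=20):
    """Delta-rule training of a single linear unit on one item."""
    for _ in range(steps):
        y = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]
    return w

def output(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

A = [1.0, 1.0, 0.0]   # studied first, target 1.0
B = [0.0, 1.0, 1.0]   # studied second, target 0.0; overlaps A on unit 1

w = train([0.0, 0.0, 0.0], A, 1.0)
resp_A_before = output(w, A)          # 1.0: A is well learned

w = train(w, B, 0.0)
resp_A_after = output(w, A)           # 0.75: A's response has decayed

print(resp_A_before, resp_A_after)    # → 1.0 0.75
```

Because A and B share a unit, the weight changes that teach B necessarily disturb the response to A; this is the interference that the multilayer models inherit in amplified form.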
Neural Associative Memories
 Biological Cybernetics
, 1993
Abstract

Cited by 88 (12 self)
Despite processing elements that are thousands of times faster than the neurons in the brain, modern computers still cannot match quite a few processing capabilities of the brain, many of which we even consider trivial (such as recognizing faces or voices, or following a conversation). A common principle behind those capabilities lies in the use of correlations between patterns in order to identify patterns which are similar. Looking at the brain as an information processing mechanism with (maybe among others) associative processing capabilities, together with the converse view of associative memories as certain types of artificial neural networks, has initiated a number of interesting results, ranging from theoretical considerations to insights into the functioning of neurons, as well as parallel hardware implementations of neural associative memories. This paper discusses three main aspects of neural associative memories: theoretical investigations, e.g. on the information storage ...
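The correlation principle mentioned in this abstract can be sketched as a Willshaw-style binary associative memory: pattern pairs are stored by OR-ing outer products, and retrieval is a thresholded matrix-vector product. The patterns and sizes below are invented for illustration, not taken from the paper.

```python
def store(pairs, n_in, n_out):
    """Binary (clipped) Hebbian learning: W[i][j] = 1 if output i and input j co-occur."""
    W = [[0] * n_in for _ in range(n_out)]
    for x, y in pairs:
        for i in range(n_out):
            if y[i]:
                for j in range(n_in):
                    if x[j]:
                        W[i][j] = 1
    return W

def retrieve(W, x):
    theta = sum(x)   # threshold: number of active input units
    return [1 if sum(wij * xj for wij, xj in zip(row, x)) >= theta else 0
            for row in W]

x1, y1 = [1, 1, 0, 0], [0, 1, 0]
x2, y2 = [0, 0, 1, 1], [1, 0, 1]
W = store([(x1, y1), (x2, y2)], 4, 3)
print(retrieve(W, x1) == y1, retrieve(W, x2) == y2)   # → True True
```

Each output unit fires only when all active cue units point to it, which is what makes similarity-based (correlation) retrieval tolerant of partial cues when patterns are sparse.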
30 years of adaptive neural networks
, 1990
Abstract

Cited by 60 (2 self)
Fundamental developments in feedforward artificial neural networks from the past thirty years are reviewed. The central theme of this paper is a description of the history, origination, operating characteristics, and basic theory of several supervised neural network training algorithms, including the Perceptron rule, the LMS algorithm, three Madaline rules, and the backpropagation technique. These methods were developed independently, but with the perspective of history they can all be related to each other. The concept underlying these algorithms is the "minimal disturbance principle," which suggests that during training it is advisable to inject new information into a network in a manner that disturbs stored information to the smallest extent possible.
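Of the algorithms listed, the LMS (Widrow-Hoff) rule is the simplest to sketch: an adaptive linear combiner nudges its weights along the negative gradient of the squared error on each sample. The target weights, step size, and input distribution below are illustrative choices, not from the survey.

```python
import random

random.seed(1)
true_w = [2.0, -1.0]     # unknown system the combiner should identify
w = [0.0, 0.0]
mu = 0.05                # LMS step size

for _ in range(2000):
    x = [random.uniform(-1, 1) for _ in true_w]
    d = sum(ti * xi for ti, xi in zip(true_w, x))       # desired response
    y = sum(wi * xi for wi, xi in zip(w, x))            # combiner output
    e = d - y
    w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]  # LMS update

print([round(wi, 2) for wi in w])   # → [2.0, -1.0]
```

Each update changes the weights just enough to reduce the current error, a small-scale instance of the minimal disturbance principle the paper describes.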
Hidden Patterns in Combined and Adaptive Knowledge Networks
 International Journal of Approximate Reasoning
, 1988
Abstract

Cited by 38 (2 self)
Uncertain causal knowledge is stored in fuzzy cognitive maps (FCMs). FCMs are fuzzy signed digraphs with feedback. The sign (+ or -) of FCM edges indicates causal increase or causal decrease. The fuzzy degree of causality is indicated by a number in [-1, 1]. FCMs learn by modifying their causal connections in sign and magnitude, structurally analogous to the way in which neural networks learn. An appropriate causal learning law for inductively inferring FCMs from time-series data is the differential Hebbian law, which modifies causal connections by correlating time derivatives of FCM node outputs. The differential Hebbian law contrasts with the Hebbian output-correlation learning laws of adaptive neural networks. FCM nodes represent variable phenomena or fuzzy sets. An FCM node nonlinearly transforms weighted summed inputs into numerical output, again in analogy to a model neuron. Unlike expert systems, which are feedforward search trees, FCMs are nonlinear dynamical systems. FCM resonant states are limit cycles, or time-varying patterns. An FCM limit cycle or hidden pattern is an FCM inference. Experts construct FCMs by drawing causal pictures or digraphs. The corresponding connection matrices are used for inferencing. By additively combining augmented connection matrices, any number of FCMs can be naturally combined into a single knowledge network. The credibility w_i in [0, 1] of the ith expert is included in this learning process by multiplying the ith expert's augmented FCM connection matrix by w_i. Combining connection matrices is a simple type of adaptive inference. In general, connection matrices are modified by an unsupervised learning law, such as the ...
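FCM inference as a nonlinear dynamical system can be sketched with an invented three-concept map (the concepts and edge weights are illustrative, not from the paper): binary node states are pushed through the signed connection matrix until a state repeats, and the states revisited forever after form the limit cycle, i.e. the hidden pattern the FCM infers.

```python
W = [  # W[i][j]: causal edge strength from concept i to concept j, in [-1, 1]
    [0.0,  0.8, -0.5],
    [0.0,  0.0,  0.8],
    [0.8,  0.0,  0.0],
]

def step(state):
    """One synchronous FCM update: threshold the weighted summed inputs."""
    n = len(state)
    return tuple(1 if sum(state[i] * W[i][j] for i in range(n)) > 0 else 0
                 for j in range(n))

state, seen = (1, 0, 0), []
while state not in seen:          # iterate until a state recurs
    seen.append(state)
    state = step(state)

cycle = seen[seen.index(state):]  # the limit cycle reached from (1, 0, 0)
print(cycle)                      # → [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
```

The positive causal ring keeps activation circulating, so the inference here is a three-state limit cycle rather than a fixed point, matching the abstract's point that FCM resonant states can be time-varying patterns.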
The Emergence and Evolution of Linguistic Structure: From Lexical to Grammatical Communication Systems
 Connection Science
, 2005
Abstract

Cited by 34 (6 self)
The paper discusses efforts to understand the self-organisation and evolution of language from a cognitive modeling point of view. It focuses in particular on efforts that use connectionist components to synthesise some of the major stages in the emergence of language and possible transitions between stages. The paper does not introduce new technical results but discusses a number of dimensions for mapping out the research landscape.
Image Processing With Neural Networks – a Review
, 2002
Abstract

Cited by 26 (0 self)
We review more than two hundred applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feedforward neural networks, Kohonen feature maps and Hopfield neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension specifies the type of task performed by the algorithm: preprocessing, data reduction/feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set level and scene characterisation. Each of the six types of tasks poses specific constraints on a neural-based approach. These specific conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing, and specifically to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. Keywords: neural networks; digital image processing; invariant pattern recognition; preprocessing; feature extraction; image compression; segmentation; object recognition; image understanding; optimization. * Corresponding author: M. Egmont-Petersen, Institute of Information and Computing Sciences, Utrecht University, P.O.B. 80.089, 3508 TB Utrecht, The Netherlands. Email: michael@cs.uu.nl. WWW: http://www.cs.uu.nl/people/michael/nnreview.html.
Iterative Retrieval of Sparsely Coded Associative Memory Patterns
 Neural Networks
, 1995
Abstract

Cited by 24 (14 self)
We investigate the pattern completion performance of neural autoassociative memories composed of binary threshold neurons for sparsely coded binary memory patterns. Focussing on iterative retrieval, effective threshold control strategies are introduced. These are investigated by means of computer simulation experiments and analytical treatment. To evaluate the system's performance we consider the completion capacity C and the mean retrieval errors. The asymptotic completion capacity for the recall of sparsely coded binary patterns in one-step retrieval is known to be ln 2 / 4 ≈ 17.32% for binary Hebbian learning, and 1/(8 ln 2) ≈ 18% for additive Hebbian learning [Palm, 1988]. These values are accomplished with vanishing error probability and yet are higher than those obtained in other known neural memory models. Recent investigations on binary Hebbian learning have proved that iterative retrieval as a more refined retrieval method does not improve the asymptotic completion capacit...
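One-step retrieval with a simple threshold strategy can be sketched as follows: under binary Hebbian storage of sparse patterns, a noiseless partial cue completes to the full pattern when the firing threshold is set to the number of active cue units. The network size and patterns are illustrative, not from the paper.

```python
N = 8
patterns = [{0, 1, 2}, {3, 4, 5}]   # sparse patterns as sets of active units

# Binary (clipped) Hebbian storage: W[i][j] = 1 iff units i and j are
# co-active in some stored pattern.
W = [[0] * N for _ in range(N)]
for p in patterns:
    for i in p:
        for j in p:
            W[i][j] = 1

def one_step_retrieve(cue):
    theta = len(cue)   # threshold control: count of active cue units
    return {j for j in range(N) if sum(W[i][j] for i in cue) >= theta}

print(one_step_retrieve({0, 1}))   # → {0, 1, 2}
```

A unit passes threshold only if every cue unit supports it, so with sparse patterns and few stored items the completion is exact; iterative retrieval, the paper's focus, repeats this step with adapted thresholds to clean up noisier cues.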
A Taxonomy for Spatiotemporal Connectionist Networks Revisited: The Unsupervised Case
 Neural Computation
, 2003
Abstract

Cited by 21 (1 self)
Spatiotemporal connectionist networks (STCNs) comprise an important class of neural models that can deal with patterns distributed both in time and space. In this paper, we widen the application domain of the taxonomy for supervised STCNs recently proposed by Kremer (2001) to the unsupervised case. This is possible through a reinterpretation of the state vector as a vector of latent (hidden) variables, as proposed by Meinicke (2000). The goal of this generalized taxonomy is then to provide a nonlinear generative framework for describing unsupervised spatiotemporal networks, making it easier to compare and contrast their representational and operational characteristics. Computational properties, representational issues and learning are also discussed, and a number of references to the relevant source publications are provided. It is argued that the proposed approach is simpler and more powerful than previous attempts, from a descriptive and predictive viewpoint. We also discuss the relation of this taxonomy to automata theory and state space modeling, and suggest directions for further work.
Perspective Alignment in Spatial Language
 SPATIAL LANGUAGE AND DIALOGUE
, 2007
Abstract

Cited by 19 (11 self)
It is well known that perspective alignment plays a major role in the planning and interpretation of spatial language. In order to understand the role of perspective alignment and the cognitive processes involved, we have made precise, complete cognitive models of situated embodied agents that self-organise a communication system for dialoguing about the position and movement of real-world objects in their immediate surroundings. We show in a series of robotic experiments which cognitive mechanisms are necessary and sufficient to achieve successful spatial language, and why and how perspective alignment can take place, either implicitly or based on explicit marking.
Complexity Issues in Discrete Hopfield Networks
, 1994
Abstract

Cited by 18 (4 self)
We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories. Draft chapter for the forthcoming book The Computational and Learning Complexity of Neural Networks: Advanced Topics (ed. Ian Parberry).
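The convergence bounds in item 1 rest on the standard energy argument, which a short sketch can check directly: for a discrete Hopfield net with symmetric weights and zero diagonal, the energy E = -(1/2) Σ_ij W[i][j] s_i s_j never increases under asynchronous updates, so the dynamics must settle. The weight matrix below is a toy example, not taken from the survey.

```python
W = [   # symmetric weights, zero diagonal
    [0,  1, -1],
    [1,  0,  1],
    [-1, 1,  0],
]

def energy(s):
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

s = [1, -1, 1]
energies = [energy(s)]
for _ in range(2):                  # two full asynchronous sweeps suffice here
    for i in range(len(s)):
        h = sum(W[i][j] * s[j] for j in range(len(s)))
        s[i] = 1 if h >= 0 else -1  # update one unit at a time
        energies.append(energy(s))

assert all(a >= b for a, b in zip(energies, energies[1:]))  # monotone descent
print(s, energies[0], energies[-1])   # → [1, 1, 1] 3.0 -1.0
```

Because each flip can only lower (or keep) E, and E takes finitely many values on ±1 states, the number of flips, and hence the convergence time, is bounded; the worst-case bounds surveyed in the chapter quantify exactly how large that number can get.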