Results 1–10 of 14
A Theory of Networks for Approximation and Learning
 Artificial Intelligence Laboratory, Massachusetts Institute of Technology
, 1989
Abstract

Cited by 237 (25 self)
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function, that is, as solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation, and Kohonen's topology-preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
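In its simplest limit, the regularization framework described in this abstract reduces to strict interpolation with one radial unit centred on each example. The following sketch (not the paper's implementation; all function names and parameters are illustrative) fits a one-dimensional Gaussian RBF interpolant by solving the resulting linear system:

```python
import math

def gaussian_rbf(r, sigma=1.0):
    """Gaussian radial basis function of the distance r."""
    return math.exp(-(r * r) / (2.0 * sigma * sigma))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf(xs, ys, sigma=1.0):
    """Strict interpolation: one Gaussian unit centred on each example."""
    G = [[gaussian_rbf(abs(xi - xj), sigma) for xj in xs] for xi in xs]
    return solve(G, ys)

def predict(x, xs, w, sigma=1.0):
    """Evaluate the fitted network: a weighted sum of radial units."""
    return sum(wi * gaussian_rbf(abs(x - xi), sigma) for wi, xi in zip(w, xs))

# Interpolate a few samples of sin(x); the network reproduces them exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.sin(x) for x in xs]
w = fit_rbf(xs, ys)
```

A GRBF network in the paper's sense would use fewer centres than examples and add a regularization term; the sketch shows only the strict-interpolation limit that the abstract contrasts GRBF against.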
Object indexing using an iconic sparse distributed memory
, 1995
Abstract

Cited by 65 (9 self)
A general-purpose object indexing technique is described that combines the virtues of principal component analysis with the favorable matching properties of high-dimensional spaces to achieve high-precision recognition. An object is represented by a set of high-dimensional iconic feature vectors comprised of the responses of derivative-of-Gaussian filters at a range of orientations and scales. Since these filters can be shown to form the eigenvectors of arbitrary images containing both natural and man-made structures, they are well-suited for indexing in disparate domains. The indexing algorithm uses an active vision system in conjunction with a modified form of Kanerva's sparse distributed memory, which facilitates interpolation between views and provides a convenient platform for learning the association between an object's appearance and its identity. The robustness of the indexing method was experimentally confirmed by subjecting the method to a range of viewing conditions, and the accuracy was verified using a well-known model database containing a number of complex 3D objects under varying pose.
Sparse Distributed Memory and related models
 Associative Neural Memories
, 1993
Abstract

Cited by 56 (3 self)
This chapter describes one basic model of associative memory, called the sparse distributed memory, and relates it to other models and circuits: to ordinary computer memory, to correlation-matrix memories, to feedforward artificial neural nets, to neural circuits in the brain, and to associative-memory models of the cerebellum.
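As a rough illustration of the model this chapter describes, the sketch below implements a toy sparse distributed memory: fixed random hard locations, activation of every location within a Hamming radius of the address, counter-based storage, and majority-rule readout. The class name, dimensions, and radius are illustrative choices, not Kanerva's published parameters:

```python
import random

class SparseDistributedMemory:
    """Toy sketch of a sparse distributed memory (illustrative parameters,
    not Kanerva's published design choices)."""

    def __init__(self, n_locations, dim, radius, seed=0):
        rng = random.Random(seed)
        self.dim = dim
        self.radius = radius
        # Hard locations: fixed random binary addresses.
        self.addresses = [[rng.randint(0, 1) for _ in range(dim)]
                          for _ in range(n_locations)]
        # Each hard location stores a vector of integer counters.
        self.counters = [[0] * dim for _ in range(n_locations)]

    def _active(self, address):
        """Indices of hard locations within Hamming radius of the address."""
        return [i for i, a in enumerate(self.addresses)
                if sum(x != y for x, y in zip(a, address)) <= self.radius]

    def write(self, address, data):
        """Distribute the data word over all activated locations."""
        for i in self._active(address):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, address):
        """Pool counters over activated locations; take the sign bitwise."""
        sums = [0] * self.dim
        for i in self._active(address):
            for j in range(self.dim):
                sums[j] += self.counters[i][j]
        return [1 if s > 0 else 0 for s in sums]

# Autoassociative use: store a pattern at its own address, then recover it
# from a corrupted cue.
rng = random.Random(1)
sdm = SparseDistributedMemory(n_locations=500, dim=64, radius=30)
pattern = [rng.randint(0, 1) for _ in range(64)]
sdm.write(pattern, pattern)
noisy = list(pattern)
for i in rng.sample(range(64), 4):
    noisy[i] ^= 1
```

Because storage is distributed over many locations, a cue a few bits away from the write address still activates most of the same locations, and the pooled counters recover the stored pattern.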
Pseudo-recurrent connectionist networks: An approach to the "sensitivity-stability" dilemma
 Connection Science
, 1997
Abstract

Cited by 48 (16 self)
In order to solve the "sensitivity-stability" problem (and its immediate correlate, the problem of sequential learning) it is crucial to develop connectionist architectures that are simultaneously sensitive to, but not excessively disrupted by, new input. French (1992) suggested that to alleviate a particularly severe form of this disruption, catastrophic forgetting, it was necessary for networks to dynamically separate their internal representations during learning. McClelland, McNaughton, & O'Reilly (1995) went even further. They suggested that nature's way of implementing this obligatory separation was the evolution of two separate areas of the brain, the hippocampus and the neocortex. In keeping with this idea of radical separation, a "pseudo-recurrent" memory model is presented here that partitions a connectionist network into two functionally distinct, but continually interacting, areas. One area serves as a final-storage area for representations; the other is an e...
Natural basis functions and topographic memory for face recognition
 In Proc. of IJCAI
, 1995
Abstract

Cited by 39 (5 self)
Recent work regarding the statistics of natural images has revealed that the dominant eigenvectors of arbitrary natural images closely approximate various oriented derivative-of-Gaussian functions; these functions have also been shown to provide the best fit to the receptive field profiles of cells in the primate striate cortex. We propose a scheme for expression-invariant face recognition that employs a fixed set of these "natural" basis functions to generate multiscale iconic representations of human faces. Using a fixed set of basis functions obviates the need for recomputing eigenvectors (a step that was necessary in some previous approaches employing principal component analysis (PCA) for recognition) while at the same time retaining the redundancy-reducing properties of PCA. A face is represented by a set of iconic representations automatically extracted from an input image. The description thus obtained is stored in a topographically organized sparse distributed memory that is based on a model of human long-term memory first proposed by Kanerva. We describe experimental results for an implementation of the method on a pipeline image processor that is capable of achieving near real-time recognition by exploiting the processor's frame-rate convolution capability for indexing purposes.
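The iconic representation described in this abstract is built from derivative-of-Gaussian filter responses across scales. A minimal one-dimensional sketch (illustrative only; the paper uses oriented 2-D filters on images, and `iconic_vector` is a hypothetical name) samples such kernels and collects the responses at one position into a feature vector:

```python
import math

def gaussian_deriv_kernel(sigma, order):
    """Sampled Gaussian (order 0) or derivative-of-Gaussian (order 1 or 2)
    kernel, L1-normalized; a 1-D stand-in for oriented 2-D filters."""
    half = int(math.ceil(3 * sigma))
    xs = range(-half, half + 1)
    g = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]
    if order == 0:
        k = g
    elif order == 1:
        k = [-x / sigma ** 2 * gx for x, gx in zip(xs, g)]
    else:  # order == 2
        k = [(x * x - sigma ** 2) / sigma ** 4 * gx for x, gx in zip(xs, g)]
    norm = sum(abs(v) for v in k)
    return [v / norm for v in k]

def iconic_vector(signal, pos, sigmas=(1.0, 2.0, 4.0), orders=(0, 1, 2)):
    """Feature vector of filter responses at one position, across scales
    and derivative orders (circular boundary handling)."""
    vec = []
    for sigma in sigmas:
        for order in orders:
            k = gaussian_deriv_kernel(sigma, order)
            half = len(k) // 2
            vec.append(sum(k[i] * signal[(pos + i - half) % len(signal)]
                           for i in range(len(k))))
    return vec

signal = [5.0] * 32          # a featureless constant signal
vec = iconic_vector(signal, pos=10)
```

On a constant signal the smoothing (order-0) responses return the signal value and the odd first-derivative responses vanish, which is the sense in which such vectors respond to structure rather than raw intensity.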
Convergence-Zone Episodic Memory: Analysis and Simulations
 Neural Networks
, 1997
Abstract

Cited by 32 (1 self)
Human episodic memory provides seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary store in the hippocampus and a slow, long-term store within the neocortex. This paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse-coded in the binding layer and stored on the weights between the two layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. For many configurations of the model, a theoretical lower bound for the memory capacity can be derived; it can be an order of magnitude or more above the total number of units in the model, and several orders of magnitude above the number of binding-layer units. Computational simulations further indicate that the average capacity is an order of magnitude larger than the theoretical lower bound, and that making the connectivity between layers sparser increases capacity even further. Simulations also show that if more descriptive binding patterns are used, the errors tend to be more plausible (patterns are confused with other similar patterns), at a slight cost in capacity. The convergence-zone episodic memory therefore accounts for the immediate storage, associative retrieval capability, and large capacity of the hippocampal memory, and shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected t...
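The storage-and-retrieval cycle described in this abstract can be caricatured with sets: each stored episode is assigned a small random binding pattern, connections are formed between its features and those binding units, and retrieval intersects the feature sets of the binding units that a partial cue activates. This is a schematic sketch with illustrative names and parameters, not the paper's weight-based implementation:

```python
import random

class ConvergenceZoneMemory:
    """Schematic convergence-zone memory over feature sets (illustrative)."""

    def __init__(self, n_binding, binding_size, seed=0):
        self.rng = random.Random(seed)
        self.n_binding = n_binding
        self.binding_size = binding_size
        # weights[b]: feature units connected to binding unit b.
        self.weights = [set() for _ in range(n_binding)]

    def store(self, features):
        """Coarse-code the episode as a small random binding pattern and
        connect its features to those binding units."""
        binding = self.rng.sample(range(self.n_binding), self.binding_size)
        for b in binding:
            self.weights[b] |= set(features)
        return binding

    def retrieve(self, cue):
        """A partial cue activates every binding unit connected to all of
        its features; intersecting their feature sets rejects spurious
        features contributed by binding units shared between episodes."""
        cue = set(cue)
        active = [b for b in range(self.n_binding) if cue <= self.weights[b]]
        if not cue or not active:
            return set()
        out = set(self.weights[active[0]])
        for b in active[1:]:
            out &= self.weights[b]
        return out

mem = ConvergenceZoneMemory(n_binding=200, binding_size=10)
episode1 = {"red", "round", "sweet", "apple"}
episode2 = {"yellow", "long", "sweet", "banana"}
mem.store(episode1)
mem.store(episode2)
```

Because each episode's binding pattern is sparse, two episodes rarely share all their binding units, so the intersection step recovers a whole episode from a small fraction of its features; this is the intuition behind the capacity results the abstract reports.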
Dynamic models of simple judgments: II. Properties of a self-organizing PAGAN (parallel, adaptive, generalized accumulator network) model for multi-choice tasks
 Nonlinear Dynamics, Psychology, and Life Sciences
, 2000
Abstract

Cited by 15 (0 self)
This is the second of two papers comparing connectionist and traditional stochastic latency mechanisms with respect to their ability to account for simple judgments. In the first, we reviewed evidence for a self-regulating accumulator module for two- and three-category discrimination. In this paper, we examine established neural network models that have been applied to predicting response time measures, and discuss their representational and adaptational limitations. We go on to describe and evaluate the network implementation of a Parallel Adaptive Generalized Accumulator Network (PAGAN), based on the interconnection of a number of self-regulating, generalized accumulator modules. The enhancement of PAGAN through the incorporation of distributed connectionist representation is briefly discussed. KEY WORDS: connectionism; stochastic modeling; reaction time; identification; adaptation.
Biologically Inspired Modular Neural Networks
, 2000
Abstract

Cited by 7 (0 self)
This dissertation explores modular learning in artificial neural networks, driven mainly by inspiration from the neurobiological basis of human learning. The modularization approaches to neural network design and learning presented here are inspired by engineering, complexity, psychological, and neurobiological considerations. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning inspirations that can subsequently be utilized to design artificial neural networks. The artificial neural networks
The Capacity of Convergence-Zone Episodic Memory
, 1994
Abstract

Cited by 2 (0 self)
Human episodic memory provides seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. This paper presents a neural network model of episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse-coded in the binding layer and stored on the weights between the two layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. A worst-case analysis shows that with realistic-size layers, the memory capacity of the model is several times larger than the number of units in the model, and could account for the large capacity of human episodic memory. Introduction: The human memory system can be divided into semantic memory of facts, rules, and general knowledge, and episodic memory that records the individ...
The Capacity of Convergence-Zone Episodic Memory
Abstract
Human episodic memory provides seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. This paper presents a computational model of episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse-coded in the binding layer and stored on the weights between the two layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. A worst-case analysis shows that with realistic-size layers, the memory capacity of the model is several times larger than the number of units in the model, and could account for the large capacity of human episodic memory. I. Introduction: Human episodic memory is characterized by an extremely high capacity. New memories are formed every few seconds, and many of those persist...