Results 1–10 of 53
Hopfield models as generalized random mean field models. Mathematical aspects of spin glasses and neural networks
 pp. 3–89, Progr. Probab. 41, Birkhäuser
, 1998
Abstract

Cited by 30 (9 self)
Abstract: We give a comprehensive self-contained review of the rigorous analysis of the thermodynamics of a class of random spin systems of mean-field type whose most prominent example is the Hopfield model. We focus on the low-temperature phase and the analysis of the Gibbs measures with large-deviation techniques. There is a very detailed and complete picture in the regime of "small α"; a particularly satisfactory result concerns a nontrivial regime of parameters in which we prove 1) the convergence of the local "mean fields" to Gaussian random variables with constant variance and random mean, where the random means are themselves independent Gaussians from site to site; 2) "propagation of chaos", i.e. factorization of the extremal infinite-volume Gibbs measures; and 3) the correctness of the "replica symmetric solution" of Amit, Gutfreund and Sompolinsky [AGS]. This last result was first proven by M. Talagrand [T4], using different techniques.
Convergence-Zone Episodic Memory: Analysis and Simulations
 NEURAL NETWORKS
, 1997
Abstract

Cited by 27 (1 self)
Human episodic memory provides seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary store in the hippocampus and a slow, long-term store within the neocortex. This paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse-coded in the binding layer and stored on the weights between layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. For many configurations of the model, a theoretical lower bound for the memory capacity can be derived; it can be an order of magnitude larger than the number of all units in the model, and several orders of magnitude larger than the number of binding-layer units. Computational simulations further indicate that the average capacity is an order of magnitude larger than the theoretical lower bound, and that making the connectivity between layers sparser increases capacity even further. Simulations also show that if more descriptive binding patterns are used, the errors tend to be more plausible (patterns are confused with other similar patterns), at a slight cost in capacity. The convergence-zone episodic memory therefore accounts for the immediate storage, associative retrieval capability, and large capacity of the hippocampal memory, and shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected t...
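The two-layer store/retrieve cycle described above can be illustrated with a minimal sketch. This is not the paper's actual model: the layer sizes, the number of active binding units, and the all-or-nothing retrieval threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
F, B, k = 120, 40, 6     # feature units, binding units, active binding units per pattern
W = np.zeros((F, B), dtype=bool)   # binary weights between the two layers

def store(features, binding):
    # clamp both layers and strengthen every co-active feature/binding pair
    W[np.ix_(features, binding)] = True

def retrieve(partial):
    # partial cue -> binding layer: keep the k most strongly driven binding units
    drive = W[partial].sum(axis=0)
    binding = np.argsort(drive)[-k:]
    # binding pattern -> feature layer: units connected to all active binding units
    back = W[:, binding].sum(axis=1)
    return np.flatnonzero(back == k)

# store one "episode" of 10 features under a random sparse binding pattern
feats = rng.choice(F, size=10, replace=False)
bind = rng.choice(B, size=k, replace=False)
store(feats, bind)

recalled = retrieve(feats[:4])   # cue with only 4 of the 10 stored features
print(sorted(recalled))
```

With a single stored episode, the partial cue drives exactly the stored binding units, which in turn reactivate the full feature pattern; the binding layer can be much smaller than the feature layer, as the abstract argues.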
Gibbs States Of The Hopfield Model In The Regime Of Perfect Memory
, 1994
Abstract

Cited by 22 (11 self)
Abstract: We study the thermodynamic properties of the Hopfield model of an auto-associative memory. If N denotes the number of neurons and M(N) the number of stored patterns, we prove the following results: if M/N → 0 as N → ∞, then there exists an infinite number of infinite-volume Gibbs measures for all temperatures T < 1, concentrated on spin configurations that have overlap with exactly one specific pattern. Moreover, the measures induced on the overlap parameters are Dirac measures concentrated on a single point. If M/N → α as N → ∞ for α small enough, we show that for temperatures T smaller than some T(α) < 1, the induced measures can have support only on a disjoint union of balls around the previous points, but we cannot construct the infinite-volume measures through convergent sequences of measures. Subject Classification Numbers: 60K35, 82B44, 82C32. Work partially supported by the Commission of the European Communities under contract No. SC1CT910695. Email: bovier@iaa...
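The objects in this abstract — Hebbian couplings, the overlap order parameter, and recall of a single pattern in the small-load regime M/N → 0 — can be sketched numerically at zero temperature. The sizes and corruption level below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 5                            # neurons, stored patterns (M/N small)
xi = rng.choice([-1, 1], size=(M, N))    # random +/-1 patterns

# Hebbian couplings J_ij = (1/N) sum_mu xi^mu_i xi^mu_j, no self-coupling
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

def recall(s, steps=10):
    """Zero-temperature asynchronous dynamics: s_i <- sign(sum_j J_ij s_j)."""
    s = s.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            h = J[i] @ s                 # local "mean field" at site i
            s[i] = 1 if h >= 0 else -1
    return s

# corrupt pattern 0 by flipping 20% of its spins, then let the dynamics run
s0 = xi[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
s0[flip] *= -1
s_final = recall(s0)
overlap = (s_final @ xi[0]) / N          # overlap order parameter m
print(overlap)                           # close to 1 at this small load
```

At this load the dynamics fall into the attractor of the cued pattern, mirroring the regime where the Gibbs measures concentrate on configurations overlapping exactly one pattern.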
Transform Invariant Recognition by Association in a Recurrent Network
 Neural Computation
, 1998
Abstract

Cited by 20 (7 self)
Objects can be recognised independently of the view they present, of their position on the retina, or of their scale. It has been suggested that one basic mechanism that makes this possible is a memory effect, or a trace, that allows associations to be made between consecutive views of one object. In this work we explore the possibility that this memory trace is provided by sustained activity of neurons in layers of the visual pathway, produced by an extensive recurrent connectivity. We describe a model that contains this high recurrent connectivity, with synaptic efficacies built from contributions from associations between pairs of views, that is simple enough to be treated analytically. The main result is that there is a change of behavior as the strength of the association between views of the same object, relative to the association with... (Permanent address: Departamento de Física Teórica C-XI, Ciudad Universitaria de Cantoblanco, Universidad Autónoma de Madrid, 28049 Madrid, Spain.)
Analysis of Synfire Chains
 Network
, 1995
Abstract

Cited by 19 (2 self)
The biological implications of synfire chain neural networks are explored by studying two idealized models. In the first, a network model is proposed with binary firing neurons and parallel updating. This model can be solved exactly in the thermodynamic limit using mean-field theory, and an explicit equation for the capacity is obtained. In the second model, the synchrony of the pulse of activity along a synfire chain is investigated in the context of simple integrate-and-fire neurons. It is found that under natural assumptions a near-synchronous wave of activity can stably propagate along a synfire chain. The relevance of this result to real systems is discussed.

1 Introduction. Synfire chains have been proposed by Abeles [1, 2] as a model of cortical function. Interest in them has grown because they provide an explanation for otherwise mysterious measurements of precise spike timing. A synfire chain consists of small pools of neurons linked together in a feedforward chain so that ...
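The feedforward pool structure and the all-or-nothing fate of an activity wave can be sketched with binary threshold neurons rather than the paper's integrate-and-fire units; pool size, connection probability, and threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
pools, width = 10, 100        # chain length, neurons per pool
p_conn, theta = 0.6, 30       # connection probability, firing threshold (input spikes)

# random binary feedforward connectivity between consecutive pools
C = rng.random((pools - 1, width, width)) < p_conn

def propagate(n_start):
    """Ignite the first pool with n_start spikes and follow the activity wave."""
    active = np.zeros(width, dtype=bool)
    active[rng.choice(width, size=n_start, replace=False)] = True
    sizes = [int(active.sum())]
    for l in range(pools - 1):
        drive = C[l][:, active].sum(axis=1)   # spikes received by each pool-(l+1) neuron
        active = drive >= theta
        sizes.append(int(active.sum()))
    return sizes

strong = propagate(80)   # strong ignition: the wave survives to the last pool
weak = propagate(20)     # weak ignition: activity falls below threshold and dies
print(strong, weak)
```

The bistability visible here — full ignition versus extinction depending on the initial pulse — is the discrete analogue of the stable propagation result the abstract describes.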
Matching Performance of Binary Correlation Matrix Memories
Abstract

Cited by 19 (12 self)
We introduce a theoretical framework for estimating the matching performance of binary correlation matrices acting as hetero-associative memories. The framework is applicable to non-recursive, fully-connected systems with binary (0,1) Hebbian weights and a hard-limited threshold. It can handle both full and partial matching of single or multiple data items in non-square memories. Theoretical development takes place within a probability-theory framework. Inherent uncertainties in the matching process are accommodated by the use of probability distributions to describe the numbers of correct and incorrect neuron responses during retrieval. Theoretical predictions are verified experimentally for medium-sized memories and used to aid the design of larger systems. The results highlight the fact that correlation-based models can act as highly efficient memories provided a small probability of retrieval error is accepted. Keywords: Neural Associative Memories, Co...
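A binary (0,1)-weight hetero-associative memory with a hard-limited threshold, as described above, can be sketched in a few lines (a Willshaw-style store; the dimensions, code sparsity, and pattern count are illustrative assumptions, not the paper's experiments).

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out, k_in, k_out = 256, 64, 8, 4     # non-square memory, sparse codes
W = np.zeros((n_out, n_in), dtype=bool)       # binary (0,1) Hebbian weights

def store(x_idx, y_idx):
    # OR the outer product of the pair into the weight matrix
    W[np.ix_(y_idx, x_idx)] = True

def recall(x_idx):
    sums = W[:, x_idx].sum(axis=1)
    # hard-limited threshold at the full input activity level
    return np.flatnonzero(sums >= len(x_idx))

pairs = [(rng.choice(n_in, k_in, replace=False),
          rng.choice(n_out, k_out, replace=False)) for _ in range(50)]
for x, y in pairs:
    store(x, y)

x0, y0 = pairs[0]
out = recall(x0)
# every stored output unit is always recovered; any extra units are the
# "incorrect neuron responses" whose probability the paper's framework estimates
print(sorted(out))
```

With this storage rule correct units can never be lost, so retrieval quality reduces to the distribution of spurious firings, which is exactly what the probabilistic framework in the abstract quantifies.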
Learning symmetry groups with hidden units: Beyond the perceptron
 Physica
, 1986
Abstract

Cited by 17 (5 self)
Learning to recognize mirror, rotational and translational symmetries is a difficult problem for massively-parallel network models. These symmetries cannot be learned by first-order perceptrons or Hopfield networks, which have no means for incorporating additional adaptive units that are hidden from the input and output layers. We demonstrate that the Boltzmann learning algorithm is capable of finding sets of weights which turn hidden units into useful higher-order feature detectors capable of solving symmetry problems.
Constructive Learning Techniques for Designing Neural Network Systems
, 1997
Abstract

Cited by 17 (0 self)
Contents: 1. Introduction. 2. Classification (2.1 Introduction; 2.2 The Pocket algorithm; 2.3 Tower and Cascade architectures; 2.4 Tree architectures: the Upstart algorithm; 2.5 Constructing tree and cascade architectures using dichotomies; 2.6 Constructing neural networks with a single hidden layer; 2.7 Summary). 3. Regression (3.1 Introduction; 3.2 The Cascade Correlation algorithm; 3.3 Node creation and node splitting algorithms; 3.4 Constructing RBF networks; 3.5 Summary). 4. Constructing Modular Architectures (4.1 Introduction; 4.2 Neural Decision Trees; 4.3 Other approaches to constructing modular networks). 5. Reducing Network Complexity (5.1 Introduction; 5.2 Pruning procedures; 5.3 Summary). 6. Conclusion. 7. Appendix: algorithms for single-node learning.

1 Introduction. Neural networks have been applied to a wide range of application domains such as control, telecommun
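One of the single-node learners listed in the contents, the Pocket algorithm, is easy to sketch: run ordinary perceptron updates but keep ("pocket") the best weight vector seen so far, so the result is usable even when the data are not linearly separable. The toy data and epoch count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def pocket(X, y, epochs=200):
    """Perceptron updates with a pocketed best-so-far weight vector."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # absorb the bias into the weights
    w = np.zeros(Xb.shape[1])
    best_w, best_acc = w.copy(), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if y[i] * (Xb[i] @ w) <= 0:            # misclassified -> perceptron step
                w = w + y[i] * Xb[i]
        acc = np.mean(np.sign(Xb @ w) == y)
        if acc > best_acc:                          # pocket the best weights seen
            best_acc, best_w = acc, w.copy()
    return best_w, best_acc

# toy separable problem: label is the sign of x0 + x1
X = rng.normal(size=(100, 2))
y = np.where(X.sum(axis=1) > 0, 1, -1)
w, acc = pocket(X, y)
print(acc)   # near-perfect on separable data
```

On separable data the pocketed weights match plain perceptron learning; the pocket only matters on noisy or non-separable data, which is why constructive methods such as Tower and Upstart use it as their node-level trainer.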
Attractors In Recurrent Behavior Networks
, 1997
Abstract

Cited by 9 (1 self)
If behavior networks, which use spreading activation to select actions, are analogous to connectionist methods of pattern recognition, then recurrent behavior networks, which use energy minimization, are analogous to Hopfield networks. Hopfield networks memorize patterns by making them attractors. Similarly, each behavior of a recurrent behavior network should be an attractor of the network, to inhibit fruitless, repeated switching between different behaviors in response to small changes in the environment and in motivations. I overcome two major objections to this view, and demonstrate that the performance in a test domain of the Do the Right Thing recurrent behavior network is improved by redesigning it to create desirable attractors and basins of attraction. I further show that this performance increase is correlated with an increase in persistence and a decrease in undesirable behavior-switching. On a more general level, this work encourages the study of action selection as a dynam...
Dynamics of memory representations in networks with novelty-facilitated synaptic plasticity
 Neuron
, 2006
Abstract

Cited by 8 (2 self)
The ability to associate some stimuli while differentiating between others is an essential characteristic of biological memory. Theoretical models identify memories as attractors of neural network activity, with learning based on Hebb-like synaptic modifications. Our analysis shows that when network inputs are correlated, this mechanism results in over-associations, even up to several memories "merging" into one. To counteract this tendency, we introduce a learning mechanism that involves novelty-facilitated modifications, accentuating synaptic changes proportionally to the difference between network input and stored memories. This mechanism introduces a dependency of synaptic modifications on previously acquired memories, enabling a wide spectrum of memory associations, ranging from absolute discrimination to complete merging. The model predicts that memory representations should be sensitive to learning order, consistent with recent psychophysical studies of face recognition and electrophysiological experiments on hippocampal place cells. The proposed mechanism is compatible with a recent biological model of novelty-facilitated learning in hippocampal circuitry.
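The idea of scaling Hebbian changes by how different the input is from already-stored memories can be sketched loosely as follows. This is a simplified stand-in, not the paper's model: the novelty measure (one minus the largest overlap with stored patterns) and the learning rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100
W = np.zeros((N, N))   # symmetric Hebbian couplings
memories = []

def learn(pattern, base_rate=1.0):
    """Hebbian step whose magnitude grows with the novelty of the input:
    one minus the normalized overlap with the closest stored memory."""
    global W
    if memories:
        novelty = 1.0 - max(abs(pattern @ m) / N for m in memories)
    else:
        novelty = 1.0          # first input is maximally novel
    W = W + (base_rate * novelty / N) * np.outer(pattern, pattern)
    np.fill_diagonal(W, 0.0)
    memories.append(pattern.copy())
    return novelty

p1 = rng.choice([-1, 1], size=N)
flip = rng.choice(N, size=5, replace=False)
p2 = p1.copy()
p2[flip] *= -1                 # correlated input: 95% of spins agree with p1

n1 = learn(p1)
n2 = learn(p2)
print(n1, n2)   # the near-repeat produces a much weaker synaptic change
```

Because the second, highly correlated input produces only a small weight change, it cannot drag the first attractor toward itself, which is the over-association the abstract says this mechanism counteracts.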