Results 1–10 of 61
Nonlinear Neural Networks: Principles, Mechanisms, and Architectures
, 1988
Abstract

Cited by 181 (20 self)
An historical discussion is provided of the intellectual trends that caused nineteenth-century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data are emphasized. Three sources of contemporary neural network research are noted: the binary, linear, and continuous-nonlinear models. The remainder of the article describes results about continuous-nonlinear models: Many models of content-addressable memory are shown to be special cases of the Cohen-Grossberg model and global Liapunov function, including the additive, brain-state-in-a-box, McCulloch-Pitts, Boltzmann machine, Hartline-Ratliff-Miller, shunting, masking-field, bidirectional associative memory, Volterra-Lotka, Gilpin-Ayala, and Eigen-Schuster models. A Liapunov functional method is described for proving global limit or oscillation theorems for nonlinear competitive systems when their decision schemes are globally consistent or inconsistent, respectively. The former case is illustrated by a model of a globally stable economic market, and the latter case is illustrated by a model of the voting paradox. Key properties of shunting competitive feedback networks are summarized, including the role of sigmoid signalling, automatic gain control, competitive choice and quantization, tunable filtering, total activity normalization, and noise suppression in pattern transformation and memory storage applications. Connections to models of competitive learning, vector quantization, and categorical perception are noted. Adaptive resonance...
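The global Liapunov function claim for the additive (Hopfield-type) special case can be illustrated numerically. The sketch below is my own minimal illustration under standard assumptions (symmetric weights, zero diagonal, asynchronous threshold updates), not Grossberg's general formulation: it checks that the usual energy function never increases along a trajectory.

```python
import random

# Symmetric weights with zero diagonal: the conditions under which the
# additive/Hopfield energy is a Liapunov function for asynchronous updates.
W = [[0, 1, -2, 1],
     [1, 0, 1, -2],
     [-2, 1, 0, 1],
     [1, -2, 1, 0]]

def energy(x):
    # E(x) = -1/2 * sum_ij W_ij x_i x_j
    n = len(x)
    return -0.5 * sum(W[i][j] * x[i] * x[j]
                      for i in range(n) for j in range(n))

def step(x, i):
    # Asynchronous threshold update of unit i; never increases E.
    s = sum(W[i][j] * x[j] for j in range(len(x)))
    x[i] = 1 if s >= 0 else -1

random.seed(0)
x = [random.choice([-1, 1]) for _ in range(4)]
energies = [energy(x)]
for t in range(20):
    step(x, t % 4)
    energies.append(energy(x))

# Energy is monotonically non-increasing along the trajectory.
assert all(a >= b for a, b in zip(energies, energies[1:]))
```

Each flip aligns a unit with the sign of its net input, which can only lower (or preserve) the quadratic energy, so the dynamics must settle into a local minimum.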
Bidirectional Associative Memories
 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS
, 1988
Abstract

Cited by 155 (3 self)
Stability and encoding properties of two-layer nonlinear feedback neural networks are examined. Bidirectionality, forward and backward information flow, is introduced in neural nets to produce two-way associative search for stored associations (A_i, B_i). Passing information through M gives one direction; passing it through its transpose M^T gives the other. A bidirectional associative memory (BAM) behaves as a heteroassociative content-addressable memory (CAM), storing and recalling the vector pairs (A_1, B_1), ..., (A_m, B_m), where A_i ∈ {0,1}^n and B_i ∈ {0,1}^p. We prove that every n-by-p matrix M is a bidirectionally stable heteroassociative CAM for both binary/bipolar and continuous neurons a_i and b_i. When the BAM neurons are activated, the network quickly evolves to a stable state of two-pattern reverberation, or resonance. The stable reverberation corresponds to a local minimum of the system energy. Heteroassociative information is encoded in a BAM by summing correlation matrices. The BAM storage capacity for reliable recall is roughly m < min(n, p): no more heteroassociative pairs can be reliably stored and recalled than the lesser of the dimensions of the pattern spaces {0,1}^n and {0,1}^p. The Appendix shows that it is better on average to use bipolar {-1, 1} coding than binary {0,1} coding of heteroassociative pairs (A_i, B_i). BAM encoding and decoding are combined in the adaptive BAM, which extends global bidirectional stability to real-time unsupervised learning. Temporal patterns (A_1, ..., A_m) are represented as ordered lists of binary/bipolar vectors and stored in a temporal associative memory (TAM) n-by-n matrix M as a limit cycle of the dynamical system. Forward recall proceeds through M, backward recall through M^T. Temporal patterns are stored by summing contiguous bipolar...
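The encode-and-reverberate cycle described above can be sketched in a few lines. This is a minimal illustration assuming bipolar coding and the sum-of-outer-products encoding the abstract describes; the pattern values and the keep-state-on-tie convention are my own illustrative choices.

```python
def outer(a, b):
    return [[ai * bj for bj in b] for ai in a]

def mat_add(M, N):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(M, N)]

def sgn_vec(sums, prev):
    # Threshold at zero; keep the previous state on a tie.
    return [1 if s > 0 else (-1 if s < 0 else p) for s, p in zip(sums, prev)]

def forward(M, a, prev_b):   # B-side update through M
    n, p = len(M), len(M[0])
    return sgn_vec([sum(a[i] * M[i][j] for i in range(n)) for j in range(p)],
                   prev_b)

def backward(M, b, prev_a):  # A-side update through M^T
    n, p = len(M), len(M[0])
    return sgn_vec([sum(b[j] * M[i][j] for j in range(p)) for i in range(n)],
                   prev_a)

# Encode two bipolar pairs by summing correlation (outer-product) matrices.
pairs = [([1, -1, 1, -1, 1], [1, 1, -1]),
         ([-1, 1, 1, 1, -1], [-1, 1, 1])]
M = [[0] * 3 for _ in range(5)]
for a, b in pairs:
    M = mat_add(M, outer(a, b))

# Recall: present A1 and reverberate A -> B -> A until the pair stabilizes.
a, b = pairs[0][0][:], [1, 1, 1]
for _ in range(5):
    b = forward(M, a, b)
    a = backward(M, b, a)

assert (a, b) == pairs[0]  # two-pattern resonance on the stored pair
```

With m = 2 pairs and dimensions (n, p) = (5, 3), the example sits safely below the rough capacity bound m < min(n, p) quoted in the abstract.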
Hopfield models as generalized random mean field models. Mathematical aspects of spin glasses and neural networks
 3–89, Progr. Probab., 41 Birkhäuser
, 1998
Abstract

Cited by 30 (9 self)
Abstract: We give a comprehensive self-contained review of the rigorous analysis of the thermodynamics of a class of random spin systems of mean field type whose most prominent example is the Hopfield model. We focus on the low temperature phase and the analysis of the Gibbs measures with large deviation techniques. There is a very detailed and complete picture in the regime of “small α”; a particularly satisfactory result concerns a nontrivial regime of parameters in which we prove 1) the convergence of the local “mean fields” to gaussian random variables with constant variance and random mean; the random means are themselves gaussians, independent from site to site; 2) “propagation of chaos”, i.e. factorization of the extremal infinite volume Gibbs measures, and 3) the correctness of the “replica symmetric solution” of Amit, Gutfreund and Sompolinsky [AGS]. This last result was first proven by M. Talagrand [T4], using different techniques.
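For orientation, the standard definitions behind this review (my own recap of textbook conventions, not a quotation from it) are the Hopfield Hamiltonian for N spins with M = αN i.i.d. random patterns ξ^μ ∈ {−1,1}^N, and the local mean field at site i whose gaussian limit the review analyzes:

```latex
% Hopfield Hamiltonian (mean field type, random couplings from patterns \xi^\mu)
H_N(\sigma) = -\frac{1}{2N}\sum_{\mu=1}^{M}\sum_{i \neq j}
              \xi_i^{\mu}\,\xi_j^{\mu}\,\sigma_i\,\sigma_j,
\qquad
% local "mean field" acting on spin i
h_i(\sigma) = \frac{1}{N}\sum_{\mu=1}^{M} \xi_i^{\mu}
              \sum_{j \neq i} \xi_j^{\mu}\,\sigma_j .
```

In the “small α” regime described above, result 1) says the h_i converge to gaussians whose variance is constant but whose means are themselves i.i.d. gaussian random variables.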
Convergence-Zone Episodic Memory: Analysis and Simulations
 NEURAL NETWORKS
, 1997
Abstract

Cited by 24 (1 self)
Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary storage in the hippocampus, and a slow, long-term storage within the neocortex. This paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse coded in the binding layer, and stored on the weights between layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. For many configurations of the model, a theoretical lower bound for the memory capacity can be derived, and it can be an order of magnitude or higher than the number of all units in the model, and several orders of magnitude higher than the number of binding-layer units. Computational simulations further indicate that the average capacity is an order of magnitude larger than the theoretical lower bound, and making the connectivity between layers sparser causes an even further increase in capacity. Simulations also show that if more descriptive binding patterns are used, the errors tend to be more plausible (patterns are confused with other similar patterns), with a slight cost in capacity. The convergence-zone episodic memory therefore accounts for the immediate storage and associative retrieval capability and large capacity of the hippocampal memory, and shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected t...
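The store-then-retrieve cycle described above can be sketched as a toy two-layer model. All layer sizes are illustrative assumptions, and fixed (rather than random) binding patterns are used for determinism; this is not the paper's actual simulation setup.

```python
N_FEATURES, N_BINDING, K = 40, 8, 4   # K binding units code each pattern

# feature <-> binding weights (shared in both directions for simplicity)
W = [[0] * N_BINDING for _ in range(N_FEATURES)]

def store(features, binding):
    # Coarse-code the feature pattern as a small set of binding units and
    # clamp the connecting weights to 1.
    for f in features:
        for b in binding:
            W[f][b] = 1

def retrieve(cue):
    # The binding units receiving the most input from the partial cue win...
    score = [sum(W[f][b] for f in cue) for b in range(N_BINDING)]
    winners = sorted(range(N_BINDING), key=lambda b: -score[b])[:K]
    # ...and the features connected to all K winners are reactivated.
    return {f for f in range(N_FEATURES)
            if sum(W[f][b] for b in winners) == K}

p1, p2 = list(range(0, 10)), list(range(15, 25))
store(p1, [0, 1, 2, 3])
store(p2, [4, 5, 6, 7])

# Half of p1's features suffice to reactivate the entire stored pattern.
assert retrieve(p1[:5]) == set(p1)
```

Note how a binding layer of only 8 units indexes patterns over 40 feature units, echoing the paper's point that the encoding area can be much smaller than the perceptual maps it binds.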
Computational Complexity Of Neural Networks: A Survey
, 1994
Abstract

Cited by 22 (6 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation: neural networks, circuits; F.1.3 [Computation by Abstract Devices]: Complexity Classes: complexity hierarchies. Key words: Neural networks, computational complexity, threshold circuits, associative memory.
1. Introduction
The once again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities, but from the possibility of "learning", or synthesizing networks...
Complexity Issues in Discrete Hopfield Networks
, 1994
Abstract

Cited by 18 (4 self)
We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories. Draft chapter for the forthcoming book The Computational and Learning Complexity of Neural Networks: Advanced Topics (ed. Ian Parberry).
On the Computational Complexity of Analyzing Hopfield Nets
 Complex Systems
, 1989
Abstract

Cited by 13 (6 self)
We prove that the problem of counting the number of stable states in a given Hopfield net is #P-complete, and the problem of computing the size of the attraction domain of a given stable state is NP-hard.
1 Introduction
A binary associative memory network, or "Hopfield net" [6], consists of n fully interconnected threshold logic units, or "neurons". Associated to each pair of neurons i, j is an interconnection weight w_ij, and to each neuron i a threshold value t_i. At any given moment a neuron i can be in one of two states, x_i = 1 or x_i = −1. Its state at the next moment depends on the current states of the other neurons and the interconnection weights; if sgn(Σ_{j=1}^n w_ij x_j − t_i) ≠ x_i, the neuron may switch to the opposite state. (Here sgn is the signum function: sgn(x) = 1 for x ≥ 0, and sgn(x) = −1 for x < 0.) Whether the state change actually occurs depends on whether the neuron is selected for updating at this moment. In the synchronous update rule, al...
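The stability condition just defined makes the counting problem concrete. The brute-force counter below enumerates all 2^n states, the obvious exponential approach that the #P-completeness result suggests cannot in general be avoided; the 3-unit weight matrix is a hypothetical example of my own.

```python
from itertools import product

def sgn(v):
    # Signum with the paper's convention: sgn(v) = 1 for v >= 0, else -1.
    return 1 if v >= 0 else -1

def is_stable(W, t, x):
    # x is stable iff no unit wants to switch:
    # sgn(sum_j w_ij x_j - t_i) == x_i for every i.
    n = len(x)
    return all(sgn(sum(W[i][j] * x[j] for j in range(n)) - t[i]) == x[i]
               for i in range(n))

def count_stable_states(W, t):
    # Naive #P-style counting: enumerate all 2^n states.
    n = len(t)
    return sum(is_stable(W, t, x) for x in product([-1, 1], repeat=n))

# A 3-unit net with mutually excitatory weights and zero thresholds:
# only the two uniform states (1,1,1) and (-1,-1,-1) are stable.
W = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
t = [0, 0, 0]
assert count_stable_states(W, t) == 2
```

Even for this toy instance the counter visits 2^3 = 8 states; the #P-completeness result says that for general weight matrices no substantially better counting method is expected to exist.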
Catastrophic forgetting and the pseudorehearsal solution in Hopfield-type networks
 Connection Science
, 1998
Abstract

Cited by 10 (4 self)
Pseudorehearsal is a mechanism proposed by Robins which alleviates catastrophic forgetting in multilayer perceptron networks. In this paper, we extend the exploration of pseudorehearsal to a Hopfield-type net. The same general principles apply: old information can be rehearsed if it is available, and if it is not available, then generating and rehearsing approximations of old information that 'map' the behaviour of the network can also be effective at preserving the actual old information itself. The details of the pseudorehearsal mechanism, however, benefit from being adapted to the dynamics of Hopfield nets so as to exploit the extra attractors created in state space during learning. These attractors are usually described as 'spurious' or 'crosstalk', and regarded as undesirable, interfering with the retention of the trained population items. Our simulations have shown that, in another sense, such attractors can in fact be useful in preserving the learned population. In general terms, a solution to the catastrophic forgetting problem enables the ongoing or sequential learning of information in artificial neural networks, and consequently also provides a framework for the modelling of lifelong learning/developmental effects in cognition.
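The pseudorehearsal idea can be sketched for a Hebbian Hopfield-type net: probe the trained network with random inputs, harvest the attractors it settles into as stand-ins for the unavailable old items, and rehearse those alongside new learning. This is a schematic reading of the mechanism, not Robins' exact procedure; all sizes and counts are arbitrary.

```python
import random

N = 16

def train(patterns):
    # Hebbian outer-product learning, zero diagonal.
    W = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    W[i][j] += p[i] * p[j] / N
    return W

def settle(W, x):
    # Asynchronous updates until a fixed point; convergence is guaranteed
    # for symmetric zero-diagonal weights (energy argument).
    x = x[:]
    changed = True
    while changed:
        changed = False
        for i in range(N):
            s = sum(W[i][j] * x[j] for j in range(N))
            xi = 1 if s >= 0 else -1
            if xi != x[i]:
                x[i], changed = xi, True
    return x

random.seed(2)
rand_vec = lambda: [random.choice([-1, 1]) for _ in range(N)]

old_items = [rand_vec() for _ in range(2)]
W_old = train(old_items)

# Pseudorehearsal: the old items themselves are assumed unavailable, so probe
# the trained net with random inputs and keep the attractors it settles to --
# possibly including the "spurious" ones the paper argues are useful.
pseudoitems = [settle(W_old, rand_vec()) for _ in range(6)]

# Each pseudoitem is, by construction, an attractor of the old network.
assert all(settle(W_old, q) == q for q in pseudoitems)

# New learning then rehearses the pseudoitems alongside the new item,
# approximately preserving the old population's behaviour.
new_item = rand_vec()
W_new = train(pseudoitems + [new_item])
```

The key design point, per the abstract, is that the harvested attractors 'map' the old network's behaviour well enough that retraining on them protects the original population without access to the original items.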
Attractors In Recurrent Behavior Networks
, 1997
Abstract

Cited by 9 (1 self)
If behavior networks, which use spreading activation to select actions, are analogous to connectionist methods of pattern recognition, then recurrent behavior networks, which use energy minimization, are analogous to Hopfield networks. Hopfield networks memorize patterns by making them attractors. Similarly, each behavior of a recurrent behavior network should be an attractor of the network, to inhibit fruitless, repeated switching between different behaviors in response to small changes in the environment and in motivations. I overcome two major objections to this view, and demonstrate that the performance in a test domain of the Do the Right Thing recurrent behavior network is improved by redesigning it to create desirable attractors and basins of attraction. I further show that this performance increase is correlated with an increase in persistence and a decrease in undesirable behaviorswitching. On a more general level, this work encourages the study of action selection as a dynam...