Results 1–10 of 107
Nonlinear Neural Networks: Principles, Mechanisms, and Architectures
1988
"... An historical discussion is provided of the intellectual trends that caused nineteenth century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentiethcentury scientific movements. The nonlinear, nonstatio ..."
Abstract

Cited by 232 (21 self)
An historical discussion is provided of the intellectual trends that caused nineteenth-century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data is emphasized. Three sources of contemporary neural network research (the binary, linear, and continuous-nonlinear models) are noted. The remainder of the article describes results about continuous-nonlinear models: many models of content-addressable memory are shown to be special cases of the Cohen–Grossberg model and global Liapunov function, including the additive, brain-state-in-a-box, McCulloch–Pitts, Boltzmann machine, Hartline–Ratliff–Miller, shunting, masking-field, bidirectional associative memory, Volterra–Lotka, Gilpin–Ayala, and Eigen–Schuster models. A Liapunov functional method is described for proving global limit or oscillation theorems for nonlinear competitive systems when their decision schemes are globally consistent or inconsistent, respectively. The former case is illustrated by a model of a globally stable economic market, and the latter case is illustrated by a model of the voting paradox. Key properties of shunting competitive feedback networks are summarized, including the role of sigmoid signalling, automatic gain control, competitive choice and quantization, tunable filtering, total activity normalization, and noise suppression in pattern transformation and memory storage applications. Connections to models of competitive learning, vector quantization, and categorical perception are noted. Adaptive resonance
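For reference, the Cohen–Grossberg system and its global Liapunov function mentioned in the abstract take the following standard form (reconstructed from the literature, not quoted from this abstract); symmetric interactions c_{jk} = c_{kj}, nonnegative amplification functions a_i, and monotone signal functions d_j are assumed:

```latex
% Cohen-Grossberg competitive dynamics
\frac{dx_i}{dt} = a_i(x_i)\Big[\, b_i(x_i) - \sum_{j=1}^{n} c_{ij}\, d_j(x_j) \Big],
\qquad i = 1, \dots, n,

% Global Liapunov function, nonincreasing along trajectories
% when a_i \ge 0, c_{jk} = c_{kj}, and d_j' \ge 0:
V(x) = -\sum_{i=1}^{n} \int_{0}^{x_i} b_i(\xi)\, d_i'(\xi)\, d\xi
       + \frac{1}{2} \sum_{j,k=1}^{n} c_{jk}\, d_j(x_j)\, d_k(x_k).
```

Specializing a_i, b_i, c_{ij}, and d_j recovers the additive, shunting, and other special cases listed above.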
Hopfield models as generalized random mean field models. Mathematical aspects of spin glasses and neural networks
3–89, Progr. Probab. 41, Birkhäuser, 1998
"... Abstract: We give a comprehensive selfcontained review on the rigorous analysis of the thermodynamics of a class of random spin systems of mean field type whose most prominent example is the Hopfield model. We focus on the low temperature phase and the analysis of the Gibbs measures with large devi ..."
Abstract

Cited by 30 (9 self)
Abstract: We give a comprehensive self-contained review of the rigorous analysis of the thermodynamics of a class of random spin systems of mean-field type whose most prominent example is the Hopfield model. We focus on the low-temperature phase and the analysis of the Gibbs measures with large-deviation techniques. There is a very detailed and complete picture in the regime of “small α”; a particularly satisfactory result concerns a nontrivial regime of parameters in which we prove 1) the convergence of the local “mean fields” to gaussian random variables with constant variance and random mean, where the random means are themselves gaussians, independent from site to site; 2) “propagation of chaos”, i.e. factorization of the extremal infinite-volume Gibbs measures; and 3) the correctness of the “replica symmetric solution” of Amit, Gutfreund and Sompolinsky [AGS]. This last result was first proven by M. Talagrand [T4], using different techniques.
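As a reminder of the setting (standard definitions, not quoted from the review): for p = αN stored patterns ξ^μ ∈ {−1, 1}^N, the Hopfield Hamiltonian and the local mean fields whose gaussian limit is discussed above are

```latex
% Hopfield Hamiltonian over spin configurations \sigma \in \{-1,1\}^N
H_N(\sigma) = -\frac{1}{2N} \sum_{i,j=1}^{N} \sum_{\mu=1}^{p}
              \xi_i^{\mu}\, \xi_j^{\mu}\, \sigma_i\, \sigma_j,

% Local mean field at site i
h_i(\sigma) = \frac{1}{N} \sum_{j \ne i} \sum_{\mu=1}^{p}
              \xi_i^{\mu}\, \xi_j^{\mu}\, \sigma_j,
```

with the Gibbs measure proportional to exp(−βH_N(σ)) and α = p/N the storage-load parameter of the “small α” regime.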
Convergence-Zone Episodic Memory: Analysis and Simulations
NEURAL NETWORKS, 1997
"... Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary storage in the hippocampus, and a slow, longterm ..."
Abstract

Cited by 27 (1 self)
Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary storage in the hippocampus, and a slow, long-term storage within the neocortex. This paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse-coded in the binding layer, and stored on the weights between layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. For many configurations of the model, a theoretical lower bound for the memory capacity can be derived; it can be an order of magnitude or more higher than the number of all units in the model, and several orders of magnitude higher than the number of binding-layer units. Computational simulations further indicate that the average capacity is an order of magnitude larger than the theoretical lower bound, and making the connectivity between layers sparser causes an even further increase in capacity. Simulations also show that if more descriptive binding patterns are used, the errors tend to be more plausible (patterns are confused with other similar patterns), at a slight cost in capacity. The convergence-zone episodic memory therefore accounts for the immediate storage and associative retrieval capability and large capacity of the hippocampal memory, and shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected t...
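The store-then-retrieve cycle described above can be sketched in a few lines. This is a toy reading of the description (random sparse binding codes, binary clipped-Hebbian weights, simple sum thresholds), not the paper's exact model; the function names and thresholding rule are our assumptions.

```python
import numpy as np

def store_episode(W, features, k, rng):
    # Hypothetical sketch: each episode is assigned a random sparse binding
    # pattern of k active units; binary weights connect the episode's active
    # feature units to its active binding units (clipped Hebbian storage).
    binding = np.zeros(W.shape[0], dtype=int)
    binding[rng.choice(W.shape[0], size=k, replace=False)] = 1
    W |= np.outer(binding, features)   # modifies W in place
    return binding

def retrieve(W, partial_features):
    # Partial feature cue -> binding pattern -> reconstructed feature pattern.
    binding = (W @ partial_features >= partial_features.sum()).astype(int)
    return (W.T @ binding >= binding.sum()).astype(int)
```

With a 6-unit feature layer, an 8-unit binding layer, and k = 2, a single stored episode is recovered from activation of just one of its features, illustrating how a binding layer smaller than the feature maps can still drive pattern completion.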
Computational Complexity Of Neural Networks: A Survey
1994
"... . We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks fr ..."
Abstract

Cited by 23 (6 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks from examples of their behavior.
CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation – neural networks, circuits; F.1.3 [Computation by Abstract Devices]: Complexity Classes – complexity hierarchies
Key words: Neural networks, computational complexity, threshold circuits, associative memory
1. Introduction
The currently again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities as from the possibility of "learning", or synthesizing networks...
Learning pattern classification – A survey
IEEE TRANS. INFORM. THEORY, 1998
"... Classical and recent results in statistical pattern recognition and learning theory are reviewed in a twoclass pattern classification setting. This basic model best illustrates intuition and analysis techniques while still containing the essential features and serving as a prototype for many applic ..."
Abstract

Cited by 19 (4 self)
Classical and recent results in statistical pattern recognition and learning theory are reviewed in a two-class pattern classification setting. This basic model best illustrates intuition and analysis techniques while still containing the essential features and serving as a prototype for many applications. Topics discussed include nearest neighbor, kernel, and histogram methods, Vapnik–Chervonenkis theory, and neural networks. The presentation and the large (though nonexhaustive) list of references are geared to provide a useful overview of this field for both specialists and nonspecialists.
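Of the methods listed, the nearest neighbor rule is the simplest to make concrete. The following is a generic illustration of the 1-NN classifier, not code taken from the survey:

```python
import numpy as np

def nearest_neighbor_classify(train_x, train_y, query):
    # 1-NN rule: a query point receives the label of its closest
    # training point under Euclidean distance.
    distances = np.linalg.norm(train_x - query, axis=1)
    return train_y[int(np.argmin(distances))]
```

For instance, with training points (0, 0) labeled 0 and (5, 5) labeled 1, a query at (1, 1) is classified as 0 and a query at (4, 4) as 1.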
Complexity Issues in Discrete Hopfield Networks
1994
"... We survey some aspects of the computational complexity theory of discretetime and discretestate Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfi ..."
Abstract

Cited by 18 (4 self)
We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly:
1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results);
2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization);
3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories.
Draft chapter for the forthcoming book The Computational and Learning Complexity of Neural Networks: Advanced Topics (ed. Ian Parberry).
On the Computational Complexity of Analyzing Hopfield Nets
Complex Systems, 1989
"... We prove that the problem of counting the number of stable states in a given Hopfield net is #Pcomplete, and the problem of computing the size of the attraction domain of a given stable state is NPhard. 1 Introduction A binary associative memory network, or "Hopfield net" [6], consists ..."
Abstract

Cited by 15 (5 self)
We prove that the problem of counting the number of stable states in a given Hopfield net is #P-complete, and the problem of computing the size of the attraction domain of a given stable state is NP-hard.
1 Introduction
A binary associative memory network, or "Hopfield net" [6], consists of n fully interconnected threshold logic units, or "neurons". Associated to each pair of neurons i, j is an interconnection weight w_ij, and to each neuron i a threshold value t_i. At any given moment a neuron i can be in one of two states, x_i = 1 or x_i = −1. Its state at the next moment depends on the current states of the other neurons and the interconnection weights; if sgn(Σ_{j=1}^{n} w_ij x_j − t_i) ≠ x_i, the neuron may switch to the opposite state. (Here sgn is the signum function, sgn(x) = 1 for x ≥ 0, and sgn(x) = −1 for x < 0.) Whether the state change actually occurs depends on whether the neuron is selected for updating at this moment. In the synchronous update rule, al...
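The stability condition above can be coded directly. This is a minimal illustration of the definition, not code from the paper; numpy and the brute-force counter are our additions, and the exhaustive enumeration is only feasible for tiny n, consistent with the #P-completeness result for the general counting problem:

```python
import numpy as np

def sgn(x):
    # Signum as defined in the text: sgn(x) = 1 for x >= 0, -1 for x < 0.
    return np.where(x >= 0, 1, -1)

def is_stable(W, t, x):
    # x is a stable state iff no neuron would switch to the opposite state.
    return bool(np.all(sgn(W @ x - t) == x))

def count_stable_states(W, t):
    # Brute-force enumeration of all 2^n states in {-1, 1}^n.
    n = len(t)
    total = 0
    for bits in range(2 ** n):
        x = np.array([1 if (bits >> k) & 1 else -1 for k in range(n)])
        total += is_stable(W, t, x)
    return total
```

For example, with W = [[0, 1], [1, 0]] and zero thresholds, the two stable states are (1, 1) and (−1, −1).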
Bayesian Retrieval in Associative Memories with Storage Errors
IEEE Trans. Neural Networks, 1998
"... It is well known that for finitesized networks, onestep retrieval in the autoassociative Willshaw net is a suboptimal way to extract the information stored in the synapses. Iterative retrieval strategies are much better, but have hitherto only had heuristic justification. We show how they emerge ..."
Abstract

Cited by 14 (8 self)
It is well known that for finite-sized networks, one-step retrieval in the autoassociative Willshaw net is a suboptimal way to extract the information stored in the synapses. Iterative retrieval strategies are much better, but have hitherto only had heuristic justification. We show how they emerge naturally from considerations of probabilistic inference under conditions of noisy and partial input and a corrupted weight matrix. We start from the conditional probability distribution over possible patterns for retrieval. This contains all possible information that is available to an observer of the network and the initial input. Since this distribution is over exponentially many patterns, we use it to develop two approximate, but tractable, iterative retrieval methods. One performs maximum likelihood inference to find the single most likely pattern, using the (negative log of the) conditional probability as a Lyapunov function for retrieval. In physics terms, if storage errors are present, then the modified iterative update equations contain an additional antiferromagnetic interaction term and site-dependent threshold values. The second method makes a mean-field assumption to optimize a tractable estimate of the full conditional probability distribution. This leads to iterative mean-field equations which can be interpreted in terms of a network of neurons with sigmoidal responses but with the same interactions and thresholds as in the maximum likelihood update equations. In the absence of storage errors, both models become very similar to the Willshaw model, where standard retrieval is iterated using a particular form of linear threshold strategy.
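For orientation, here is a minimal sketch of the standard autoassociative Willshaw net that the paper takes as its starting point; the Bayesian iterative schemes themselves are not reproduced here, and the helper names are ours:

```python
import numpy as np

def store(patterns):
    # Clipped Hebbian (Willshaw) learning: w_ij = 1 if units i and j are
    # jointly active in at least one stored binary (0/1) pattern.
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=int)
    for p in patterns:
        W |= np.outer(p, p)
    return W

def retrieve_one_step(W, cue):
    # Standard one-step retrieval: threshold each unit's dendritic sum at
    # the number of active units in the cue. The paper argues this is
    # suboptimal for finite networks and derives iterative alternatives.
    return (W @ cue >= cue.sum()).astype(int)
```

Storing the patterns [1, 1, 0, 0] and [0, 0, 1, 1] and presenting the partial cue [1, 0, 0, 0] recovers [1, 1, 0, 0] in a single step; it is under noisy cues and corrupted weights that this rule degrades and the probabilistic treatment above pays off.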