Results 1–10 of 17
Bidirectional Associative Memories
 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS
, 1988
"... Stability and encoding properties of twolayer nonlinear feedback neural networks are examined. Bidirectionality, forward and backard information flow, is introduced in neural nets to produce twoway associative search for stored associations (A, B, ). Passing information through M gives one directi ..."
Abstract

Cited by 155 (3 self)
Stability and encoding properties of two-layer nonlinear feedback neural networks are examined. Bidirectionality, forward and backward information flow, is introduced in neural nets to produce two-way associative search for stored associations (A_i, B_i). Passing information through M gives one direction; passing it through its transpose M^T gives the other. A bidirectional associative memory (BAM) behaves as a heteroassociative content-addressable memory (CAM), storing and recalling the vector pairs (A_1, B_1), ..., (A_m, B_m), where A_i ⊆ {0,1}^n and B_i ⊆ {0,1}^p. We prove that every n-by-p matrix M is a bidirectionally stable heteroassociative CAM for both binary/bipolar and continuous neurons a_i and b_j. When the BAM neurons are activated, the network quickly evolves to a stable state of two-pattern reverberation, or resonance. The stable reverberation corresponds to a local minimum of the system energy. Heteroassociative information is encoded in a BAM by summing correlation matrices. The BAM storage capacity for reliable recall is roughly m < min(n, p): no more heteroassociative pairs can be reliably stored and recalled than the lesser of the dimensions of the pattern spaces {0,1}^n and {0,1}^p. The Appendix shows that it is better on average to use bipolar {-1,1} coding than binary {0,1} coding of heteroassociative pairs (A_i, B_i). BAM encoding and decoding are combined in the adaptive BAM, which extends global bidirectional stability to real-time unsupervised learning. Temporal patterns (A_1, ..., A_m) are represented as ordered lists of binary/bipolar vectors and stored in a temporal associative memory (TAM) n-by-n matrix M as a limit cycle of the dynamical system. Forward recall proceeds through M, backward recall through M^T. Temporal patterns are stored by summing contiguous bipolar...
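The encoding-by-summed-correlation-matrices and forward/backward recall described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions of my own (bipolar pattern pairs, a tie-keeps-previous-state threshold convention, hypothetical example patterns), not a reproduction of the paper's exact formulation.

```python
import numpy as np

def encode(pairs):
    """Sum of bipolar correlation (outer-product) matrices: M = sum_i outer(A_i, B_i)."""
    n, p = len(pairs[0][0]), len(pairs[0][1])
    M = np.zeros((n, p))
    for a, b in pairs:
        M += np.outer(a, b)
    return M

def sign(x, prev):
    """Bipolar threshold; zero activations keep the previous state."""
    return np.where(x > 0, 1, np.where(x < 0, -1, prev))

def recall(M, a, b, steps=10):
    """Bidirectional recall: forward through M, backward through M.T, until stable."""
    for _ in range(steps):
        b_new = sign(a @ M, b)
        a_new = sign(b_new @ M.T, a)
        if np.array_equal(a_new, a) and np.array_equal(b_new, b):
            break  # stable two-pattern reverberation (resonance)
        a, b = a_new, b_new
    return a, b

# Illustrative pairs: store two bipolar associations, recall B1 from A1 alone.
A1 = np.array([1, -1, 1, -1, 1, -1]); B1 = np.array([1, 1, -1, -1])
A2 = np.array([1, 1, 1, -1, -1, -1]); B2 = np.array([1, -1, 1, -1])
M = encode([(A1, B1), (A2, B2)])
a, b = recall(M, A1, np.zeros(4))
```

With only two pairs in a 6-by-4 matrix (well under the rough m < min(n, p) bound), recall settles on the stored pair after one forward/backward pass.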
Computational Complexity Of Neural Networks: A Survey
, 1994
"... . We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks fr ..."
Abstract

Cited by 22 (6 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also briefly discuss the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation: neural networks, circuits; F.1.3 [Computation by Abstract Devices]: Complexity Classes: complexity hierarchies. Key words: neural networks, computational complexity, threshold circuits, associative memory. 1. Introduction. The once-again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities as from the possibility of "learning", or synthesizing networks...
Analog Computation with Dynamical Systems
 Physica D
, 1997
"... This paper presents a theory that enables to interpret natural processes as special purpose analog computers. Since physical systems are naturally described in continuous time, a definition of computational complexity for continuous time systems is required. In analogy with the classical discrete th ..."
Abstract

Cited by 21 (0 self)
This paper presents a theory that enables us to interpret natural processes as special-purpose analog computers. Since physical systems are naturally described in continuous time, a definition of computational complexity for continuous-time systems is required. In analogy with the classical discrete theory, we develop the fundamentals of computational complexity for dynamical systems, discrete or continuous in time, on the basis of an intrinsic time scale of the system. Dissipative dynamical systems are classified into the computational complexity classes P_d, Co-RP_d, NP_d
Matching Performance of Binary Correlation Matrix Memories
"... We introduce a theoretical framework for estimating the matching performance of binary correlation matrices acting as heteroassociative memories. The framework is applicable to nonrecursive, fullyconnected systems with binary (0,1) Hebbian weights and hardlimited threshold. It can handle both fu ..."
Abstract

Cited by 18 (12 self)
We introduce a theoretical framework for estimating the matching performance of binary correlation matrices acting as heteroassociative memories. The framework is applicable to non-recursive, fully-connected systems with binary (0,1) Hebbian weights and hard-limited thresholds. It can handle both full and partial matching of single or multiple data items in non-square memories. Theoretical development takes place within a probability-theory framework. Inherent uncertainties in the matching process are accommodated by the use of probability distributions to describe the numbers of correct and incorrect neuron responses during retrieval. Theoretical predictions are verified experimentally for medium-sized memories and used to aid the design of larger systems. The results highlight the fact that correlation-based models can act as highly efficient memories provided a small probability of retrieval error is accepted. Keywords: Neural Associative Memories, Co...
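The kind of memory this abstract analyses, binary (0,1) clipped Hebbian weights with a hard-limited threshold, can be sketched as follows. The patterns and the threshold-at-cue-activity rule are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def train(pairs, n, p):
    """Clipped Hebbian learning: W[i, j] = 1 if any stored pair sets both bits."""
    W = np.zeros((n, p), dtype=int)
    for x, y in pairs:
        W |= np.outer(x, y)  # binary OR clips repeated coincidences to 1
    return W

def retrieve(W, x):
    """Hard-limit the weighted sums at the number of active input bits."""
    s = x @ W
    return (s >= x.sum()).astype(int)

# Illustrative (0,1) pairs in a non-square 5-by-4 memory.
x1 = np.array([1, 0, 1, 0, 0]); y1 = np.array([0, 1, 1, 0])
x2 = np.array([0, 1, 0, 0, 1]); y2 = np.array([1, 0, 0, 1])
W = train([(x1, y1), (x2, y2)], 5, 4)
```

Retrieval errors in such a memory are false positives only: an output bit fires spuriously when every active input bit happens to share a weight with it, which is the event the paper's probability distributions quantify.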
Complexity Issues in Discrete Hopfield Networks
, 1994
"... We survey some aspects of the computational complexity theory of discretetime and discretestate Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfi ..."
Abstract

Cited by 18 (4 self)
We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories. Draft chapter for the forthcoming book The Computational and Learning Complexity of Neural Networks: Advanced Topics (ed. Ian Parberry).
The Connectivity of the Brain: Multi-Level Quantitative Analysis
 Biological Cybernetics
, 1995
"... We develop a mathematical formalism for calculating connectivity volumes generated by specific topologies with various physical packing strategies. We consider four topologies (full, random, nearest neighbor, and modular connectivity) and three physical models: (i) interior packing, where neurons a ..."
Abstract

Cited by 12 (1 self)
We develop a mathematical formalism for calculating connectivity volumes generated by specific topologies with various physical packing strategies. We consider four topologies (full, random, nearest-neighbor, and modular connectivity) and three physical models: (i) interior packing, where neurons and connection fibers are intermixed; (ii) sheeted packing, where neurons are located on a sheet with fibers running underneath; and (iii) exterior packing, where the neurons are located at the surfaces of a cube or sphere with fibers taking up the internal volume. By extensively cross-referencing available human neuroanatomical data, we produce a consistent set of parameters for the whole brain, the cerebral cortex, and the cerebellar cortex. By comparing these inferred values with those predicted by the expressions, we draw the following general conclusions for the human brain, cortex, and cerebellum: (i) Interior packing is less efficient than exterior packing (in a sphere). (ii) Fully and rando...
Bayesian Retrieval in Associative Memories with Storage Errors
 IEEE Trans. Neural Networks
, 1998
"... It is well known that for finitesized networks, onestep retrieval in the autoassociative Willshaw net is a suboptimal way to extract the information stored in the synapses. Iterative retrieval strategies are much better, but have hitherto only had heuristic justification. We show how they emerge ..."
Abstract

Cited by 8 (5 self)
It is well known that for finite-sized networks, one-step retrieval in the autoassociative Willshaw net is a suboptimal way to extract the information stored in the synapses. Iterative retrieval strategies are much better, but have hitherto only had heuristic justification. We show how they emerge naturally from considerations of probabilistic inference under conditions of noisy and partial input and a corrupted weight matrix. We start from the conditional probability distribution over possible patterns for retrieval. This contains all the information that is available to an observer of the network and the initial input. Since this distribution is over exponentially many patterns, we use it to develop two approximate, but tractable, iterative retrieval methods. One performs maximum likelihood inference to find the single most likely pattern, using the (negative log of the) conditional probability as a Lyapunov function for retrieval. In physics terms, if storage errors are present, then the modified iterative update equations contain an additional antiferromagnetic interaction term and site-dependent threshold values. The second method makes a mean-field assumption to optimize a tractable estimate of the full conditional probability distribution. This leads to iterative mean-field equations which can be interpreted in terms of a network of neurons with sigmoidal responses but with the same interactions and thresholds as in the maximum likelihood update equations. In the absence of storage errors, both models become very similar to the Willshaw model, where standard retrieval is iterated using a particular form of linear threshold strategy.
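The baseline this abstract builds on, iterated retrieval in an autoassociative Willshaw net with a linear threshold strategy, can be sketched as below. The specific threshold (all currently active units must agree) and retained self-connections are simplifying assumptions of this sketch, not the paper's derived update equations.

```python
import numpy as np

def store(patterns, n):
    """Autoassociative clipped Hebbian storage: W[i, j] = 1 if i, j co-active anywhere."""
    W = np.zeros((n, n), dtype=int)
    for p in patterns:
        W |= np.outer(p, p)
    return W

def step(W, x):
    """One retrieval step with a simple linear threshold: theta = current activity."""
    return (x @ W >= x.sum()).astype(int)

def iterate(W, x, max_steps=20):
    """Repeat the update until a fixed point (one-step retrieval is just step())."""
    for _ in range(max_steps):
        x_new = step(W, x)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Illustrative patterns: complete a partial cue for p1.
p1 = np.array([1, 1, 1, 0, 0, 0, 0, 0])
p2 = np.array([0, 0, 0, 1, 1, 1, 0, 0])
W = store([p1, p2], 8)
cue = np.array([1, 1, 0, 0, 0, 0, 0, 0])  # two of p1's three active units
completed = iterate(W, cue)
```

The paper's point is that updates of this heuristic form can instead be derived from probabilistic inference, which also tells you how to modify the interactions and thresholds when the weight matrix itself is corrupted.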
A General Model for Bidirectional Associative Memories
, 1998
"... This paper proposes a general model for bidirectional associative memories that associate patterns between the Xspace and the Yspace. The general model does not require the usual assumption that the interconnection weight from a neuron in the Xspace to a neuron in the Yspace is the same as the on ..."
Abstract

Cited by 5 (0 self)
This paper proposes a general model for bidirectional associative memories that associate patterns between the X-space and the Y-space. The general model does not require the usual assumption that the interconnection weight from a neuron in the X-space to a neuron in the Y-space is the same as the one from the Y-space to the X-space. We start by defining a supporting function to measure how well a state supports another state in a general bidirectional associative memory (GBAM). We then use the supporting function to formulate the associative recall process as a dynamic system, explore its stability and asymptotic stability conditions, and develop an algorithm for learning the asymptotic stability conditions using the Rosenblatt perceptron rule. The effectiveness of the proposed model for recognition of noisy patterns and the performance of the model in terms of storage capacity, attraction, and spurious memories are demonstrated experimentally. Keywords...
Information Capacity of Binary Weights Associative Memories
 Neurocomputing
, 1996
"... We study the amount of information stored in the fixed points of random instances of two binary weights associative memory models: the Willshaw Model (WM) and the Inverted Neural Network (INN). For these models, we show divergences between the information capacity (IC) as defined by AbuMostafa and ..."
Abstract

Cited by 4 (0 self)
We study the amount of information stored in the fixed points of random instances of two binary-weights associative memory models: the Willshaw Model (WM) and the Inverted Neural Network (INN). For these models, we show divergences between the information capacity (IC) as defined by Abu-Mostafa and Jacques, and information calculated from the standard notion of storage capacity by Palm and Grossman respectively. We prove that the WM has asymptotically optimal IC for nearly the full range of threshold values, the INN likewise for constant threshold values, and both over all degrees of sparseness of the stored vectors. This is contrasted with the result by Palm, which required stored random vectors to be logarithmically sparse to achieve good storage capacity for the WM, and with that of Grossman, which showed that the INN has poor storage capacity for random vectors. We propose Q-state versions of the WM and the INN, and show that they retain asymptotically optimal IC while guaranteein...
Optimal Decay Rate of Connection Weights in Covariance Learning
, 1992
"... Associative memory of neural networks can not store items more than its memory capacity. When new items are given one after another, connection weights should be decayed so that the number of stored items does not exceed the memory capacity. This paper presents the optimal decay rate that maximizes ..."
Abstract

Cited by 3 (3 self)
An associative memory neural network cannot store more items than its memory capacity. When new items are given one after another, connection weights should be decayed so that the number of stored items does not exceed the memory capacity. This paper presents the optimal decay rate that maximizes the number of stored items, using the method of statistical dynamics. 1 Introduction. This paper addresses the memory capacity of an associative memory model of neural networks with weight decay. A neural network is an adaptive system that is trained with sample items given from the outer environment. It can store items up to some number, which is called its memory capacity. We consider the online learning scheme (cf. batch learning), where learning proceeds each time a new item is provided. This scheme has the advantage that it needs little memory (memory for all items is not necessary) and it can adapt well to change ...
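The online learning-with-decay scheme described here can be sketched as a multiplicative decay applied before each covariance-rule update. The decay rate, mean-activity parameter, and random items below are illustrative placeholders, not the optimal rate the paper derives.

```python
import numpy as np

def online_update(W, pattern, lam=0.95, f=0.5):
    """Covariance learning with decay: W <- lam * W + outer(x - f, x - f).

    lam < 1 fades old traces so new items can be stored; f is the assumed
    mean activity subtracted by the covariance rule. Both values are
    illustrative, not the paper's optimum.
    """
    x = pattern - f
    return lam * W + np.outer(x, x)

# Present ten random binary items one after another (online, no item replay).
rng = np.random.default_rng(0)
n = 50
W = np.zeros((n, n))
for _ in range(10):
    item = rng.integers(0, 2, size=n)
    W = online_update(W, item)
```

With lam = 1 (no decay) the summed correlations eventually exceed capacity and recall degrades for all items; with decay, the network forgets the oldest items but keeps recent ones retrievable, and the paper's contribution is the decay rate that maximizes how many.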