Results 1–10 of 20
A Neural Network Primer
, 1994
Abstract

Cited by 25 (8 self)
Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units independently (in parallel) integrates the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., those used to model spin glasses). One of the main notio...
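As a rough sketch of the unit model this abstract describes — a weighted sum of synaptic inputs evaluated into an activation, followed by a linear or nonlinear response — the following minimal example may help (function names, the bias term, and the tanh nonlinearity are illustrative choices, not taken from the paper):

```python
import math

def unit_response(inputs, weights, bias=0.0, nonlinear=True):
    """One model unit: integrate synaptic inputs into an activation,
    then apply a linear or nonlinear response function.
    (Names and the tanh choice are illustrative, not from the paper.)"""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(activation) if nonlinear else activation

# Two synapses: one excitatory and one inhibitory input.
print(unit_response([1.0, -1.0], [0.5, 0.25], nonlinear=False))  # activation 0.25
print(unit_response([1.0, -1.0], [0.5, 0.25]))                   # tanh(0.25)
```

With a linear response the unit is exactly the kind of linear associator component the abstract relates to the statisticians' general linear model.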
Computational Complexity Of Neural Networks: A Survey
, 1994
Abstract

Cited by 22 (6 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation - neural networks, circuits; F.1.3 [Computation by Abstract Devices]: Complexity Classes - complexity hierarchies. Key words: Neural networks, computational complexity, threshold circuits, associative memory. 1. Introduction The once again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities as from the possibility of "learning", or synthesizing networks...
Approximating Maximum Clique with a Hopfield Network
 IEEE Trans. Neural Networks
, 1995
Abstract

Cited by 20 (0 self)
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAXCLIQUE is the optimization problem of finding the largest clique in a given graph, and is NP-hard, even to approximate well. Several real-world and theoretical problems can be modeled as MAXCLIQUE. In this paper, we efficiently approximate MAXCLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics, both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAXCLIQUE. We report on detailed empirical comparisons on random graphs. Mean-Field Annealing (an efficient approximation to Simulated Annealing) and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic. 1 Cliques and Maximum Clique In a graph with undirected edges, a cliq...
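As a minimal illustration of the kind of greedy clique heuristic the abstract compares its network dynamics against (this is a sketch of a generic greedy algorithm, not the paper's Hopfield formulation):

```python
def greedy_clique(adj):
    """Greedy maximal-clique heuristic: repeatedly add the candidate
    vertex with the most neighbours among the remaining candidates.
    `adj` maps each vertex to its set of neighbours.
    (A generic sketch; not the dynamics from the paper.)"""
    clique = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.add(v)
        # Only vertices adjacent to everything in the clique remain eligible.
        candidates &= adj[v]
    return clique

# Triangle {0, 1, 2} plus a pendant vertex 3 attached to vertex 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(greedy_clique(adj))  # the maximal clique {0, 1, 2}
```

Each addition shrinks the candidate set to the common neighbourhood of the clique so far, so the result is always a clique, and by construction a maximal one — the invariant that the paper's stable network states also satisfy.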
Computing with Truly Asynchronous Threshold Logic Networks
 THEORETICAL COMPUTER SCIENCE
, 1995
Abstract

Cited by 19 (7 self)
We present simulation mechanisms by which any network of threshold logic units with either symmetric or asymmetric interunit connections (i.e., a symmetric or asymmetric "Hopfield net") can be simulated on a network of the same type, but without any a priori constraints on the order of updates of the units. Together with earlier constructions, the results show that the truly asynchronous network model is computationally equivalent to the seemingly more powerful models with either ordered sequential or fully parallel updates.
Complexity Issues in Discrete Hopfield Networks
, 1994
Abstract

Cited by 18 (4 self)
We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories. Draft chapter for the forthcoming book The Computational and Learning Complexity of Neural Networks: Advanced Topics (ed. Ian Parberry).
Neural Networks and Complexity Theory
 In Proc. 17th International Symposium on Mathematical Foundations of Computer Science
, 1992
Abstract

Cited by 14 (4 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. 1 Introduction The recently revived field of computation by "neural" networks provides the complexity theorist with a wealth of fascinating research topics. While much of the general appeal of the field stems not so much from new computational possibilities as from the possibility of "learning", or synthesizing networks directly from examples of their desired input-output behavior, it is nevertheless important to pay attention also to the complexity issues: firstly, what kinds of functions are computable by networks of a given type and size, and secondly, what is the complexity of the synthesis problems considered. In fact, inattention to these issues was a significant factor in the demise of the first stage of neural networks research in the late 1960s, under the criticism of Minsky and Papert [51]. The intent of this paper is to survey some of the centra...
The Computational Power of Discrete Hopfield Nets with Hidden Units
 Neural Computation
, 1996
Abstract

Cited by 11 (6 self)
We prove that polynomial-size discrete Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial space-bounded nonuniform Turing machines. As a corollary to the construction, we observe also that networks with polynomially bounded interconnection weights compute exactly the class of functions P/poly, i.e., the class computed by polynomial time-bounded nonuniform Turing machines.
On the Computational Power of Discrete Hopfield Nets
 In: Proc. 20th International Colloquium on Automata, Languages, and Programming
, 1993
Abstract

Cited by 7 (4 self)
We prove that polynomial-size discrete synchronous Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial space-bounded nonuniform Turing machines. As a corollary to the construction, we observe also that networks with polynomially bounded interconnection weights compute exactly the class of functions P/poly. 1 Background Recurrent, or cyclic, neural networks are an intriguing model of massively parallel computation. In the recent surge of research in neural computation, such networks have been considered mostly from the point of view of two types of applications: pattern classification and associative memory (e.g. [16, 18, 21, 24]), and combinatorial optimization (e.g. [1, 7, 20]). Nevertheless, recurrent networks are also capable of more general types of computation, and the issues of what exactly such networks can compute, and how they should be programmed, are becoming increasingly topica...
On the storage capacity of nonlinear neural networks
 Neural Networks
, 1997
Abstract

Cited by 6 (1 self)
We consider artificial neural networks (ANN) of the Hopfield type. Under some assumptions on the nonlinear synaptic function G we give a rigorous lower bound for the storage capacity of the ANN. That is, given an ANN with n ∈ ℕ neurons and m = m(n) ∈ ℕ patterns to be stored, we show that retrieval without errors occurs when m(n) → ∞ as n → ∞ in such a way that m(n) ≤ (n / (2 log n)) q_G, where 0 < q_G := (E[N G(N)])² / E[G(N)²] ≤ 1, and N denotes a standard normal random variable.
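The capacity factor in this abstract — reading its badly mis-encoded notation as q_G = (E[N·G(N)])² / E[G(N)²] with N standard normal, which is our reconstruction and should be checked against the paper — can be estimated numerically for concrete nonlinearities G:

```python
import math
import random

def q_G(G, samples=200_000, seed=0):
    """Monte Carlo estimate of q_G = (E[N*G(N)])^2 / E[G(N)^2],
    where N is standard normal and G is the synaptic nonlinearity.
    (This reading of the abstract's garbled formula is an assumption.)"""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(samples):
        n = rng.gauss(0.0, 1.0)
        g = G(n)
        num += n * g
        den += g * g
    num /= samples
    den /= samples
    return num * num / den

# A linear G recovers the classical n/(2 log n) Hopfield capacity (q_G = 1);
# a hard sign nonlinearity lowers the bound by a factor of 2/pi.
print(q_G(lambda x: x))                      # close to 1.0
print(q_G(lambda x: math.copysign(1.0, x)))  # close to 2/pi
```

A sanity check on this reading: for the linear case q_G = 1, so the bound reduces to the well-known m ≤ n / (2 log n) capacity of the standard Hopfield model.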
Group Updates and Multiscaling: An Efficient Neural Network Approach to Combinatorial Optimization
 IEEE Transactions on Systems, Man, and Cybernetics  Part B: Cybernetics
, 1996
Abstract

Cited by 4 (4 self)
A multiscale method is described in the context of binary Hopfield-type neural networks. The appropriateness of the proposed technique for solving several classes of optimization problems is established by means of the notion of group update, which is introduced here and investigated in relation to the properties of multiscaling. The method has been tested in the solution of partitioning and covering problems, for which an original mapping to Hopfield-type neural networks has been developed. Experimental results indicate that the multiscale approach is very effective in exploring the state-space of the problem and providing feasible solutions of acceptable quality, while at the same time offering a significant acceleration. 1 Introduction The Hopfield neural network model [7, 8] and closely related models such as the Boltzmann Machine [3, 1] have proved effective in dealing with hard optimization problems, yielding near-optimal solutions with polynomial time complexity [6, 20]. The basic ...