Results 1–10 of 16
Computational Complexity Of Neural Networks: A Survey
, 1994
"... . We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks fr ..."
Abstract

Cited by 23 (6 self)
 Add to MetaCart
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation – neural networks, circuits; F.1.3 [Computation by Abstract Devices]: Complexity Classes – complexity hierarchies. Key words: Neural networks, computational complexity, threshold circuits, associative memory. 1. Introduction The currently again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities, but from the possibility of "learning", or synthesizing networks...
Computing with Truly Asynchronous Threshold Logic Networks
 THEORETICAL COMPUTER SCIENCE
, 1995
"... We present simulation mechanisms by which any network of threshold logic units with either symmetric or asymmetric interunit connections (i.e., a symmetric or asymmetric "Hopfield net") can be simulated on a network of the same type, but without any a priori constraints on the order of upd ..."
Abstract

Cited by 19 (7 self)
 Add to MetaCart
We present simulation mechanisms by which any network of threshold logic units with either symmetric or asymmetric interunit connections (i.e., a symmetric or asymmetric "Hopfield net") can be simulated on a network of the same type, but without any a priori constraints on the order of updates of the units. Together with earlier constructions, the results show that the truly asynchronous network model is computationally equivalent to the seemingly more powerful models with either ordered sequential or fully parallel updates.
Complexity Issues in Discrete Hopfield Networks
, 1994
"... We survey some aspects of the computational complexity theory of discretetime and discretestate Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfi ..."
Abstract

Cited by 18 (4 self)
 Add to MetaCart
We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories. Draft chapter for the forthcoming book The Computational and Learning Complexity of Neural Networks: Advanced Topics (ed. Ian Parberry).
Neural Networks and Complexity Theory
 In Proc. 17th International Symposium on Mathematical Foundations of Computer Science
, 1992
"... . We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. 1 Introduction The recently revived field of computation by "neural" networks provides the complexity theorist with a wealth of fascinating research topics. Whi ..."
Abstract

Cited by 16 (4 self)
 Add to MetaCart
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. 1 Introduction The recently revived field of computation by "neural" networks provides the complexity theorist with a wealth of fascinating research topics. While much of the general appeal of the field stems not so much from new computational possibilities, but from the possibility of "learning", or synthesizing networks directly from examples of their desired input-output behavior, it is nevertheless important to pay attention also to the complexity issues: firstly, what kinds of functions are computable by networks of a given type and size, and secondly, what is the complexity of the synthesis problems considered. In fact, inattention to these issues was a significant factor in the demise of the first stage of neural networks research in the late 1960s, under the criticism of Minsky and Papert [51]. The intent of this paper is to survey some of the centra...
The Computational Power of Continuous Time Neural Networks
 In Proc. SOFSEM'97, the 24th Seminar on Current Trends in Theory and Practice of Informatics, Lecture Notes in Computer Science
, 1995
"... We investigate the computational power of continuoustime neural networks with Hopfieldtype units. We prove that polynomialsize networks with saturatedlinear response functions are at least as powerful as polynomially spacebounded Turing machines. 1 Introduction In a paper published in 1984 [11 ..."
Abstract

Cited by 14 (8 self)
 Add to MetaCart
We investigate the computational power of continuous-time neural networks with Hopfield-type units. We prove that polynomial-size networks with saturated-linear response functions are at least as powerful as polynomially space-bounded Turing machines. 1 Introduction In a paper published in 1984 [11], John Hopfield introduced a continuous-time version of the neural network model whose discrete-time variant he had discussed in his seminal 1982 paper [10]. The 1984 paper also contains an electronic implementation scheme for the continuous-time networks, and an argument showing that for sufficiently large-gain nonlinearities, these behave similarly to the discrete-time ones, at least when used as associative memories. The power of Hopfield's discrete-time networks as general-purpose computational devices was analyzed in [17, 18]. In this paper we conduct a similar analysis for networks consisting of Hopfield's continuous-time units; however we are at this stage able to analyze only the gen...
On the Computational Complexity of Analyzing Hopfield Nets
 Complex Systems
, 1989
"... We prove that the problem of counting the number of stable states in a given Hopfield net is #Pcomplete, and the problem of computing the size of the attraction domain of a given stable state is NPhard. 1 Introduction A binary associative memory network, or "Hopfield net" [6], consists ..."
Abstract

Cited by 13 (6 self)
 Add to MetaCart
We prove that the problem of counting the number of stable states in a given Hopfield net is #P-complete, and the problem of computing the size of the attraction domain of a given stable state is NP-hard. 1 Introduction A binary associative memory network, or "Hopfield net" [6], consists of n fully interconnected threshold logic units, or "neurons". Associated to each pair of neurons (i, j) is an interconnection weight w_ij, and to each neuron i a threshold value t_i. At any given moment a neuron i can be in one of two states, x_i = 1 or x_i = −1. Its state at the next moment depends on the current states of the other neurons and the interconnection weights; if sgn(Σ_{j=1}^n w_ij x_j − t_i) ≠ x_i, the neuron may switch to the opposite state. (Here sgn is the signum function, sgn(x) = 1 for x ≥ 0, and sgn(x) = −1 for x < 0.) Whether the state change actually occurs depends on whether the neuron is selected for updating at this moment. In the synchronous update rule, al...
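The update rule quoted in this abstract translates directly into code. The following Python sketch checks the stability condition sgn(Σ_j w_ij x_j − t_i) = x_i for every neuron and counts stable states by exhaustive enumeration; the two-neuron weight matrix and thresholds at the bottom are illustrative assumptions, not taken from the paper. The exponential cost of the enumeration is consistent with the #P-completeness result: no substantially faster general method is expected.

```python
from itertools import product

def sgn(v):
    # Signum convention from the abstract: sgn(v) = 1 for v >= 0, else -1.
    return 1 if v >= 0 else -1

def is_stable(W, t, x):
    # State x is stable when no neuron wants to flip, i.e.
    # sgn(sum_j W[i][j]*x[j] - t[i]) == x[i] for every i.
    n = len(x)
    return all(
        sgn(sum(W[i][j] * x[j] for j in range(n)) - t[i]) == x[i]
        for i in range(n)
    )

def count_stable_states(W, t):
    # Brute force over all 2^n states in {-1, +1}^n.
    n = len(t)
    return sum(1 for x in product((-1, 1), repeat=n) if is_stable(W, t, x))

# Toy symmetric two-neuron net with positive coupling (made-up weights):
W = [[0.0, 1.0],
     [1.0, 0.0]]
t = [0.0, 0.0]
print(count_stable_states(W, t))  # the two aligned states (1,1) and (-1,-1)
```

With positive coupling and zero thresholds, only the two states in which the neurons agree are stable; mixed states violate the condition at both units.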
The Computational Power of Discrete Hopfield Nets with Hidden Units
 Neural Computation
, 1996
"... We prove that polynomial size discrete Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial spacebounded nonuniform Turing machines. As a corollary to the construction, we observe also that networks wi ..."
Abstract

Cited by 11 (6 self)
 Add to MetaCart
We prove that polynomial-size discrete Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial space-bounded nonuniform Turing machines. As a corollary to the construction, we observe also that networks with polynomially bounded interconnection weights compute exactly the class of functions P/poly, i.e., the class computed by polynomial time-bounded nonuniform Turing machines.
An Overview Of The Computational Power Of Recurrent Neural Networks
 Proceedings of the 9th Finnish AI Conference STeP 2000 – Millennium of AI, Espoo, Finland (Vol. 3: "AI of Tomorrow": Symposium on Theory, Finnish AI Society
, 2000
"... INTRODUCTION The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used in for example pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. His ..."
Abstract

Cited by 10 (3 self)
 Add to MetaCart
INTRODUCTION The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used in for example pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. Historically, the brain theory interest was primary [32], but with the advances in computer technology, the application potential of the statistical modeling techniques has shifted the balance. The study of neural networks as general computational devices does not strictly follow this division of interests: rather, it provides a general framework outlining the limitations and possibilities affecting both research domains. The prime historic example here is obviously Minsky's and Papert's 1969 study of the computational limitations of single-layer perceptrons [34], which was a major influence in turning away interest from neural network learning to symbolic AI techniques for more...
On the Computational Power of Discrete Hopfield Nets
 In: Proc. 20th International Colloquium on Automata, Languages, and Programming
, 1993
"... . We prove that polynomial size discrete synchronous Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial spacebounded nonuniform Turing machines. As a corollary to the construction, we observe also th ..."
Abstract

Cited by 7 (4 self)
 Add to MetaCart
We prove that polynomial-size discrete synchronous Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial space-bounded nonuniform Turing machines. As a corollary to the construction, we observe also that networks with polynomially bounded interconnection weights compute exactly the class of functions P/poly. 1 Background Recurrent, or cyclic, neural networks are an intriguing model of massively parallel computation. In the recent surge of research in neural computation, such networks have been considered mostly from the point of view of two types of applications: pattern classification and associative memory (e.g. [16, 18, 21, 24]), and combinatorial optimization (e.g. [1, 7, 20]). Nevertheless, recurrent networks are capable also of more general types of computation, and issues of what exactly such networks can compute, and how they should be programmed, are becoming increasingly topica...
Science and Engineering of Large Scale Socio-Technical Simulations
 Proceedings of the 1st International Conference on Grand Challenges in Simulations, held as part of the Western Simulation Conference
, 2002
"... Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation and composition of interdependent local mappings. When certain local mappings and their interdependencies can be related to particular real world objects and interdependencies, it is common to compute the interactions to derive a symbolic model of the global system made up of the corresponding interdependent objects. The formal mathematical and computational account of the simulation provides a particular kind of theoretical explanation of the global system properties and, therefore, insight into how to engineer a complex system to exhibit those properties. This paper considers the mathematical foundations and engineering principles necessary for building large scale simulations of sociotechnical systems. Examples of such systems are urban regional transportation systems, the national electrical power markets and grid, the worldwide Internet, vaccine design and deployment, theater war, etc. These systems are composed of large numbers of interacting human, physical and technological components. Some components adapt and learn, exhibit perception, interpretation, reasoning, deception, cooperation and noncooperation, and economic motives as well as the usual physical properties of interaction. The systems themselves are large and the behavior of sociotechnical systems is tremendously complex.