Results 1-10 of 14
Computational Complexity Of Neural Networks: A Survey
, 1994
"... . We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks fr ..."
Abstract

Cited by 26 (6 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also briefly discuss the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation - neural networks, circuits; F.1.3 [Computation by Abstract Devices]: Complexity Classes - complexity hierarchies. Key words: Neural networks, computational complexity, threshold circuits, associative memory. 1. Introduction. The once again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities as from the possibility of "learning", or synthesizing networks...
Computing with Truly Asynchronous Threshold Logic Networks
 THEORETICAL COMPUTER SCIENCE
, 1995
"... We present simulation mechanisms by which any network of threshold logic units with either symmetric or asymmetric interunit connections (i.e., a symmetric or asymmetric "Hopfield net") can be simulated on a network of the same type, but without any a priori constraints on the order of upd ..."
Abstract

Cited by 20 (7 self)
We present simulation mechanisms by which any network of threshold logic units with either symmetric or asymmetric interunit connections (i.e., a symmetric or asymmetric "Hopfield net") can be simulated on a network of the same type, but without any a priori constraints on the order of updates of the units. Together with earlier constructions, the results show that the truly asynchronous network model is computationally equivalent to the seemingly more powerful models with either ordered sequential or fully parallel updates.
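As a concrete illustration of the model in question, the following is a minimal sketch of a threshold logic network under truly asynchronous dynamics, where a single arbitrarily chosen unit updates per step; the two-unit symmetric example and all names are illustrative assumptions of this sketch, not the paper's simulation construction.

```python
import random

def threshold_update(W, theta, s, i):
    """New state of unit i: +1 if its weighted input meets the threshold, else -1."""
    total = sum(W[i][j] * s[j] for j in range(len(s)))
    return 1 if total >= theta[i] else -1

def run_async(W, theta, s, steps, seed=0):
    """Truly asynchronous dynamics: one randomly chosen unit fires per step,
    with no a priori constraint on the order of updates."""
    rng = random.Random(seed)
    s = list(s)
    for _ in range(steps):
        i = rng.randrange(len(s))
        s[i] = threshold_update(W, theta, s, i)
    return s

# Two mutually excitatory units with symmetric weights: whichever unit
# updates first copies the other's sign, and agreement is then a fixed point.
W = [[0, 1], [1, 0]]
theta = [0, 0]
```

In this tiny symmetric net the asynchronous dynamics converge regardless of the update order, which is the kind of order-independence the simulation results above make precise for arbitrary networks.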
Applications and Variations of Domination in Graphs
, 2000
"... In a graph G =(V,E), S ⊆ V is a dominating set of G if every vertex is either in S or joined by an edge to some vertex in S. Many different types of domination have been researched extensively. This dissertation explores some new variations and applications of dominating sets. We first introduce the ..."
Abstract

Cited by 19 (0 self)
In a graph G = (V, E), S ⊆ V is a dominating set of G if every vertex is either in S or joined by an edge to some vertex in S. Many different types of domination have been researched extensively. This dissertation explores some new variations and applications of dominating sets. We first introduce the concept of Roman domination. A Roman dominating function is a function f: V → {0, 1, 2} such that every vertex v for which f(v) = 0 has a neighbor w with f(w) = 2. This corresponds to a problem in army placement where every region is either defended by its own army or has a neighbor with two armies, in which case one of the two armies can be sent to the undefended region if a conflict breaks out. The weight of a Roman dominating function f is f(V) = Σ_{v∈V} f(v), and we are interested in finding Roman dominating functions of minimum weight. We explore the graph theoretic, algorithmic, and complexity issues of Roman domination, including algorithms for finding minimum weight Roman dominating functions for trees and grids.
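The definition above is straightforward to check directly; the sketch below verifies a candidate Roman dominating function and computes its weight. The path graph and labelings are a made-up example, not taken from the dissertation.

```python
def is_roman_dominating(adj, f):
    """Every vertex labeled 0 must have at least one neighbor labeled 2."""
    return all(any(f[w] == 2 for w in adj[v]) for v in adj if f[v] == 0)

def weight(f):
    """Weight of a labeling: f(V) = sum of f(v) over all vertices v."""
    return sum(f.values())

# Path a - b - c: two armies on the middle region defend both endpoints,
# giving a Roman dominating function of weight 2.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
f = {"a": 0, "b": 2, "c": 0}
```

Replacing the 2 on b by 1s on b and c leaves a undefended, so the check correctly rejects it; minimizing the weight over all valid labelings is the optimization problem studied above.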
Neural Networks and Complexity Theory
 In Proc. 17th International Symposium on Mathematical Foundations of Computer Science
, 1992
"... . We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. 1 Introduction The recently revived field of computation by "neural" networks provides the complexity theorist with a wealth of fascinating research topics. Whi ..."
Abstract

Cited by 19 (4 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. 1 Introduction. The recently revived field of computation by "neural" networks provides the complexity theorist with a wealth of fascinating research topics. While much of the general appeal of the field stems not so much from new computational possibilities as from the possibility of "learning", or synthesizing networks directly from examples of their desired input-output behavior, it is nevertheless important to pay attention to the complexity issues as well: firstly, what kinds of functions are computable by networks of a given type and size, and secondly, what is the complexity of the synthesis problems considered. In fact, inattention to these issues was a significant factor in the demise of the first stage of neural networks research in the late 1960s, under the criticism of Minsky and Papert [51]. The intent of this paper is to survey some of the centra...
General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
, 2003
"... We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus rec ..."
Abstract

Cited by 16 (0 self)
We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important learning issues.
An Overview Of The Computational Power Of Recurrent Neural Networks
 Proceedings of the 9th Finnish AI Conference STeP 2000 - Millennium of AI, Espoo, Finland (Vol. 3: "AI of Tomorrow": Symposium on Theory), Finnish AI Society
, 2000
"... INTRODUCTION The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used in for example pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. His ..."
Abstract

Cited by 12 (3 self)
INTRODUCTION. The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used for example in pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. Historically, the brain theory interest was primary [32], but with the advances in computer technology, the application potential of the statistical modeling techniques has shifted the balance. The study of neural networks as general computational devices does not strictly follow this division of interests: rather, it provides a general framework outlining the limitations and possibilities affecting both research domains. The prime historic example here is obviously Minsky and Papert's 1969 study of the computational limitations of single-layer perceptrons [34], which was a major influence in turning interest away from neural network learning to symbolic AI techniques for more
The Computational Power of Discrete Hopfield Nets with Hidden Units
 Neural Computation
, 1996
"... We prove that polynomial size discrete Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial spacebounded nonuniform Turing machines. As a corollary to the construction, we observe also that networks wi ..."
Abstract

Cited by 12 (6 self)
We prove that polynomial size discrete Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial space-bounded nonuniform Turing machines. As a corollary to the construction, we also observe that networks with polynomially bounded interconnection weights compute exactly the class of functions P/poly, i.e., the class computed by polynomial time-bounded nonuniform Turing machines.
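For readers unfamiliar with the model, here is a minimal sketch of the synchronous ("fully parallel") dynamics of a discrete Hopfield net with states in {-1, +1}. The symmetric Hebbian weight matrix below merely illustrates an associative-memory fixed point; it is an assumption of this sketch and not the PSPACE/poly construction of the paper.

```python
def hopfield_step(W, theta, s):
    """One fully parallel update: every unit simultaneously thresholds
    its weighted input from the others."""
    n = len(s)
    return [1 if sum(W[i][j] * s[j] for j in range(n)) >= theta[i] else -1
            for i in range(n)]

def run(W, theta, s, max_steps=100):
    """Iterate until a fixed point is reached (parallel updates of a
    symmetric net may also fall into a 2-cycle, hence the step bound)."""
    for _ in range(max_steps):
        t = hopfield_step(W, theta, s)
        if t == s:
            return s
        s = t
    return s

# Symmetric Hebbian weights (zero diagonal) storing the pattern [1, -1, 1]:
# a corrupted input is attracted back to the stored pattern.
p = [1, -1, 1]
W = [[0 if i == j else p[i] * p[j] for j in range(3)] for i in range(3)]
```

Starting from the corrupted state [1, 1, 1], one parallel step restores [1, -1, 1], which is then stable; the results above concern what such dynamics can compute when hidden units are allowed.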
Captive Cellular Automata
"... Abstract. We introduce a natural class of cellular automata characterised by a property of the local transition law without any assumption on the states set. We investigate some algebraic properties of the class and show that it contains intrinsically universal cellular automata. In addition we show ..."
Abstract

Cited by 8 (3 self)
We introduce a natural class of cellular automata characterised by a property of the local transition law, without any assumption on the state set. We investigate some algebraic properties of the class and show that it contains intrinsically universal cellular automata. In addition we show that Rice's theorem for limit sets is no longer true for that class, although infinitely many properties of limit sets are still undecidable. Cellular automata (ca for short) are discrete dynamical systems capable of producing a wide class of different behaviours. They consist of a large collection of simple identical components (the cells) with uniform local interactions. As such they provide an idealized model to study complex systems observed in nature. Despite the simplicity of the model, most of the richness of the behaviours they exhibit is still to be understood. Moreover, many interesting and natural properties are undecidable. To that extent it is meaningful to consider classes of ca obtained by structural assumptions on the local transition law with the
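For concreteness, the uniform local interaction described above can be sketched as one synchronous step of a generic one-dimensional binary ca (not the captive class itself); encoding the local transition table as a Wolfram rule number is an assumption of this sketch.

```python
def ca_step(rule, cells):
    """One synchronous step of a 1-D binary cellular automaton with
    periodic boundary. `rule` is a Wolfram rule number: bit k of `rule`
    is the new state for the 3-cell neighborhood whose bits encode k."""
    n = len(cells)
    table = [(rule >> k) & 1 for k in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]
```

Every cell applies the same table to its own state and its two neighbors' states, which is exactly the "simple identical components with uniform local interactions" picture; restricting the form of the table is what carves out classes such as the captive ca studied above.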
On the Computational Power of Discrete Hopfield Nets
 In: Proc. 20th International Colloquium on Automata, Languages, and Programming
, 1993
"... . We prove that polynomial size discrete synchronous Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial spacebounded nonuniform Turing machines. As a corollary to the construction, we observe also th ..."
Abstract

Cited by 6 (4 self)
We prove that polynomial size discrete synchronous Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial space-bounded nonuniform Turing machines. As a corollary to the construction, we also observe that networks with polynomially bounded interconnection weights compute exactly the class of functions P/poly. 1 Background. Recurrent, or cyclic, neural networks are an intriguing model of massively parallel computation. In the recent surge of research in neural computation, such networks have been considered mostly from the point of view of two types of applications: pattern classification and associative memory (e.g. [16, 18, 21, 24]), and combinatorial optimization (e.g. [1, 7, 20]). Nevertheless, recurrent networks are also capable of more general types of computation, and issues of what exactly such networks can compute, and how they should be programmed, are becoming increasingly topica...
General Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
, 2003
"... We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classi cation include e.g. the architecture of the network (fee ..."
Abstract

Cited by 2 (0 self)
We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include e.g. the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc.