Results 1-10 of 12
An Overview Of The Computational Power Of Recurrent Neural Networks
Proceedings of the 9th Finnish AI Conference STeP 2000 - Millennium of AI, Espoo, Finland (Vol. 3: "AI of Tomorrow": Symposium on Theory), Finnish AI Society
, 2000
Abstract

Cited by 10 (3 self)
INTRODUCTION The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used in, for example, pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. Historically, the brain theory interest was primary [32], but with the advances in computer technology, the application potential of the statistical modeling techniques has shifted the balance. The study of neural networks as general computational devices does not strictly follow this division of interests: rather, it provides a general framework outlining the limitations and possibilities affecting both research domains. The prime historic example here is obviously Minsky's and Papert's 1969 study of the computational limitations of single-layer perceptrons [34], which was a major influence in turning interest away from neural network learning to symbolic AI techniques for more ...
A Continuous-Time Hopfield Net Simulation of Discrete Neural Networks
, 2000
Abstract

Cited by 5 (2 self)
We investigate the computational power of continuous-time symmetric Hopfield nets. As is well known, such networks have very constrained, Liapunov-function controlled dynamics. Nevertheless, we show that they are universal and efficient computational devices, in the sense that any convergent fully parallel computation by a network of n discrete-time binary neurons, with in general asymmetric interconnections, can be simulated by a symmetric continuous-time Hopfield net containing only 14n + 6 units using the saturated-linear sigmoid activation function. In terms of standard discrete computation models, this result implies that any polynomially space-bounded Turing machine can be simulated by a polynomially size-increasing sequence of continuous-time Hopfield nets.
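The two ingredients named in this abstract are easy to state concretely. The saturated-linear sigmoid is the piecewise-linear function clipping its argument to [0, 1], and the discrete-time binary networks being simulated update all neurons synchronously by thresholding their weighted inputs. A minimal sketch follows; the weights, biases, and three-neuron chain are invented here for illustration and are not taken from the paper's construction:

```python
import numpy as np

def saturated_linear(x):
    """Saturated-linear sigmoid: identity on [0, 1], clipped outside."""
    return np.clip(x, 0.0, 1.0)

def parallel_step(W, b, x):
    """One fully parallel update of a discrete-time binary network:
    every neuron simultaneously thresholds its weighted input."""
    return (W @ x + b >= 0).astype(int)

# Toy network (hypothetical weights): neuron 0 switches itself on and
# the signal then propagates down a chain, so the computation converges.
W = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])
b = np.array([0, -1, -1])
x = np.array([0, 0, 0])
for _ in range(10):                 # iterate until the state repeats
    nxt = parallel_step(W, b, x)
    if np.array_equal(nxt, x):      # convergence: a fixed point reached
        break
    x = nxt
```

Here the state settles at a fixed point after three parallel steps; it is exactly such convergent computations that the paper shows can be reproduced inside a symmetric continuous-time net.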
The Emergent Computational Potential of Evolving Artificial Living Systems
, 2002
Abstract

Cited by 5 (0 self)
The computational potential of artificial living systems can be studied without knowing the algorithms that govern their behavior. Modeling single organisms by means of so-called cognitive transducers, we will estimate the computational power of AL systems by viewing them as conglomerates of such organisms. We describe a scenario in which an artificial living (AL) system is involved in a potentially infinite, unpredictable interaction with an active or passive environment, to which it can react by learning and adjusting its behavior. By making use of sequences of cognitive transducers one can also model the evolution of AL systems caused by 'architectural' changes. Among the examples are 'communities of agents', i.e. communities of mobile, interactive cognitive transducers.
Continuous-Time Symmetric Hopfield Nets Are Computationally Universal
Abstract

Cited by 3 (1 self)
We establish a fundamental result in the theory of computation by continuous-time dynamical systems, by showing that systems corresponding to so-called continuous-time symmetric Hopfield nets are capable of general computation. As is well known, such networks have very constrained, Liapunov-function controlled dynamics. Nevertheless, we show that they are universal and efficient computational devices, in the sense that any convergent synchronous fully parallel computation by a recurrent network of n discrete-time binary neurons, with in general asymmetric coupling weights, can be simulated by a symmetric continuous-time Hopfield net containing only 18n + 7 units employing the saturated-linear activation function. Moreover, if the asymmetric network has maximum integer weight size w_max and converges in discrete time t*, then the corresponding Hopfield net can be designed to operate in continuous time Θ(t*/ε), for any ε > 0 ...
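The "Liapunov-function controlled dynamics" referred to here can be observed numerically. With symmetric weights and the saturated-linear activation, the standard Hopfield energy function is nonincreasing along every trajectory of the net du/dt = -u + W σ(u) + I, so the state can only settle into an equilibrium, never oscillate or compute in an open-ended way. A forward-Euler sketch with two neurons (weights and inputs invented for illustration, not the paper's 18n + 7 construction):

```python
import numpy as np

def sigma(u):
    """Saturated-linear activation."""
    return np.clip(u, 0.0, 1.0)

def energy(W, I, v):
    """Liapunov function for the continuous-time Hopfield net:
    E(v) = -1/2 v^T W v - I^T v + sum_i integral_0^{v_i} sigma^{-1}(s) ds,
    and the integral equals v_i^2 / 2 for the saturated-linear sigma."""
    return -0.5 * v @ W @ v - I @ v + 0.5 * np.sum(v**2)

# Symmetric weights (illustrative only).
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
I = np.array([0.3, -0.1])
u = np.array([0.9, 0.1])            # internal potentials
dt = 0.01
energies = []
for _ in range(3000):               # forward-Euler integration of
    v = sigma(u)                    # du/dt = -u + W sigma(u) + I
    energies.append(energy(W, I, v))
    u = u + dt * (-u + W @ v + I)
```

The recorded energies decrease monotonically (up to discretization error) and the potentials converge to the fixed point u* = W σ(u*) + I; the paper's point is that, despite this rigid descent behavior, carefully constructed symmetric nets can still carry out arbitrary convergent computations.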
Energy-Based Computation with Symmetric Hopfield Nets
Abstract

Cited by 2 (0 self)
We propose a unifying approach to the analysis of computational aspects of symmetric Hopfield nets which is based on the concept of "energy source". Within this framework we present different results concerning the computational power of various Hopfield model classes. It is shown that polynomial-time computations by nondeterministic Turing machines can be reduced to the process of minimizing the energy in Hopfield nets (the MIN ENERGY problem). Furthermore, external and internal sources of energy are distinguished. The external sources include e.g. energizing inputs from so-called Hopfield languages, and also certain external oscillators that prove finite analog Hopfield nets to be computationally Turing universal. On the other hand, the internal source of energy can be implemented by a symmetric clock subnetwork producing an exponential number of oscillations, which are used to energize the simulation of convergent asymmetric networks by Hopfield nets. This shows that infinite families of polynomial-size Hopfield nets compute the complexity class PSPACE/poly. Special attention is paid to generalizing these results to analog states and continuous time, in order to point out alternative sources of efficient computation.
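The energy minimization underlying the MIN ENERGY problem is simplest to see in the discrete symmetric Hopfield model: under asynchronous threshold updates with symmetric weights and zero diagonal, the energy E(x) = -1/2 x^T W x - b^T x never increases, so the net descends to a local minimum. A small sketch, with random weights chosen purely for illustration:

```python
import numpy as np

def energy(W, b, x):
    """Hopfield energy E(x) = -1/2 x^T W x - b^T x."""
    return -0.5 * x @ W @ x - b @ x

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
W = A + A.T                         # symmetric weights ...
np.fill_diagonal(W, 0.0)            # ... with zero diagonal
b = rng.normal(size=n)
x = rng.choice([-1, 1], size=n)     # bipolar neuron states

energies = [energy(W, b, x)]
changed = True
while changed:                      # asynchronous sweeps until stable
    changed = False
    for i in range(n):
        new = 1 if W[i] @ x + b[i] >= 0 else -1
        if new != x[i]:
            x[i] = new              # each flip lowers the energy
            energies.append(energy(W, b, x))
            changed = True
```

Every flip satisfies ΔE = -(x_i' - x_i)(Wx + b)_i ≤ 0, which is why the loop must terminate in a stable state; the paper's reductions exploit exactly this descent property.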
A Computational Taxonomy and Survey of Neural Network Models
, 2001
Abstract

Cited by 2 (0 self)
We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their computational characteristics. The criteria of classification include e.g. the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc. The underlying results concerning the computational power of perceptron, RBF, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature.
General Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
, 2003
Abstract

Cited by 2 (0 self)
We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include e.g. the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc.
General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results (review article, communicated by Wolfgang Maass)
Abstract
We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important learning issues.
Robust RBF Finite Automata
, 2004
Abstract
The computational power of recurrent RBF (radial basis functions) networks is investigated. A recurrent network consisting of O(m log m) RBF units with the maximum norm, employing any activation function that takes different values in at least two nonnegative points, is constructed so as to implement a given deterministic finite automaton with m states. The underlying simulation proves to be robust with regard to bounded analog noise for a large class of smooth activation functions with a special type of inflexion.
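The building block involved is an RBF unit measuring the maximum (Chebyshev) distance between its input and a centre, so with a step-like activation it acts as a detector for the region of state space encoding one automaton state. An informal sketch, with the centre, width, and activation invented here for illustration rather than taken from the paper's construction:

```python
import numpy as np

def rbf_unit(x, centre, width, g):
    """RBF unit with the maximum (Chebyshev) norm:
    output g(||x - centre||_inf / width)."""
    return g(np.max(np.abs(x - centre)) / width)

# Any activation taking different values at two nonnegative points
# suffices for the construction; a hard threshold is the simplest case.
g = lambda r: 1.0 if r <= 1.0 else 0.0

centre = np.array([0.2, 0.8])       # hypothetical code for one DFA state
width = 0.1

inside = rbf_unit(np.array([0.25, 0.75]), centre, width, g)   # near centre
outside = rbf_unit(np.array([0.60, 0.80]), centre, width, g)  # far away
```

Because membership is decided by the largest coordinate deviation, the unit's "receptive field" is a small box around the centre, which is what makes the simulation tolerant to bounded analog noise.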
The Computational Theory of Neural Networks
, 2000
Abstract
In the present paper a detailed taxonomy of neural network models with various restrictions is presented with respect to their computational properties. The criteria of classification include e.g. feedforward and recurrent architectures, discrete and continuous time, binary and analog states, symmetric and asymmetric weights, finite size and infinite families of networks, deterministic and probabilistic models, etc. The underlying results concerning the computational power of perceptron, RBF, winner-take-all, and spiking neural networks are briefly surveyed and completed by relevant references.