Results 1-10 of 14
A Survey of Continuous-Time Computation Theory
 Advances in Algorithms, Languages, and Complexity
, 1997
Abstract

Cited by 32 (5 self)
Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions. 1 Introduction After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution", which has provided hardware designers with several new numerically based, computationally interesting models that are structurally sufficiently simple to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45]). However, the more fundamental...
An Overview Of The Computational Power Of Recurrent Neural Networks
 Proceedings of the 9th Finnish AI Conference STeP 2000 (Millennium of AI), Espoo, Finland, Vol. 3: "AI of Tomorrow": Symposium on Theory, Finnish AI Society
, 2000
Abstract

Cited by 10 (3 self)
INTRODUCTION The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used in for example pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. Historically, the brain theory interest was primary [32], but with the advances in computer technology, the application potential of the statistical modeling techniques has shifted the balance. The study of neural networks as general computational devices does not strictly follow this division of interests: rather, it provides a general framework outlining the limitations and possibilities affecting both research domains. The prime historic example here is obviously Minsky and Papert's 1969 study of the computational limitations of single-layer perceptrons [34], which was a major influence in turning interest away from neural network learning to symbolic AI techniques for more
Absence of Cycles in Symmetric Neural Networks
 Advances in Neural Information Processing Systems (NIPS) 8
, 1995
Abstract

Cited by 3 (1 self)
For a given recurrent neural network, a discrete-time model may have asymptotic dynamics different from those of a related continuous-time model. In this paper, we consider a discrete-time model that discretizes the continuous-time leaky integrator model and study its parallel and sequential dynamics for symmetric networks. We provide sufficient (and in many cases necessary) conditions for the discretized model to have the same cycle-free dynamics as the corresponding continuous-time model in symmetric networks. 1 INTRODUCTION For an n-neuron recurrent network, a much-studied and widely-used continuous-time (CT) model is the leaky integrator model (Hertz, et al., 1991; Hopfield, 1984), given by a system of nonlinear differential equations:

\tau_i \frac{dx_i}{dt} = -x_i + \sigma_i\left( \sum_{j=1}^{n} w_{ij} x_j + I_i \right), \quad t \ge 0, \quad i = 1, \ldots, n, \qquad (1)

and a related discrete-time (DT) version is the sigmoidal model (Hopfield, 1982; Marcus & Westervelt, 1989), specified by a system of nonlinear difference e...
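The leaky integrator model described in this abstract can be simulated by a simple forward-Euler discretization. A minimal sketch in Python, assuming a tanh activation and a uniform step size dt (both are illustrative choices, not taken from the paper):

```python
import numpy as np

def simulate_leaky_integrator(W, I, tau, dt=0.01, steps=2000):
    """Forward-Euler discretization of the leaky integrator dynamics
    tau_i dx_i/dt = -x_i + sigma(sum_j w_ij x_j + I_i), starting from x = 0.
    The activation and step size here are illustrative assumptions."""
    n = W.shape[0]
    x = np.zeros(n)
    sigma = np.tanh  # a common sigmoid-like choice; the paper's sigma_i may differ
    for _ in range(steps):
        x = x + (dt / tau) * (-x + sigma(W @ x + I))
    return x

# Example: a small symmetric two-neuron network with constant input
W = np.array([[0.0, 0.5],
              [0.5, 0.0]])
x_star = simulate_leaky_integrator(W, I=np.ones(2), tau=1.0)
```

With a symmetric weight matrix of small spectral radius the iteration settles near a fixed point of the continuous-time dynamics; cycle-free behaviour of this discretization for general symmetric networks is exactly what the paper's conditions address.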
Energy-Based Computation with Symmetric Hopfield Nets
Abstract

Cited by 2 (0 self)
We propose a unifying approach to the analysis of computational aspects of symmetric Hopfield nets which is based on the concept of "energy source". Within this framework we present different results concerning the computational power of various Hopfield model classes. It is shown that polynomial-time computations by nondeterministic Turing machines can be reduced to the process of minimizing the energy in Hopfield nets (the MIN ENERGY problem). Furthermore, external and internal sources of energy are distinguished. The external sources include e.g. energizing inputs from so-called Hopfield languages, and also certain external oscillators that prove finite analog Hopfield nets to be computationally Turing universal. On the other hand, the internal source of energy can be implemented by a symmetric clock subnetwork producing an exponential number of oscillations which are used to energize the simulation of convergent asymmetric networks by Hopfield nets. This shows that infinite families of polynomial-size Hopfield nets compute the complexity class PSPACE/poly. Special attention is paid to generalizing these results to analog states and continuous time, to point out alternative sources of efficient computation.
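The MIN ENERGY problem mentioned in this abstract concerns minimizing the Hopfield energy function. A minimal sketch, assuming the standard quadratic energy E(x) = -1/2 xᵀWx + θᵀx over ±1 states with symmetric weights and zero self-connections (an illustrative formulation, not necessarily the paper's exact one):

```python
import numpy as np

def hopfield_energy(W, theta, x):
    """Standard quadratic Hopfield energy E(x) = -1/2 x^T W x + theta^T x."""
    return -0.5 * x @ W @ x + theta @ x

def sequential_sweep(W, theta, x):
    """One asynchronous sweep x_i := sign(W_i x - theta_i). With symmetric W
    and zero diagonal, each single-neuron update never increases E."""
    x = x.copy()
    for i in range(len(x)):
        x[i] = 1.0 if W[i] @ x - theta[i] >= 0 else -1.0
    return x

# Example: a random symmetric net with zero diagonal
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
W = (A + A.T) / 2          # symmetrize
np.fill_diagonal(W, 0.0)   # no self-connections
theta = np.zeros(6)
x0 = np.where(rng.standard_normal(6) >= 0, 1.0, -1.0)
x1 = sequential_sweep(W, theta, x0)
```

The monotone energy descent of such sequential updates is what makes symmetric nets converge to fixed points, and it is the hardness of finding the energy minimum itself that the reduction from nondeterministic Turing machine computations exploits.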
A Computational Taxonomy and Survey of Neural Network Models
, 2001
Abstract

Cited by 2 (0 self)
We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their computational characteristics. The criteria of classification include e.g. the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc. The underlying results concerning the computational power of perceptron, RBF, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature.
General Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
, 2003
Abstract

Cited by 2 (0 self)
We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include e.g. the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc.
Using Time-Discrete Recurrent Neural Networks in Nonlinear Control
, 1996
Abstract

Cited by 1 (1 self)
We introduce a type of fully connected Recurrent Neural Networks (RNN) with special mathematical features which allow us to determine their qualitative dynamical behaviour. Based on this family of RNNs we describe a learning framework for the generation of trajectories, with which we are able to solve adaptive control problems; this is illustrated by the realization of adaptive leg control of a six-legged walking machine.
An RNN-based Control Architecture for Generating Periodic Action Sequences
Abstract
We introduce a type of fully connected Recurrent Neural Networks (RNN) with special mathematical features which allow us to determine their qualitative dynamical behaviour. Using these properties we describe a learning framework for the generation of sequences to be applied to nonlinear control problems. The potential of this approach is demonstrated by applying the learning framework to the adaptive leg control of the six-legged walking machine LAURON II. 1. Introduction In recent years the application of RNNs has gained constantly growing attention. The first attempt to formulate a learning algorithm for fully connected RNNs with deterministic discrete-time dynamics (BPTT) dates back to one of the original publications of the backpropagation algorithm [12]. In practical applications the convergence of this algorithm proved to be too slow and could not be guaranteed, so one favorite starting point for handling the classification or generation of time series was the application of ...
On the Computational Complexity of Binary and Analog Symmetric Hopfield Nets
Abstract
We investigate the computational properties of finite binary and analog-state discrete-time symmetric Hopfield nets. For binary networks, we obtain a simulation of convergent asymmetric networks by symmetric networks with only a linear increase in network size and computation time. Then we analyze the convergence time of Hopfield nets in terms of the length of their bit representations. Here we construct an analog symmetric network whose convergence time exceeds the convergence time of any binary Hopfield net with the same representation length. Further, we prove that the MIN ENERGY problem for analog Hopfield nets is NP-hard, and provide a polynomial-time approximation algorithm for this problem in the case of binary nets. Finally, we show that symmetric analog nets with an external clock are computationally Turing universal.