Results 1–10 of 13
Iteration, Inequalities, and Differentiability in Analog Computers
, 1999
Abstract

Cited by 35 (16 self)
Shannon's General Purpose Analog Computer (GPAC) is an elegant model of analog computation in continuous time. In this paper, we consider whether the set G of GPAC-computable functions is closed under iteration, that is, whether for any function f(x) ∈ G there is a function F(x, t) ∈ G such that F(x, t) = f^t(x) for nonnegative integers t. We show that G is not closed under iteration, but a simple extension of it is. In particular, if we relax the definition of the GPAC slightly to include unique solutions to boundary value problems, or equivalently if we allow functions θ_k(x) that sense inequalities in a differentiable way, the resulting class, which we call G + θ_k, is closed under iteration. Furthermore, G + θ_k includes all primitive recursive functions, and has the additional closure property that if T(x) is in G + θ_k, then any function of x computable by a Turing machine in T(x) time is also.
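The closure property in question concerns the discrete iterates f^t(x). As a plain illustration of that notation (mine, not a construction from the paper), in Python:

```python
def iterate(f, x, t):
    """Compute the t-th iterate f^t(x) = f(f(...f(x)...)) for a
    nonnegative integer t; f^0 is the identity."""
    for _ in range(t):
        x = f(x)
    return x

# Example: the fourth iterate of the doubling map at 3.
print(iterate(lambda x: 2 * x, 3, 4))  # 3 * 2^4 = 48
```

The paper asks whether an analog system can produce, as a continuous function F(x, t), a function agreeing with these iterates at integer values of t.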
A Survey of Continuous-Time Computation Theory
 Advances in Algorithms, Languages, and Complexity
, 1997
Abstract

Cited by 32 (5 self)
Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions.
1 Introduction
After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution", which has provided hardware designers with several new numerically based, computationally interesting models that are structurally simple enough to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45].) However, the more fundamental...
An Overview Of The Computational Power Of Recurrent Neural Networks
Proceedings of the 9th Finnish AI Conference STeP 2000 – Millennium of AI, Espoo, Finland (Vol. 3: "AI of Tomorrow": Symposium on Theory), Finnish AI Society
, 2000
Abstract

Cited by 10 (3 self)
INTRODUCTION
The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used in for example pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. Historically, the brain theory interest was primary [32], but with the advances in computer technology, the application potential of the statistical modeling techniques has shifted the balance. The study of neural networks as general computational devices does not strictly follow this division of interests: rather, it provides a general framework outlining the limitations and possibilities affecting both research domains. The prime historic example here is obviously Minsky and Papert's 1969 study of the computational limitations of single-layer perceptrons [34], which was a major influence in turning away interest from neural network learning to symbolic AI techniques for more
Upper and Lower Bounds on Continuous-Time Computation
Abstract

Cited by 9 (2 self)
We consider various extensions and modifications of Shannon's General Purpose Analog Computer, which is a model of computation by differential equations in continuous time. We show that several classical computation classes have natural analog counterparts, including the primitive recursive functions, the elementary functions, the levels of the Grzegorczyk hierarchy, and the arithmetical and analytical hierarchies.
An analog characterization of the subrecursive functions
 PROC. 4TH CONFERENCE ON REAL NUMBERS AND COMPUTERS
, 2000
Abstract

Cited by 7 (1 self)
We study a restricted version of Shannon’s General Purpose Analog Computer in which we only allow the machine to solve linear differential equations. This corresponds to only allowing local feedback in the machine’s variables. We show that if this computer is allowed to sense inequalities in a differentiable way, then it can compute exactly the elementary functions. Furthermore, we show that if the machine has access to an oracle which computes a function f(x) with a suitable growth as x goes to infinity, then it can compute functions on any given level of the Grzegorczyk hierarchy. More precisely, we show that the model contains exactly the nth level of the Grzegorczyk hierarchy if it is allowed to solve n − 3 nonlinear differential equations of a certain kind. Therefore, we claim that there is a close connection between analog complexity classes (and the dynamical systems that compute them) and classical sets of subrecursive functions.
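The "linear differential equations with local feedback" setting can be pictured with a toy example (my illustration, not the paper's construction): a single variable obeying y' = a·y, whose closed-form solution is y(t) = y₀·e^{at}. Forward-Euler integration recovers it to within the step error:

```python
import math

def euler_linear(a, y0, t_end, n_steps):
    """Integrate the linear ODE y' = a*y from y(0) = y0 to t = t_end
    with n_steps forward-Euler steps; only y feeds back into y'."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += dt * a * y
    return y

approx = euler_linear(-1.0, 1.0, 1.0, 100_000)
print(abs(approx - math.exp(-1.0)))  # small O(dt) discretization error
```

An analog machine of the restricted kind solves such equations exactly in continuous time; the numerical sketch only illustrates what "solving a linear ODE" produces.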
A Continuous-Time Hopfield Net Simulation of Discrete Neural Networks
, 2000
Abstract

Cited by 5 (2 self)
We investigate the computational power of continuous-time symmetric Hopfield nets. As is well known, such networks have very constrained, Liapunov-function controlled dynamics. Nevertheless, we show that they are universal and efficient computational devices, in the sense that any convergent fully parallel computation by a network of n discrete-time binary neurons, with in general asymmetric interconnections, can be simulated by a symmetric continuous-time Hopfield net containing only 14n + 6 units using the saturated-linear sigmoid activation function. In terms of standard discrete computation models, this result implies that any polynomially space-bounded Turing machine can be simulated by a polynomially size-increasing sequence of continuous-time Hopfield nets.
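The "Liapunov-function controlled dynamics" constraint can be seen in a minimal numerical sketch (assumptions mine, not the paper's 14n + 6 construction): treat a symmetric net as projected gradient flow on the energy E(x) = -½ xᵀWx - bᵀx, with states clipped to [0, 1] by the saturated-linear nonlinearity. With symmetric W and a small enough step, E never increases along a trajectory:

```python
import numpy as np

def energy(W, b, x):
    # Standard Hopfield-style energy for symmetric coupling W.
    return -0.5 * x @ W @ x - b @ x

def simulate(W, b, x0, dt=0.01, steps=2000):
    x = x0.copy()
    energies = [energy(W, b, x)]
    for _ in range(steps):
        # Euler step of dx/dt = Wx + b (the negative energy gradient),
        # projected onto the unit box by the saturated-linear clip.
        x = np.clip(x + dt * (W @ x + b), 0.0, 1.0)
        energies.append(energy(W, b, x))
    return x, energies

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
W = A + A.T                      # symmetric coupling weights
b = rng.standard_normal(5)
x, energies = simulate(W, b, rng.uniform(0.0, 1.0, 5))
# Energy is (numerically) monotone nonincreasing along the trajectory:
print(all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:])))
```

The paper's point is that even under this monotone-energy constraint, a suitable network can simulate arbitrary convergent discrete computations.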
Continuous-Time Symmetric Hopfield Nets Are Computationally Universal
Abstract

Cited by 3 (1 self)
We establish a fundamental result in the theory of computation by continuous-time dynamical systems, by showing that systems corresponding to so-called continuous-time symmetric Hopfield nets are capable of general computation. As is well known, such networks have very constrained, Liapunov-function controlled dynamics. Nevertheless, we show that they are universal and efficient computational devices, in the sense that any convergent synchronous fully parallel computation by a recurrent network of n discrete-time binary neurons, with in general asymmetric coupling weights, can be simulated by a symmetric continuous-time Hopfield net containing only 18n + 7 units employing the saturated-linear activation function. Moreover, if the asymmetric network has maximum integer weight size w_max and converges in discrete time t*, then the corresponding Hopfield net can be designed to operate in continuous time Θ(t*/ε), for any ε > 0...
Energy-Based Computation with Symmetric Hopfield Nets
Abstract

Cited by 2 (0 self)
We propose a unifying approach to the analysis of computational aspects of symmetric Hopfield nets which is based on the concept of "energy source". Within this framework we present different results concerning the computational power of various Hopfield model classes. It is shown that polynomial-time computations by nondeterministic Turing machines can be reduced to the process of minimizing the energy in Hopfield nets (the MIN ENERGY problem). Furthermore, external and internal sources of energy are distinguished. The external sources include e.g. energizing inputs from so-called Hopfield languages, and also certain external oscillators that prove finite analog Hopfield nets to be computationally Turing universal. On the other hand, the internal source of energy can be implemented by a symmetric clock subnetwork producing an exponential number of oscillations which are used to energize the simulation of convergent asymmetric networks by Hopfield nets. This shows that infinite families of polynomial-size Hopfield nets compute the complexity class PSPACE/poly. Special attention is paid to generalizing these results for analog states and continuous time to point out alternative sources of efficient computation.
A Computational Taxonomy and Survey of Neural Network Models
, 2001
Abstract

Cited by 2 (0 self)
We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their computational characteristics. The criteria of classification include e.g. the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc. The underlying results concerning the computational power of perceptron, RBF, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature.
General Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
, 2003
Abstract

Cited by 2 (0 self)
We survey and summarize the existing literature on the computational aspects of neural network models, by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include e.g. the architecture of the network (feedforward vs. recurrent), time model (discrete vs. continuous), state type (binary vs. analog), weight constraints (symmetric vs. asymmetric), network size (finite nets vs. infinite families), computation type (deterministic vs. probabilistic), etc.