Results 1–10 of 14
A Survey of Continuous-Time Computation Theory
 Advances in Algorithms, Languages, and Complexity
, 1997
Abstract
Cited by 29 (6 self)
Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions. 1 Introduction After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution", which has provided hardware designers with several new numerically based, computationally interesting models that are structurally simple enough to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45].) However, the more fundamental...
Analog Computation with Dynamical Systems
 Physica D
, 1997
Abstract
Cited by 21 (0 self)
This paper presents a theory that enables one to interpret natural processes as special-purpose analog computers. Since physical systems are naturally described in continuous time, a definition of computational complexity for continuous-time systems is required. In analogy with the classical discrete theory, we develop the fundamentals of computational complexity for dynamical systems, discrete or continuous in time, on the basis of an intrinsic time scale of the system. Dissipative dynamical systems are classified into the computational complexity classes P_d, CoRP_d, NP_d ...
A theory of complexity for continuous time systems
 Journal of Complexity
, 2002
Abstract
Cited by 16 (0 self)
We present a model of computation with ordinary differential equations (ODEs) which converge to attractors that are interpreted as the output of a computation. We introduce a measure of complexity for exponentially convergent ODEs, enabling an algorithmic analysis of continuous-time flows and their comparison with discrete algorithms. We define polynomial and logarithmic continuous-time complexity classes and show that an ODE which solves the maximum network flow problem has polynomial time complexity. We also analyze a simple flow that solves the Maximum problem in logarithmic time. We conjecture that a subclass of continuous P is equivalent to the classical P. Key Words: theory of analog computation; dynamical systems.
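The attractor-as-output idea can be illustrated with a small sketch: a replicator-type flow, integrated from the uniform starting point, converges to the simplex vertex of the largest entry, so reading off the attractor solves the Maximum problem. This flow, the function name, and the step sizes are our own illustrative choices, not the exact flow analyzed in the paper.

```python
import numpy as np

def replicator_max(c, dt=1e-3, steps=20000):
    """Integrate the replicator flow dx_i/dt = x_i * (c_i - c.x).
    From the uniform point of the simplex the trajectory converges
    to the vertex of the largest entry of c, so the attractor
    'outputs' the argmax.  (Illustrative stand-in, not the paper's
    exact flow.)"""
    c = np.asarray(c, dtype=float)
    x = np.full(len(c), 1.0 / len(c))
    for _ in range(steps):
        x = x + dt * x * (c - c @ x)  # forward-Euler step
    return x

c = [2.0, 5.0, 3.0]
x = replicator_max(c)
print(int(np.argmax(x)))  # → 1, the index of the largest entry of c
```

The trajectory stays (approximately) on the probability simplex, and after time T = dt * steps essentially all mass sits on the winning coordinate.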
The Computational Power of Continuous Time Neural Networks
 In Proc. SOFSEM'97, the 24th Seminar on Current Trends in Theory and Practice of Informatics, Lecture Notes in Computer Science
, 1995
Abstract
Cited by 14 (8 self)
We investigate the computational power of continuous-time neural networks with Hopfield-type units. We prove that polynomial-size networks with saturated-linear response functions are at least as powerful as polynomially space-bounded Turing machines. 1 Introduction In a paper published in 1984 [11], John Hopfield introduced a continuous-time version of the neural network model whose discrete-time variant he had discussed in his seminal 1982 paper [10]. The 1984 paper also contains an electronic implementation scheme for the continuous-time networks, and an argument showing that for sufficiently large-gain nonlinearities, these behave similarly to the discrete-time ones, at least when used as associative memories. The power of Hopfield's discrete-time networks as general-purpose computational devices was analyzed in [17, 18]. In this paper we conduct a similar analysis for networks consisting of Hopfield's continuous-time units; however, we are at this stage able to analyze only the gen...
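The continuous-time dynamics with a saturated-linear response can be sketched as a forward-Euler integration of dy/dt = -y + sat(Wy + θ); the weights, thresholds, and integration parameters below are illustrative choices of ours, not the construction used in the paper.

```python
import numpy as np

def sat(x):
    """Saturated-linear response function: identity clipped to [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def hopfield_ct(W, theta, y0, dt=1e-2, steps=5000):
    """Forward-Euler integration of the continuous-time Hopfield
    dynamics dy/dt = -y + sat(W @ y + theta)."""
    y = np.asarray(y0, dtype=float)
    theta = np.asarray(theta, dtype=float)
    for _ in range(steps):
        y = y + dt * (-y + sat(W @ y + theta))
    return y

# Two mutually excitatory units drive each other into saturation,
# settling at the fixed point (1, 1).
W = np.array([[0.0, 2.0],
              [2.0, 0.0]])
theta = np.array([0.5, 0.5])
y_final = hopfield_ct(W, theta, [0.1, 0.1])
print(y_final)  # ≈ [1. 1.]
```

Once a unit's net input exceeds 1, the saturation pins its target at 1 and the unit relaxes there exponentially, which is how such networks hold stable binary-like states.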
Computations via experiments with kinematic systems
, 2004
Abstract
Cited by 13 (4 self)
Consider the idea of computing functions using experiments with kinematic systems. We prove that for any set A of natural numbers there exists a 2-dimensional kinematic system B_A with a single particle P whose observable behaviour decides n ∈ A for all n ∈ N. The system is a bagatelle and can be designed to operate under (a) Newtonian mechanics or (b) relativistic mechanics. The theorem proves that valid models of mechanical systems can compute all possible functions on discrete data. The proofs show how any information (coded by some A) can be embedded in the structure of a simple kinematic system and retrieved by simple observations of its behaviour. We reflect on this undesirable situation and argue that mechanics must be extended to include a formal theory for performing experiments, which includes the construction of systems. We conjecture that in such an extended mechanics the functions computed by experiments are precisely those computed by algorithms. We set these theorems and ideas in the context of the literature on the general problem "Is physical behaviour computable?" and state some open problems.
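The embedding idea in this abstract (coding an arbitrary set A into a single parameter of a system and recovering membership by simple observation) can be illustrated in a few lines. This toy binary encoding is ours and is not the paper's kinematic bagatelle; it only shows how one real parameter can carry the membership bits of A.

```python
def encode(A, precision_bits):
    """Pack the membership bits of A (for n < precision_bits) into
    a single real number r = sum of 2^-(n+1) over n in A."""
    return sum(2.0 ** -(n + 1) for n in A if n < precision_bits)

def observe(r, n):
    """Recover whether n is in A by reading off bit n of the
    parameter, mimicking a 'simple observation' of the system."""
    return int(r * 2 ** (n + 1)) % 2 == 1

A = {0, 2, 5}
r = encode(A, 16)
print([n for n in range(8) if observe(r, n)])  # → [0, 2, 5]
```

The dyadic values used here are exactly representable in floating point, so the decoded set matches the encoded one bit for bit.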
Grounding Analog Computers
 Think
, 1993
Abstract
Cited by 12 (7 self)
Although analog computation was eclipsed by digital computation in the second half of the twentieth century, it is returning as an important alternative computing technology. Indeed, as explained in this report, theoretical results imply that analog computation can escape from the limitations of digital computation. Furthermore, analog computation has emerged as an important theoretical framework for discussing computation in the brain and other natural systems. The report (1) summarizes the fundamentals of analog computing, starting with the continuous state space and the various processes by which analog computation can be organized in time; (2) discusses analog computation in nature, which provides models and inspiration for many contemporary uses of analog computation, such as neural networks; (3) considers general-purpose analog computing, both from a theoretical perspective and in terms of practical general-purpose analog computers; (4) discusses the theoretical power of...
An Overview Of The Computational Power Of Recurrent Neural Networks
 Proceedings of the 9th Finnish AI Conference STeP 2000{Millennium of AI, Espoo, Finland (Vol. 3: "AI of Tomorrow": Symposium on Theory, Finnish AI Society
, 2000
Abstract
Cited by 10 (3 self)
INTRODUCTION The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used for example in pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. Historically, the brain theory interest was primary [32], but with the advances in computer technology, the application potential of the statistical modeling techniques has shifted the balance. The study of neural networks as general computational devices does not strictly follow this division of interests: rather, it provides a general framework outlining the limitations and possibilities affecting both research domains. The prime historic example here is obviously Minsky and Papert's 1969 study of the computational limitations of single-layer perceptrons [34], which was a major influence in turning interest away from neural network learning to symbolic AI techniques for more...
Probabilistic analysis of a differential equation for linear programming
 Journal of Complexity
, 2003
Continuous-Time Symmetric Hopfield Nets Are Computationally Universal
Abstract
Cited by 3 (1 self)
We establish a fundamental result in the theory of computation by continuous-time dynamical systems, by showing that systems corresponding to so-called continuous-time symmetric Hopfield nets are capable of general computation. As is well known, such networks have very constrained, Liapunov-function-controlled dynamics. Nevertheless, we show that they are universal and efficient computational devices, in the sense that any convergent synchronous fully parallel computation by a recurrent network of n discrete-time binary neurons, with in general asymmetric coupling weights, can be simulated by a symmetric continuous-time Hopfield net containing only 18n+7 units employing the saturated-linear activation function. Moreover, if the asymmetric network has maximum integer weight size w_max and converges in discrete time t*, then the corresponding Hopfield net can be designed to operate in continuous time Θ(t*/ε), for any ε > 0...
Computing with Continuous-Time Liapunov Systems
Abstract
Cited by 1 (0 self)
We establish a fundamental result in the theory of computation by continuous-time dynamical systems, by showing that systems corresponding to so-called continuous-time symmetric Hopfield nets are capable of general computation. More precisely, we prove that any function computed by a discrete-time asymmetric recurrent network of n threshold gates can also be computed by a continuous-time symmetrically-coupled Hopfield system of dimension 18n + 7. Moreover, if the threshold logic network has maximum weight w_max and converges in discrete time t*, then the corresponding Hopfield system can be designed to operate in continuous time Θ(t*/ε), for any value 0 < ε < 0.0025 such that w_max 2^{3n} ε^2 ≤ 1/ε. The result appears at first sight counterintuitive, because the dynamics of any symmetric Hopfield system is constrained by a Liapunov, or energy, function defined on its state space. In particular, such a system always converges from any initial state towards some stable equilibrium state, and hence cannot exhibit non-damping oscillations; strictly speaking, it cannot simulate even a single alternating bit. However, we show that if one considers only terminating computations, then the Liapunov constraint can be overcome, and one can in fact embed arbitrarily complicated computations in the dynamics of Liapunov systems with only a modest cost in the system's dimensionality. In terms of standard discrete computation models, our result implies that any polynomially space-bounded Turing machine can be simulated by a family of polynomial-size continuous-time symmetric Hopfield nets.
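The Liapunov constraint mentioned above can be checked numerically: for symmetric coupling and saturated-linear units, the standard Hopfield energy function is non-increasing along every trajectory of dx/dt = -x + W·sat(x) + θ. A minimal sketch follows, with weights and initial state chosen arbitrarily for illustration (not taken from the paper's construction).

```python
import numpy as np

def sat(x):
    """Saturated-linear response: identity clipped to [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def energy(V, W, theta):
    """Hopfield Liapunov function for saturated-linear units:
    E(V) = 1/2 V.V - 1/2 V.W.V - theta.V.  (The usual integral
    term reduces to 1/2 V.V because sat is the identity on [0, 1].)"""
    return 0.5 * V @ V - 0.5 * V @ W @ V - theta @ V

# Symmetric coupling, as required by the Liapunov argument.
W = np.array([[ 0.0, 1.5, -0.5],
              [ 1.5, 0.0,  1.0],
              [-0.5, 1.0,  0.0]])
theta = np.array([0.2, -0.1, 0.3])
x = np.array([0.9, 0.2, 0.6])

dt, energies = 1e-3, []
for _ in range(10000):
    V = sat(x)
    energies.append(energy(V, W, theta))
    x = x + dt * (-x + W @ V + theta)  # forward-Euler step

# Energy is (numerically) non-increasing along the trajectory.
print(all(b <= a + 1e-9 for a, b in zip(energies, energies[1:])))
```

Along the flow, dE/dt = -Σ_i σ'(x_i) (dx_i/dt)² ≤ 0, which is exactly why such a net cannot sustain oscillations and why embedding computation in it requires the terminating-computation trick described in the abstract.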