Results 1–6 of 6
A Survey of Continuous-Time Computation Theory
 Advances in Algorithms, Languages, and Complexity
, 1997
Abstract

Cited by 29 (6 self)
Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions.

1 Introduction

After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution", which has provided hardware designers with several new numerically based, computationally interesting models that are structurally sufficiently simple to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45]). However, the more fundamental...
A theory of complexity for continuous time systems
 Journal of Complexity
, 2002
Abstract

Cited by 16 (0 self)
We present a model of computation with ordinary differential equations (ODEs) which converge to attractors that are interpreted as the output of a computation. We introduce a measure of complexity for exponentially convergent ODEs, enabling an algorithmic analysis of continuous time flows and their comparison with discrete algorithms. We define polynomial and logarithmic continuous time complexity classes and show that an ODE which solves the maximum network flow problem has polynomial time complexity. We also analyze a simple flow that solves the Maximum problem in logarithmic time. We conjecture that a subclass of the continuous P is equivalent to the classical P.
© 2001 Elsevier Science (USA). Key Words: theory of analog computation; dynamical systems.
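The complexity measure described in this abstract can be illustrated with a toy exponentially convergent flow. A minimal sketch, not the paper's network-flow ODE: the scalar flow dx/dt = c − x and its Euler integration are illustrative assumptions, chosen only to show that the time to enter an eps-vicinity of the attractor grows like log(1/eps).

```python
def time_to_vicinity(x0, c, dt=1e-3, eps=1e-6):
    """Euler-integrate the toy flow dx/dt = c - x, which converges
    exponentially to the attractor x* = c, and return the continuous
    time needed to enter an eps-vicinity of the attractor."""
    x, t = x0, 0.0
    while abs(x - c) > eps:
        x += dt * (c - x)
        t += dt
    return t

# The "computation time" grows like log(1/eps): squaring the
# precision (eps -> eps^2) doubles the time, the hallmark of an
# exponentially convergent flow.
t_coarse = time_to_vicinity(0.0, 1.0, eps=1e-3)   # ~ ln(1e3) ≈ 6.9
t_fine = time_to_vicinity(0.0, 1.0, eps=1e-6)     # ~ ln(1e6) ≈ 13.8
```

Reading the attractor as the output, the complexity of the flow is this convergence time as a function of problem size and precision.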
Probabilistic analysis of a differential equation for linear programming
 Journal of Complexity
, 2003
Asa Ben-Hur, Joshua Feinberg, Shmuel Fishman
, 2001
"... In this paper we address the complexity of solving linear programming problems with a set of differential equations that converge to a fixed point that represents the optimal solution. Assuming a probabilistic model, where the inputs are i.i.d. Gaussian variables, we compute the distribution of the ..."
Abstract
In this paper we address the complexity of solving linear programming problems with a set of differential equations that converge to a fixed point that represents the optimal solution. Assuming a probabilistic model, where the inputs are i.i.d. Gaussian variables, we compute the distribution of the convergence rate to the attracting fixed point. Using the framework of Random Matrix Theory, we derive a simple expression for this distribution in the asymptotic limit of large problem size. In this limit, we find that the distribution of the convergence rate is a scaling function, namely it is a function of one variable that is a combination of three parameters: the number of variables, the number of constraints and the convergence rate, rather than a function of these parameters separately. We also estimate numerically the distribution of computation times, namely the time required to reach a vicinity of the attracting fixed point, and find that it is also a scaling function. Using the problem size dependence of the distribution functions, we derive high probability bounds on the convergence rates and on the computation times.
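The random-matrix framework mentioned in this abstract can be mimicked with a small Monte Carlo experiment. A hedged sketch, not the paper's LP dynamics: the linear flow dx/dt = −(GᵀG/m)x, the sizes n and m, and the reading of the smallest eigenvalue as the convergence rate are all illustrative assumptions.

```python
import numpy as np

def convergence_rates(n=20, m=40, samples=200, seed=0):
    """Sample convergence rates of the linear flow dx/dt = -(G^T G / m) x
    with G an m x n i.i.d. Gaussian matrix. The slowest decay mode,
    i.e. the smallest eigenvalue of G^T G / m, sets the rate at which
    trajectories approach the attracting fixed point x* = 0."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(samples):
        G = rng.standard_normal((m, n))
        # eigvalsh returns eigenvalues in ascending order
        rates.append(np.linalg.eigvalsh(G.T @ G / m)[0])
    return np.array(rates)

rates = convergence_rates()
# For m = 2n the rates cluster near the Marchenko-Pastur lower edge
# (1 - sqrt(n/m))^2 ≈ 0.086, a scaling prediction in the spirit of
# the abstract's large-problem-size limit.
```

The point of the sketch is the methodology: drawing the problem data from an i.i.d. Gaussian ensemble turns the convergence rate into a random variable whose distribution can be studied as n and m grow.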
Analog-symbolic memory that tracks via reconsolidation
 Physica D (www.elsevier.com/locate/physd)
, 2008
Abstract
A fundamental part of a computational system is its memory, which is used to store and retrieve data. Classical computer memories rely on the static approach and are very different from human memories. Neural network memories are based on autoassociative attractor dynamics and thus provide a high level of pattern completion. However, they are not used in general computation since there are practically no algorithms to load an arbitrary landscape of attractors into them. In this sense neural network memory models cannot communicate well with symbolic and prior knowledge. We propose the design of a new memory based on localist attractor dynamics with reconsolidation called Reconsolidation Attractor Network (RAN). RAN combines symbolic and subsymbolic features in a very attractive way: it is based on attractors; enables pattern classification under missing data; and demonstrates dynamic reconsolidation, which is very useful for tracking changing concepts. The perception RAN enables is somewhat reminiscent of human perception due to its context sensitivity. Furthermore, it enables an immediate and clear interface with symbolic memories, including loading of attractors by means of trivial wiring, updating attractors, and retrieving them faster without waiting for full convergence. It also scales to any number of concepts. This provides a useful counterpoint to more conventional memory systems, such as random access memory and autoassociative neural networks.
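The localist-attractor idea behind this abstract can be sketched in a few lines. This is a generic softmax-weighted prototype dynamic, not the RAN model itself; the stored prototypes, the inverse temperature beta, and the 0.5 mixing step are illustrative assumptions.

```python
import numpy as np

def localist_step(x, prototypes, beta=4.0):
    """One update of a simple localist attractor dynamic: the state is
    pulled toward a softmax-weighted mixture of stored prototypes, so
    each prototype acts as an attractor. Loading a new concept is just
    appending a row -- the 'trivial wiring' interface to symbolic
    knowledge that the abstract highlights."""
    d = np.sum((prototypes - x) ** 2, axis=1)   # squared distances
    w = np.exp(-beta * (d - d.min()))           # numerically stable softmax
    w /= w.sum()
    return 0.5 * x + 0.5 * (w @ prototypes)

protos = np.array([[1.0, 1.0, 1.0],
                   [-1.0, -1.0, -1.0]])
x = np.array([1.0, 0.0, 0.9])       # corrupted pattern with missing data
for _ in range(60):
    x = localist_step(x, protos)
# x has completed the pattern toward the nearest prototype [1, 1, 1]
```

Because intermediate states are already meaningful mixtures of prototypes, a reader can see why such a memory admits early retrieval without waiting for full convergence.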