Results 1–10 of 100
Universal Computation and Other Capabilities of Hybrid and Continuous Dynamical Systems
1995
Cited by 69 (3 self)
Abstract: We explore the simulation and computational capabilities of hybrid and continuous dynamical systems. The continuous dynamical systems considered are ordinary differential equations (ODEs). For hybrid systems we concentrate on models that combine ODEs and discrete dynamics (e.g., finite automata). We review and compare four such models from the literature. Notions of simulation of a discrete dynamical system by a continuous one are developed. We show that hybrid systems whose equations can describe a precise binary timing pulse (exact clock) can simulate arbitrary reversible discrete dynamical systems defined on closed subsets of R^n. The simulations require continuous ODEs in R^2n with the exact clock as input. All four hybrid systems models studied here can implement exact clocks. We also prove that any discrete dynamical system in Z^n can be simulated by continuous ODEs in R^(2n+1). We use this to show that smooth ODEs in R^3 can simulate arbitrary Turing machines, and henc...
Learning in Linear Neural Networks: A Survey
IEEE Transactions on Neural Networks, 1995
Cited by 56 (4 self)
Abstract: Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) backpropagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized, as well as several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on backpropagation networks and a unified view of all unsupervised algorithms.
Keywords: linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation
I. Introduction. This paper addresses the problems of supervise...
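One of the unsupervised Hebbian algorithms this kind of survey unifies is Oja's rule, under which a single linear unit converges to the top principal component of its input distribution. A minimal sketch, in which the input covariance and the learning rate are illustrative assumptions rather than anything taken from the paper:

```python
import numpy as np

# Oja's rule for one linear unit y = w.x:
#   w <- w + eta * y * (x - y * w)
# The Hebbian term eta*y*x grows w along the top principal direction;
# the decay term -eta*y^2*w keeps ||w|| near 1.
rng = np.random.default_rng(3)
C = np.array([[3.0, 1.0],
              [1.0, 1.0]])                 # assumed input covariance
Lc = np.linalg.cholesky(C)

w = rng.standard_normal(2)
w /= np.linalg.norm(w)

eta = 0.01
for _ in range(20000):
    x = Lc @ rng.standard_normal(2)        # sample with covariance C
    y = w @ x                              # linear unit output
    w = w + eta * y * (x - y * w)          # Hebbian update with decay

top = np.linalg.eigh(C)[1][:, -1]          # true principal eigenvector of C
print(abs(w @ top))                        # close to 1: w aligned with top PC
```

The decay term is what distinguishes Oja's rule from plain Hebbian learning, whose weight vector would grow without bound.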
Numerical Solution of Isospectral Flows
Math. Comp., 1997
Cited by 50 (23 self)
Abstract: In this paper we are concerned with the problem of solving isospectral flows numerically. These flows are characterized by the differential equation L′ = [B(L), L], L(0) = L0, where L0 is a d × d symmetric matrix, B(L) is a skew-symmetric matrix function of L, and [B, L] is the Lie bracket operator. We show that standard Runge–Kutta schemes fail in recovering the main qualitative feature of these flows, that is, isospectrality, since they cannot recover arbitrary cubic conservation laws. This failure motivates us to introduce an alternative approach and establish a framework for the generation of isospectral methods of arbitrarily high order.
1. Background and notation. 1.1. Introduction. The interest in solving isospectral flows is motivated by their relevance in a wide range of applications, from molecular dynamics to micromagnetics to linear algebra. The general form of an isospectral flow is the differential
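The failure of generic one-step methods is easy to observe numerically. The sketch below (our construction, not the paper's scheme) integrates the flow L′ = [B(L), L] with a plain Euler step and with a similarity-based step L ← Q L Qᵀ, where Q is the Cayley transform of the skew-symmetric generator and hence exactly orthogonal. The choice B(L) = [N, L] with N diagonal (Brockett's double-bracket flow) is an illustrative assumption; any skew-symmetric B(L) would do.

```python
import numpy as np

def bracket(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))
L0 = (A + A.T) / 2                         # symmetric initial matrix
N = np.diag(np.arange(1.0, d + 1.0))       # diagonal N => B(L) = [N, L] is skew
I = np.eye(d)

h, steps = 0.01, 300
target = np.sort(np.linalg.eigvalsh(L0))   # spectrum the flow should preserve

L_euler, L_iso = L0.copy(), L0.copy()
for _ in range(steps):
    # Generic one-step (Euler) update: not a similarity transform.
    B_e = bracket(N, L_euler)
    L_euler = L_euler + h * bracket(B_e, L_euler)
    # Isospectral update: Cayley transform of a skew matrix is orthogonal,
    # so L -> Q L Q^T is an exact similarity and keeps the spectrum.
    B_i = bracket(N, L_iso)
    Q = np.linalg.solve(I - (h / 2) * B_i, I + (h / 2) * B_i)
    L_iso = Q @ L_iso @ Q.T

drift_euler = np.max(np.abs(np.sort(np.linalg.eigvalsh(L_euler)) - target))
drift_iso = np.max(np.abs(np.sort(np.linalg.eigvalsh(L_iso)) - target))
print(drift_euler, drift_iso)              # Euler drifts; similarity step does not
```

The similarity-based step preserves the spectrum to machine precision by construction, which is exactly the structural property that no classical Runge–Kutta scheme can guarantee.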
Inverse Eigenvalue Problems
SIAM Rev., 1998
Cited by 43 (7 self)
Abstract: A collection of inverse eigenvalue problems are identified and classified according to their characteristics. Current developments in both the theoretic and the algorithmic aspects are summarized and reviewed in this paper. This exposition also reveals many open questions that deserve further study. An extensive bibliography of pertinent literature is attached.
A Survey of Continuous-Time Computation Theory
Advances in Algorithms, Languages, and Complexity, 1997
Cited by 29 (6 self)
Abstract: Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions.
1. Introduction. After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution", which has provided hardware designers with several new numerically based, computationally interesting models that are structurally sufficiently simple to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45].) However, the more fundamental...
A Theory for Learning by Weight Flow on Stiefel-Grassman Manifold
Neural Computation, 2001
Cited by 26 (13 self)
Abstract: Recently we introduced the concept of neural networks learning on the Stiefel-Grassman manifold for MLP-like networks. Contributions of other authors have also appeared in the scientific literature on this topic. The aim of this paper is to present a general theory for it, and to illustrate how existing theories may be explained within the general framework proposed here.
1. Introduction. In a multilayer-perceptron-like network formed by the interconnection of basic neurons, whose only adjustable part consists of weight vectors, learning the optimal set of connection patterns may be interpreted as selecting the best directions among all possible ones in the space that the weight vectors belong to (Fyfe, 1995). This interpretation is very useful, in that if a learning error criterion is defined over the weight space, it measures how interesting directions are, so that ultimately the rule with which the network learns may be conceived as a searching procedure allowing one to find out...
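The basic ingredient of any such learning flow is a gradient step that respects the manifold constraint. A minimal sketch (our construction, not the paper's algorithm): project the Euclidean gradient onto the tangent space of the Stiefel manifold St(n, p) = {W : WᵀW = I}, step, then retract back via a QR factorization. The loss f(W) = −trace(WᵀAW), a Rayleigh-quotient objective, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                              # assumed symmetric data matrix

W, _ = np.linalg.qr(rng.standard_normal((n, p)))   # feasible start: W^T W = I

def riemannian_step(W, eta=0.1):
    G = -2.0 * A @ W                           # Euclidean gradient of f(W)
    # Tangent-space projection at W: remove the symmetric part of W^T G.
    sym = (W.T @ G + G.T @ W) / 2
    xi = G - W @ sym
    # QR retraction: orthonormalize the perturbed point back onto St(n, p).
    Q, _ = np.linalg.qr(W - eta * xi)
    return Q

for _ in range(50):
    W = riemannian_step(W)

print(np.linalg.norm(W.T @ W - np.eye(p)))     # stays on the manifold
```

The point of the projection/retraction pair is that ordinary gradient descent in the ambient space would immediately leave the constraint set, whereas here WᵀW = I holds after every step.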
Consensus Optimization on Manifolds
Vol. 48, No. 1, pp. 56–76, © 2009 Society for Industrial and Applied Mathematics
Cited by 25 (8 self)
Abstract: The present paper considers distributed consensus algorithms that involve N agents evolving on a connected compact homogeneous manifold. The agents track no external reference and communicate their relative state according to a communication graph. The consensus problem is formulated in terms of the extrema of a cost function. This leads to efficient gradient algorithms to synchronize (i.e., maximize the consensus) or balance (i.e., minimize the consensus) the agents; a convenient adaptation of the gradient algorithms is used when the communication graph is directed and time-varying. The cost function is linked to a specific centroid definition on manifolds, introduced here as the induced arithmetic mean, that is easily computable in closed form and may be of independent interest for a number of manifolds. The special orthogonal group SO(n) and the Grassmann manifold Grass(p, n) are treated as original examples. A link is also drawn with the many existing results on the circle.
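The simplest instance of this setting is consensus on the circle S¹. With the edge cost 1 − cos(θⱼ − θᵢ), gradient descent gives the Kuramoto-type flow θᵢ′ = Σⱼ sin(θⱼ − θᵢ). The sketch below (a generic illustration, not the paper's algorithm) uses a complete graph, identical agents, and an initial condition inside a half-circle, all illustrative assumptions under which synchronization is guaranteed:

```python
import numpy as np

rng = np.random.default_rng(2)
N, h, steps = 6, 0.05, 400
theta = rng.uniform(-1.0, 1.0, N)      # all agents within a half-circle

for _ in range(steps):
    diffs = theta[None, :] - theta[:, None]        # theta_j - theta_i, all pairs
    theta = theta + h * np.sin(diffs).sum(axis=1)  # Euler step of gradient flow

spread = np.ptp(theta)                 # max pairwise disagreement
print(spread)                          # near 0: agents synchronized
```

Balancing (minimizing the consensus) corresponds to gradient flow on the same cost with the opposite sign, which drives the agents apart rather than together.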
The Method of Iterated Commutators for Ordinary Differential Equations on Lie Groups
1996
Cited by 22 (5 self)
Abstract: We construct numerical methods to integrate ordinary differential equations that evolve on Lie groups. These schemes are based on exponentials and iterated commutators; they are explicit, and their order analysis is relatively simple. Thus we can construct group-invariant integrators of arbitrarily high order. Among other applications, we show that this approach can be used to obtain new symplectic schemes when applied to Hamiltonian problems. Some numerical experiments are presented.
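The group-invariance these exponential-based schemes buy is easy to see in the simplest case, the first-order Lie-Euler method Yₙ₊₁ = exp(hA)Yₙ on SO(3) (a generic illustration, not the paper's higher-order iterated-commutator schemes). A fixed skew-symmetric A is an illustrative assumption; Rodrigues' formula gives the exponential in closed form.

```python
import numpy as np

def expm_so3(S):
    # Rodrigues' formula for the exponential of a 3x3 skew-symmetric matrix.
    w = np.array([S[2, 1], S[0, 2], S[1, 0]])
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3) + S
    return np.eye(3) + (np.sin(t) / t) * S + ((1 - np.cos(t)) / t**2) * (S @ S)

A = np.array([[ 0.0, -0.3,  0.2],
              [ 0.3,  0.0, -0.1],
              [-0.2,  0.1,  0.0]])       # element of the Lie algebra so(3)
h, steps = 0.1, 100

Y_lie, Y_euler = np.eye(3), np.eye(3)
E = expm_so3(h * A)                      # A is constant: one exponential suffices
for _ in range(steps):
    Y_lie = E @ Y_lie                    # group-invariant update: stays in SO(3)
    Y_euler = Y_euler + h * A @ Y_euler  # classical Euler: leaves the group

err_lie = np.linalg.norm(Y_lie.T @ Y_lie - np.eye(3))
err_euler = np.linalg.norm(Y_euler.T @ Y_euler - np.eye(3))
print(err_lie, err_euler)                # orthogonality kept vs. visible drift
```

Since exp(hA) is exactly orthogonal for skew-symmetric A, the Lie-Euler iterate satisfies YᵀY = I to machine precision at every step, while the classical Euler iterate drifts off the group at a rate governed by h².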
Analog Computation with Dynamical Systems
Physica D, 1997
Cited by 21 (0 self)
Abstract: This paper presents a theory that enables one to interpret natural processes as special-purpose analog computers. Since physical systems are naturally described in continuous time, a definition of computational complexity for continuous-time systems is required. In analogy with the classical discrete theory, we develop the fundamentals of computational complexity for dynamical systems, discrete or continuous in time, on the basis of an intrinsic time scale of the system. Dissipative dynamical systems are classified into the computational complexity classes P_d, CoRP_d, NP_d...