Results 1 - 6 of 6
Consensus on nonlinear spaces and graph coloring
2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC)
Continuous-time systems that solve computational problems
"... The concept of using continuous-time dynamical systems (described by ordinary differential equations) in order to solve computational problems is discussed, with an emphasis on convergence analysis and design procedures. The continuous-time approach is illustrated on concrete examples related to the ..."
Abstract
- Add to MetaCart
(Show Context)
The concept of using continuous-time dynamical systems (described by ordinary differential equations) in order to solve computational problems is discussed, with an emphasis on convergence analysis and design procedures. The continuous-time approach is illustrated on concrete examples related to the computation of eigenvalues and eigenvectors of matrices. Key words: continuous-time systems, ordinary differential equations, continuous-time algorithms, gradient flows, Rayleigh quotient gradient flow, double-bracket flow, eigenvalue problem.
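As a concrete illustration of the Rayleigh quotient gradient flow named in the keywords, here is a minimal sketch (not taken from the paper): a forward-Euler integration of dx/dt = (A − r(x)I)x, with r(x) the Rayleigh quotient, whose trajectories on the unit sphere converge to a dominant eigenvector of a symmetric matrix A. The function name, test matrix, step size, and horizon are illustrative choices.

```python
# Minimal sketch (illustrative, not from the cited paper): Rayleigh quotient
# gradient flow  dx/dt = (A - r(x) I) x  integrated by forward Euler.
import numpy as np

def rayleigh_flow(A, x0, dt=1e-3, t_final=20.0):
    """Forward-Euler integration of the Rayleigh quotient gradient flow."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(int(t_final / dt)):
        r = x @ A @ x                 # Rayleigh quotient (x is kept at unit norm)
        x = x + dt * (A @ x - r * x)  # ascent direction of the quotient on the sphere
        x = x / np.linalg.norm(x)     # project back onto the unit sphere
    return x, x @ A @ x               # eigenvector estimate and eigenvalue estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = (M + M.T) / 2                 # symmetric test matrix
    v, lam = rayleigh_flow(A, rng.standard_normal(5))
    print("flow estimate of largest eigenvalue:", lam)
    print("numpy's largest eigenvalue         :", np.linalg.eigvalsh(A)[-1])
```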
Computation with continuous-time dynamical systems
, 2005
"... In this note we review the concept of using continuous-time dynamical systems (described by ordinary differential equations) to solve computational problems. Many scientific computing problems (such as weather prediction, structural analysis, electrical networks analysis) strongly rely on matrix com ..."
Abstract
- Add to MetaCart
In this note we review the concept of using continuous-time dynamical systems (described by ordinary differential equations) to solve computational problems. Many scientific computing problems (such as weather prediction, structural analysis, electrical networks analysis) strongly rely on matrix computation algorithms (linear system solving, eigenvalue decomposition, singular value decomposition, matrix nearness problems, balancing of linear systems, joint diagonalization of matrices, ...). These algorithms often assume the form of a successive iteration, x(k + 1) = G(x(k)), (1) which can be viewed as a dynamical system, where the state x depends on the “time” k that takes integer values. Equation (1) is thus a discrete-time (DT) system. A sequence of points {x(k)}, k = −∞, ..., ∞, satisfying (1) is called the orbit of G based at x(0). A simple example of a DT dynamical system is the power method, x(k + 1) = Ax(k), (2) which computes the dominant eigenvector of the matrix A, i.e., the orbit x(k) converges to an eigendirection of A as k goes to infinity. This and other iterations for matrix computation ...
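For concreteness, here is a minimal sketch of the power iteration x(k + 1) = Ax(k) from the abstract, with the usual per-step rescaling so the iterates stay bounded while their direction converges to the dominant eigendirection. The test matrix, iteration count, and function name are illustrative choices, not from the cited note.

```python
# Minimal sketch of the power method, the DT dynamical system (2) in the abstract.
import numpy as np

def power_method(A, x0, iters=200):
    """Power iteration x(k+1) = A x(k), rescaled each step; the direction converges."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x = A @ x                   # one step of the iteration x(k+1) = A x(k)
        x = x / np.linalg.norm(x)   # rescaling keeps the iterates bounded
    return x, x @ A @ x             # eigenvector estimate and its Rayleigh quotient

if __name__ == "__main__":
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    v, lam = power_method(A, np.array([1.0, 0.0]))
    print("dominant eigenvalue estimate:", lam)   # ~3.618, matches np.linalg.eigvalsh(A)[-1]
```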
unknown title
, 2006
"... This paper studies the relations between the local minima of a cost function f and the stable equilibria of the gradient descent flow of f. In particular, it is shown that, under the assumption that f is real analytic, local minimality is necessary and sufficient for stability. Under the weaker assu ..."
Abstract
- Add to MetaCart
(Show Context)
This paper studies the relations between the local minima of a cost function f and the stable equilibria of the gradient descent flow of f. In particular, it is shown that, under the assumption that f is real analytic, local minimality is necessary and sufficient for stability. Under the weaker assumption that f is infinitely continuously differentiable (C∞), local minimality is neither necessary nor sufficient for stability. Key words: Gradient flow, Lyapunov stability, cost function, local minimum.
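As a hypothetical illustration of the correspondence studied here (the example and all names below are not from the paper): integrating the gradient descent flow x′ = −∇f(x) for the real-analytic cost f(x, y) = (x² − 1)² + y², whose local minima (±1, 0) are, by the result for analytic f, stable equilibria of the flow.

```python
# Illustrative sketch (assumed example, not from the cited paper): gradient descent
# flow  z' = -grad f(z)  for the real-analytic cost f(x, y) = (x^2 - 1)^2 + y^2.
from scipy.integrate import solve_ivp

def minus_grad_f(t, z):
    """Right-hand side of the gradient flow for f(x, y) = (x^2 - 1)^2 + y^2."""
    x, y = z
    return [-4.0 * x * (x**2 - 1.0), -2.0 * y]

if __name__ == "__main__":
    # Start in the basin of the local minimizer (1, 0); the trajectory settles
    # at that minimum, which is a stable equilibrium of the flow.
    sol = solve_ivp(minus_grad_f, (0.0, 50.0), [0.3, 0.8], rtol=1e-9, atol=1e-9)
    print("final state:", sol.y[:, -1])   # approximately [1.0, 0.0]
```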