Lie-group methods
 ACTA NUMERICA
, 2000
"... Many differential equations of practical interest evolve on Lie groups or on manifolds acted upon by Lie groups. The retention of Liegroup structure under discretization is often vital in the recovery of qualitatively correct geometry and dynamics and in the minimization of numerical error. Having ..."
Abstract

Cited by 96 (18 self)
 Add to MetaCart
Many differential equations of practical interest evolve on Lie groups or on manifolds acted upon by Lie groups. The retention of Lie-group structure under discretization is often vital in the recovery of qualitatively correct geometry and dynamics and in the minimization of numerical error. Having introduced requisite elements of differential geometry, this paper surveys the novel theory of numerical integrators that respect Lie-group structure, highlighting theory, algorithmic issues and a number of applications.
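As a concrete illustration of the idea this survey develops, here is a minimal numpy sketch (not taken from the survey; function and variable names are illustrative) of the simplest Lie-group integrator, the Lie-Euler method. It advances Y' = A(t)Y on SO(3) by a group exponential each step, so the numerical solution stays exactly orthogonal regardless of step size:

```python
import numpy as np

def hat(w):
    """Map a 3-vector to the corresponding skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(W):
    """Rodrigues formula: exact exponential of a skew-symmetric 3x3 matrix."""
    w = np.array([W[2, 1], W[0, 2], W[1, 0]])
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + W
    return (np.eye(3) + np.sin(theta) / theta * W
            + (1.0 - np.cos(theta)) / theta**2 * (W @ W))

def lie_euler(A, Y0, t0, t1, n):
    """First-order Lie-group Euler method: Y_{k+1} = exp(h*A(t_k)) @ Y_k.
    Every update is a group exponential, so Y remains in SO(3)."""
    h = (t1 - t0) / n
    Y = Y0.copy()
    for k in range(n):
        Y = expm_so3(h * A(t0 + k * h)) @ Y
    return Y

# Hypothetical coefficient matrix: a time-dependent angular velocity.
A = lambda t: hat(np.array([np.sin(t), 0.2, np.cos(t)]))
Y = lie_euler(A, np.eye(3), 0.0, 1.0, 100)
# Orthogonality is preserved to round-off, unlike classical Runge-Kutta.
print(np.linalg.norm(Y.T @ Y - np.eye(3)))
```

A classical explicit method applied componentwise would drift off the group; here the structure preservation is built into the update itself.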
On the Global Error of Discretization Methods for Highly Oscillatory Ordinary Differential Equations
, 2000
"... Commencing from a globalerror formula, originally due to Henrici, we investigate the accumulation of global error in the numerical solution of linear highlyoscillating systems of the form y 00 + g(t)y = 0, where g(t) t!1 \Gamma! 1. Using WKB analysis we derive an explicit form of the globalerror ..."
Abstract

Cited by 21 (5 self)
 Add to MetaCart
Commencing from a global-error formula, originally due to Henrici, we investigate the accumulation of global error in the numerical solution of linear highly oscillatory systems of the form y'' + g(t)y = 0, where g(t) → ∞ as t → ∞. Using WKB analysis we derive an explicit form of the global-error envelope for Runge-Kutta and Magnus methods. Our results are closely matched by numerical experiments. Motivated by the superior performance of Lie-group methods, we present a modification of the Magnus expansion which displays even better long-term behaviour in the presence of oscillations.
Improved high order integrators based on the Magnus expansion
 BIT
, 1999
"... We build high order efficient numerical integration methods for solving the linear differential equation X = A(t)X based on Magnus expansion. These methods ..."
Abstract

Cited by 17 (3 self)
 Add to MetaCart
We build high-order efficient numerical integration methods for solving the linear differential equation X' = A(t)X based on the Magnus expansion. These methods ...
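The abstract is truncated, but the standard fourth-order member of this family of integrators is easy to sketch. The following numpy code (an illustrative reconstruction of the classical scheme, not the paper's own optimized methods; the helper `expm` is ours) samples A at the two Gauss-Legendre nodes of each step and adds a single commutator correction:

```python
import numpy as np

def expm(M):
    """Dense matrix exponential via scaling-and-squaring of a truncated
    Taylor series (adequate for the small, well-scaled matrices used here)."""
    norm = np.linalg.norm(M, 1)
    s = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    A = M / 2**s
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, 15):
        T = T @ A / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def magnus4_step(A, t, h):
    """One step of the classical fourth-order Magnus integrator: two
    Gauss-Legendre samples of A plus one commutator correction."""
    c = np.sqrt(3.0) / 6.0
    A1 = A(t + (0.5 - c) * h)
    A2 = A(t + (0.5 + c) * h)
    Omega = (0.5 * h * (A1 + A2)
             + (np.sqrt(3.0) / 12.0) * h * h * (A2 @ A1 - A1 @ A2))
    return expm(Omega)

def magnus4(A, X0, t0, t1, n):
    """Integrate X' = A(t)X from t0 to t1 in n fourth-order Magnus steps."""
    h = (t1 - t0) / n
    X = X0.copy()
    for k in range(n):
        X = magnus4_step(A, t0 + k * h, h) @ X
    return X

# Illustrative problem with a known solution: A(t) = t*J with J a rotation
# generator, so X(1) is a plane rotation by \int_0^1 t dt = 1/2 radians.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
X = magnus4(lambda t: t * J, np.eye(2), 0.0, 1.0, 1)
```

Because the update is a single exponential of an element of the Lie algebra, the method inherits the structure preservation discussed in the other entries on this page.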
Numerical Methods for Strong Solutions of Stochastic Differential Equations: an Overview
, 2003
"... This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations (SDEs). We give a brief survey of the area focusing on a number of application areas where approximations to strong solution ..."
Abstract

Cited by 16 (1 self)
 Add to MetaCart
This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations (SDEs). We give a brief survey of the area, focusing on a number of application areas where approximations to strong solutions are important, with a particular focus on computational biology applications (section 1), and give the necessary analytical tools for understanding some of the important concepts associated with stochastic processes (section 2). In section 3 we present the stochastic Taylor series expansion as the fundamental mechanism for constructing effective numerical methods, give general results that relate local and global order of convergence, and mention the Magnus expansion as a mechanism for designing methods which preserve the underlying structure of the problem. In sections 4 and 5 we present various classes of explicit and implicit methods for strong solutions, based on the underlying structure of the problem. Finally, in section 6 we discuss implementation issues relating to maintaining the Brownian path, efficient simulation of stochastic integrals, and variable step-size implementations based on various types of control.
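The most basic strong scheme covered by such surveys is the Euler-Maruyama method (strong order 0.5). A hedged numpy sketch, with illustrative drift and diffusion functions of our own choosing:

```python
import numpy as np

def euler_maruyama(a, b, x0, t1, n, rng):
    """Euler-Maruyama scheme for the scalar Ito SDE dX = a(X) dt + b(X) dW
    on a uniform grid; strong order of convergence 0.5."""
    h = t1 / n
    x = x0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h))  # Brownian increment ~ N(0, h)
        x = x + a(x) * h + b(x) * dW
    return x

# Hypothetical example: geometric Brownian motion dX = mu*X dt + sigma*X dW.
rng = np.random.default_rng(0)
x = euler_maruyama(lambda x: 0.1 * x, lambda x: 0.2 * x, 1.0, 1.0, 1000, rng)
print(x)
```

With the diffusion coefficient set to zero the scheme reduces to the deterministic Euler method, which is a convenient sanity check on an implementation.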
The Magnus expansion and some of its applications
, 2008
"... Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial resummation of infinite terms, with the important additional property of preserving at any order certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations and related nonperturbative ...
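For reference, the first two terms of the exponent in the representation Y(t) = exp(Ω(t)) Y(0) for Y' = A(t)Y are the standard ones, with [·,·] the matrix commutator:

```latex
\Omega_1(t) = \int_0^t A(t_1)\,\mathrm{d}t_1, \qquad
\Omega_2(t) = \frac{1}{2}\int_0^t\!\!\int_0^{t_1} \bigl[A(t_1),\,A(t_2)\bigr]\,\mathrm{d}t_2\,\mathrm{d}t_1 .
```

Truncating after Ω₁ recovers the exponential of the averaged coefficient; the higher terms correct for the non-commutativity of A at different times.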
Quadrature Methods Based on the Cayley Transform
, 1999
"... Integration of Lie type equations on matrix Lie groups using Magnus series methods has recently been proposed by Iserles and Nørsett. The methods use the exponential mapping, whose computation maybevery costly. In this paper a smaller class of Lie groups is considered, for which the exponential mapp ..."
Abstract

Cited by 8 (4 self)
 Add to MetaCart
Integration of Lie-type equations on matrix Lie groups using Magnus series methods has recently been proposed by Iserles and Nørsett. The methods use the exponential mapping, whose computation may be very costly. In this paper a smaller class of Lie groups is considered, for which the exponential mapping can be replaced by e.g. the Cayley transform or the diagonal Padé approximants. Particular methods are derived, and numerical experiments that illustrate and verify properties of the new methods are included.
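The Cayley transform mentioned here maps skew-symmetric matrices into the orthogonal group at the cost of a single linear solve instead of a matrix exponential. A minimal numpy sketch (an illustration of the transform in a first-order update, not the authors' derived methods):

```python
import numpy as np

def cay(W):
    """Cayley transform (I - W/2)^{-1} (I + W/2): maps a skew-symmetric
    matrix into the orthogonal group, like exp, but needs only one solve."""
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - 0.5 * W, np.eye(n) + 0.5 * W)

def cayley_euler(A, Y0, t0, t1, steps):
    """First-order Lie-group update Y_{k+1} = cay(h * A(t_k)) @ Y_k."""
    h = (t1 - t0) / steps
    Y = Y0.copy()
    for k in range(steps):
        Y = cay(h * A(t0 + k * h)) @ Y
    return Y

# Illustrative skew-symmetric coefficient; the numerical flow stays orthogonal.
A = lambda t: np.array([[0.0, 1.0 + t], [-(1.0 + t), 0.0]])
Y = cayley_euler(A, np.eye(2), 0.0, 1.0, 50)
print(np.linalg.norm(Y.T @ Y - np.eye(2)))
```

Like the exponential, the Cayley transform of a skew-symmetric matrix is exactly orthogonal, which is why it can stand in for exp on these (quadratic) groups.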
High order optimized geometric integrators for linear differential equations
, 2000
"... In this paper new integration algorithms for linear differential equations up to eighth order are obtained. Starting from Magnus expansion, methods based on Cayley transformation and Fer expansion are also built. The structure of the exact solution is retained while the computational cost is reduced ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
In this paper new integration algorithms for linear differential equations up to eighth order are obtained. Starting from the Magnus expansion, methods based on the Cayley transformation and the Fer expansion are also built. The structure of the exact solution is retained while the computational cost is reduced compared to similar methods. Their relative performance is tested on some illustrative examples.
Interpolation in Lie groups and homogeneous spaces
 SIAM J. NUMER. ANAL
, 1998
"... We consider interpolation in Lie groups and homogeneous spaces. Based on points on the manifold together with tangent vectors at (some of) these points, we construct Hermite interpolation polynomials. If the points and tangent vectors are produced in the process of integrating an ordinary differenti ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
We consider interpolation in Lie groups and homogeneous spaces. Based on points on the manifold together with tangent vectors at (some of) these points, we construct Hermite interpolation polynomials. If the points and tangent vectors are produced in the process of integrating an ordinary differential equation on a Lie group or a homogeneous space, we use the truncated inverse of the differential of the exponential mapping and the truncated Baker-Campbell-Hausdorff formula to construct an interpolation polynomial relatively cheaply. Much effort has lately been put into research on geometric integration, i.e. the process of integrating a differential equation in such a way that the configuration space is respected by the numerical solution. Some of these methods may be viewed as generalizations of classical methods, and we investigate the construction of intrinsic dense-output devices as generalizations of the continuous Runge-Kutta methods.
Accurate and Efficient Simulation of Rigid Body Rotations
 J. of Comp. Phys
, 2000
"... This paper introduces efficient and accurate algorithms for simulating the rotation of a threedimensional rigid object and compares them to several prior methods. The paper considers algorithms which exactly preserve angular momentum and either closely preserve or exactly conserve energy. First, we ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
This paper introduces efficient and accurate algorithms for simulating the rotation of a three-dimensional rigid object and compares them to several prior methods. The paper considers algorithms which exactly preserve angular momentum and either closely preserve or exactly conserve energy. First, we introduce a second-order accurate method that incorporates a third-order correction; then a third-order accurate method; and finally a fourth-order accurate method. These methods are single-step, and the update operation is only a single rotation. The algorithms are derived in a general Lie group setting. Second, we introduce a near-optimal energy-correction method which allows exact conservation of energy. This algorithm is faster and easier to implement than implicit methods for exact energy conservation. Our third-order method with energy conservation is experimentally seen to perform better than a fourth-order accurate method. These new methods are superior to naive Runge-Kutta or predictor-corrector methods, which are only second-order accurate for sphere-valued functions. They are also superior to the explicit methods of Simo-Wong. The second-order symplectic McLachlan-Reich methods are observed to be excellent at approximate energy conservation for extended periods of time, but are not as good at long-term accuracy as our best methods. Finally we present comparisons with fourth-order accurate symplectic methods, which have good accuracy but higher computational cost.
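The single-rotation idea can be sketched in a few lines of numpy for Euler's equations of the free rigid body (a hedged first-order illustration, not the paper's higher-order schemes; all names are ours): each step rotates the body-frame angular momentum by an exact rotation, so its magnitude is conserved to round-off.

```python
import numpy as np

def rotate(v, w):
    """Rodrigues rotation of vector v about axis w by angle |w|."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return v + np.cross(w, v)  # first-order fallback for tiny angles
    k = w / theta
    return (v * np.cos(theta) + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))

def free_rigid_body(m0, inertia, h, steps):
    """First-order single-rotation update for the free rigid body in the
    body frame (diagonal inertia in principal axes): with omega = m / I,
    m_{k+1} = R(-h * omega_k) m_k. Each step is an exact rotation, so the
    magnitude of the angular momentum is invariant."""
    m = m0.copy()
    for _ in range(steps):
        omega = m / inertia          # body angular velocity
        m = rotate(m, -h * omega)    # rotate momentum; norm preserved
    return m

m0 = np.array([1.0, 0.5, -0.3])
inertia = np.array([1.0, 2.0, 3.0])
m = free_rigid_body(m0, inertia, 0.01, 500)
print(np.linalg.norm(m) - np.linalg.norm(m0))  # ~ 0
```

A componentwise Runge-Kutta step would let |m| drift; making the update an exact rotation is what buys the conservation property, independently of the order of accuracy.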