Results 1-10 of 183
Lie-group methods
ACTA NUMERICA, 2000
Cited by 154 (24 self)
Many differential equations of practical interest evolve on Lie groups or on manifolds acted upon by Lie groups. The retention of Lie-group structure under discretization is often vital to the recovery of qualitatively correct geometry and dynamics and to the minimization of numerical error. Having introduced the requisite elements of differential geometry, this paper surveys the novel theory of numerical integrators that respect Lie-group structure, highlighting theory, algorithmic issues and a number of applications.
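As a minimal illustration of the idea surveyed here: the simplest Lie-group integrator (the Lie-Euler scheme) replaces the additive Euler update by multiplication with a matrix exponential, so iterates stay on the group. The sketch below, with illustrative function names and parameters, integrates Y' = A(t)Y on SO(3), where A(t) is skew-symmetric; it is a sketch of the general technique, not code from the paper.

```python
import numpy as np
from scipy.linalg import expm

def lie_euler_so3(omega, y0, h, n_steps):
    """Lie-Euler steps for Y' = hat(omega(t)) Y on SO(3).

    Each update multiplies by the exponential of a skew-symmetric matrix,
    which is orthogonal, so the iterates remain in SO(3) up to round-off,
    unlike classical forward Euler, which drifts off the group.
    """
    def hat(w):
        # map R^3 -> so(3), the skew-symmetric matrices
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])
    y = np.array(y0, dtype=float)
    for k in range(n_steps):
        y = expm(h * hat(omega(k * h))) @ y
    return y

# Rotation with a time-dependent angular velocity (illustrative choice)
Y = lie_euler_so3(lambda t: np.array([np.sin(t), 1.0, np.cos(t)]),
                  np.eye(3), h=0.01, n_steps=200)
# Y.T @ Y stays equal to the identity to round-off
```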
A software package for computing matrix exponentials
ACM Trans. Math. Software, 1998
Cited by 139 (1 self)
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of Krylov subspace projection methods (the Arnoldi and Lanczos processes), which is why the toolkit can cope with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is of critical importance in the area of Markov chains, where the computed solution is moreover subject to probabilistic constraints. In addition to general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
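The core operation Expokit popularized, the action exp(tA)v of a large sparse matrix exponential on a vector, is also available in SciPy. The sketch below (using an illustrative tridiagonal diffusion operator, not an Expokit interface) shows that usage pattern; the dense cross-check is only feasible because the example is small.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

# Illustrative sparse operator: tridiagonal 1-D Laplacian on 400 points
n = 400
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
v = np.ones(n)

# Action of the matrix exponential on v; A is never exponentiated in full
w = expm_multiply(0.1 * A, v)

# Dense cross-check, affordable only at this small dimension
w_dense = expm(0.1 * A.toarray()) @ v
```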
COMPUTING SEMICLASSICAL QUANTUM DYNAMICS WITH HAGEDORN WAVEPACKETS
Cited by 136 (4 self)
Abstract. We consider the approximation of multiparticle quantum dynamics in the semiclassical regime by Hagedorn wavepackets, which are products of complex Gaussians with polynomials that form an orthonormal L^2 basis and preserve their type under propagation in Schrödinger equations with quadratic potentials. We build a fully explicit, time-reversible time-stepping algorithm to approximate the solution of the Hagedorn wavepacket dynamics. The algorithm is based on a splitting between the kinetic and potential parts of the Hamiltonian operator, as well as on a splitting of the potential into its local quadratic approximation and the remainder. The algorithm is robust in the semiclassical limit. It reduces to the Strang splitting of the Schrödinger equation in the limit of the full basis set, and it advances positions and momenta by the Störmer–Verlet method for the classical equations of motion. The algorithm allows for the treatment of multiparticle problems by thinning out the basis according to a hyperbolic cross approximation, and of high-dimensional problems by Hartree-type approximations in a moving coordinate frame.
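The Strang splitting that this algorithm reduces to in the full-basis limit can be sketched on a grid with the split-step Fourier method: a potential half-step, a kinetic step in Fourier space, and another potential half-step. This is a grid-based sketch of the splitting itself, not the Hagedorn wavepacket algorithm; the 1-D harmonic potential and grid parameters are illustrative.

```python
import numpy as np

def strang_step(psi, V, dx, dt, hbar=1.0, m=1.0):
    """One Strang splitting step for i*hbar*psi_t = -(hbar^2/2m) psi_xx + V psi.

    Potential half-step, full kinetic step in Fourier space, potential
    half-step: second-order accurate and time-reversible. Every substep
    is unitary, so the discrete L^2 norm is conserved to round-off.
    """
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)          # spectral wavenumbers
    psi = np.exp(-0.5j * dt * V / hbar) * psi           # potential half-step
    psi = np.fft.ifft(np.exp(-0.5j * hbar * dt * k**2 / m) * np.fft.fft(psi))
    psi = np.exp(-0.5j * dt * V / hbar) * psi           # potential half-step
    return psi

x = np.linspace(-10, 10, 512, endpoint=False)
dx = x[1] - x[0]
psi = (2.0 / np.pi) ** 0.25 * np.exp(-x**2)             # normalized Gaussian
V = 0.5 * x**2                                          # harmonic potential
for _ in range(100):
    psi = strang_step(psi, V, dx, dt=0.01)
```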
Exponential integrators for large systems of differential equations
SIAM J. Sci. Comput., 1998
Projective methods for stiff differential equations: problems with gaps in their eigenvalue spectrum
SIAM J. SCI. COMP., 2001
Cited by 90 (22 self)
We show that there exist classes of explicit numerical integration methods that can handle very stiff problems if the eigenvalues are separated into two clusters, one containing the "stiff", or fast, components and one containing the slow components. These methods take large average step sizes relative to the fast components. Conventional implicit methods involve the solution of nonlinear equations at each step, which for large problems requires significant communication between processors on a multiprocessor machine. For such problems the methods proposed here have significant potential for speed improvement.
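The basic projective forward Euler idea can be sketched as follows: take a few small inner Euler steps so the fast cluster decays, then extrapolate the slow trend over a large outer step. The stiff two-mode test problem and all parameters below are illustrative choices, not taken from the paper.

```python
import numpy as np

def projective_euler(f, y0, h, k, big, n_outer):
    """Projective forward Euler sketch: inner damping plus extrapolation.

    k small inner Euler steps of size h damp the fast ("stiff") modes;
    the last two inner iterates are then extrapolated over an additional
    interval big*h, so each outer step covers (k + big)*h of time while
    remaining fully explicit.
    """
    y = np.asarray(y0, dtype=float)
    for _ in range(n_outer):
        y_prev = y
        for _ in range(k):                  # inner steps damp the fast modes
            y_prev, y = y, y + h * f(y)
        y = y + big * (y - y_prev)          # projective (extrapolation) step
    return y

# Eigenvalue clusters with a gap: fast mode -1000, slow mode -1
f = lambda y: np.array([-1000.0 * y[0], -1.0 * y[1]])
y = projective_euler(f, [1.0, 1.0], h=1e-3, k=10, big=90, n_outer=20)
# Effective step (k + big)*h = 0.1, far larger than forward Euler's
# stability limit 2/1000 for the fast mode
```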
Recent computational developments in Krylov subspace methods for linear systems
NUMER. LINEAR ALGEBRA APPL., 2007
Cited by 85 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties, such as particular forms of symmetry, and those depending on one or more parameters.
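Restarting, the first technique in the list above, is the standard way to bound the memory cost of the growing Krylov basis. A minimal sketch using SciPy's restarted GMRES on an illustrative diagonally dominant nonsymmetric system:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# Illustrative nonsymmetric sparse system (convection-diffusion flavor)
n = 500
A = sp.diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Restarted GMRES: the Krylov basis is capped at 20 vectors, so storage
# stays fixed; up to 50 restart cycles are allowed
x, info = gmres(A, b, restart=20, maxiter=50)
# info == 0 signals convergence to the default relative tolerance
```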
Exponential integrators
2010
Cited by 68 (5 self)
In this paper we consider the construction, analysis, implementation and application of exponential integrators. The focus is on two types of stiff problems. The first is characterized by a Jacobian whose eigenvalues have large negative real parts; parabolic partial differential equations and their spatial discretizations are typical examples. The second class consists of highly oscillatory problems with purely imaginary eigenvalues of large modulus. Apart from motivating the construction of exponential integrators for various classes of problems, our main intention in this article is to present the mathematics behind these methods. We derive error bounds that are independent of the stiffness or of the highest frequencies in the system. Since the implementation of exponential integrators requires the evaluation of the product of a matrix function with a vector, we briefly discuss some possible approaches as well. The paper concludes with some applications.
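The simplest member of this family is the exponential Euler method for u' = Au + g(u): u_{n+1} = exp(hA) u_n + h*phi1(hA) g(u_n), with phi1(z) = (exp(z) - 1)/z, so the stiff linear part is propagated exactly and the step size is not limited by stiffness. A small dense sketch (illustrative test problem; a real large-scale code would use Krylov evaluations of the matrix functions instead):

```python
import numpy as np
from scipy.linalg import expm, solve

def exponential_euler(A, g, u0, h, n_steps):
    """Exponential Euler for u' = A u + g(u), dense small-scale sketch.

    u_{n+1} = exp(h A) u_n + h * phi1(h A) g(u_n),  phi1(z) = (e^z - 1)/z.
    The linear stiff part is treated exactly; only g is frozen per step.
    """
    E = expm(h * A)
    n = A.shape[0]
    # phi1(hA) = (hA)^{-1} (exp(hA) - I); valid when hA is invertible
    phi1 = solve(h * A, E - np.eye(n))
    u = np.asarray(u0, dtype=float)
    for _ in range(n_steps):
        u = E @ u + h * (phi1 @ g(u))
    return u

# Stiff linear test with constant forcing; for constant g the method is exact
A = np.diag([-1000.0, -1.0])
b = np.array([1.0, 2.0])
u = exponential_euler(A, lambda u: b, np.zeros(2), h=0.5, n_steps=40)
# h = 0.5 is 250x beyond forward Euler's stability limit for this A
```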
A restarted Krylov subspace method for the evaluation of matrix functions
 SIAM J. Numer. Anal
Cited by 58 (8 self)
Abstract. We show how the Arnoldi algorithm for approximating a function of a matrix times a vector can be restarted in a manner analogous to restarted Krylov subspace methods for solving linear systems of equations. The resulting restarted algorithm reduces to other known algorithms for the reciprocal and exponential functions. We further show that the restarted algorithm inherits the superlinear convergence property of its unrestarted counterpart for entire functions, and we present the results of numerical experiments.
A new iterative method for solving large-scale Lyapunov matrix equations
 SIAM J. Sci. Comput
Cited by 55 (6 self)
Abstract. In this paper we propose a new projection method for solving large-scale continuous-time Lyapunov matrix equations. The new method projects the problem onto a much smaller approximation space, generated as a combination of Krylov subspaces in A and A^{-1}. The reduced problem is then solved by means of a direct Lyapunov scheme based on matrix factorizations. The reported numerical results show the competitiveness of the new method compared to a state-of-the-art approach based on the factorized Alternating Direction Implicit (ADI) iteration.
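The "direct Lyapunov scheme" applied to the reduced problem is the classical dense solver (Bartels-Stewart), available in SciPy. The sketch below shows only that small dense inner step on an illustrative stable matrix, not the extended-Krylov projection framework of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Small dense stand-in for a projected equation A X + X A^T + B B^T = 0
rng = np.random.default_rng(1)
A = -2.0 * np.eye(20) + 0.3 * rng.standard_normal((20, 20))  # stable
B = rng.standard_normal((20, 2))

# solve_continuous_lyapunov(a, q) solves a x + x a^H = q (Bartels-Stewart)
X = solve_continuous_lyapunov(A, -B @ B.T)

residual = A @ X + X @ A.T + B @ B.T
# residual vanishes to round-off when no two eigenvalues of A sum to zero
```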