Results 1–10 of 128
A software package for computing matrix exponentials
ACM Trans. Math. Software, 1998
Cited by 139 (1 self)
Abstract:
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of Krylov subspace projection methods (Arnoldi and Lanczos processes), which is why the toolkit can cope with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, and furthermore the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is devoted to the computation of transient states of Markov chains.
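The Krylov projection idea behind Expokit's sparse routines can be sketched in a few lines of NumPy/SciPy. This is an illustrative Arnoldi sketch only, not Expokit's actual implementation (which adds internal time-stepping and error control): the large problem exp(tA)v is reduced to a small exponential of the projected Hessenberg matrix H.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expv(A, v, t=1.0, m=30):
    """Approximate expm(t*A) @ v via an m-step Arnoldi (Krylov) projection:
       exp(tA) v  ~=  beta * V_m expm(t H_m) e_1,   beta = ||v||."""
    n = len(v)
    m = min(m, n)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)
```

Only the small m-by-m exponential is computed in full; the large matrix A enters through matrix-vector products, which is what makes the approach viable for large sparse matrices.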
Fourth-order time-stepping for stiff PDEs
SIAM J. Sci. Comput., 2005
Cited by 94 (3 self)
Abstract:
A modification of the exponential time-differencing fourth-order Runge–Kutta method for solving stiff nonlinear PDEs is presented that solves the problem of numerical instability in the scheme as proposed by Cox and Matthews and generalizes the method to nondiagonal operators. A comparison is made of the performance of this modified exponential time-differencing (ETD) scheme against the competing methods of implicit–explicit differencing, integrating factors, time-splitting, and Fornberg and Driscoll's "sliders" for the KdV, Kuramoto–Sivashinsky, Burgers, and Allen–Cahn equations in one space dimension. Implementation of the method is illustrated by short Matlab programs for two of the equations. It is found that for these applications with fixed time steps, the modified ETD scheme is the best.
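The instability the paper fixes comes from cancellation when the ETD coefficient functions, such as phi1(z) = (e^z - 1)/z, are evaluated directly for small |z|. The remedy is to evaluate them as contour integrals, averaging the function over points on a small circle around z. A minimal scalar sketch of this trick (for real z; the paper applies it to the full ETDRK4 coefficients):

```python
import numpy as np

def phi1_naive(z):
    """phi1(z) = (e^z - 1)/z, evaluated directly; suffers cancellation for small |z|."""
    return (np.exp(z) - 1.0) / z

def phi1_contour(z, M=32, r=1.0):
    """phi1 via the mean of (e^s - 1)/s over M points on a circle of radius r
    around z (contour-integral evaluation; assumes z is real, so the mean over
    the upper half circle plus taking the real part suffices by symmetry)."""
    theta = np.pi * (np.arange(1, M + 1) - 0.5) / M
    s = z + r * np.exp(1j * theta)
    return np.mean((np.exp(s) - 1.0) / s).real
```

Since phi1 is entire, the trapezoid-rule mean over the circle reproduces phi1(z) to spectral accuracy, and no quadrature point sits near z, so there is no cancellation even at z = 0.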
Exponential integrators
2010
Cited by 68 (5 self)
Abstract:
In this paper we consider the construction, analysis, implementation and application of exponential integrators. The focus will be on two types of stiff problems. The first is characterized by a Jacobian that possesses eigenvalues with large negative real parts; parabolic partial differential equations and their spatial discretizations are typical examples. The second class consists of highly oscillatory problems with purely imaginary eigenvalues of large modulus. Apart from motivating the construction of exponential integrators for various classes of problems, our main intention in this article is to present the mathematics behind these methods. We will derive error bounds that are independent of stiffness or of the highest frequencies in the system. Since the implementation of exponential integrators requires the evaluation of the product of a matrix function with a vector, we will briefly discuss some possible approaches as well. The paper concludes with some applications.
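The simplest member of this class, the exponential Euler method for semilinear problems u' = Au + g(u), illustrates the basic construction: the linear stiff part is treated exactly through a matrix function. A dense-matrix sketch (phi1 is evaluated here by a linear solve, which assumes A invertible and is not cancellation-safe for very small h; production codes evaluate phi functions more carefully):

```python
import numpy as np
from scipy.linalg import expm, solve

def exp_euler_step(u, h, A, g):
    """One exponential Euler step for u' = A u + g(u):
       u_{n+1} = u_n + h * phi1(h A) (A u_n + g(u_n)),
    where phi1(Z) = Z^{-1} (e^Z - I)."""
    Z = h * A
    phi1 = solve(Z, expm(Z) - np.eye(len(u)))
    return u + h * (phi1 @ (A @ u + g(u)))
```

For constant g the step is exact, which is the hallmark of exponential integrators: the stiffness sits entirely in the matrix function, so the error bounds do not depend on ||A||.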
The Magnus expansion and some of its applications
2008
Cited by 35 (6 self)
Abstract:
Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting in which to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial resummation of infinite terms, with the important additional property of preserving at any order certain symmetries of the exact solution. The goal of this review is threefold: first, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. These concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations and related nonperturbative ...
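The lowest-order Magnus integrator is the exponential midpoint rule, which already shows the symmetry-preserving property mentioned above: if A(t) is skew-symmetric for all t, every step is an orthogonal matrix, so the computed solution stays exactly orthogonal. A minimal sketch for Y'(t) = A(t) Y(t):

```python
import numpy as np
from scipy.linalg import expm

def magnus2_solve(A, Y0, t0, t1, steps):
    """Integrate Y'(t) = A(t) Y(t) with the second-order Magnus method
    (exponential midpoint rule): Y_{n+1} = expm(h * A(t_n + h/2)) @ Y_n.
    A is a callable returning the coefficient matrix at time t."""
    h = (t1 - t0) / steps
    Y = Y0.copy()
    t = t0
    for _ in range(steps):
        Y = expm(h * A(t + h / 2)) @ Y
        t += h
    return Y
```

Higher-order Magnus methods replace h*A(midpoint) by a truncation of the Magnus series, whose terms involve iterated commutators of A at different quadrature nodes.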
Fourth-order time-stepping for low dispersion Korteweg–de Vries and nonlinear Schrödinger equations
2008
Cited by 30 (18 self)
Abstract:
Purely dispersive equations, such as the Korteweg–de Vries and the nonlinear Schrödinger equations in the limit of small dispersion, have solutions to Cauchy problems with smooth initial data which develop a zone of rapid modulated oscillations in the region where the corresponding dispersionless equations have shocks or blow-up. Fourth-order time-stepping in combination with spectral methods is beneficial to numerically resolve the steep gradients in the oscillatory region. We compare the performance of several fourth-order methods for the Korteweg–de Vries and the focusing and defocusing nonlinear Schrödinger equations in the small dispersion limit: an exponential time-differencing fourth-order Runge–Kutta method as proposed by Cox and Matthews in the implementation by Kassam and Trefethen, integrating factors, time-splitting, Fornberg and Driscoll's 'sliders', and an ODE solver in Matlab.
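One of the compared methods, time-splitting, is easy to sketch for the focusing NLS i u_t + u_xx + 2|u|^2 u = 0 with periodic boundary conditions. This is an illustrative Strang splitting, not the authors' code: both substeps are solved exactly, the nonlinear flow as a pointwise phase rotation (|u| is constant along it) and the linear flow diagonally in Fourier space.

```python
import numpy as np

def nls_splitstep(u0, L, T, steps):
    """Strang split-step Fourier solver for i u_t + u_xx + 2|u|^2 u = 0
    on a periodic domain of length L."""
    n = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # Fourier wavenumbers
    h = T / steps
    u = u0.astype(complex)
    for _ in range(steps):
        u = u * np.exp(2j * np.abs(u) ** 2 * (h / 2))               # half nonlinear step
        u = np.fft.ifft(np.exp(-1j * k ** 2 * h) * np.fft.fft(u))   # full linear step
        u = u * np.exp(2j * np.abs(u) ** 2 * (h / 2))               # half nonlinear step
    return u
```

Each substep is unitary, so the scheme conserves the discrete L2 norm ("mass") to rounding error; a convenient check is the soliton u(x, t) = e^{it} sech(x).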
A new investigation of the extended Krylov subspace method for matrix function evaluations
Numer. Linear Algebra Appl., 2010
Cited by 29 (4 self)
Abstract:
For large square matrices A and functions f, the numerical approximation of the action of f(A) on a vector v has received considerable attention in the last two decades. In this paper we investigate the Extended Krylov subspace method, a technique that was recently proposed to approximate f(A)v for A symmetric. We provide a new theoretical analysis of the method, which improves the original result for A symmetric, and gives a new estimate for A nonsymmetric. Numerical experiments confirm that the new error estimates correctly capture the linear asymptotic convergence rate of the approximation. By using recent algorithmic improvements, we also show that the method is computationally competitive with respect to other enhancement techniques.
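The extended Krylov idea enriches the usual polynomial space with inverse powers of A, so both ends of the spectrum are resolved. A deliberately naive sketch of the principle (real implementations build the basis with a short recurrence and reorthogonalization instead of stacking raw power vectors, which become nearly dependent as m grows):

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def extended_krylov_fv(A, v, m=6):
    """Approximate expm(A) @ v by Galerkin projection onto the extended Krylov
    subspace span{v, A^{-1}v, A v, A^{-2}v, A^2 v, ...} (2m+1 vectors).
    Illustrative only: the basis is orthonormalized by a QR factorization,
    and A must be nonsingular."""
    lu = lu_factor(A)                    # one factorization, reused for all solves
    vecs = [v]
    wp = wm = v
    for _ in range(m):
        wm = lu_solve(lu, wm)            # next inverse power A^{-k} v
        wp = A @ wp                      # next direct power A^{k} v
        vecs += [wm, wp]
    V, _ = np.linalg.qr(np.column_stack(vecs))
    H = V.T @ (A @ V)                    # projected (small) matrix
    return V @ (expm(H) @ (V.T @ v))
```

The inverse powers require one factorization of A reused across all solves, which is the price paid for the faster (rational-approximation) convergence the paper analyzes.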
ROWMAP – a ROW-code with Krylov techniques for large stiff ODEs
Appl. Numer. Math., 1997
Cited by 26 (5 self)
Abstract:
We present a Krylov-W-code, ROWMAP, for the integration of stiff initial value problems. It is based on the ROW-methods of the code ROS4 of Hairer and Wanner and uses Krylov techniques for the solution of linear systems. A special multiple Arnoldi process ensures order p = 4 already for fairly low dimensions of the Krylov subspaces, independently of the dimension of the differential equations. Numerical tests and comparisons with the multistep code VODPK illustrate the efficiency of ROWMAP for large stiff systems. Furthermore, the application to nonautonomous systems is discussed in more detail. Key words: ROW-methods, stiff initial value problems, Krylov subspaces, multiple Arnoldi process. AMS(MOS) subject classifications: 65L06, 65F10. 1 Introduction. For the numerical solution of stiff initial value problems y'(t) = f(t, y(t)), y(t_0) = y_0 ∈ R^n (1.1), implicit or linearly implicit methods have to be used due to stability requirements. For large dimensions n these methods sp...
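The simplest relative of the ROW-methods the abstract refers to is the linearly implicit (Rosenbrock) Euler step: one linear solve per step, no Newton iteration, and, as in W-methods, an approximate Jacobian is tolerated. A minimal dense-matrix sketch (ROWMAP itself is fourth order and replaces the direct solve by its multiple Arnoldi process, which is not shown here):

```python
import numpy as np

def rosenbrock_euler_step(y, t, h, f, J):
    """One linearly implicit Euler (simplest ROW-type) step for y' = f(t, y):
       (I - h J) k = h f(t, y),   y_{n+1} = y_n + k,
    where J approximates df/dy(t_n, y_n)."""
    n = len(y)
    k = np.linalg.solve(np.eye(n) - h * J(t, y), h * f(t, y))
    return y + k
```

Unlike explicit methods, the step remains stable for step sizes far beyond the explicit stability limit, which is why a (possibly approximate) linear solve per step pays off for stiff systems.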
Evaluating matrix functions for exponential integrators via Carathéodory–Fejér approximation and contour integrals
Electron. Trans. Numer. Anal., 2007
Cited by 25 (1 self)
Abstract:
Among the fastest methods for solving stiff PDEs are exponential integrators, which require the evaluation of f(A)v, where A is a negative semidefinite matrix and f is the exponential function or one of the related "φ functions" such as φ1. Building on previous work by Trefethen and Gutknecht, Minchev, and Lu, we propose two methods for the fast evaluation of f(A)v that are especially useful when shifted systems (A − zI)x = b can be solved efficiently, e.g. by a sparse direct solver. The first method is based on best rational approximations to f on the negative real axis computed via the Carathéodory–Fejér procedure. Rather than using optimal poles we approximate the functions in a set of common poles, which speeds up typical computations considerably. The second method is an application of the trapezoid rule on a Talbot-type contour.
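The mechanics of the second method can be sketched with a plain circular contour (the paper uses optimized Talbot-type contours and Carathéodory–Fejér poles, which need far fewer nodes; the circle below is an illustrative assumption, not the paper's contour): the trapezoid rule is applied to the Cauchy integral for exp(A)v, at the cost of one shifted solve per quadrature node.

```python
import numpy as np
from scipy.linalg import expm

def expv_contour(A, v, N=40):
    """Approximate expm(A) @ v by the trapezoid rule on
       exp(A) v = (1/(2 pi i)) \oint e^z (z I - A)^{-1} v dz
    over a circle enclosing the spectrum of A.  For real A the node set is
    conjugate-symmetric, so taking the real part cancels imaginary roundoff."""
    n = len(v)
    r = np.linalg.norm(A, 2) + 1.0                     # crude enclosing radius
    theta = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    acc = np.zeros(n, dtype=complex)
    for th in theta:
        z = r * np.exp(1j * th)                        # quadrature node
        # substituting z = r e^{i theta}, dz = i z dtheta gives weight z/N per node
        acc += np.exp(z) * z * np.linalg.solve(z * np.eye(n) - A, v.astype(complex))
    return (acc / N).real
```

Because the integrand is analytic in a neighborhood of the contour, the trapezoid rule converges geometrically in N; sharing poles across the φ functions, as the paper proposes, amortizes the shifted solves over all coefficient functions of the integrator.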